text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
scanner problem - Java Beginners
scanner problem the program that enters sides of the triangle using scanner and outputs the area of triangle Hi Friend,
We... main(String []args){
Scanner scanner = new Scanner(System.in
port number - JavaMail
port number i want james2.1.3 port number like
microsoft telnet>open local host _______(some port number)
i wnat that port number
Hp scanner - Java Beginners
Hp scanner Hi guys,
i would like to access hp scanner using java programme. could you refer me some useful information to proceed Class Error - Java Beginners
scanner Class Error Hello Sir ,When i run the program of Scanner Class ,there is Error
Can not Resolve Symbol-Scanner
how i can solve..., Scanner class is not provided. Check your version.It should work with java 1.5
Setting source port on a Java Socket?
Setting source port on a Java Socket? Is it possible to explicitly set the the source port on a Java Socket
barcode scanner device usage in database application
barcode scanner device usage in database application Am trying to design a supermarket inventory desktop database application however I can't seem to know how to use a barcode scanner device on my application can anyone put COM Port Value in to database
Serial COM Port Value in to database How save the serial COMport value in mysql databse.
Following code is here.
package simpleread;
import java.io.*;
import java.sql.Connection;
import java.sql.PreparedStatement
serial port communication using bluetooth in j2me
serial port communication using bluetooth in j2me how to make serial port communication using bluetooth in j2me ?
what r prerequisites
MySQL default port number in Linux
MySQL default port number in Linux - How to find the default port number... server and you
are not aware on which port it is listening then this tutorial is for you. In
this tutorial you learn "How to find the default port number
Post your Comment | http://roseindia.net/discussion/24231-Local-Port-Scanner.html | CC-MAIN-2014-35 | refinedweb | 322 | 54.22 |
E2::Interface - A client interface to the everything2.com collaborative database
use E2::Interface; use E2::Message; # Login my $e2 = new E2::Interface; $e2->login( "username", "password" ); # Print client information print "Info about " . $e2->client_name . "/" . $e2->version . ":"; print "\n domain: " . $e2->domain"; print "\n cookie: " . $e2->cookie"; print "\n parse links:" . ($e2->parse_links ? "yes" : "no"); print "\n username: " . $e2->this_username; print "\n user_id: " . $e2->this_userid; # Load a page from e2 my $page = $e2->process_request( node_id => 124, displaytype => "xmltrue" ); # Now send a chatterbox message using the current # settings of $e2 my $msg = new E2::Message; $msg->clone( $e2 ); $msg->send( "This is a message" ); # See E2::Message # Logout $e2->logout;
This module is the base class for e2interface, a set of modules that interface with everything2.com. It maintains an agent that connects to E2 via HTTP and that holds a persistent state (a cookie) that can be
cloned to allow multiple descendants of
E2::Interface to act a single, consistent client. It also contains a few convenience methods.
The modules that compose e2interface are listed below and indented to show their inheritance structure.
E2::Interface - The base module E2::Node - Loads regular (non-ticker) nodes E2::E2Node - Loads and manipulates e2nodes E2::Writeup - Loads and manipulates writeups E2::User - Loads user information E2::Superdoc - Loads superdocs E2::Room - Loads room information E2::Usergroup - Loads usergroup information E2::Ticker - Modules for loading ticker nodes E2::Message - Loads, stores, and posts msgs E2::Search - Title-based searches E2::Usersearch - Search for writeups by user E2::Session - Session information E2::ClientVersion - Client version information E2::Scratchpad - Load and update scratchpads
See the manpages of each module for information on how to use that particular module.
e2interface uses Perl's exception-handling system,
Carp::croak and
eval. An example:
my $e2 = new E2::Interface; print "Enter username:"; my $name = <>; chomp $name; print "Enter password:"; my $pass = <>; chomp $pass; eval { if( $e2->login( $name, $pass ) ) { print "$name successfully logged in."; } else { print "Unable to login."; } }; if( $@ ) { if $@ =~ /Unable to process request/ { print "Network exception: $@\n"; } else { print "Unknown exception: $@\n"; } }
In this case,
login may generate an "Unable to process request" exception if it's unable to communicate with or receives a server error from everything2.com. This exception may be raised by any method in any package in e2interface that attempts to communicate with the everything2.com server.
Common exceptions include the following (those ending in ':' contain more specific data after that ':'):
'Unable to process request' - HTTP communication error. 'Invalid document' - Invalid document received. 'Parse error:' - Exception raised while parsing document (the error output of XML::Twig::parse is placed after the ':' 'Usage:' - Usage error (method called with improper parameters)
I'd suggest not trying to catch 'Usage:' exceptions: they can be raised by any method in e2interface and if they are triggered it is almost certainly due to a bug in the calling code.
All methods list which exceptions (besides 'Usage:') that they may potentially throw.
Network access is slow. Methods that rely upon network access may hold control of your program for a number of seconds, perhaps even minutes. In an interactive program, this sort of wait may be unacceptable.
e2interface supports a limited form of multithreading (in versions of perl that support ithreads--i.e. 5.8.0 and later) that allows network-dependant members to be called in the background and their return values to be retrieved later on. This is enabled by calling
use_threads on an instance of any class derived from E2::Interface (threading is
cloned, so
use_threads affects all instances of e2interface classes that have been
cloned from one-another). After enabling threading, any method that relies on network access will return (-1, job_id) and be executed in the background.
This job_id can then be passed to
finish to retrieve the return value of the method. If, in the call to
finish, the method has not yet completed, it returns (-1, job_id). If the method has completed,
finish returns a list consisting of the job_id followed by the return value of the method.
A code reference can be also be attached to a background method. See
thread_then.
A simple example of threading in e2interface:
use E2::Message; my $catbox = new E2::Message; $catbox->use_threads; # Turn on threading my @r = $catbox->list_public; # This will run in the background while( $r[0] eq "-1" ) { # While method deferred (use a string # comparison--if $r[0] happens to be # a string, you'll get a warning when # using a numeric comparison) # Do stuff here........ @r = $catbox->finish( $r[1] ); # $r[1] == job_id } # Once we're here, @r contains: ( job_id, return value ) shift @r; # Discard the job_id foreach( @r ) { print $_->{text}; # Print out each message }
Or, the same thing could be done using
thread_then:
use E2::Message; my $catbox = new E2::Message; $catbox->use_threads; # Execute $catbox->list_public in the background $catbox->thread_then( [\&E2::Message::list_public, $self], # This subroutine will be called when list_public # finishes, and will be passed its return value in @_ sub { foreach( @_ ) { print $_->{text}; } # If we were to return something here, it could # be retrieved in the call to finish() below. } ); # Do stuff here..... # Discard the return value of the deferred method (this will be # the point where the above anonymous subroutine actually # gets executed, during a call to finish()) while( $node->finish ) {} # Finish will not return a false # value until all deferred methods # have completed
new creates an
E2::Interface object. It defaults to using 'Guest User' until either
login or
cookie is used to log in a user.
This method attempts to login to Everything2.com with the specified USERNAME and PASSWORD.
This method returns true on success and
undef on failure.
Exceptions: 'Unable to process request', 'Invalid document'
This method can be called after setting
cookie to verify the login.
It (1) verifies that the everything2 server accepted the cookie as valid, and (2) determines the user_id of the logged-in user, which would otherwise be unavailable.
logout attempts to log the user out of Everything2.com.
Returns true on success and
undef on failure.
process_request requests the specified page via HTTP and returns its text.
It assembles a URL based upon the key/value pairs in HASH (example:
process_request( node_id => 124 ) would translate to "" (well, technically, a POST is used rather than a GET, but you get the idea)).
The returned text is stripped of HTTP headers and smart quotes and other MS weirdness prior te the return.
For those pages that may be retrieved with or without link parsing (conversion of "[link]" to a markup tag), this method uses this object's
parse_links setting.
All necessary character escaping is handled by
process_request.
Exceptions: 'Unable to process request'
clone copies various members from the
E2::Interface-derived object OBJECT to this object so that both objects will use the same agent to process requests to Everything2.com.
This is useful if, for example, one wants to use both an E2::Node and an E2::Message object to communicate with Everything2.com as the same user. This would work as follows:
$msg = new E2::Message; $msg->login( $username, $password ); $node = new E2::Node; $node->clone( $msg )
clone copies the cookie, domain, parse_links value, and agentstring, and it does so in such a way that if any of the clones (or the original) change any of these values, the changes will be propogated to all the others. It also clones background threads, so these threads are shared among cloned objects.
clone returns
$self if successful, otherwise returns
undef.
debug sets the debug level of e2interface.
The default debug level is zero. This value is shared by all instances of e2interface classes.
Debug levels (each displays all messages from levels lower than it):
0 : No debug information displayed 1 : E2::Interface info displayed once; vital debug messages displayed (example: trying to perform an operation that requires being logged in will cause a debug message if you're not logged in) 2 : Each non-trivial subroutine displays its name when called 3 : Important data structures are displayed as processed
Debug messages are output on STDERR.
client_name return the name of this client, "e2interface-perl".
version returns the version number of this client.
this_username returns the username currently being used by this agent.
this_user_id returns the user_id of the current user.
This is only available after
login or
verify_login has been called (in this instance or another
cloned instance).
This method returns, and (if DOMAIN is specified) sets the domain used to fetch pages from e2.
By default, this is "everything2.com".
DOMAIN should contain neither an "http://" or a trailing "/".
cookie returns the current everything2.com cookie (used to maintain login).
If COOKIE is specified,
cookie sets everything2.com's cookie to "COOKIE" and returns that value.
"COOKIE" is a string value of the "userpass" cookie at everything2.com. Example: an account with the username "willie" and password "S3KRet" would have a cookie of "willie%257CwirQfxAfmq8I6". This is generated by the everything2 servers.
This is how
cookie would normally be used:
# Store the cookie so we can save it to a file if( $e2->login( $user, $pass ) ) { $cookies{$user} = $e2->cookie; } ... print CONFIG_FILE "[cookies]\n"; foreach( keys %cookies ) { print CONFIG_FILE "$_ = $cookies{$_}\n"; }
Or:
# Load the appropriate cookie while( $_ = <CONFIG_FILE> ) { chomp; if( /^$username = (.*)$/ ) { $e2->cookie( $1 ); last; } }
If COOKIE is not valid, this function returns
undef and the login cookie remains unchanged.
agentstring returns and optionally sets the value prependend to e2interface's agentstring, which is then used in HTTP requests.
document returns the text of the last document retrieved by this instance in a call to
process_request.
Note: if threading is turned on, this is updated by a call to
finish, and will refer to the document from the most recent method
finished.
logged_in returns a boolean value, true if the user is logged in and
undef if not.
Exceptions: 'Unable to process request', 'Parse error:'
use_threads creates a background thread (or NUMBER background threads) to be used to execute network-dependant methods.
This method can only be called once for any instance (or set of
cloned instances), and must be disabled again (by a call to
join_threads or
detach_threads) before it can be re-enabled (this would be useful if you wanted to change the NUMBER of threads).
use_threads returns true on success and
undef on failure.
These methods disable e2interface's threading for an instance or a set of
cloned instances.
join_threads waits for the background threads to run through the remainder of their queues before destroying them.
detach_threads detaches the threads immediately, discarding any incomplete jobs on the queue.
Both methods process any finished jobs that have not yet been
finished and return a list of these jobs. i.e.:
my @r; my @i; while( @i = $e2->finish ) { push @r, \@i if $i[0] ne "-1" } return @r;
finish handles all post-processing of deferred methods, and returns the final return value of the deferred method.
(See
thread_then for information on adding post-processing to a method.)
If JOB_ID is specified, it attempts to return the return value of that job, otherwise it attempts to return the return value of the first completed job on its queue.
It returns a list consisting of the job_id of the deferred method followed by the return value of the method in list context. If JOB_ID is specified and the corresponding method is not yet completed, this method returns -1. If JOB_ID is not specified, and there are methods left on the deferred queue but none of them are completed, it returns (-1, -1). If the deferred queue is empty, it returns an empty list.
If exceptions have been raised by a deferred method, or by post-processing code, they will be raised in the call to
finish.
thread_then executes METHOD (which is a reference to an array that consists of a method and its parameters, e.g.: [ \&E2::Node::load, $e2, $title, $type ]), and sets up CODE (a code reference) to be passed the return value of METHOD when METHOD completes.
thread_then is named as a sort of mnemonic device: "thread this method, then do this..."
thread_then returns (-1, job_id) if METHOD is deferred; if METHOD is not deferred, thread_then immediately passes its return value to CODE and then returns the return value of CODE. This allows code to be written that can be run as either threaded or unthreaded; indeed this is how e2interface is implemented internally.
If METHOD throws an exception (threaded exceptions are thrown during the call to
finish), CODE will not be executed. If CODE throws an exception, any post-processing chained after CODE will not be executed. For this reason, a third code reference, FINAL, can be specified. This code will be passed no parameters, and its return value will be discarded, but it is guaranteed to be executed after all post-processing is complete, or, in the case of an exception thrown by METHOD or CODE, to be executed before
finish throws that exception.
E2::Node, E2::E2Node, E2::Writeup, E2::User, E2::Superdoc, E2::Usergroup, E2::Room, E2::Ticker, E2::Message, E2::Search, E2::UserSearch, E2::ClientVersion, E2::Session, E2::Scratchpad,,
Jose M. Weeks <jose@joseweeks.com> (Simpleton on E2)
This software is public domain. | http://search.cpan.org/dist/E2-Interface/Interface.pm | CC-MAIN-2015-11 | refinedweb | 2,209 | 60.35 |
Following on from that previous post about real data, of course we can also make sample data in Blend at the document or project level over in the data tab;
All of those 3 menu options raised from the little icon with the drop down relate to creating sample data and whether you have “Project” or “This document” selected seems irrelevant as to where the sample data ends up.
New Sample Data…
If you go down this route then you’ll end up with a dialog asking where you want to put the data source;
and whether you want to leave the sample data in place when the application is running or not (i.e. make it real data).
The difference between the two affects what happens when you use items from the data source and whether they end up being set as the DataContext or the d: DataContext for the elements where you use them.
When you create one of these, you’ll find that it shows up in the Data tab;
and, if you’re of a curious disposition, then you’ll notice that it shows up in the project structure too;
Opening up that XSD in Visual Studio 2010 shows me that what I’m editing in Blend is actually an XSD;
I’ve never actually looked at how Blend does this but I’m guessing that it runs a transform on this in order to generate classes from this XSD because the generated MySampleData.xaml.cs has classes that match this structure. Here they are in a Visual Studio class diagram;
and then some piece of smarts knows how to take these bits and generate a XAML serialization of these types with test data slotted into it as you’ll find in the generated .xaml file;
<!-- ********* DO NOT MODIFY THIS FILE ********* This file is regenerated by a design tool. Making changes to this file can cause errors. --> <SampleData:MySampleData xmlns: <SampleData:MySampleData.Collection> <SampleData:Item <SampleData:Item <SampleData:Item <SampleData:Item <SampleData:Item <SampleData:Item <SampleData:Item <SampleData:Item <SampleData:Item <SampleData:Item </SampleData:MySampleData.Collection> </SampleData:MySampleData>
None of this matters too much. What matters is that you can shape the test data over in the Data tab in a fairly intuitive way by adding properties of type simple/complex/collection from the little menu;
Those pretty much speak for themselves and so if I was trying to build a representation of a set of people with a set of addresses then I might end up with;
but the data types over on the right hand side (highlighted) are all over the place so I can use the little drop down to try and fix them;
and that lets me choose between properties of type string/number/boolean/image (number is not an integer) and it also lets me apply various facets like length and so on to the data being generated.
I wish there were more data types – dates and times would be a good starting point. When it comes to using pictures Blend has the ability to read some from a folder for you;
and you’ll notice these show up in your project;
In terms of collections, you can always click on the icon and edit the details about the collection;
where you can tweak the data types and facets again and control the cardinality. You can even manually edit the data if you don’t like the generated values.
Import Sample Data from XML…
Another way to get data into the environment is to import it from an existing XML file.
I pondered over where to get an XML file from and then remembered my good old friend Northwind and I dug out an old FOR XML query I had kicking around on my disk somewhere;
select c.customerid [@id], c.country [@country], c.contactName [@contactName], c.contactTitle [@contactTitle], ( select o.orderid [@id], o.orderdate [@date], ( select od.unitprice [@price], od.quantity [@quantity] from [order details] od where od.orderid = o.orderid for xml path('item'), root('items'), type ) from orders o where o.customerid = c.customerid for xml path('order'), root('orders'), type ) from customers c where c.contactName is not null and c.contactTitle is not null for xml path('customer'), root('customers')
and I fed this into the “Import Sample Data from XML” dialog;
and that seemed to do a pretty decent job (once I’d fixed the problem I had with the text encoding) in that it showed up in my data tab as;
although it’s fair to say that all strings/dates/etc come in as strings with no particular facets applied to them.
It looks like a similar approach is taken – there’s a generated XSD in my project which looks like this;
and then some generated classes and a XAML file which contains the serialized actual data from my sample XML file persisted in a form that could be de-serialized back to the generated classes in question.
Create Sample Data from Class…
This was added in Blend 4 and is really useful if you’ve already got ViewModels kicking around that you can bring into the environment and have it make test data for.
If I’ve got classes such as;
public class PeopleData { public ObservableCollection<Person> People { get; set; } } public class Person { public string FirstName { get; set; } public string LastName { get; set; } public Uri Picture { get; set; } public AgeDetails AgeDetails { get; set; } public ObservableCollection<Address> Addresses { get; set; } } public class AgeDetails { public int Age { get; set; } public DateTime DateOfBirth { get; set; } } public class Address { public string HouseName { get; set; } public string AddressLine { get; set; } public string PostCode { get; set; } } }
then I can add a sample data source around this, picking my data class from the dialog;
and I then get that new data source which always seems to go into the Project rather than the current document although I daresay that you could move it manually. The data source shows up in the Data tab;
with more type information than any of the other mechanisms I’d tried and it’s interesting to see Int32 and Uri and DateTime show up there.
Now, when I look at the project here there’s no XSD created. There’s just a XAML file;
and there’s no editing experience for this sample data from the Data tab. Trying to edit just drops me straight into the XAML editor for that file. That is, clicking this;
equals this;
and it’s quite interesting because I think all of the DateTime values are set to the equivalent of DateTime.Now when the generation happened but the Integer values look to have been randomised and the Uri values look to have been omitted.
So, it’s a little hit-and-miss but it’s a lot better than having no data whatsoever at design time.
Using the Sample Data
Once you’ve got the sample data created via one of the 3 mechanisms then it’s into a drag/drop in order to make use of it on the design surface. You can work in either List/Details mode and if you’re in one of those modes then holding down the ALT key during a drag will temporarily move you to the other mode for that operation.
If I drag something like the People collection out onto the design surface then I can see 1 of three things.
A regular drag (in list mode) offers to create a ListBox and bind its ItemsSource to People. Automatic!
if I hold down the SHIFT key then I see instead;
which will pop that very familiar dialog rather than just assuming I want to make a ListBox and set its ItemsSource (which is a good assumption usually);
and if I hold down the CTRL key then I see;
and finally if I have both ALT+SHIFT down then it’ll give me;
By default if I go with dragging that People collection to the design surface then I get a ListBox with an ItemTemplate which displays the FirstName, LastName, Picture
If I wanted to also include the Age then I’d just edit the template and drag it out there;
and if I (say) wanted an Expander that displayed the addresses in a TreeView (yes, I know – this is nuts) rather than a ListBox then I can create the TreeView in the Expander and then drag the Addresses out to that TreeView;
It’s worth saying that I could have selected a subset of properties if I wanted a template based around them – e.g. starting again here but only selecting the FirstName and Picture property and dragging them to this new TreeView;
and then if I want to build up a details view I can drop a Grid over on the right hand side and then in details mode I can drag out some properties like FirstName and Picture again and Blend will make me some controls.
and it’s “kind of” interesting as to the binding that Blend throws out for this. It sets that Grid’s DataContext 2 ways;
<Grid DataContext="{Binding SelectedItem, ElementName=treeView}" d:
one for design time and one for runtime. The rest of the bindings within the grid are standard. And then if I wanted to display the addresses here maybe in a DataGrid then I can just draw a DataGrid onto the surface (setting its AutoGenerate property to true) and then in list mode drag those addresses to that DataGrid;
Dragging/Dropping Methods/Commands
It’s probably worth saying that if you have created sample data from a set of .NET classes then you might have methods or ICommand implementations on the classes here like this one I just quickly added to my Person class;
public class Person { public void ShowInMessageBox() { MessageBox.Show( string.Format("{0} {1}", this.FirstName, this.AgeDetails));; }
then they’ll also show up in the Data panel and you’ll find that you can’t drag like this;
but you can drag out the FirstName (e.g.) to make a ListBox then edit the ItemTemplate for that ListBox and add some piece of UI (e.g. this star shape below) and then drag the ShowInMessageBox method to that star shape;
and that will create triggers and actions from Blend but that’s really another story… | https://mtaulty.com/2010/11/26/m_13185/ | CC-MAIN-2020-45 | refinedweb | 1,729 | 58.35 |
A simple TrimmedTextBox for WPF
So I wanted a TextBox that trims automatically. At first I played with the idea of a converter, but that just didn’t work out. Try using a trim converter with UpdateSourceTrigger=PropertyChanged and you will see what I mean. Yes, it seems you cannot even type a space. I need the trimming to occur after losing focus.
After thinking about it, a converter was the wrong method anyway. If I want a TextBox that always trims when it loses focus, why not just make one by inheriting from TextBox and adding a method to the LostFocus event. So I did.
using System.Windows.Controls; namespace WpfSharp.UserControls { public class TrimmedTextBox : TextBox { public TrimmedTextBox() { LostFocus += TrimOnLostFocus; } void TrimOnLostFocus(object sender, System.Windows.RoutedEventArgs e) { var trimTextBox = sender as TrimmedTextBox; if (trimTextBox != null) trimTextBox.Text = trimTextBox.Text.Trim(); } } }
Now to use this in XAML, add this namespace to your UserControl or Window:
xmlns:wpfsharp="clr-namespace:WpfSharp.UserControls;assembly=WpfSharp"
Then use the object like this:
<wpfsharp:TrimmedTextBox
Remember, it is the simple solutions that are best. | http://www.wpfsharp.com/2014/05/15/a-simple-trimmedtextbox-for-wpf/ | CC-MAIN-2017-39 | refinedweb | 181 | 50.33 |
cmd – Create line-oriented command processors¶
The cmd module contains one public class, Cmd, designed to be used as a base class for command processors such as interactive shells and other command interpreters. By default it uses readline for interactive prompt handling, command line editing, and command completion.
Processing Commands¶ for a command, the method default() is called with the entire input line as an argument. The built-in implementation of default() reports an error.
(Cmd) foo *** Unknown syntax: foo
Since do_EOF() returns True, typing Ctrl-D will drop us out of the interpreter.
(Cmd) ^D$
Notice that no newline is printed, so the results are a little messy.
Command Arguments¶): print output shows one optional argument to the greet command, person. Although the argument is optional to the command, there is a distinction between the command and the callback method. The method always takes the argument, but sometimes the value is an empty string. It is left up to the command processor to determine if an empty argument is valid, or do any further parsing and processing of the command. In this example, if a person’s name is provided then the greeting is personalized.
(Cmd) greet Alice hi, Alice (Cmd) greet hi
Whether an argument is given by the user or not, the value passed to the command processor does not include the command itself. That simplifies parsing in the command processor, if multiple arguments are needed.
Live Help¶
In the previous example, the formatting of the help text leaves something to be desired. Since it comes from the docstring, it retains the indentation from our source. We could edit the source to remove the extra white-space, but that would leave our application looking poorly formatted. An alternative solution is to implement a help handler for the greet command, named help_greet(). When present, the help handler is called on to produce help text for the named command. processor methods. The user triggers completion by hitting the tab key at an input prompt., " + person else: greeting = 'hello' print greeting def complete_greet(self, text, line, begidx, endidx): if not text: completions = self.FRIENDS[:] else: completions = [ f for f in self.FRIENDS if f.startswith(text) ] return completions def do_EOF(self, line): return True if __name__ == '__main__': HelloWorld().cmdloop()
When there is input text, complete_greet() returns a list of friends that match. Otherwise, the full list of friends is returned.
$ python cmd_arg_completion.py (Cmd) greet <tab><tab> Adam Alice Barbara Bob (Cmd) greet A<tab><tab> Adam Alice (Cmd) greet Ad<tab> (Cmd) greet Adam hi, Adam!
If the name given is not in the list of friends, the formal greeting is given.
(Cmd) greet Joe hello, Joe
Overriding Base Class Methods¶
Cmd includes. The actual input line is parsed with parseline() to create a tuple containing the command, and the remaining portion of the line.
If the line is empty, emptyline() is called. The default implementation runs the previous command again. If the line contains a command, first precmd() is called then the processor is looked up and invoked. If none is found, default() is called instead. Finally postcmd() is called.
Here’s an example session with print statements added:
$ python cmd_illustrate_methods.py cmdloop(Illustrating the methods of cmd.Cmd) preloop() Illustrating the methods of cmd.Cmd (Cmd) greet Bob precmd(greet Bob) onecmd(greet Bob) parseline(greet Bob) => ('greet', 'Bob', 'greet Bob') hello, Bob postcmd(None, greet Bob) (Cmd) ^Dprecmd(EOF) onecmd(EOF) parseline(EOF) => ('EOF', '', 'EOF') postcmd(True, EOF) postloop()
Configuring Cmd Through Attributes¶
In addition to the methods described above, there are several attributes for controlling command interpreters.
prompt can be set to a string to be printed each time the user is asked for a new command.
intro is the “welcome” message printed at the start of the program. cmdloop() takes an argument for this value, or()
$ python cmd_attributes.py Simple command processor example. prompt: prompt hello hello: help doc_header ---------- prompt undoc_header ------------ EOF help hello:
Shelling Out¶
To supplement the standard command processing, Cmd includes¶
While the default mode for Cmd() is to interact with the user through the readline library, interacts and prompt set to an empty string, we can call the script on this input file:
greet greet Alice and Bob
to produce output like:
$ python cmd_file.py cmd_file.txt hello, hello, Alice and Bob
Commands from sys.argv¶
You can also process command line arguments to the program as a command for your interpreter class, instead of reading commands from stdin or a file. To use the command line arguments, you can call onecmd() directly, as in this example.
import cmd class InteractiveOrCommandLine(cmd.Cmd): """Accepts commands via the normal interactive prompt or on the command line.""" def do_greet(self, line): print 'hello,', line def do_EOF(self, line): return True if __name__ == '__main__': import sys if len(sys.argv) > 1: InteractiveOrCommandLine().onecmd(' '.join(sys.argv[1:])) else: InteractiveOrCommandLine().cmdloop()
Since onecmd() takes a single string as input, the arguments to the program need to be joined together before being passed in.
$ python cmd_argv.py greet Command Line User hello, Command Line User $ python cmd_argv.py (Cmd) greet Interactive User hello, Interactive User (Cmd)
See also
- cmd
- The standard library documentation for this module.
- cmd2
- Drop-in replacement for cmd with additional features.
- GNU readline
- The GNU Readline library provides functions that allow users to edit input lines as they are typed.
- readline
- The Python standard library interface to readline. | https://pymotw.com/2/cmd/index.html | CC-MAIN-2018-51 | refinedweb | 907 | 56.15 |
OK, in reference to my last thread, I finally got PIL installed and set up. I can even successfully import it (from PIL import Image). All that remains to be done is for me to successfully get an image into my GUIs, and I'll be good to go.
Can someone link me to a helpful thread or website?
ETA: I am having trouble figuring this out. I have the following as a "test" code, before I apply PIL to my main code:
import PIL from PIL import Image from PIL import ImageDraw im = Image.open("1.gif") im.show() print 'done.'
This is EXACTLY what a sample program has, with the lone exception of the file. And yet my program pops up the Microsoft Windows Viewer but shows no picture. Why? | https://www.daniweb.com/programming/software-development/threads/277019/more-pil-help-but-i-m-getting-warmer | CC-MAIN-2017-34 | refinedweb | 132 | 74.08 |
[SOLVED] android - Could not start process "make" - linux opensus
Hi,
I installed qt creator on windows and it works ok, but problems starts with linux.
I have linux opensus, Qt Creator 3.1.2 (opensource),Based on Qt 5.3.1 (GCC 4.6.1, 64 bit). The problem appears when I want to build the app. This is output from console
@12:24:02: Could not start process "make"
Error while building/deploying project tested (kit: Android for armeabi-v7a (GCC 4.9, Qt 5.3.1))
When executing step 'Make'@
when I look at my kits I see red circle next to desktop qt 5.3 gcc 64 bit and error says: "no compiler set in kit" I try to do it manually but apparently I do it in wrong way. The rest of kits look right.
When i open qt creator I got this message "/bin/sh: .../android-ndk-r10/toolchains/x86-4.8/prebuilt/linux-x86_64/bin/i686-linux-android-gcc: No such file or directory"
I have no idea how to solve this problem, I feeling like baby in the fog, cause I'm newbie in linux and qt-creator.
I try to install qt creator both ways:
online/offline.
I try to put commands in system folder of qt : "sudo apt-get install build-essential" but I get apt-get command not found. I took it from this "topic:":
I have directory of gcc_64/qmake in qt folder
I would be grateful for any help
Hi,
Is your OpenSuse 32 bit or 64 bit ?
Have you properly set ndk and sdk's ?
Can you post the screenshot of Tools > Options > Android of QtCreator ?
[quote author="p3c0" date="1407237084"]
Is your OpenSuse 32 bit or 64 bit ?[/quote]
OpenSuse 64 bit.
[quote author="p3c0" date="1407237084"]
Have you properly set ndk and sdk's ?[/quote]
yeah I set it in the same way like for windows, but obviously I download linux version.
[quote author="p3c0" date="1407237084"]
Can you post the screenshot of Tools > Options > Android of QtCreator ?[/quote]
yeah of course,
[IMG][/IMG]
!!
This seems good.
Now Can you post Screenshot of Kit ?
Are you compiling for Android or Desktop?
Also
bq.
I try to put commands in system folder of qt : “sudo apt-get install build-essential” but I get apt-get command not found. I took it from this topic: [qt-project.org]
You must find something equivalent for apt-get in OpenSuse
Probably YaST -> Software -> Software Management and install make command.
To make sure if make installed on system, type make on terminal
@
make --version
@
Firstly after type @make --version@ in terminal I get message "If 'make' is not a typo..."
after installing make by typing this name into yast ->software -> software management, when I write into terminal @make --version@ I get back:
"GNU Make 3.82
Built for x86_64-unknown-linux-gnu
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law"
Ok, here is the screen
!!
I compiling on Android.
After Installing 'make', when I try to compile I get this output:
@14:28:38: Running steps for project newone...
14:28:39: Starting: "/home/ctewoj/Qt5.3.1/5.3/android_armv7/bin/qmake" /home/ctewoj/newone/newone.pro -r -spec android-g++ CONFIG+=debug
14:28:41: The process "/home/ctewoj/Qt5.3.1/5.3/android_armv7/bin/qmake" exited normally.
14:28:41: Starting: "/usr/bin/make"
/home/ctewoj/Qt5.3.1/5.3/android_armv7/bin/uic ../newone/mainwindow.ui -o ui_mainwindow.h
/home/ctewoj/android-ndk-r10/toolchains/arm-linux-androideabi-4.9/prebuilt/linux../newone -ndk-r10/sources/cxx-stl/gnu-libstdc++/4.9/include -I../android-ndk-r10/sources/cxx-stl/gnu-libstdc++/4.9/libs/armeabi-v7a/include -I/home/ctewoj/android-ndk-r10/platforms/android-9/arch-arm//usr/include -I. -o main.o ../newone/main.cpp
In file included from ../android-ndk-r10/sources/cxx-stl/gnu-libstdc++/4.9/include/bits/stl_algo.h:59:0,
from ../android-ndk-r10/sources/cxx-stl/gnu-libstdc++/4.9/include/algorithm:62,
from ../Qt5.3.1/5.3/android_armv7/include/QtCore/qglobal.h:85, ../newone/mainwindow.h:4,
from ../newone/main.cpp:1:
../android-ndk-r10/sources/cxx-stl/gnu-libstdc++/4.9/include/cstdlib:72:20: fatal error: stdlib.h: No such file or directory
#include <stdlib.h>
^
compilation terminated.
make: *** [main.o] Error 1
14:28:44: The process "/usr/bin/make" exited with code 2.
Error while building/deploying project newone (kit: Android for armeabi-v7a (GCC 4.9, Qt 5.3.1))
When executing step 'Make'
14:28:44: Elapsed time: 00:05.@
EDIT:
Now I'm trying to reinstall whole qt creator
well issue for.desktop is that it is not able to find make. Set the path to /usr/bin. but thats for desktop
for android try compiling for platform higher than android-9
I just reinstall the qt creator and then remove folder with android-ndk-r10 and miracle appears cause it works :D thx for help I think the crucial thing was to install 'make'. On windows there is no need to do this, but the linux is way different ;) Then, as I said before I reinstall this stuff and finally it works after day of war.
Well that's fine, Welcome :)
I actually overlooked the make command requirement at first :o
On some linux distros make command is not installed by default. | https://forum.qt.io/topic/44501/solved-android-could-not-start-process-make-linux-opensus | CC-MAIN-2018-17 | refinedweb | 928 | 58.89 |
>>?
log.name("twiggy").info("What's new, what's next")November 09, 2010 at 11:00 AM | Tags: python, release, logging, twiggy
This post was import from an earlier version of this blog. Original here.
An update about Twiggy, my new Pythonic logger.
What’s New
Yesterday I released a new version 0.4.1 of Twiggy. This release adds full test coverage (over 1000 lines, nearly twice the lines of actual code). I’ve fixed a number of important bugs in the process, so you’re encouraged to upgrade.
The features system is currently deprecated, pending a reimplementation in version 0.5. Features are currently global (shared by all log instances); they really should be per-object so libraries can use them without stepping on each other. Expect some clever metaprogramming voodoo to make this work while keeping things running fast.
What’s Next
Here’s a little preview of what you can expect over the next few weeks:
Be the best, steal from the rest
I’ll be adding support for context fields, a feature inspired by Logbook’s stacks. This allows an application to add fields to all log messages on a per-thread or per-process basis.
>>>
This is a killer feature for logging/debugging in webapps. One often wants to inject the request ID into all messages, including libraries that don’t know/care that they’re running on the web. There’ll be methods for clearing these contexts, as well as context managers to use with the with: statement.
Stdlib compatibility layer
0.5 will improve compatibility with the standard library’s logging package. This compatiblity will be two-way. You’ll be able to:
- configure twiggy to use stdlib logging as an output backend
- inject an API shim that emulates basic logging functionality
The later requires some explanation. 90-plus percent of the logging code I’ve ever seen only uses the most basic functionality: creating loggers, logging messages and capturing tracebacks. For such code, it should be possible to do:
from twiggy import logging_compat as logging log = logging.getLogger("oldcode") log.info("Shh, don't tell")
Even better, twiggy will provide a logging_compat.hijack() method to inject itself into sys.modules so that no modification to old code is needed at all.
I don’t expect this compatibility layer to work for everyone – notably, custom handlers won’t be supported (the underlying models are just too different), but this should ease the transition pain for many people.
Indentation
Also planned for 0.5 is support for user-defined counters. This feature is still taking shape, but it’ll look something like:
>>> def deep(): ... with log.increment('depth'): ... log.info("it's dark") ... abyss() ... log.warning("coming back up") ... >>> def abyss(): ... with log.increment('depth'): ... log.info("it's cold") ... >>> deep() INFO:depth=1:it's dark INFO:depth=2:it's cold WARNING:depth=1:coming back up
Outputs will be able to transform the depth field into useful visual formatting – for example, by using indentation to group lines together in a console app, or by setting a CSS class in HTML. Hell yeah, structured logging.
Etc.
Other forthcoming changes include: a port to Python 3, PEP-8 compliance, rewriting the features system, support for the warnings module and various minor enhancements. I’ll continue to support Python 2.7 using 3to2
N+1
I should probably stop there, but I’m excited by what’s further down the road. That includes:
- lazy logging: an output backend that groups messages together by a key, and only outputs them if some condition is met. For example, capture messages by request ID, and output all of them together if any one message is ERROR or higher.
- cluster logging: Twiggy will support easily settting up a master logging daemon to receive messages from multiple processes on a machine or across your cluster.
- unittest support: stuff the expected log output in your test docstring, apply a decorator, and Twiggy will add additional asserts to ensure your logs come out right.
- backends, backends, backends: email, HTTP, SQL, CouchDB, syslog, NT event log… Maybe even backends that open tickets in your bug tracker or stream live logs to your browser. Yeah.
What do you want?
Now is your opportunity to let me know what you want in a logger. Got a feature I haven’t thought of? Crazy idea? Think I should implement your favorite backend sooner? Tell me in the comments below.
Pete cooks, rides bikes and hacks Python. Maybe for you?. Don’t worry, he wears pants. | http://i.wearpants.org/blog/lognametwiggyinfowhats-new-whats-next/ | CC-MAIN-2013-48 | refinedweb | 761 | 64.81 |
In celebration of our king finding the village that was cheating him he decides to throw a celebration for the other 9 villages. In preparation for the celebration he orders 1000 barrels of the finest wine.
When the members of the uninvited village find out about the party they send an assassin to poison one of the barrels of wine. The poison takes 7 days to kill so the party guests won't realize what is happening for awhile.
However, after poisoning a random barrel the king's guard finds out and has the assassin executed. There is no time to order more wine so the king devises a genius plan to have his 10 loyal servants taste test the wine to find the poisoned barrel just in time for the party in 10 days. What is the plan that he devises so that he is left with 999 barrels of wine for the party?
As a bonus, RSS and Atom feeds are now up!
Update: Added number of days until party (10)
Jake - 6 years ago
Divide the 1000 barrels into two even groups of 500. Mix one drop of wine from each barrel in the subset of 500 into a cup and have a servant drink that cup. If he dies then the poison is in one of those 500 barrels, if he lives its in the other set of 500 barrels. Take the subset of barrels you know to contain the poison and repeat this process. It would at most kill all 10 men but after that the poisoned barrel would be obvious.
reply permalink
THOR - 6 years ago
test 10 barrels per day.
the poisonous barel will be know in the 7th day (best) or in the 107th day (worst case)
reply permalink
Oli - 6 years ago
For 3 days, each servant tastes 100 barrels: - Day 1: 1st servant tastes barrels 1-100, 2nd servant tastes barrels 101-200, ... , 10th servant tastes 901-1000 - Day 2: 1st servant tastes barrels 1-10, 101-110, ... 901-910, 2nd servant tastes barrels 11-20, 111-120, ... 911-920, ..., 10th servant tastes barrels 91-100, 191-200, ... 991-1000 - Day 3: 1st servant tastes barrels 1, 11, 21, ..., 991, 2nd servant tastes barrels 2, 12, 22, ..., 992, ... 10th servant tastes barrels 10, 20, 30, ..., 1000. Wait 7 days, and you will know which barrel contains the poison...
reply permalink
Anonymous - 6 years ago
Day 8: Servant 1 dies. Day 9: Servant 2 dies. Day 10: No servant dies.
Need more steps.
reply permalink
c2k - 6 years ago
Divide the wine up into 10 batches.
Have each servant take a sip from each batch. One of the servants will die. Isolate that batch.
You now have 100 barrels of wine and 9 servants left.
split up your barrels again, 8 batches of 11 barrels and 1 batch of 12 barrels. Mix'em on up, have the servants partake.
Again 1 servant will die. You are left with either a batch of 12 barrels or 11 barrels, worst case, let's go with 12.
So you have 12 barrels of wine and 7 servants left, not bad!
At this point you can be lazy and just start going down the line 1 by 1, eventually a servant will die and that is the barrel.
This is the most piss poor inefficient method in terms of time of course. You can do a lot better if you stagger days of consumption and note what day a servant dies on.
reply permalink
M - 6 years ago
Divide the barrels into groups of 1, 10, 45, 120, 210, 252, 210, 120, and 32.
Teh first barrel can be set aside untasted.
Let the ten servants taste one each of the next 10 barrels. Keep track of who taste what.
Then have two persons taste wine from each of the next 45 so that no two servants taste more than one barrel together. From combinatorics we have (10 choose 2) = 45.
Further (10 choose 3) = 120 so we can have a unike set of three servants drink from everyone the next 120 barrels.
Now, just continue this system. (10 choose 4) = 210, (10 choose 5) = 252, (10 choose 6) = 210, (10 choose 7) = 120.
We now have 32 barrels left. We let 8 servants drink from every one of the barrels, again such that the exact same combination of servants never drink from more than one barrel. ((10 choose 8) = 45, so some groups of 8 will have to pass, but I bet they are drunk enough anyway.)
Then we wait a week. If no one dies, the first barrel we put aside contains the poison. If three persons die, we find the one barrel that those three one no one else tasted. And so on. For every combination of servants that might die, there is exactly one barrel they and only they have tasted together.
reply permalink
ende76 - 6 years ago
This is a really good solution! I've made use of this – admittedly very common – concept of combinatorics as well, trying to motivate a way to arrive at an (optimal solution) that ends up killing at most 4 servants!
reply permalink
ende76 - 6 years ago
Sorry, I seem to have messed up (the link)[] in the previous post.
reply permalink
ende76 - 6 years ago
Sorry, I seem to have messed up the link in the previous post.
reply permalink
John - 6 years ago
Number the barrels 0-999. Convert each of these numbers to binary, and use the servants as the binary digits. For each barrel, have the corresponding 1 digits take a drink.
The loyal but dead subjects will tell you exactly what barrel is poisoned.
Solved in 7 days, just in time for the banquet...
However, a wiser king would use subjects of the uninvited town instead of his own loyal servants.
reply permalink
Max Burstein - 6 years ago
lol touché
reply permalink
toteto - 6 years ago
Nice one, never thought of this one.
reply permalink
Tim - 6 years ago
Here's some Java, solves within 7 days.
package com.tim.march;
import java.util.ArrayList; import java.util.Collection; import java.util.List; import java.util.Map; import java.util.TreeMap;
import junit.framework.TestCase;
public class Problem10 extends TestCase {
}
reply permalink
sean - 6 years ago
This works.
reply permalink
tim - 6 years ago
The real problem is getting the comments to load!
Joking aside, in the case of the event happening in seven days, the King asks each of his servants to sip from one wine barrel of their own, then pairs them up and each of them sip from a barrel unique to the pair. This combining and pairing of servants continues until all possible combinations of servants up to 7 have tried unique barrels, leaving 32 barrels to test with combinations of 8 servants and one barrel which no one touches.
At the end of seven days, which combination of servants drop dead (if any do, there's 1 barrel unsipped!) will show which barrel was poisoned.
That said, the king is a just man! If he has 8 days, he will, on the first day, only divide his servants into singletons, doubles, triples, quadruples, and 114 sets of five servants, making a unique 499 barrels per day and risking at most 5 servants' lives.
A tournament in 9 days would risk at most 4 servants, and so on and so on.
An interesting question would be, given a number of servants and the necessity that all of them may die in the process, how far away is the tournament? But I've procrastinated enough for one day, maybe this would make a good programming challenge for the future.
reply permalink
Anonymous - 6 years ago
Name the barrels according to their binary representation: 1 -> 00_0000_0001 8 -> 00_0000_0100
Have each servant assigned to one of the 'bits'. The servant samples the barrels with that bit. So servant 1 samples barrels 1,3,5... servant 2 samples barrels 2, 3, 6, 7, ...
After 7 days, based on the servants that die, you know which barrel was poisoned. If servants 1,3, and 5 die, then barrel 10101 = 21 was poisoned.
reply permalink
Amo - 6 years ago
I like this answer the best!
reply permalink
rankfocus - 6 years ago
The problem here with this binary representation is that it is not optimal. You are not taking into account that you have ten days, but the poison starts after seven days - meaning you have three days to use. You are unnecessarily killing servants. My solution below can use five or less servants.
reply permalink
John - 6 years ago
This is not actually true, because you will not know if day 1 was successful until day 7, so on day 2, you still have to have your servants drink. The only way to use extra time to save servant lives is if you have at least 7 extra days.
However, your algorithm does hold, only using 7 as the "extra iteration" number, instead of 1.
reply permalink
John - 6 years ago
Wish I could delete that. I was wrong.
reply permalink
FartsFTW - 6 years ago
Here's the solution written in MUMPS - Here, I hardcoded the poison to barrel#729 (PABPSN) and jerry-rigged this with the binary method another commenter mentioned. For non-mumps users, I essentially created a database called PABTMP that stored a decimal equivalent of the binary poisition in ten separate positions. (There might be a more practical way of doing this) If the poison is in barrel 729, the binary representation of that would be 10-1101-1001, meaning that persons (from left to right) 1, 3, 4, 6, 7, and 10 died for the king's cause.
BITS S ^PABTMP(1,0)=512 S ^PABTMP(2,0)=256 S ^PABTMP(3,0)=128 S ^PABTMP(4,0)=64 S ^PABTMP(5,0)=32 S ^PABTMP(6,0)=16 S ^PABTMP(7,0)=8 S ^PABTMP(8,0)=4 S ^PABTMP(9,0)=2 S ^PABTMP(10,0)=1 ; S PABPSN=729 S PAB1=0 S PABANS=0 ; F PABA=1:1 S PAB1=$O(^PABTMP(PAB1)) Q:PAB1="" D .S PAB1A=$P(^PABTMP(PAB1,0),U,1) .I ((PABPSN-PAB1A) >= 0) D ..S PABPSN=PABPSN-PAB1A ..S ^PABTMP(PABA,1)=1 ..Q .E S ^PABTMP(PABA,1)=0 .W $P(^PABTMP(PABA,1),U,1) .I ^PABTMP(PABA,1)=1 D ..S PABANS=PAB1A+PABANS .Q W !,PABANS K ^PABTMP,PABPSN,PABA,PAB1,PABANS,PAB1A
reply permalink
FartsFTW - 6 years ago
Sorry for the mess - Idk how to format on this page.
reply permalink
FartsFTW - 6 years ago
and now is when I give up. nobody can read mumps anyways :/
reply permalink
Max Burstein - 6 years ago
Never heard of mumps. Cool solution though. The site currently uses markdown so you can wrap your code in `s and it'll work. Here is some info on markdown formatting
I plan to move to a slightly more advanced markdown parser in the coming days to and prevent some of the weirdly formatted code.
reply permalink
Abomm - 6 years ago
This is a great question!
In my solution, I can actually guarantee that I save the lives of at least five (5) servants, and probably only four less servants will die while testing this wine. To do so, my clarification here is that upon tasting the wine, the servant dies in exactly seven (7) days. Thus, we have three days from the beginning where we can actually time the deaths of servants and analyze the situation.
My solutio works without this exact timing of death, but then we risk all ten servants' lives!
Here is what you do: Label the wine bottles from 1 to 1000. Each bottle will have a unique encoding of servants' names AND the day they drank it. Based on the which servants died (and did not die) and which day each person died at the end of the ten days, we can tell which unique bottle caused the poisoning.
Example: Servants: Allan, Bob, Charlie [A,B,C] We have three different days we can sample wines since we have to wait a week, so the days are [1,2,3]
One label could be [A3,B2,C2]. This means Allan tried this on day 1, Bob tried the same wine on day 2, and Charlie also tried this bottle of wine on day 2 (with Bob). If Allan dies on day 10 (which is a seven days after he sampled it), and Bob dies on day 9, and Charlie dies on day 9, we know it was the bottle with label [A3,B2,C2].
Another label could be ['',B1,C3]. Allan did NOT drink the wine at all (he is a blank), Bob drank it on day 1, and Charlie drank it one day 3. So, if Allan stays alive, Bob dies on day 8, and Charlie dies on day 10, the bottle with label ['',B1,C3] is poisoned.
This describes my encoding. So how do I generate all the possible legal encodings? Remember, you can have each servant sample the wine one or zero times, and the wine must be sampled on exactly one of the three available days.
Assume that we have N = 5 servants, so we have N lists, and each list has the double (2-tuple) of (servant name, day) and a blank. Each list looks like ['', A1,A2,A3 ], ... ['',E1,E2,E3] for servants A through E. We then just take the cartesian product of each of these N lists, which gives us the encodings I described above. There are a total of 1024 such encodings for N=5 servants and three days.
Below is the python 3 code to generate these encodings!
`import itertools
people = ['A','B','C','D','E'] ndays = 3
plist = [[str(p)+str(d) for d in range(1,ndays+1)] for p in people]
need to add blank spaceholder to each at beginning to save lives!
for lister in plist: lister.insert(0,'')
tot = 0 for p in itertools.product(*plist): tot += 1 print(p)
print("final count: {}".format(tot))`
reply permalink
Max Burstein - 6 years ago
Well explained. Thanks! Sorry about the weird formatting.
reply permalink
Sakington - 6 years ago
List a thru j ( these ten letters signify 10 people )
FIRST DAY each letter tries 100 barrels ( a tries 1-100, b tries 101-200, etc.)
SECOND DAY each letter tries 10 from the previous groups of 100 ( a tries 1-10, 101 -110, 201-210 and b tries 11-20, 111-120, 211-220)
THIRD DAY each tries 1 from from the previous groups of ten ( each person drinks 100 a day) (a tries 1,101,201 b tries 2,102,202 etc.)
wait 7 days
now say barrel 55 of the thousand are poisoned A DIES on day 7 F DIES on day 8 E DIES on day 9
hope this makes sense I did this on my phone but it works and only three should die...
reply permalink
Sakington - 6 years ago
Made a typo:
THIRD DAY a tries 1,11,21,31,41,101,111,121,131,141 etc. b tries 2,12,22,32,42,102,112,122,132,142 etc.
reply permalink
Duncan - 6 years ago
Represent each of the 1000 barrels as a 10 bit binary number. Assign each servant a bit and each servent drinks from all the barrels where their bit is 1. The dead servants will create a number unique to one barrel
reply permalink
ende76 - 6 years ago
I'd like to present a number of solutions, each one a little bit more optimized to save the lives of as many servants as possible.
Solution 1: Binary Digits for numbers 0-999Assign the numbers 0-999 to the barrels, consider the binary representation of those numbers, use the servants as binary digits.
After 7 days, the number that corresponds to the dead servants' digits points to the poisoned barrel.
####Analysis Now, this is a great solution. It's simple yet elegant, and solves the problem with time to spare. However, we'd like to optimize for lives saved, and there's definitely room for improvement on that front! In one of the worst cases – of which there are 5 (barrels #511, #767, #895, #959, #991) – 9 of our servants die. In 36 other cases, 8 servants will die. (Full distribution of deaths, starting with zero deaths: [ 1, 10, 45, 120, 210, 252, 208, 113, 36, 5 ] ).
Let's try to get those numbers down!
Solution 2: Using all the days!Disclaimer: I will assume that the servants have 4 days to sample the barrels, since the time limit is arbitrary anyway. This simplifies some partitions and numbers, conveying the ideas easily. If they really only have 3 days, the same ideas can be applied, albeit with some odd numbers and possible remainders.
A simple way to reduce the number of potentially dead servants is to reduce the number of barrels. However, we have to test all 1000 barrels, so we're stuck here.
Or are we? We could let our servants test only 250 barrels on the first day. So, we number the barrels 0-249, and use Solution 1. If we have dead servants 7 days later, we can identify the barrel just as in Solution 1 using binary digits.
On the second day, the servants test the second batch of 250 barrels. This time, we label those barrels 1-250. If we have dead servants 7 days after this test, we know the poisoned barrel was in this second batch.
We do this with two more batches of 250 each on days 3 and 4, using labels 1-250. If no servant dies at all, we know the poisoned barrel was #0 of the first batch.
####Analysis This solution is better already! By effectively reducing the number of barrels we have to encode each time to 250, we only require 7 servants to begin with, and can send 3 home to their families right away. If the poisoned barrel is in the first batch, the distribution of deaths over the number 0-249 is [ 1, 8, 28, 56, 70, 56, 26, 5 ], giving us 5 out of 250 cases where 7 servants die. If the barrel is in one of the later batches with labels 1-250, we get [ 0, 8, 28, 56, 70, 56, 27, 5 ], meaning that at least one servant will die, but still we only kill 7 servants in the worst case.
Solution 3: Pack those digits tight!Solution 2 looked pretty good, but it seemed odd that we would not make use of 3 servants. Somehow we should be able to improve our results by using all of our resources, right?
We will keep the batches of 250 barrels. But this time, instead of labeling them 0-249, we choose different labels. We will use: * 0, * 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, * 3, 5, 9, 17, 33, 65, 129, 257, 513, 6, 10, 18, 34, 66, 130, 258, 514, …
Now, why those magic numbers, and where do they come from?
Why: We select our numbers from the range 0-1023 (inclusive) so that the binary representation of a number uses as few 1-digits as possible. There's one number with zero 1-digits. There are 10 numbers with exactly one 1-digit. There are 45 numbers with exactly two 1-digits, …
In general, there are 10!/(k!(10-k)!) numbers with exactly k 1-digits. A quick calculation shows us that we can easily cover 250 barrels with numbers having at most four 1-digits!
####Analysis Some of our servants might feel ambivalent about this solution. Because with this approach, we don't send anyone home right away. For our own benevolent purposes however, this solution is great!
In the worst case, only 4 servants will die, and we even get to minimize the number of those cases in our Distribution Of Death!
For the first batch, where we can use the 0, we get a DoD of [1, 10, 45, 120, 74], meaning that in 74 out of 250 cases, 4 servants will die.
For the subsequent 3 batches, we get a DoD each of [0, 10, 45, 120, 75].
Epilogue
This is as good as it gets for this line of thinking. For 4 Days of Potential Death and 10 Willing Testers, we get a worst case of 4 deaths, with an overall distribution as outlined in the analysis above.
The number of days do matter, as do the number of servants.
Extremes: We could have 999 servants. Each servant would sample exactly one barrel, and after 7 days, either: * the dead servant corresponds to the poisoned barrel, or * nobody dies, and the unsampled barrel is the poisoned one. We have 1006 days to find the poisoned barrel. One servant could sample one of 999 of the 1000 barrels each day, and * if he dies, we know the poisoned barrel was the one he tested 7 days earlier, or * if he lives, we know the poisoned barrel was the one he did not test at all.
I'm sure those factors could be optimized for expectation of survival for the individual servants.
If someone can come up with a solution that uses even less than 4 servants in the worst case, or if I have made mistakes, I'd love to hear them!
reply permalink
ende76 - 6 years ago
Welp, apprently the site does not support as much markdown as the linked page suggests. It's easier to read in this gist
reply permalink
El Ninja - 6 years ago
Easy, with 10 people we get ten bits of address space that maps to the barrels (1 through 1023)
Note we only have 1000 barrels, knowing that actually guarantees that at least 1 will live.
Best case barrel #1 (1 dead)
Worst case barrel #991 (9 dead)
We order the people in a straight line (and for no reason we brand them with a letter A to J)
then in succession, every person with the positive bit in the binary representation of the barrel will have a drink.
$$ A B C D E F G H I J $$
$$ 0 0 0 0 0 0 0 0 0 1 = 001 $$
$$ 0 0 0 0 0 0 0 0 1 0 = 002 $$
$$ 0 0 0 0 0 0 0 0 1 1 = 003 $$
$$ 0 0 0 0 0 0 0 1 0 0 = 004 $$
$$ ----- All Up To---- $$
$$ 1 1 1 1 1 0 0 1 1 0 = 998 $$
$$ 1 1 1 1 1 0 0 1 1 1 = 999 $$
Now we only wait the 7 days, and we will get our binary representation of the barrel that was poisoned.
reply permalink | http://www.problemotd.com/problem/kings-wine/ | CC-MAIN-2020-16 | refinedweb | 3,821 | 79.4 |
Asyncio middleware for SQLAlchemy
Project description
The aiopg and aiomysql projects allow to operate asynchronously with PostgresQL and MySQL, respectively, even thru SQLAlchemy. Their approach isn’t ideal, though, because they reimplement a considerable amount of SA low level stuff with undesirable glitches.
The Twisted based Alchimia way is much lighter and even if maybe slightly less performant it does not introduce unexpected surprises.
Usage
Basically the module wraps a minimal set of SA classes (currently just Engine, Connection, Transaction and ResultProxy) into asyncronous counterparts, and you work on them like the following example:
from asyncio import coroutine from metapensiero.sqlalchemy.asyncio import create_engine @coroutine def do_something(db_url): engine = create_engine(db_url) with (yield from engine.connect()) as conn: with (yield from conn.begin()) as trans: yield from conn.execute(users.insert() .values(id=42, name="Async",)) res = yield from conn.execute(users.select() .where(users.c.id == 42)) rows = yield from res.fetchall() res = yield from conn.execute(users.delete() .where(users.c.id == 42)) assert res.rowcount == 1
If you are using Python 3.5 or better, the above should be written as:
from metapensiero.sqlalchemy.asyncio import create_engine async def do_something(db_url): engine = create_engine(db_url) async with await engine.connect() as conn: async with await conn.begin() as trans: await conn.execute(users.insert() .values(id=42, name="Async",)) res = await conn.execute(users.select() .where(users.c.id == 42)) rows = await res.fetchall() res = await conn.execute(users.delete() .where(users.c.id == 42)) assert res.rowcount == 1
Tests
To run the unit tests, you should:
create a Python virtual environment and install this package in development mode:
python3 -m venv env source env/bin/activate python setup.py develop
install pytest, pytest-asyncio and either psycopg2-binary or pymysql:
pip install pytest pytest-asyncio psycopg2-binary pymysql
create a test database, for example with createdb testdb
execute the py.test runner with an environment variable with the SA URL of the db:
TEST_DB_URL="postgresql://localhost/testdb" py.test tests TEST_DB_URL="mysql+pymysql://localhost/testdb" py.test tests
Changes
1.0 (2018-07-01)
- Renamed to metapensiero.sqlalchemy.asyncio
0.4 (2015-09-25)
- Packaging tweaks
0.3 (2015-09-23)
- Support Python 3.5 asynchronous context managers
0.2 (2015-09-09)
- First (usable) distribution on PyPI
0.1 (private)
Works reasonably well!
0.0 (private)
Initial effort.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/metapensiero.sqlalchemy.asyncio/ | CC-MAIN-2019-04 | refinedweb | 419 | 53.17 |
Wake On … IR
One of my old computers is used as a file backup server in my basement office. It is kept off most of the time and only powered on when I needed to sync up files. Unlike most of the computers nowadays, this old DELL computer does not have a Wake On LAN (WOL) compatible network card. And thus, it makes turning on the computer less than convenient.
So, I thought why not build a remote control for it myself? I quickly put together a circuity (see below) and with a few lines of code, I was able to control the rely using one of my spare remote controls within several feet.
The relay terminals (1 and 2) connect in parallel to the main power switch on my computer.
Here is the Arduino code listing:
#include <IRremote.h> #include <TimerOne.h> int RECV_PIN = 11; int RELAY_PIN = 10; int LED_PIN = 12; boolean signaled = false; unsigned long counter = 0; IRrecv irrecv(RECV_PIN); decode_results results; void setup() { pinMode(RELAY_PIN, OUTPUT); digitalWrite(RELAY_PIN, LOW); pinMode(LED_PIN, OUTPUT); //polling every 200 ms Timer1.initialize(200000L); Timer1.attachInterrupt(callback); Serial.begin(9600); irrecv.enableIRIn(); // Start the receiver } void loop() { if (irrecv.decode(&results)) { signaled = true; if (signaled) { Serial.println("ON"); digitalWrite(RELAY_PIN, HIGH); digitalWrite(LED_PIN, HIGH); counter = 0; } irrecv.resume(); // Receive the next value } } void callback() { if (signaled) { //blink the LED counter++; digitalWrite(LED_PIN, counter % 2); } //hold the relay for 2 seconds if (counter > 10) { signaled = false; counter = 0; digitalWrite(RELAY_PIN, LOW); digitalWrite(LED_PIN, LOW); Serial.println("OFF"); } }
I used a modified Ken Shirriff’s IR remote control library. The simple modification I made (see code below) allows me to use any of the keys on the the remote control I have to turn on and off the computer.
#define DAW 5 #define DAW_BITS 17 #define DAW_HDR_MARK 8000 long IRrecv::decodeDAW(decode_results *results) { long data = 0xf; int offset = 0; if (!MATCH_MARK(results->rawbuf[offset], DAW_HDR_MARK)) return ERR; offset++; // Success results->bits = DAW_BITS; results->value = data; results->decode_type = DAW; return DECODED; } | http://www.kerrywong.com/2011/03/27/wake-on-ir/ | CC-MAIN-2019-04 | refinedweb | 336 | 54.42 |
US20010037281A1 - Request for quote (RFQ) system and method - Google PatentsRequest for quote (RFQ) system and method Download PDF
Info
- Publication number
- US20010037281A1US20010037281A1 US09808137 US80813701A US2001037281A1 US 20010037281 A1 US20010037281 A1 US 20010037281A1 US 09808137 US09808137 US 09808137 US 80813701 A US80813701 A US 80813701A US 2001037281 A1 US2001037281 A1 US 2001037281A1
- Authority
- US
- Grant status
- Application
- Patent type
-
- Prior art keywords
- quote
- method
- carrier
- customer
- broker
- system and method for conducting an electronic auction has one or more carriers submitting one or more price quotes for goods or services in response to a request by a consumer who desires to purchase the goods or services.
Description
- The present invention relates to an electronic system and method, and more particularly to a new system and method for an on-line auction in which certain selected carriers of goods and services can compete with one another to provide their best price quote or best rate-of-return quote in response to a request from a customer who is interested in purchasing the carriers' goods or services. [0001]
- Dealers of goods and services are continually seeking markets for their products. Consumers who desire these products are always looking to be provided the best financial terms, i.e. the best “deal” on the product. Until recently, comparing prices and terms was almost always a tedious and arduous affair known as “comparison shopping.” Even traditional brokers, or middlemen, were of little help to the consumer. These agents, while often having ready access to a large number of dealers and suppliers of products, often did not have access to a reliable, easy system by which they could quickly obtain the most competitive prices for the consumer. [0002]
- On-line shopping has revolutionized the way consumers and merchants do business with the advent of such services as priceline.com as described in U.S. Pat. No. 5,897,620. According to this method, the consumer names a price that he or she is willing to pay for a good or service. Interested merchants then notify the consumer that the product is available at the asking price, and a deal is consummated. If the price set by the consumer is not at or above a certain minimum, then the merchant does not provide the product. Unfortunately, the consumer does not know this when he places his bid. Therefore, there is always the risk that the consumer will bid higher than the price for which the product could still profitably be sold. The consumer thus may not obtain the product at the best possible price. The consumer's best interests are therefore not always protected by this method of shopping. [0003]
- In addition, in the financial services sector there are commercial entities such as banks which often seek to purchase life insurance policies on their employees. Such policies are termed BOLI, or bank-owned life insurance. The bank sells a portion of its investments on the open market, and then purchases BOLI using the proceeds to fund the premiums which are then paid to a BOLI carrier. The BOLI carrier in turn then insures a group of the bank's employees under a specially designed BOLI plan. The bank owns the policies, the cash values and receives the death benefits. The BOLI policies also create additional income to the bank, as the funds (premiums) are deposited by the BOLI carrier in one or more revenue-generating media, and thereby earn the bank an annual rate of return on its funds. These yields may be tax-advantaged. This increases the bank's annual net income and earnings per share, and the bank can utilize this money to finance its employee benefit plan costs. According to federal regulations, a bank is permitted to purchase enough BOLI to cover the cost of its employee plan expenses. It is therefore in the bank's interest to obtain the highest possible yield, or highest rate of return, on its BOLI premiums, or at least a certain level of return which will allow it to cover its benefit plans' costs. [0004]
- Thus, there is a need in the art for a better system and method by which a customer can obtain the best possible price on-line for a particular good or service. There is also a need for a system and method for a commercial customer, such as bank, to obtain the best rate of return on an insurance plan such as BOLI. The system and method should substantially eliminate haphazard guessing by the customer as to what is the best price or best rate of return, etc. [0005]
- The invention according to a first embodiment provides a method for a consumer to obtain a price quote for a product on-line. According to this method the consumer submits a request for a price quote on a certain product to an electronic staging area, wherein the quote desirably includes at least one product specification. The method further involves at least one carrier, in turn, submitting at least one quote to the consumer via the staging area in response to the request for a quote. Preferably, the request for quote is forwarded to at least two carriers who compete with one another during a specified auction period to provide the consumer with the best price quote for the product. [0006]
- Also provided as part of the invention is a method by which a carrier, and preferably at least two carriers of a certain product can provide a price quote for the product in response to a request for a quote from a consumer. At least one carrier submits a first price quote to a staging area for review or screening. A second carrier can then submit a second price quote to a staging area. The consumer can then decide which of the price quotes is the most competitive. [0007]
- In another embodiment of the invention, there is provided a method of brokering a transaction on-line. The method first comprises displaying in an electronic staging area at least one request for a price quote from a consumer for a product the consumer is interested in purchasing. Next, a quote is submitted from a carrier who supplies the particular product, and this quote is forwarded to the staging area for viewing by the consumer. Preferably, the request for a quote is forwarded to at least two carriers of the product who then compete with one another to provide the consumer with the best price quote via the electronic staging area. [0008]
- In yet another embodiment of the invention, there is provided a system for conducting an on-line auction. The system includes a broker interface which monitors and controls an electronic staging area. The electronic staging area displays requests for price quotes from consumers who are interested in purchasing a product, and also displays the price quotes received from one or more carriers who sell the desired product. [0009]
- Also provided as part of the invention is a method for a broker to conduct an on-line auction. The method entails electronically pre-registering at least one customer who is interested in obtaining a competitive price quote on a product, as well as pre-registering at least two carriers of the subject product. The method also involves establishing a time for the on-line auction. During the auction period, the broker will have the customer submit a request for a price quote on the product, and then will have at least one of the carriers submit a first price quote in response to the customer's request. The first price quote is then posted by the broker for viewing by the customer and/or by the carriers participating in the auction. Next, the broker will have a second carrier submit a second price quote in response to both the customer's request and the first price quote. This second price quote is then posted by the broker for viewing by at least one of the parties. The second price quote is desirably more competitive than was the first price quote. [0010]
- Further provided as part of the invention is a method for competitively quoting a rate of return on funds deposited with a bank-owned life insurance (BOLI) policy plan. The method involves pre-registering at least one financial institution that is seeking to male a purchase of BOLI and that desires to receive a competitive quote on the rate of return from its premiums paid into BOLI. The method also involves pre-registering at least two carriers of BOLI. An auction time and period is also established. The financial institution is then invited to submit a request for a quote during the auction, wherein the financial institution forwards its quote during the auction period. A first carrier is invited to submit a rate of return in response to the request, wherein the carrier then forwards its quote where it is posted in an electronic staging area. A second carrier is invited to submit a rate of return in response to the request and to the quote submitted by the first carrier, wherein the second carrier forwards a second quote which is also posted in the electronic staging area. The second quote should be more competitive than the first quote. [0011]
- The invention is also directed to an electronic system useful in conducting an on-line auction for rates-of-return on finds deposited in bank-owned life insurance (BOLI). The system includes a broker-controlled staging area for displaying requests by financial institutions for rates of return on BOLI funds, and for displaying responses received to the requests during an on-line auction. The staging area is in communication with a broker interface. [0012]
- Additional advantages and features of the present invention will become more readily apparent from the following detailed description and drawings which illustrate various embodiments of the invention. [0013]
- FIG. 1 depicts a block diagram of a request for quote (RFQ) system. [0014]
- FIG. 2 is an exemplary flow chart depicting an RFQ process and auction using the system of FIG. 1. [0015]
- FIG. 3 is a sample census data form for use as part of the method shown in FIG. 2 according to one embodiment of the invention. [0016]
- FIG. 4 is a schematic representation of step [0017] 1113 shown in FIG. 2 according to one embodiment of the invention.
- FIG. 5 is a first chart with real-time postings of rates of return on BOLI as part of the cycle of steps [0018] 1130 through 1160 shown in FIG. 2 according to one embodiment of the invention.
- FIG. 6 is a second chart with real-time postings of rates of return on BOLI with various named carriers.[0019]
- Referring now to the drawings, FIG. 1 illustrates a request for quote (RFQ) system [0020] 100. Shown in FIG. 1 as part of the system 100 is a customer interface 110, a broker interface 120 which contains an RFQ staging area 130, and carrier interfaces 140, 240, and n40, corresponding to the number of carriers who have access to the RFQ system.
- As that term is used herein, “customer” may be used interchangeably with “consumer” and shall refer to any individual, group, business, entity or entities which is interested in purchasing at least one product or service at the most competitive price. The most competitive price can refer to the best possible price or to the lowest possible price, but can also mean the best rate of return, for example, on funds which the customer seeks to invest. The most competitive price can also be subjective, meaning whatever the customer thinks it is according to his/her best judgment. In a preferred embodiment of the invention, the customer would represent a bank or other financial institution, either individually or collectively with other banks (e.g. as a “pool”), that was interested in obtaining the most competitive rate of return on a bank-owned life insurance (BOLI) plan. [0021]
- The customer interface [0022] 110 shall include all means which allows the customer to utilize the RFQ system and method. Preferably, the customer interface 110 shall be an electronic medium, and more preferably shall include a website on the internet accessible by a computer. For purposes of clarity, FIG. 1 shows one customer interface 110, but it is to be understood that the system 100 would preferably be configured to have as many separate customer interfaces n10 as there were different customers who desired to participate in the RFQ process and submit requests for quotes on-line.
- In addition, “broker” shall refer to any entity which can control and direct the RFQ system and method, including the auction as hereinafter described between the customer and one or more carrier(s). Broker shall also mean a consultant or middleman in the traditional sense. The broker interface [0023] 120 shall include any means by which the broker can access, control and direct the RFQ system and method, including the RFQ staging area 130, and thus encompasses the means by which the broker can coordinate the RFQ activities between the customer and the carrier(s). Preferably, this shall include an electronic medium such as a computer, along with the available databases, hardware and software that will program and maintain the RFQ system and method. The RFQ staging area 130 shall refer to any venue at which the activities associated with the RFQ system and method may be staged. Preferably, the staging area is an electronic venue, e.g. internet website, which is accessible to the customer and the carrier(s), as well as to the broker.
- The term “carrier” as used herein shall refer to any dealer or supplier of any goods or services, including suppliers of financial services, e.g. insurance services and investment products. In a preferred embodiment of the invention, the carrier is an insurance carrier that specializes in BOLI. The carrier interface [0024] 140 shall include all means by which the carrier(s) can utilize the RFQ system and method. Like the customer interface 110, this is preferably an electronic medium such as a website on the internet accessible by a computer or similar device.
- Line [0025] 112 in FIG. 1 represents a means of communication, preferably an electronic link or modem, between the customer interface 110 and the broker interface 120, and between the customer interface 110 and the RFQ staging area 130. Line 112 could therefore represent more than one link. Lines 132 a, 132 b and 132 n each represent at least means of communication, e.g. electronic link or modem, between carrier interfaces 140, 240, and n40, respectively, and the broker interface 120 and the RFQ staging area 130. Optionally, the system 100 shown in FIG. 1 could be configured so that communications means, e.g. modem links or the link, could exist between the customer interface 110 and the carrier interfaces 140, 240, n40 as well, which in certain embodiments could obviate the need for a broker, broker interface 120 and an electronic staging area 130. In these embodiments, the respective customer and carrier interfaces would function as electronic staging areas. In addition, the entire system 100 is also shown without the attendant available hardware, e.g. computer screens, and software programs which would otherwise be inherently configured therein, and which is otherwise available to the skilled artisan.
- Prior to initiating an actual RFQ process, a customer would first indicate his desire to be a participant in the RFQ system and process by pre-registration. This would be done by first accessing the system [0026] 100. Preferably, this would involve directly logging on to the appropriate web site(s) which may be designated “www.(name(s) of website).com”, via a computer at the customer interface 110. Access could also be gained via electronic links, e.g. hyperlinks, from other websites. The first step in pre-registration would preferably involve the customer providing relevant basic data about himself and where applicable, his/her company or organization, by entering the information in designated spaces on a webpage contained within the website at the interface 110. Such information could include, for example, individual surname and/or company name, place of business - including street address, city, state, zip code, telephone number(s) and the like. Even more detailed information could include relevant financial data, such as capital assets, liabilities, tax structure and the like pertaining to the customer's business or organization.
- The basic data would be electronically transmitted to the broker interface [0027] 120 via communication line 112 and could be used to establish a “profile” for the customer which would be stored in a database contained within the interface 120. An optional feature of the RFQ system and method would be the availability of the broker via the broker interface 120 (on-line) during the customer pre-registration process to consult with the customer and thereby address any questions that the customer may have about the RFQ process, etc.
- As a further part of pre-registration, the customer would also preferably be required to agree to any posted terms and any legal disclaimers relating to the RFQ process as might be established by the broker via the interface [0028] 120. Upon receipt of the customer pre-registration information at the broker interface 120, the broker would then issue the customer a unique password via line 112 to the customer interface 110. Subsequent access to the system 100 necessary to initiate the RFQ process, hereinafter described, would typically be attained through use of the unique password provided to the customer. The RFQ system 100 would preferably be configured so that the RFQ process could only be initiated by a password. As those skilled in the art will recognize, the password could also be issued prior to the customer providing any relevant data about itself, its organization, or its employees.
- Also prior to the start of an RFQ process, one or more carriers would indicate their desire to participate in the RFQ system and method by pre-registration at each of the respective carrier interfaces [0029] 140, 240 and n40. Pre-registration for each carrier would be done in substantially the same manner as set forth above for the customer, including supplying relevant basic data such as company name, place of business, state of incorporation, goods and services the company deals in, and the like. In a preferred embodiment of the invention, each carrier would also furnish additional information or “financial data” about itself, which could include number of years in business, capitalization/assets, and liabilities etc. In another desirable embodiment, wherein the carrier was a financial services or insurance entity and was a purveyor of a BOLI plan, one or more of the carrier's most recent ratings, e.g. A++, as established by one or more rating companies such as Moody's, A.M. Best's, etc. could be supplied as part of the pre-registration data. Both the basic data and the financial data would be electronically transmitted to the broker interface 120 via line 112. This information could be used to establish a “profile” for each carrier which would be stored in a database contained within the interface 120. As a further part of pre-registration, the carrier would also preferably be required to agree to any posted terms and any legal disclaimers relating to the RFQ process as might be established by the broker at the interface 120. The broker would then issue each carrier a unique password via lines 132 a, 132 b and 132 n from the broker interface 120 to each of the respective carrier interfaces 140, 240 and n40. Subsequent access to the system 100 necessary to participate in the RFQ process, hereinafter described, would typically be attained through use of the unique password provided to each carrier. In a preferred embodiment of the invention, the system 100 could be further configured, e.g. via available software at the broker interface 120, so that the broker could pre-screen a potential carrier before issuing it a password. In this way, the broker could ensure that only carriers meeting certain minimal financial requirements, for example those having adequate capitalization, would be participants in the RFQ process. As with the customer, the password for the carrier could also be issued prior to the carrier providing salient information about itself.
- Referring now also to FIG. 2, the RFQ process is described in further detail. The customer would interface the system [0030] 100 via the customer interface 110 to initiate the RFQ process and auction 1000. After pre-registration as set forth above, the customer would log on to the designated website and enter in his password to begin the RFQ process as shown in step 1010. As part of step 1010, the customer would also enter in his request for quote or “RFQ” by indicating a product he wished to receive a price quote on, along with any optional specification(s) related to the product. The product could be selected from any number of goods and even services. Preferably, the customer's RFQ would pertain to banking and insurance services, such as a rate of return on funds deposited through a BOLI policy or other financial vehicles. Thus, the customer would be seeking the best rate of return on its deposited funds when submitting an RFQ on a BOLI plan.
- Referring now also to FIG. 3, optional product specifications as part of step [0031] 1010 could include any number of descriptive parameters which the customer could use to further define the product. For tangible goods, this information could include size, color, model, geographic origin and the like. In a preferred embodiment of the invention where the customer was a bank or other financial institution interested in purchasing BOLI, then the information could include whether or not the customer had ever purchased BOLI before. The customer would also provide additional information known as “census data” at the interface 110. Census data would include actuarial information and statistics about the customer's employees, such as age, sex, marital status, number of dependents and income etc. This census information would in turn be useful for an insurance carrier in providing quotes and/or rates of return for one or more BOLI life insurance policies on the customer's employees. FIG. 3 illustrates a sample census data form to be completed by the customer. In addition, for services such as BOLI, a product specification could include a guaranteed number of years on a rate of return.
- As a further component of the RFQ process in step [0032] 1010, the customer could also optionally provide some sort of “product usage” information along with the RFQ. The product usage information could include certain statistics such as the estimated quantity of the product(s) that the customer may have used within a certain time period, etc. The system 100 could also be configured so that the customer could successively request quotes for any number of additional products via the interface 110. As a final component of step 1010, the customer would then transmit its RFQ from the customer interface 110 to the broker interface 120 via line 112, which as previously set forth is preferably any electronic link, e.g. modem link.
- As shown in step [0033] 1020, upon receipt of the RFQ via the broker interface 120, the broker would preferably review the customer's RFQ to ascertain that all requisite information had been supplied. This could be done, for example, with appropriate available software installed at the broker interface 120 that would ascertain that all “required fields” had been entered. Upon receipt and optional review of the RFQ, the broker would submit a list of suitable, pre-registered carriers to the customer at the interface 110 via communications link 112. As an example, if the customer submitted an RFQ for Product A, and the carriers of Product A included pre-registered companies BCD, EFG and HIJ, then the broker could so notify the customer as shown in step 1020. If all carriers were acceptable to the consumer, then he would so indicate via step 1030. Alternatively, if the customer did not wish company BCD or any other carrier, for example, to participate in the RFQ process and auction, then he would indicate as such to the broker via step 1035. As part of step 1035, the customer would thus effectively choose the carriers to participate in the RFQ process and subsequent auction. Alternatively, the customer could affirmatively select the desired carriers from a listing while initiating the RFQ process in step 1010 above.
- As a further option of the RFQ process illustrated in steps [0034] 1040, 1050 and 1060, the customer could also indicate to the broker a “ceiling quote”, or a maximum price above which the customer would not want to receive a quote on the desired product. In another embodiment of the invention, the customer could indicate to the broker a “floor quote”, or a price below which the customer would not want to receive a quote on a certain product. The customer might wish to establish a floor quote when requesting a quote on rates of return for life insurance premiums paid as part of a BOLI plan, for example, of say “6.47% annually for a term of X years.” This would mean that the customer, e.g. a bank, would not accept a rate of return lower than the stated amount and thus would not be interested in any BOLI carrier whose quote was below that minimum, or floor quote. The broker interface 120 could then be programmed by the broker to automatically exclude any quotes greater than or above the ceiling quote, or any quotes less than or below the floor quote, as the case may be, from the RFQ process and auction. In this way, the broker interface 120 would essentially pre-screen all quotes, preferably using available software, to ensure that each was at or below the optional specified ceiling quote, or alternatively, was at or above the optional specified floor quote, prior to posting in the staging area 130, as hereinafter described.
- As shown in step [0035] 1070, the broker via the broker interface 120 would confirm receipt of the customer's RFQ and provide a proposed time and date for the RFQ Phase I qualifying period and/or Phase II auction, hereinafter described, to the customer at its interface 110 via line 112. Preferably, only the date and time for the Phase I qualifying period would be proposed in step 1070. As shown in steps 1080 and 1090, the customer, in turn, would confirm his approval of the RFQ Phase I date and time. Alternatively, as shown in step 1085, the customer would interact with the broker to establish a mutually agreed upon date and time. As a further alternative in step 1085, the customer would simply choose the RFQ Phase I and/or Phase II auction date from a listing provided by the broker. As an example, an RFQ bidding date and time could be established as dd/mm/yy from 9:00 am to 5:00 pm EST, or as otherwise mutually agreed upon by the customer and broker through the cycle of steps 1080, 1085 and 1090.
- Referring now to step [0036] 1 100 in FIG. 2, the customer's RFQ would be posted in the staging area 130 of the interface 120 by the broker, and the carriers which had been selected to participate in the RFQ process would be notified of the RFQ Phase I and/or Phase II bidding date and time established above via lines 132 a, 132 b and 132 n, respectively. and would be able to view the actual submitted RFQ through access to the staging area 130. Notification could optionally be done anonymously so that the carriers would not know the customer's identity. Preferably, notification would be partially anonymous, with the carriers not knowing the exact identity of the customer, but still receiving “census data” about the customer or its employees. This embodiment would be especially desirable where a customer had submitted an RFQ on BOLI insurance policy prices and/or rates of return, for example. The carrier would require the “census data” or actuarial data about the employees (age, whether a smoker, etc.) in order to formulate its best quote on a rate of return on the premiums deposited through BOLI. In addition, the system could optionally be configured so that carriers which had not been selected to participate in the RFQ process could also be notified, and preferably in a different manner than those carriers which had been chosen.
- At step [0037] 1110, either the customer and/or the broker would decide whether to proceed directly to the auction phase of the RFQ process as shown in step 1120 (“Phase II”), or preferably would request an optional initial quote from each of the selected carriers through the Phase I qualifying period illustrated in steps 1113, 1116 and 1119 before proceeding to the Phase II auction. As shown starting with step 1113, each of the selected carriers would transmit an initial quote in response to the RFQ on the date and during the time period which had been specified in step 1070. An optional feature of the system could include a quote matrix and/or notes field by which each carrier could qualify/describe its initial quote or include product specifications as part of step 1113. This initial quote, along with any optional conditional terms, qualifications or product specifications (quote matrix), would then be transmitted via line 132 to the broker interface 120. Each initial quote from each of the carriers could be transmitted successively or simultaneously, but in any event it is preferred that each carrier would submit its initial quote without knowing another carrier's quote. In this way, each carrier is encouraged to submit a reasonably competitive quote right from the start. Each initial quote would then be posted by the broker, preferably electronically, in the staging area 130 for viewing by the customer, and if desired, by the other chosen carriers as well. The quote could also be forwarded directly to the customer at the customer interface 110. An optional step would comprise the broker first pre-screening or reviewing the quote to ensure that it was within preestablished guidelines, e.g. as regards any specified ceiling quote or floor quote as heretofore described, before posting to the staging area 130 or forwarding it directly to the customer.
- Referring now also to FIG. 4, there is illustrated a schematic representation of step [0038] 1113 for a BOLI transaction. Next to each of the carrier interfaces 140-540 is shown a sample percentage rate of return (%) which each carrier 1-5, respectively, has submitted in response to an RFQ from a customer, e.g. a bank, that was interested in purchasing BOLI. These percentages could represent, for example, annual rates of return on the funds (premiums) which the bank as customer was seeking to deposit in a BOLI plan. Each rate of return along with the carrier's name (and optional quote matrix/product terms or specifications) would be posted by the broker in the RFQ staging area 130 for viewing by the customer at the customer interface 110.
- In a further optional embodiment of the invention starting with step [0039] 1113 in FIG. 2 and then proceeding to step 1115, each carrier after submitting its initial quote may be allowed to update its quote after viewing the quotes submitted by other carriers. In this way, each carrier would be permitted to provide a more competitive quote during the Phase I stage of the RFQ process before proceeding to step 1116 below. Phase I of the process thus becomes a preliminary or qualifying auction stage of the RFQ process according to this embodiment.
- In step [0040] 1116, the customer would then select and transmit to the broker interface 120 via line 112 a “slate” of finalists, or carriers who had submitted the n best initial quotes in step 1113. As an example, if the customer had chosen seven carriers to submit initial quotes, and after step 1113 only felt that four of those carriers had submitted acceptable bids and should therefore compete against one another in the auction phase (“Phase II”) of step 1120, then he would so indicate in step 1116. Optional interactive technology as part of the broker interface 120 and linked to the customer interface 110 via line 112 could allow the customer, before making its final choice of carriers, to query the broker as to any qualifications or explanations which may have been provided with one or more of the initial quotes. At step 1119, the broker would notify each of the “finalist” carriers that it had been selected to participate in the auction phase (“Phase II”) of the RFQ process by posting the customer's choices in the staging area 130 for viewing via lines 132 a, 132 b and 132 n. (Alternatively, the customer could simply select one carrier in step 1116 and go with that carrier's quote on the product, obviating the need to proceed to Phase II altogether.) If a date and time for the Phase II auction period had not already been established, then as part of step 1119 the broker with input from the customer and/or carriers would establish a Phase II auction period.
- The RFQ process then would continue to the auction phase (“Phase II”) such that the broker as shown in step [0041] 1120 would initiate the auction on the bidding date and at the time which was established in steps 1070 through 1090, or as part of step 1119. Also as part of step 1120 the broker would notify the customer at the customer interface 110, and the selected carriers via the carrier interfaces 140, 240 and n40 that the Phase II auction had started. Notification would occur via the links 112 and 132 series, respectively.
- At step [0042] 1130 in the process, the Phase II auction phase would preferably commence with a posting by the broker in the staging area 130 of the successful carriers and their respective quotes that had emerged from Phase I. Thereafter, the auction itself would commence when a first carrier would transmit a first or opening quote at its interface 140 in response to the posted RFQ from step 1100. If the Phase I series of steps 1113, 1116 and 1119 had previously been taken, then preferably the opening quote at step 1130 would be different, e.g. more competitive, than the initial quote submitted in step 1113 as part of Phase I
- In step [0043] 1140, the quote would be received by the broker at the broker interface 120. An optional step as previously mentioned would comprise the broker pre-screening or reviewing the quote. This would again ensure that the quote was within pre-established guidelines, e.g. as regards any specified ceiling quote or floor quote as heretofore described. (Optionally, a carrier whose quote did not meet pre-established guidelines could then be electronically notified that its quote was not acceptable and/or could be further advised to then re-submit another quote.) As part of step 1140, the broker would then post the prescreened quote to the staging area 130. Preferably, this would be done electronically via the broker interface 120. The RFQ system and process is desirably configured so that this quote could then be viewed at the customer interface 110, as well as at the other carrier interfaces 240 and n40. (Alternatively, the quote could be transmitted directly to the customer interface and/or the carrier interfaces without going through the staging area 130.) After the first quote was posted, a second carrier could choose to submit a second quote as shown in step 1150, provided sufficient time remained in the RFQ auction phase as shown in step 1160. If there was no time remaining in the auction period, or if no carrier wished to submit a second quote, then the broker in step 1165 would post the auction results in the staging area 130 at the broker interface 120 for viewing by all parties via the links to the respective interfaces as heretofore described. The RFQ process and auction would then end. More preferably, however, a second carrier would submit a second quote as shown again in step 1130. This second quote could either be the same or depending upon the product, would be a “better” or more competitive quote than the first quote. This second quote could either be higher (e.g. rate of investment return), or could be lower (e.g. price for a sofa) than the first quote submitted by the first carrier. As shown again in step 1140, this second quote would also be routed from carrier interface 240 via line 132 b to the broker interface 120 where it would be posted at the staging area 130 for viewing by the customer and the other carriers via the lines as set forth above. (Alternatively, the quote could be transmitted directly to the customer interface and/or the carrier interfaces without going through the staging area 130.)
- In a preferred embodiment of the invention, the RFQ system could be configured so that at least one additional carrier must submit a quote in response to another quote within a specified time period in order for the auction phase to continue, or in order for the additional carrier(s) to remain in the auction. For example, if a first carrier submitted its quote at 9 a.m., then a second carrier would preferably have a fixed time period, for example, one hour, within which to submit a second quote. In this way, the RFQ process would become a live on-line auction, with each carrier bidding competitively against all other selected carriers to deliver its best or most competitive quote for a product desired by the consumer. [0044]
- Referring now also to FIGS. 5 and 6, successive quotes would then be submitted by the same or additional participating carriers and posted by the broker using the cycle of steps [0045] 1130, 1140, 1150 and 1160. Preferably, the only limitations on the number of quotes which could be submitted during the RFQ auction phase would be the time limitation which had previously been approved in steps 1070 through 1090. Another optional feature of the system and process could be notification to either the customer or the carrier(s), or preferably both, during the auction phase of the time remaining until the end of the auction. FIGS. 5 and 6 represent real-time charts for posting each carriers' most recent quotes for BOLI rates of return for years 1-30 in the RFQ staging area 130. As can be seen from the chart in FIG. 5, carrier 3 has currently submitted the highest quotes during the Phase II auction. At a further point in time during the auction phase not shown in FIG. 5, carriers 1 and 2 could update their quotes through the cycle of steps 1130 - 1160 so as to exceed the quotes provided by carrier 3. These higher quote(s) would then be reflected in the chart shown. FIG. 6 is a more detailed view of the RFQ Phase II auction quotes in which actual BOLI carriers are shown in the left-hand column.
- An optional feature of the invention would allow the customer to sort the quotes according to one or more parameters, such as for example, highest to lowest quote, by carrier, or by any other available parameter. Another optional feature would allow the customer either before, during or after the auction phase to “click” on one or more of the carriers to gather information about them. In the case of a BOLI carrier, the information could include the carrier's financial rating, how much BOLI they have sold, etc. The information would preferably be formatted for easy viewing by the customer. [0046]
- As shown in step [0047] 1165, the auction phase and RFQ process would be terminated by the broker after all carriers had finished submitting their quotes, and/or the designated time for the auction had expired. As part of step 1165, there would be a posting by the broker of all the quotes submitted, which preferably would include the best or most competitive quote in the staging area 130. This “winning” quote could thus be viewed at customer interface 110 and at the respective carrier interfaces 140, 240, n40 etc. in the manner as heretofore described using the configuration and system of FIG. 1. As part of step 1165, other optional statistics could also be posted by the broker with the winning quote, preferably in summary format. These could include, for example, all quotes received, the median quote, the total number of quotes, and any optional condition terms or qualifications included with a quote by any of the participating carriers. An optional further step not shown in FIG. 2 could include notification by the customer to the “winning” carrier, indicating that the customer had selected that carrier's most competitive quote from the just-ended RFQ process auction. Most likely, the customer would select either the lowest quote or the highest quote as the most competitive quote, depending on the good or service, but could choose to select a “middle” quote, for whatever reason. In a preferred example where the customer had submitted an RFQ for a BOLI product, then the “winning” quote would most likely be the highest rate of return, e.g. “7.14%”, which a participating BOLI carrier could provide to the customer on funds which the carrier would invest for the customer. From that point, the customer would be free to fully consummate the transaction by submitting an actual purchase order for the product to the “winning” carrier. A further optional feature of the RFQ system 100 could include means for ordering the product desired once the RFQ process had been completed, as in for example an on-line order form.
- In a further embodiment of the invention, the RFQ system [0048] 100 could be configured for implementation of a more traditional “reverse” auction. In this type of auction, a carrier would present its goods or services for bidding on-line by one or more consumers.
- The foregoing description is illustrative of exemplary embodiments which achieve the objects, features and advantages of the present invention. It should be apparent that many changes, modifications, substitutions may be made to the described embodiments without departing from the spirit or scope of the invention. For example, while specific reference has been made to BOLI products and services, it is to be understood that the system and method of the invention is applicable to a wide range of goods and services. In addition, further reference has been made to an electronic medium, e.g. the internet, useful in practicing the system and method. However, other non-electronic mediums are also encompassed by the invention. Thus, the invention is not to be considered as limited by the foregoing description or embodiments, but is only limited by the construed scope of the appended claims.[0049]
Claims (54)
- 1. A method for a consumer to obtain a price quote for a product, comprising:submitting a request for a quote by the consumer to a staging area, wherein said quote includes at least one product specification;forwarding said request from said staging area to at least one carrier; androuting at least one quote from said carrier to said consumer via said staging area in response to said request.
- 2. The method ofclaim 1
- 3. The method ofclaim 2
- 4. The method ofclaim 3
- 5. The method ofclaim 4
- 6. The method ofclaim 5
- 7. The method ofclaim 6
- 8. The method ofclaim 7
- 9. The method ofclaim 8
- 10. The method ofclaim 1
- 11. The method ofclaim 4
- 12. The method ofclaim 11
- 13. The method ofclaim 1
- 14. The method ofclaim 13
- 15. The method ofclaim 1
- 16. The method ofclaim 1
- 17. The method ofclaim 16
- 18. The method ofclaim 17
- 19. A method for at least two carriers to provide a price quote for a product in response to a request for said quote from a consumer, comprising:submitting a first price quote from a first carrier to a staging area;reviewing said price quote; andsubmitting a second price quote from a second carrier to said staging area.
- 20. The method ofclaim 19
- 21. The method ofclaim 20
- 22. The method ofclaim 21
- 23. The method ofclaim 21
- 24. The method ofclaim 23
- 25. The method ofclaim 24
- 26. The method ofclaim 21
- 27. The method ofclaim 26
- 28. The method ofclaim 27
- 29. The method ofclaim 21
- 30. The method ofclaim 29
- 31. The method ofclaim 30
- 32. The method ofclaim 31
- 33. The method ofclaim 32
- 34. The method ofclaim 33
- 35. The method ofclaim 34
- 36. The method ofclaim 35
- 37. The method ofclaim 19
- 38. The method ofclaim 37
- 39. A method of brokering a transaction, comprising:receiving at least one request for a price quote from a consumer for a product the consumer is interested in purchasing;receiving at least one price quote from a first carrier of said product;receiving a second quote from a second carrier of said product; andposting said request and said price quotes to a staging area.
- 40. The method ofclaim 39
- 41. The method ofclaim 40
- 42. A system for conducting an on-line auction, comprising:an electronic staging area linked to a broker interface, wherein said staging area displays requests for price quotes on products submitted by consumers, and also displays responses to said requests by one or more carriers of said products.
- 43. The system ofclaim 42
- 44. The system ofclaim 42
- 45. The system ofclaim 42
- 46. The system ofclaim 42
- 47. The system ofclaim 46
- 48. The system ofclaim 47
- 49. The system ofclaim 48
- 50. The system ofclaim 44
- 51. The system ofclaim 44
- 52. A method for a broker to conduct an on-line auction, comprising:pre-registering at least one customer who is interested in obtaining a competitive price quote on a product;pre-registering at least two carriers of said product;establishing a time for said auction;having said at least one customer submit a request for a price quote during said auction time;having a first carrier submit a first price quote in response to said request;posting said first price quote for viewing by at least one of said customer and said carriers;having a second carrier submit a second price quote in response to said request and to said first price quote; andposting said second price quote for viewing by at least one of said customer and said carriers.
- 53. A method for competitively quoting a rate of return for premiums deposited in a bank-owned life insurance (BOLI) policy, comprising:pre-registering at least one financial institution that is seeking to make a purchase of BOLI and that desires to receive a competitive quote on the rate of return from said deposit in BOLI;pre-registering at least two carriers of BOLI;establishing an auction time and period;having said financial institution submit a request for a quote during said auction;having a first carrier submit a first rate-of-return quote in response to said request;posting said quote;having a second carrier submit a second rate-of-return quote in response to said request and to said response from said first carrier; andposting said second quote.
- 54. An electronic system useful in conducting an on-line auction for rates-of-return on funds deposited in bank-owned life insurance (BOLI), comprising:a broker-controlled staging area for displaying requests by financial institutions for rates of return on said BOLI funds, and for displaying responses received to said requests during an on-line auction, wherein said staging area is in communication with a broker interface. | https://patents.google.com/patent/US20010037281A1/en | CC-MAIN-2019-04 | refinedweb | 8,019 | 56.59 |
5.2. Accelerating pure Python code with Numba and just-in-timeba () is a package created by Anaconda, Inc (). Numba takes pure Python code and translates it automatically (just-in-time) into optimized machine code. In practice, this means that we can write a non-vectorized function in pure Python, using
for loops, and have this function vectorized automatically by using a single decorator. Performance speedups when compared to pure Python code can reach several orders of magnitude and may even outmatch manually-vectorized NumPy code.
In this section, we will show how to accelerate pure Python code generating the Mandelbrot fractal.
Getting ready
Numba should already be installed in Anaconda, but you can also install it manually with
conda install numba.
How to do it...
1. Let's import NumPy and define a few variables:
import numpy as np import matplotlib.pyplot as plt %matplotlib inline
size = 400 iterations = 100
2. The following function generates the fractal in pure Python. It accepts an empty array
m as argument.
def mandelbrot_python
3. Let's run the simulation and display the fractal:
m = mandelbrot_python(size, iterations)
fig, ax = plt.subplots(1, 1, figsize=(10, 10)) ax.imshow(np.log(m), cmap=plt.cm.hot) ax.set_axis_off()
4. Now, we evaluate the time taken by this function:
%timeit mandelbrot_python(size, iterations)
5.45 s ± 18.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
5. Let's try to accelerate this function using Numba. First, we import the package:
from numba import jit
6. Next, we add the
@jit decorator right above the function definition, without changing a single line of code in the body of the function:
@jit def mandelbrot_numba
7. This function works just like the pure Python version. How much faster is it?
mandelbrot_numba(size, iterations)
%timeit mandelbrot_numba(size, iterations)
34.5 ms ± 59.4 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The Numba version is about 150 times faster than the pure Python version here!
How it works...
Python bytecode is normally interpreted at runtime by the Python interpreter (most often, CPython). By contrast, a Numba function is parsed and translated directly to machine code ahead of execution, using a powerful compiler architecture named LLVM (Low Level Virtual Machine).
Numba supports a significant but not exhaustive subset of Python semantics. You can find the list of supported Python features at. When Numba cannot compile Python code to assembly, it will automatically fallback to a much slower mode. You can prevent this behavior with
@jit(nopython=True).
Numba generally gives the most impressive speedups on functions that involve tight loops on NumPy arrays (such as in this recipe). This is because there is an overhead running loops in Python, and this overhead becomes non-negligible when there are many iterations of few cheap operations. In this example, the number of iterations is
size * size * iterations = 16,000,000.
There's more...
Let's compare the performance of Numba with manually-vectorized code using NumPy, which is the standard way of accelerating pure Python code such as the code given in this recipe. In practice, it means replacing the code inside the two loops over
i and
j with array computations. This is relatively easy here as the operations closely follow the Single Instruction, Multiple Data (SIMD) paradigm:
def initialize(size): x, y = np.meshgrid(np.linspace(-2, 1, size), np.linspace(-1.5, 1.5, size)) c = x + 1j * y z = c.copy() m = np.zeros((size, size)) return c, z, m
def mandelbrot_numpy(c, z, m, iterations): for n in range(iterations): indices = np.abs(z) <= 10 z[indices] = z[indices] ** 2 + c[indices] m[indices] = n
%%timeit -n1 -r10 c, z, m = initialize(size) mandelbrot_numpy(c, z, m, iterations)
174 ms ± 2.91 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
In this example, Numba still beats NumPy.
Numba supports many other features, like multiprocessing and GPU computing.
Here are a few references:
- Documentation of Numba available at
- Supported Python features in Numba, available at
- Supported NumPy features in Numba, available at
See also
- Accelerating array computations with Numexpr | https://ipython-books.github.io/52-accelerating-pure-python-code-with-numba-and-just-in-time-compilation/ | CC-MAIN-2019-09 | refinedweb | 695 | 58.38 |
Textbox.
This is my first real project using C#/.NET, having come from C++/MFC and Java. I decided that a great way to learn was to take something I'm already familiar with and convert it to C#. This would allow me to learn the new technology as well as to make note of the differences. This article is the result of converting my Validating Edit Controls code, originally written in C++/MFC. The effort took around a month to complete and was a terrific learning experience. Enjoy!
There are two groups of classes, the behavior classes and the textbox classes, both of which are contained in the AMS.TextBox namespace. The behavior classes are designed to alter the standard behavior of textboxes so that the user can only enter a specific type of text into the control. For example, the DateBehavior only allows a date value to be entered into the textbox associated with it. The rest of the classes are simple TextBox-derived controls containing specific behaviors. For example, the DateTextBox control has the DateBehavior field inside of it and a property used to retrieve it.
AMS.TextBox
DateBehavior
TextBox
DateTextBox
Here's a listing of all the classes. If you need specific documentation on the available methods and properties, you may view the AMS.TextBox.chm help file, or just use the editor's Intellisense.
Behavior
Base class for all behavior classes. It has some basic functionality.
AlphanumericBehavior
Prohibits input of one or more characters and restricts length.
NumericBehavior
Used to input a decimal number with a maximum number of digits before and/or after the decimal point.
IntegerBehavior
Only allows a whole number to be entered.
CurrencyBehavior
Inserts a monetary symbol in front of the value and a separator for the thousands.
Allows input of a date in the mm/dd/yyyy or dd/mm/yyyy format, depending on the locale.
TimeBehavior
Allows input of a time with or without seconds and in 12 or 24 hour format, depending on the locale.
DateTimeBehavior
MaskedBehavior
Takes a mask containing '#' symbols, each one corresponding to a digit. Any characters between the #s are automatically inserted as the user types. It may be customized to accept additional symbols.
AlphanumericTextBox
Supports the Alphanumeric behavior.
NumericTextBox
Supports the Numeric behavior.
IntegerTextBox
Supports the Integer behavior.
CurrencyTextBox
Supports the Currency behavior.
Supports the Date behavior.
TimeTextBox
Supports the Time behavior.
DateTimeTextBox
Supports the DateTime behavior.
MaskedTextBox
Supports the Masked behavior.
MultiMaskedBehavior
Takes a mask and adopts its behavior to any of the above classes.
I've built all these classes into a DLL so that the TextBox-derived classes may be used inside the Visual Studio .NET IDE as custom controls. Here are the steps required for adding them to your project and using them in your code:
While porting these classes from C++, I came upon several important differences in how things are done. I decided to make note of these differences for future reference and added them here for anyone who may work on a similar task. If you want to read on, I recommend you first become familiar with the C++ classes.
In the .NET world, Edit boxes are called TextBoxes (and Text controls are called Labels). So I knew that to keep things as .NETish as possible I had to rename my classes. I replaced the "Edit" suffix with "TextBox" and I put everything into a namespace called AMS.TextBox to keep the names as simple as possible.
Labels
For years I've used my own variation of Hungarian notation in C++ code. I thought it was a great way to ease the pain of maintenance.
Then I started writing Java code and I had to conform to the standards set by my group: no Hungarian notation. So I stopped using it, at least for Java. And to my surprise after about a month, I wasn't missing it! It was actually liberating to write variables without the extra baggage of the type prefix. It also made the code easier to read. I came to the realization that in well written, modular code it's rarely necessary to make each variable's type evident within the name itself. It's just overkill and it makes the name larger and more cluttered.
So I gave it up for good in Java, while I kept it in my existing C++ code for the sake of consistency. I had even replaced the m_ prefix (for member variables) with the this. prefix.
m_
this.
Then came C# and this project. I knew that Microsoft's convention had been to drop Hungarian notation for .NET, so I happily continued on as I had with Java. But I faced a small problem. I wanted to make the code CLS compliant so that it could be used and extended by any .NET language. This caused a problem for protected variables with names like "separator" and then a corresponding property called "Separator". Since the names were the same except for the casing, the code was not CLS compliant (for case-insensitive languages). So I decided to switch from using this. for member variables back to the old m_ convention, which I had always liked.
m_
And that's the notation I use in this project. No Hungarian notation except for the m_ for all member variables (fields).
When I originally wrote my C++ classes, I made the decision to split the CEdit classes from their behaviors - I believe this follows the Bridge pattern. This strategy gave me the flexibility of being able to plug the behaviors into multiple CEdit-derived classes as needed, which worked great for the CAMSMultiMaskedEdit class. In addition, I used C++ multiple-inheritance feature to conveniently inherit the classes from CEdit and their respective behavior(s) simultaneously.
CEdit
CAMSMultiMaskedEdit
As we all know, .NET does not support multiple implementation inheritance, so I had to come up with an alternative. I initially decided to forgo the idea of the TextBox-derived classes having the methods and properties of their respective behaviors, which multiple inheritance so conveniently allowed me. Instead I added a read-only property to each class called Behavior which returns the Behavior-derived object currently associated with the TextBox object. So any behavior-specific action would be taken via this property.
This approach made life much easier for me, the library developer, but not for anyone using the library. Whereas in C++ you could do this:
CAMSDateTimeEdit dt ...
int day = dt.GetDay();
In C# now you would need to do this:
DateTimeTextBox dt ...
int day = dt.Behavior.Day;
I went with this design for some time until I came to the conclusion that it just wasn't right. It was a step backward, and all because I didn't want to spend extra time wrapping the Behavior's public members in the TextBox-derived classes. So I bit the bullet and did it; I went through each TextBox class and added its corresponding Behavior's public methods and properties as members of the class. This essentially turned the TextBox classes as wrappers of their Behavior classes. It was a lot of extra work (caused by the lack of MI), but I think it was worth it. Now the TextBox classes are similar to their C++ counterparts:
DateTimeTextBox dt ...
int day = dt.Day;
This is not only more intuitive but also makes the TextBox classes more friendly for the form designer.
For the C++ classes, I decided to make the CAMSEdit::Behavior class work only with classes derived from CAMSEdit. This definitely made life easier, since the CAMSEdit class was mine and I could enhance it with whatever methods were needed by the Behavior classes (i.e. GetText, GetValidText, IsReadOnly). But the problem was that this created a tight coupling between the Behavior classes and CAMSEdit, which any new class would have to account for.
CAMSEdit::Behavior
CAMSEdit
GetText
GetValidText
IsReadOnly
For C#, I changed the Behavior classes to be much more independent. Now they work with any classes derived from System.Windows.Forms.TextBoxBase. This gives them the flexibility to be associated with just about any TextBox class and not just the ones derived some class of mine.
System.Windows.Forms.TextBoxBase
Additionally, I made it very easy to associate Behavior classes to textboxes. You just instantiate the behavior class and pass the textbox object in the constructor. Here's an example:
MyTrustyTextBox textbox = new MyTrustyTextBox();
// Alter the textbox to only take Time values
TimeBehavior behavior = new TimeBehavior(textbox);
That's it! From that point on, the textbox behaves according to the rules of that behavior. This is in sharp contrast to C++ where not only did the class need to be derived from CAMSEdit, but you also needed to forward several message handlers to the associated Behavior, as explained next.
If you look at the C++ code, you'll notice that the Behavior classes rely on their associated CAMSEdit object to forward the relevant messages to them (OnChar calls _OnChar, OnKeyDown calls _OnKeyDown, etc.). Well, thanks to delegates I didn't need to do that in C#. All I had to do was make the Behavior class add event handlers to the textbox object that would call methods in the Behavior class. And since these methods are declared virtual in the Behavior class, then all the derived classes needed to do was, override them to provide their own functionality. This is a more elegant approach that in C++ would have ended up looking more like a hack if I had decided to implement it.
OnChar
_OnChar
OnKeyDown
_OnKeyDown
In addition, while in C++ I handled the messages directly (i.e., OnChar for WM_CHAR, OnKeyDown for WM_KEYDOWN, etc.), .NET does not provide direct handlers for some of these messages. The only way to do it, as far as I could see, was to override WndProc inside the textbox classes themselves and trap the messages there inside a switch statement.
WM_CHAR
WM_KEYDOWN
WndProc
switch
Instead, I decided to try the available event handlers to see if they could do the job. So I used the KeyPress event for WM_CHAR, KeyDown for WM_KEYDOWN, TextChanged for WM_SETTEXT and LostFocus for WM_KILLFOCUS. They worked just fine and allowed the Behavior classes do all the work, as described above.
KeyPress
KeyDown
TextChanged
WM_SETTEXT
LostFocus
WM_KILLFOCUS
The Behavior classes are all about validations - basically ensuring that the user enters the proper data into the textbox. Some validations are performed as the user types while others happen when the user leaves the control.
As you may know, the System.Windows.Forms.Control class contains properties and events designed to help the developer validate the control's value when control loses focus. I decided to take advantage of this built-in mechanism and move a lot of the functionality in the old OnKillFocus handlers to a Validating event hander I added to the Behavior class. This handler not only validates the data, but also gives error feedback to the user via a message beep, message box, or a small icon (ErrorProvider). It can even be configured to automatically set the control to a valid value if necessary.
System.Windows.Forms.Control
OnKillFocus
Validating
ErrorProvider
This is all accomplished by modifying the Flags property and setting the corresponding ValidationFlag value(s). Here's an example of how to make a beeping sound and show an icon if the control's value is empty or not valid when the Validating event is triggered:
Flags
ValidationFlag
DateTimeTextBox dt ...
dt.ModifyFlags((int)ValidatingFlag.ShowIcon
| (int)ValidatingFlag.Beep, true);
Of course, this also requires that the textbox's CausesValidation property is set to true, which by default it is. As an alternative, you may invoke all this functionality yourself via the textbox's Validate method. It is called by the Validating event handler to set the Cancel property.
CausesValidation
true
Validate
Cancel
First of all, I just have to say that I love properties! They're a welcome addition to C# (and they should have been part of Java). When converting these classes, a lot of methods became excellent candidates for properties. So I happily went and converted all of them.
Then I took a second look and reconsidered what I had done. I found that while a lot of methods were undeniably property-material, others were a bit more questionable. The most prominent one was Behavior.GetValidText. This one initially appeared like another method worthy of becoming a property with a getter. However, I later decided that properties should be treated by the programmer as convenient ways of quickly reading attributes of an object. If you look at the code for most GetValidText implementations, there is quite a bit of processing going on in there, much more than the typical return someField; which you find in most property getters. In other words, the "valid text" is not really a property of the behavior. It needs to be deciphered every time the method is called. So leaving it as a method does not give the impression that it's readily available and quickly retrieved.
Behavior.GetValidText
return someField;
This was the criteria I used when deciding which methods to turn into properties. If the property's getter had a simple implementation and the property itself made sense for the class, then I converted it; otherwise I left it as a method.
After I had finished porting the classes, I decided it would be nice to document the code. The C++ code already had comments on the top of every method, so that gave me a head start. However I decided to explore the XML Documentation tags to see what additional benefits I could gain.
My first impression is that they were very verbose. Most of them require an opening and closing tag, which if written on separate lines can make each section take at least 3 lines! For methods that take multiple parameters that would mean an extra 3 lines of comment per parameter. That's a lot of space taken up in comments!
The benefits were another story. You spend some time up-front putting up with all the verbosity, but the end result is nicely formatted online help. This would also mean that I wouldn't have to spend extra time documenting every method within this article, like I did for the C++ one. You just add the DLL and its XML file to your project - Visual Studio takes care of the IntelliSense for you automatically!
So I did it! I manually documented every method, property and field in the classes, even the private ones, right in the code. To cut down on the waste of space produced by the opening and closing tags, I decided to only put the opening tags on lines by themselves. Closing tags would simply go as part of the last line of the section. I also added a couple of spaces for indentation to the contents to make them easier to read. Here's an example:
/// <summary>
/// Initializes a new instance of the class by also setting its
/// mask and corresponding Behavior class. </summary>
/// <param name="mask">
/// The mask string to use for determining the behavior. </param>
/// <seealso cref="Mask" />
There's a tool called NDoc that generates help files from source code, in a variety of formats. I used it to generate an MSDN-style help file, AMS.TextBox.chm, which I've included in the download. Enjoy!
I'd like to thank Gerd Klevesaat for helping me understand the complexities of dealing with controls having sub-properties. I wanted to give users of my textbox controls, the ability to directly manipulate the various properties of the Behavior property, right from the form designer. After many trials and tribulations, I decided not to implement such functionality since it doesn't work as it should, but it was fine since I ended up wrapping most of the behavior public methods and properties inside the textbox classes.
Browsable(false)
Day
Month
Year
ControlDesigner
NumericBehavior.LostFocusFlag
Text
CallHandlerWhenTextPropertyIsSet
Hour
Minute
Second
DataBindings
Decimal
ErrorCa. | https://www.codeproject.com/Articles/5015/Validating-Edit-Controls?fid=23232&df=90&mpp=25&sort=Position&spc=Relaxed&tid=3414102 | CC-MAIN-2017-39 | refinedweb | 2,677 | 55.54 |
This article really started out of a desire for me to learn the new generics functionality added to the newer versions of .NET and the version 2.0 of the Framework. Whilst at this moment I don't feel that a full fledged tutorial on generics is appropriate, this article shows my use of generics in solving a problem I recently encountered.
In my current job, I recently had the misfortune to discover a memory issue with our main development. We use hundreds of bitmaps at any one time. Long ago, someone thought that in order to improve performance we would cache the bitmaps in memory. Nice idea, except my current project seems to have pushed the technology further than anyone else had, and now we're out of memory because of the hundreds of Megabytes of images we have cached.
Thinking about the problem, it was obvious that we didn't need to cache every image, just the ones we were working with. Trying the system without any caching demonstrated why the caching support was originally added, performance started to get sluggish as the hard-disk is accessed so much. However, if an image hasn't been used for ages, then we can quite easily read that again from disk (as we had first time it was used) keeping the frequently and recently accessed images in the cache.
At work, a couple of hours of coding and I had implemented a very basic C++ cache using the STL and templates. This article however takes over from the point of me wondering how much alike the STL and the new C# generics are.
One of the things I really disliked about coding in C# was that whenever I used a collection (which I did lots) I had to worry about the types I was adding and retrieving, or else I had to create my own type-safe implementations of the collections. Whenever I use the STL in C++, this was never an issue. Constantly checking and casting objects was cluttering up my code.
On the most basic level, generics allow me to specify the types I want to be working with. Take the following two examples, the first in C# 1.x and the second in C# 2.x.
ArrayList items = new ArrayList();
items.Add("a string");
items.Add(123);
items.Add(this);
List<string> items = new List<string>();
items.Add("a string");
items.Add(123); // Compile time error!!
items.Add(this); // Compile time error!!
Using generics, I can specify the type that I want to have a list of, in this case strings. The compiler will then ensure that I do not start adding lots of incompatible types.
The implementation of the cache that I have created uses the signature of cache<Key,Value> which looks very similar to the signature of a hashtable or dictionary. What I wanted is to associate the cached data with some form of identifier (e.g., the filename of the bitmap I have cached). The actual class definition is shown below:
cache<Key,Value>
public class Cache
where Key: IComparable
where Value: ICacheable
{
Notice that there are some changes to the typical class definition. The new where keyword allows the use of generics to be refined. In the code above, I have specified that I want the Key type to support at least IComparable (this is necessary for me to search for the keys in the collections).
where
Key
IComparable
I have also defined a need for the values to implement ICacheable. The ICacheable interface is a very simple interface I created:
ICacheable
ICacheable
public interface ICacheable
{
int BytesUsed { get; }
}
All it really needs to do is supply the property BytesUsed in order to let the cache know how big its represented data is. This can be seen being accessed in the Add method.
BytesUsed
Add
public void Add(Key k, Value v)
{
// Check if we're using this yet
if (ContainsKey(k))
{
// Simple replacement by removing and adding again, this
// will ensure we do the size calculation in only one place.
Remove(k);
}
// Need to get current total size and see if this will fit.
int projectedUsage = v.BytesUsed
+ this.CurrentCacheUsage;
if (projectedUsage > maxBytes)
PurgeSpace(v.BytesUsed);
// Store this value now..
StoreItem(k, v);
}
As is typical with any container class, there are a number of other methods for getting or changing the members of the collection. Implemented methods are:
void Remove(Key k)
void Touch(Key k)
Value GetValue(Key k)
bool ContainsKey(Key k)
Value this[Key k]
void PurgeAll()
In order to access the stored members of the cache, a number of enumerators are implemented, these allow getting the keys, the values, and both the key and value as a pair. The implementation of enumerators in C# 2.0 has been made very easy.
IEnumerator<Value> GetEnumerator()
IEnumerable<Key> GetKeys()
IEnumerable<Value> GetValues()
IEnumerable<KeyValuePair<Key, Value>> GetItems()
public IEnumerable<Value> GetValues()
{
foreach (KeyValuePair<Key, Value> i in cacheStore)
yield return i.Value;
}
Note the new yield keyword. This, in effect, allows the control of the program execution to be passed back to (or yielded to) the caller. Next time the enumerator is accessed, the execution continues at the same point in the collection. In the above example, I am accessing the internal store, cacheStore, and enumerating it with the foreach command, however, I am exposing the value of each to the caller of GetValues(). Whilst this might take a little understanding, trust me, it is simpler than what was needed before!
yield
cacheStore
foreach
GetValues()
There are only a few properties defined for the cache class, their functionality is largely obvious.
int MaxBytes { get; set; }
int Count{ get; } // Get the current number of items in cache
int CurrentCacheUsage{ get; }
As a condition of using the beta software from Microsoft, it is not possible to quote performance of this class in concrete terms. The compiler and its generated code is likely to get much faster in the future… also, it isn't really possible to directly compare generic code to non-generic code without having two different implementations. I have tested this code with many Megabytes of bitmaps and its caching performance was as good, no noticeable impact on performance. It must be noted that once an item is knocked out of the cache, the run-time system and its garbage collection may not dispose off the object for quite a while. What the cache implementation does do is allow you to control the lifespan of your objects!
The implementation of the cache class used here is very basic; it could be further enhanced in many ways. For my needs, the solution implemented is not too bad and allows me some control over how much memory I want my applications using in order to keep large resources active in memory.
At the time of writing, C# 2.0 is very new. I downloaded the beta from Microsoft only last week. If in the future, this article is still around, be careful to research any differences between what I have documented here and the final release!
If you want to download a copy of the beta, see MSDN.
I have not included a binary of this code because I expect the framework to change between each beta. The two C# source files supplied comprise of the implementation and the test application, drop them into a blank console solution to test!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
public class MyImage : Image, ICacheable
{
// Will need to override the "Image" constructors here
#region ICacheable information
public int BytesUsed
{
get
{
// Some basic size calculation
return base.Size * 4;
}
}
#endregion
}
Cache<string, MyImage> myImageCache = new Cache<string, MyImage>();
Alexander Pikus wrote:Thank you, but can you be more specific ?
class TouchableQueue
{
IList list = new ArrayList();
public void Enqueue(object obj) {
list.Add(obj);
}
public object Dequeue() {
if (list.Count == 0) return null;
object obj = list[0];
list.RemoveAt(0);
return obj;
}
public void Touch(object key) {
if (!list.Contains(key)) return;
list.Remove(key);
list.Add(key);
}
}
Cannot implicitly convert type 'GenericCollection<TKey,TItem>' to 'GenericCollection<TKey,ICollectionItem<TKey>>'
ICollectionItem
/// <summary>
/// Generic Collection Item Interface
/// </summary>
/// <typeparam name="TKey"></typeparam>
public interface ICollectionItem<TKey> : IDisposable
where TKey : IComparable {
/// <summary>
/// Gets or sets the Key for the Item in the Collection
/// </summary>
TKey Key { get; set; }
/// <summary>
/// Gets or sets the Index of the Item in the Collection
/// </summary>
int Index { get; set; }
/// <summary>
/// Gets or sets the Collection that the Item is contained in
/// </summary>
GenericCollection<TKey, ICollectionItem<TKey>> Collection { get; set; }
}
/// <summary>
/// Generic Collection Class
/// </summary>
/// <typeparam name="TKey"></typeparam>
/// <typeparam name="TItem"></typeparam>
public class GenericCollection<TKey, TItem> : IDictionary<TKey, TItem>
where TKey : IComparable
where TItem : ICollectionItem<TKey> {
Collection
Item.Collection = this; // This fails with the error mentioned above
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/7684/Using-C-Generics-to-implement-a-Cache-collection?msg=2310047 | CC-MAIN-2015-14 | refinedweb | 1,508 | 52.39 |
This page was last modified 14:41, 7 March 2008.
How to get info on cell location
From Forum Nokia Wiki
For 3rd Edition you must have the Location capability. Only dev cert Python version can be used for this purpose. Python selfsigned version return None value !
import location mcc, mnc, lac, cellid = location.gsm_location() # lac, cellid can be used to guess your location
mcc and mnc is the same wherever you are. Normally, you would collect some data of your location and match it with lac/cellid. Then later you can guess your location from current lac/cellid.
mcc is the Mobile Country Code which defines the country where you are.
mnc is the Mobile Network Code which defines the network within the country. In most countries every operator has its own unique mnc.
lac is the Location Area Code which defines an area covered by a group of radio cells.
cellid defines one single transmitter. In many cases, especially in populated areas, there are 3 cells on a single base-station, each covering 120 degrees. | http://wiki.forum.nokia.com/index.php/How_to_get_info_on_cell_location | crawl-001 | refinedweb | 177 | 67.45 |
38363/how-to-properly-print-timezone-information-using-python
See the following code:
import datetime
import pytz
fmt = '%Y-%m-%d %H:%M:%S %Z'
d = datetime.datetime.now(pytz.timezone("America/New_York"))
d_string = d.strftime(fmt)
d2 = datetime.datetime.strptime(d_string, fmt)
print d_string
print d2.strftime(fmt)
the output is
2013-02-07 17:42:31 EST
2013-02-07 17:42:31
The timezone information simply got lost in the translation.
If I switch '%Z' to '%z', I get
ValueError: 'z' is a bad directive in format '%Y-%m-%d %H:%M:%S %z'
I know I can use python-dateutil, but I just found it bizzare that I can't achieve this simple feature in datetime and have to introduce more dependency?
Part of the problem here is that the strings usually used to represent timezones are not actually unique. "EST" only means "America/New_York" to people in North America. This is a limitation in the C time API, and the Python solution is… to add full tz features in some future version any day now, if anyone is willing to write the PEP.
You can format and parse a timezone as an offset, but that loses daylight savings/summer time information (e.g., you can't distinguish "America/Phoenix" from "America/Los_Angeles" in the summer). You can format a timezone as a 3-letter abbreviation, but you can't parse it back from that.
If you want something that's fuzzy and ambiguous but usually what you want, you need a third-party library like dateutil.
If you want something that's actually unambiguous, just append the actual tz name to the local datetime string yourself, and split it back off on the other end:
d = datetime.datetime.now(pytz.timezone("America/New_York"))
dtz_string = d.strftime(fmt) + ' ' + "America/New_York"
d_string, tz_string = dtz_string.rsplit(' ', 1)
d2 = datetime.datetime.strptime(d_string, fmt)
tz2 = pytz.timezone(tz_string)
print dtz_string
print d2.strftime(fmt) + ' ' + tz_string
Or… halfway between those two, you're already using the pytz library, which can parse (according to some arbitrary but well-defined disambiguation rules) formats like "EST". So, if you really want to, you can leave the %Z in on the formatting side, then pull it off and parse it with pytz.timezone()before passing the rest to strptime.
print(datetime.datetime.today()) READ MORE
>>> class Test:
... ...READ MORE
Can you make a program with nested ...READ MORE
down voteacceptTheeThe problem is that you're iterating ..
If you are talking about the length ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/38363/how-to-properly-print-timezone-information-using-python | CC-MAIN-2021-10 | refinedweb | 428 | 58.18 |
Have you ever wanted to respond to a change in your Redux store’s state by dispatching another action?
Now you know that this is frowned on. You know that if you have enough information to dispatch an action after the reducer does its thing, then it is a mathematical certainty that you can do what you want without dispatching another action.
But for some reason, you just don’t care. Maybe your store is structured in such a way that it is easier to send requests after an action is processed. Maybe you don’t want your actions or components to be in charge of fetching remote data for each new route. Or maybe you’re just a dark side kind of person. Whatever the reason, actors will allow you to dispatch with impunity.
Why you need actors
But before we jump into the details, lets review the alternative:
connect. Here’s an example using react-redux:
class Counter extends React.Component { render() { return ( <button onClick={this.props.onIncrement}> {this.props.value} </button> ) } } const CounterContainer = connect( state => ({ value: state.counter }), dispatch => ({ onIncrement: () => dispatch(increment()) }) )(Counter) const targetEl = document.getElementById('root') ReactDOM.render( <Provider store={store}> <CounterContainer /> </Provider>, targetEl )
Instead of subscribing to the store’s state and rendering the UI explicitly, we’ve delegated the job of re-rendering to the
connect function. The simplicity here is beautiful, but it presents a problem to us dark-siders:
connect renders every state that the connected data passes through, even when we don’t want it to.
But don’t we want to render every state? Not if we’re dispatching actions after actions. For an example, what if we want to fetch out-of-date data after each action is processed?
store.subscribe(function fetcher() { const state = store.getState() const location = state.navigation.location switch (location.name) { case 'documentList': if (outOfDate(state.data.documentIndex)) { store.dispatch(data.document.fetchList()) } return case 'documentEdit': if (outOfDate(state.data.document[location.id])) { store.dispatch(data.document.fetch(location.id)) } return } } })
Each time our subscriber receives a new
state, an additional
fetch action may be dispatched. This will cause your app to render once after an action is processed, and then again if
fetch is dispatched. This is bad for performance, but more importantly, it will flash out-of-date data on screen — which looks terrible.
And that’s why we need actors!
What are } })
And now you can dispatch with impunity, safe in the knowledge that your final actor will receive a
state object which takes into account any actions dispatched by previous actors.
Examples of actors
Knowing what actors are is half the battle; the rest is knowing how to use them. With that in mind, I’ve put together some examples of actors I use.
redirector(state, dispatch)
Redirecting is one of those things which probably should be handled before your store even knows about the new URL, i.e. in an action creator. But even so, I find this actor from the Unicorn Standard Starter Kit easier to grok than any action creator based approach:.start(name, options)) } else if (name == 'root') { // If we've hit the root location, redirect the user to the main page dispatch(navigation.start('documentList')) } }
fetcher(state, dispatch)
Say you have a route which renders data from your store, but that data needs to be fetched from an API before you can use it. How do you fetch this data?
- Option 1: Call the API in the action creators which navigate to that route. Just hope that these actions are the only place you need the API.
- Option 2: Dispatch fetch actions from your route Containers. Just hope that you don’t want to display any metadata about the fetch process – or you’ll get that pesky double-render.
- Option 3: Make a
fetcherActor.
To me, it feels like the Dark Side is actually less dark than the alternatives:
function fetcher(state, dispatch) { const location = state.navigation.location switch (location.name) { case 'documentList': if (outOfDate(state.data.documentIndex)) { dispatch(data.document.fetchList()) } return case 'document': if (outOfDate(state.data.document[location.id])) { dispatch(data.document.fetch(location.id)) } return } } }
renderer(state, dispatch)
This is the actor which every application has. It usually sits at the end of your
actors sequence, rendering the result of the preceeding actors.
Your renderer can be implemented with anything: Vanilla JS, jQuery, riot.js, the list goes on. Personally, I use React:
var APP_NODE = document.getElementById('react-app') function renderer(state, dispatch) { ReactDOM.render( <Application state={state} dispatch={dispatch} />, APP_NODE ) }
But what is this
<Application /> component? Think of it as a nexus; it accepts the entire application state, then uses that state to decide which view to render.
Great, but how do I make this work with my router?
If you’re using react-router, then your
<Router> component does most of the job of
<Application> for you. The problem is that it doesn’t do the most important part; it doesn’t pass
state or
dispatch through to your route handlers.
One solution is to wrap your
<Router> component with an
<Application> component which passes
state and
dispatch to your route handlers via a React Context. But if that sounds a little complex, there is a simpler way: doing the routing yourself!
function Application(props) { const location = props.state.navigation.location switch (location.name) { case 'documentEdit': return <DocumentContainer {...props} id={location.options.id} /> case 'documentList': return <DocumentListContainer {...props} id={location.options.id} /> default: return <div>Not Found</div> } }
But how do you go about writing Container components now that you don’t need
connect? How do you resolve routes and store them in
state.navigation?
All will be revealed in my free guide to Simple Routing with Redux and React! Join my newsletter now to make sure you don’t miss out. And in return for your e-mail, you’ll immediately receive three print-optimised PDF cheatsheets – on React (see preview), ES6,!
- Learn Raw React
- Learn why you need Containers: Smart and Dumb Components by Dan Abramov
Related Projects
- See actors in action in Unicorn Standard’s Redux/React Boilerplate
Hi James, nice post, sounds like a very interesting concept.
I just had some trouble understanding: would actors work like function composition? ( The result of one actor’s action is passed to the next actor) or do they process in parallel?
Actors don’t really have a result, but they can modify the current application state by dispatching an action. And any dispatched actions are run before the next actor is run – so I guess you could say they’re run in series, not parallel.
I don’t really like the term “actor” because it makes me think of the actor pattern used in Akka / Erlang.
I agree with your pattern, mostly for coordination purpose. In the backend / CQRS / DDD / EventSourcing world, we use Sagas for this. I’ve written a little bit about that here and this pattern can be applied to React too.
I don’t really agree with the implementation detail, and think this coordinator should be able to manage its own state instead of calling store.getState(). A reducer could be an actor actually if that actor needs state to take a decision.
One more thing: connect of redux is also done to solve potential performance issues when rendering from the very top. At Stample we have a production app that renders from the very top, and manage all state outside of React (even text inputs), and we start to see the performance limits of this approach (on inputs mostly) particularly on mobile devices that have a bad CPU.
See also:
–
–
Good points.
Regarding the performance issues, do you use ImmutableJS (or something similar) and
shouldComponentUpdatein your components, to ensure you’re only rendering Virtual DOM when your props have actually changed?
Regarding reducers being able to be actors – this would undermine Redux’s stance on reducers being pure functions. Too many actors would certainly make code harder to reason about – not easier.
Yes, all my state is outside of React and absolutly immutable (but not with ImmutableJS: it did not exist at this time). There are still little performance issues even if PureRenderMixin kicks in everywhere it can, mostly because when rendering from the VERY TOP, you always have to re-render the layout and it can become quite expensive on complex SPA apps.
I understand the point of keeping reducers pure, but actually these “saga reducers” would never be used to drive react renderings but only drive complex transversal features (like an onboarding). The reducers that compute state to render should absolutly stay pure.
With your “actor pattern”, you have side effects on events. Whether it comes from a reducer or not is just an implementation detail that does not matter that much. Just wanted to point out that it is not necessary to introduce new technology like RxJS (see SO post). What I don’t like in your implementation is that the actor has to understand too much of the structure of the state. And it uses the state that is computed for rendering purpose. I think the “actor” should be able to manage its own state instead of using UI state to take appropriate decisions.
Just in case you missed it:
There’s one potential problem in this solution, I think. What if we will receive two actions in a row? First action will block the subscription until all actors will be finished, so the second action won’t be detected at all. Am I wrong? | http://jamesknelson.com/join-the-dark-side-of-the-flux-responding-to-actions-with-actors/ | CC-MAIN-2017-34 | refinedweb | 1,596 | 56.35 |
Opened 4 years ago
Closed 4 years ago
#638 closed defect (fixed)
Error in index type inference.
Description
def bar(foo): qux = foo quux = foo[qux.baz] The error message: $ cython bar.py Error compiling Cython file: ------------------------------------------------------------ ... def bar(foo): qux = foo quux = foo[qux.baz] ^ ------------------------------------------------------------ /Users/daniel/Desktop/cython-test/bar.py:3:15: Object of type '<unspecified>' has no attribute 'baz'
Change History (2)
comment:1 Changed 4 years ago by robertwb
- Owner changed from somebody to robertwb
comment:2 Changed 4 years ago by robertwb
- Resolution set to fixed
- Status changed from new to closed
Note: See TracTickets for help on using tickets.
The problem was that the indexing operator inference was changed to depend on the index type, but its type_dependencies method wasn't updated to reflect this. | http://trac.cython.org/ticket/638 | CC-MAIN-2015-22 | refinedweb | 132 | 54.93 |
The variation is "How can we load the MongoDB data into a relational database?"
I'm always perplexed by this question. It has a subtext that I find baffling. The subtext is this "all databases are relational,.
This assumption is remarkably hard to overcome.
THEM: "How can we move this mongo data into a spreadsheet?"
ME: "What?"
THEM: "You know. Get a bulk CSV extract."
ME: "Of complex, nested documents?"
THEM: "Nested documents?"
ME: "Mongo database documents include arrays and -- well -- subdocuments. They're not in first normal form. They don't fit the spreadsheet data model."
THEM: "Whatever. Every database has a bulk unload into CSV. How do you do that in Mongo?"
ME: "You can't represent a mongo document in rows and columns."
THEM: (Thumping desk for emphasis.) "Relational Theory is explicit. ALL DATA CAN BE REDUCED TO ROWS AND COLUMNS!"
ME: "Right. Through a process of normalization. The Mongo data you're looking at isn't normalized. You'd have to normalize it into a relational table model. Then you could write a customized extract focused on that relational model."
THEM: "That's absurd."
At this point, all we can do is give them the minimal pymongo MongoClient code block. Hands-on queries seem to be the only way to make progress.
from pymongo import MongoClient from pprint import pprint with MongoClient("mongodb://somehost:27017") as mongo: collection = mongo.database.collection for document in collection.find(): pprint(document)
Explanations seem to wind up in a weird circular pattern where they keep repeating their relational assumptions. Not much seems to work: diagrams, hand-waving, links to tutorials are all implicitly rejected because they don't confirm SQL bias.
A few days later they call asking how they are supposed to work with a document that has complex nested fields inside it.
This could be the beginning of wisdom. Or it could be the beginning of a lengthy reiteration of SQL Hegemony talking points and desk thumping.
THEM: "The document has an array of values."
ME: "Correct."
THEM: "What's that mean?"
ME: "It means there are multiple occurrences of the child object within each parent object."
THEM: "I can see that. What does it mean?"
ME: (Rising inflection.) "The parent is associated with multiple instances of the child."
THEM: "Don't patronize me! Stop using mongo mumbo-jumbo. Just a simple explanation is all I want."
ME: "One Parent. Many Children."
THEM: "That's stupid. One-to-many absolutely requires a foreign key. The children don't even have keys. Mongo must have hidden keys somewhere. How can I see the keys on the children in this so-called 'array' structure? How can expose the underlying implementation?"
The best I can do is show them an approach to normalizing some of the data in their collection.
from pymongo import MongoClient from pprint import pprint with MongoClient("mongodb://your_host:27017") as mongo: collection = mongo.your_database.your_collection for document in collection.find(): for child in parent['child_array']: print( document['parent_field'], child['child_field'] )
This leads to endless confusion when some documents lack a particular field. The Python document.get('field') is an elegant way to handle optional fields. I like to warn them that they should not rely on this. Sometimes document['field'] is appropriate because the field really is mandatory. If it's missing, there are serious problems. Of course, the simple get() method doesn't work for optional nested documents. For this, we need document.get('field', {}). And for optional arrays, we can use document.get('field', []).
Interestingly we sometimes have confusion over {} for document and [] for array. I chalk that up to folks who are too used to very wordy SQL and Java. I save the questions for my next book on Python.
At some point, the "optional" items may be more significant than this. Perhaps an if statement is required to handle business rules that are reflected as different document structures in a single collection.
This leads to yet more desk-thumping. It's accompanied with the laughable claim that a "real" database doesn't rely on if statements to distinguish variant subentities that are persisted in a single table. The presence of SQL ifnull() functions, case expressions, and application code with if statements apparently doesn't exist. Or -- when it is pointed out -- isn't the same thing as writing an if statement to handle variant document subentities in a Mongo database.
It appears to take about two weeks to successfully challenge entrenched relational assumptions. Even then, we have to go over some of the basics of optional fields and arrays more than once. | https://slott-softwarearchitect.blogspot.com/2016/02/ | CC-MAIN-2021-39 | refinedweb | 769 | 61.43 |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Fixincludes fixes that just add #ifndef about some definition that may be in more than one header are unnecessary since duplicate macro definitions, even with different values, are allowed in system headers and if some definition is actually wrong there's just as much case for fixing it with or without the #ifndef. This patch removes such fixes which just add #ifndef. (With the #ifndef, you get the definition from the first header included. Without, you get the definition from the second header included. There's no clear advantage one way or another; if there is a significant difference between the definitions, that is its own header bug which may merit its own fix.) Bootstrapped with no regressions on hppa2.0w-hp-hpux11.11. OK to commit (with corresponding testsuite updates)? -- Joseph S. Myers jsm@polyomino.org.uk (personal mail) joseph@codesourcery.com (CodeSourcery mail) jsm28@gcc.gnu.org (Bugzilla assignments and CCs) 2004-11-13 Joseph S. Myers <joseph@codesourcery.com> * inclhack.def (hpux_maxint, limits_ifndefs, math_huge_val_ifndef, svr4__p, undefine_null): Remove. * fixincl.x: Regenerate. diff -rupN fixincludes.orig/inclhack.def fixincludes/inclhack.def --- fixincludes.orig/inclhack.def 2004-11-12 20:37:09.000000000 +0000 +++ fixincludes/inclhack.def 2004-11-12 20:36:58.000000000 +0000 @@ -1481,26 +1481,6 @@ fix = { /* - * HPUX 10.x sys/param.h defines MAXINT which clashes with values.h - */ -fix = { - hackname = hpux_maxint; - files = sys/param.h; - files = values.h; - select = "^#[ \t]*define[ \t]+MAXINT[ \t]"; - bypass = "^#[ \t]*ifndef[ \t]+MAXINT"; - test = - "-n \"`egrep '#[ \t]*define[ \t]+MAXINT[ \t]' sys/param.h`\""; - - c_fix = format; - c_fix_arg = "#ifndef MAXINT\n%0\n#endif"; - c_fix_arg = "^#[ \t]*define[ \t]+MAXINT[ \t].*"; - - test_text = '#define MAXINT 0x7FFFFFFF'; -}; - - -/* * Fix hpux10.20 <sys/time.h> to avoid invalid forward decl */ fix = { @@ -1863,34 +1843,6 @@ fix = { }; -/* - * In limits.h, put #ifndefs around things that are supposed to be defined - * in float.h to avoid redefinition errors if float.h is included first. - * On HP/UX this patch does not work, because on HP/UX limits.h uses - * multi line comments and the inserted #endif winds up inside the - * comment. Fortunately, HP/UX already uses #ifndefs in limits.h; if - * we find a #ifndef FLT_MIN we assume that all the required #ifndefs - * are there, and we do not add them ourselves. - * - * QNX Software Systems also guards the defines, but doesn't define - * FLT_MIN. Therefore, bypass the fix for *either* guarded FLT_MIN - * or guarded FLT_MAX. - */ -fix = { - hackname = limits_ifndefs; - files = "sys/limits.h"; - files = "limits.h"; - select = "^[ \t]*#[ \t]*define[ \t]+" - "((FLT|DBL)_(MIN|MAX|DIG))[ \t].*"; - bypass = "ifndef[ \t]+FLT_(MIN|MAX)"; - - c_fix = format; - c_fix_arg = "#ifndef %1\n%0\n#endif"; - /* Second arg is select expression */ - test_text = " #\tdefine\tDBL_DIG \t 0 /* somthin' */"; -}; - - /* The /usr/include/sys/ucontext.h on ia64-*linux-gnu systems defines * an _SC_GR0_OFFSET macro using an idiom that isn't a compile time * constant on recent versions of g++. @@ -2060,23 +2012,6 @@ fix = { /* - * In any case, put #ifndef .. #endif around #define HUGE_VAL in math.h. - */ -fix = { - hackname = math_huge_val_ifndef; - files = math.h; - files = math/math.h; - select = "define[ \t]+HUGE_VAL"; - - c_fix = format; - c_fix_arg = "#ifndef HUGE_VAL\n%0\n#endif"; - c_fix_arg = "^[ \t]*#[ \t]*define[ \t]+HUGE_VAL[ \t].*"; - - test_text = "# define\tHUGE_VAL 3.4e+40"; -}; - - -/* * nested comment */ fix = { @@ -3037,23 +2972,6 @@ fix = { /* - * Solaris math.h and floatingpoint.h define __P without protection, - * which conflicts with the fixproto definition. The fixproto - * definition and the Solaris definition are used the same way. - */ -fix = { - hackname = svr4__p; - files = math.h; - files = floatingpoint.h; - select = "^#define[ \t]+__P.*"; - c_fix = format; - c_fix_arg = "#ifndef __P\n%0\n#endif"; - - test_text = "#define __P(a) a"; -}; - - -/* * Disable apparent native compiler optimization cruft in SVR4.2 <string.h> * that is visible to any ANSI compiler using this include. Simply * delete the lines that #define some string functions to internal forms. @@ -3908,25 +3826,6 @@ fix = { /* - * Fix multiple defines for NULL. Sometimes, we stumble into \r\n - * terminated lines, so accommodate these. Test both ways. - * Don't bother to reproduce the \r\n termination, as GCC has to - * recognize \n termination anyway. - */ -fix = { - hackname = undefine_null; - select = "^#[ \t]*define[ \t]+NULL[ \t]"; - bypass = "#[ \t]*(ifn|un)def[ \t]+NULL($|[ \t\r])"; - - c_fix = format; - c_fix_arg = "#ifndef NULL\n#define NULL%1\n#endif\n"; - c_fix_arg = "^#[ \t]*define[ \t]+NULL([^\r\n]+)[\r]*\n"; - - test_text = "#define NULL 0UL\r\n" - "#define NULL\t((void*)0)\n"; -}; - -/* * On Cray Unicos/Mk some standard headers use the C99 keyword "restrict" * which must be replaced by __restrict__ for GCC. */ | http://gcc.gnu.org/ml/gcc-patches/2004-11/msg01013.html | CC-MAIN-2019-51 | refinedweb | 755 | 53.07 |
.
To install Elixir or learn more about it, check our getting started guide. We also have online documentation available and a Crash Course for Erlang developers.
Highlights
Everything is an expression
defmodule Hello do IO.puts "Defining the function world" def world do IO.puts "Hello World" end IO.puts "Function world defined" end Hello.world
Running the program above will print:
Defining the function world Function world defined Hello World
This allows a module to be defined in terms of many expressions, programmable by the developer, being the basic foundation for meta-programming. This is similar to what Joe Armstrong (creator of Erlang) proposes with his erl2 project.
Meta-programming and DSLs
With expressions and meta-programming, Elixir developers can easily create Domain Specific Languages:
defmodule MathTest do use ExUnit.Case test "can add two numbers" do assert 1 + 1 == 2 end end
DSLs allow a developer to write abstractions for specific domains, often getting rid of boilerplate code.
Polymorphism via protocols
Protocols allow developers to provide type-specific functionality at chosen extension points. For example, the
Enum module in Elixir is commonly used to iterate collections:
Enum.map([1,2,3], fn(x) -> x * 2 end) #=> [2,4,6]
Since the
Enum module is built on top of protocols, it is not only limited to the data types that ship with Elixir. A developer can use his own collections with
Enum as long as it implements the
Enumerableprotocol. For example, a developer can use all the convenience of the
Enum module to easily manipulate a file, line by line:
file = File.iterator!("README.md") lines = Enum.map(file, fn(line) -> Regex.replace(%r/"/, line, "'") end) File.write("README.md", lines)
Documentation as first-class citizen
Documentation is supported at language level, in the form of docstrings. Markdown is Elixir's defacto markup language of choice for use in docstrings:
defmodule MyModule do @moduledoc """ Documentation for my module. With **formatting**. """ @doc "Hello" def world do "World" end end
Different tools can easily access the documentation. For instance, IEx (Elixir's interactive shell) can show the documentation for any module or function with the help of the function
h:
iex> h MyModule # MyModule Documentation for my module. With **formatting**.
There is also a documentation generator for Elixir called ExDoc that can produce a static site using docstrings extracted from the source code.
Pattern matching
Pattern matching allows developers to easily destructure data and access its contents:
{ User, name, age } = User.get("John Doe")
When mixed with guards, it allows us to easily express our problem:
def serve_drinks({ User, name, age }) when age < 21 do raise "No way #{name}!" end def serve_drinks({ User, name, age }) do # Code that serves drinks! end serve_drinks User.get("John") #=> Raises "No way John!" if John is under 21
Erlang all the way down
After all, Elixir is still Erlang. An Elixir programmer can invoke any Erlang function with no runtime cost:
:application.start(:crypto) :crypto.md5("Using crypto from Erlang OTP") #=> <<192,223,75,115,...>>
Since Elixir compiles to the same bytecode, it is fully OTP compliant and works seamlessly with all the battle-tested techniques Erlang/OTP is famous for. Erlang type specifications, behaviors and module attributes are all supported. It is easy to add Elixir to your existing Erlang programs too (including rebar support)!
To install Elixir or learn more about it, check our getting started guide. We also have online documentation available and a Crash Course for Erlang developers. | https://www.csdn.net/tags/Mtjakg2sNDQwNTMtYmxvZwO0O0OO0O0O.html | CC-MAIN-2021-43 | refinedweb | 577 | 56.45 |
This question already has an answer here:
i'm looking for a conclusive, 100%-effective way to block this latest hideous trend of marketers and scammers alike. you'll be familiar with it yourself - it's a very easy way to tell whether or not to trust a website if it flags up a warning like this when you try and navigate away:
i'm aware some sites use this benevolently - even superuser does it when attempting to close the browser - but i hate it. if i close a tab it's because i want it closed. i do NOT appreciate my browser second-guessing my choices. if there is a way to kill this, i expect it to kill superuser's implementation of it as well.
there are more than a few userscripts that aim to kill this, and i'll post them here. all of them refuse to work with a certain link, which i'll also post, which renders them moot immediately.
here's the site that somehow manages to overcome any protection against it:
(PLEASE NOTE: this is a dangerous site! do NOT visit it if you are not protected with an anti-virus, if you're running an obsolete web browser, if you have java installed, etc., etc. it WILL attempt to mislead you. also, be aware it has a penchant for playing awful music immediately upon loading at full volume so best to put the mute on.)
the above site will either load something about free sex videos or, more often, a video sharing site trying to be youtube. this evades any "block onunload" script i throw at it.
you can also try posting a reply to this topic (i tried commenting, but that didn't cause the dialogue, so try answering, which i can't yet) and then closing your browser - superuser do the same thing, for their sins.
thanks for any suggestions. let's kill this user-hostile nonsense for good.
This question was marked as an exact duplicate of an existing question.
There are actually two ways to see the dialog you mention.
Only one of them is marketing spam. The sure-fire way to block that is to us NoScript or an equivalent that blocks JavaScript.
The other way you get it is, in fact, legitimate and helpful. You get it if you try and navigate away from a page where you have been adding content. This very site is a good example. If I'm halfway through typing and answer and I decide not to bother, I get a warning asking me if I am sure I want to leave. That's good!
Install TamperMonkey
Create a new script with the following text:
// ==UserScript==
// @name Disable Leave Page
// @namespace
// @include *
// ==/UserScript==
location.href = "javascript:(" + function() {
window.onbeforeunload = null;
window.onunload = null;
} + ")()";
With a little fiddling you should be able to get it to work.
asked
2 years ago
viewed
8589 times
active | http://superuser.com/questions/717388/conclusive-way-to-block-onunload-spam-are-you-sure-you-want-to-navigate-away | CC-MAIN-2016-26 | refinedweb | 493 | 71.95 |
table of contents
NAME¶
nda — NVMe Direct
Access device driver
SYNOPSIS¶
device nvme
device scbus
DESCRIPTION¶
The
nda driver provides support for direct
access devices, implementing the NVMe command protocol, that are attached to
the system through a host adapter supported by the CAM subsystem.
SYSCTL VARIABLES¶
The following variables are available as both sysctl(8) variables and loader(8) tunables:
- hw.nvme.use_nvd
- The nvme(4) driver will create
ndadevice nodes for block storage when set to 0. Create nvd(4) device nodes for block storage when set to 1. See nvd(4) when set to 1.
- kern.cam.nda.nvd_compat
- When set to 1, nvd(4) aliases will be created for all
ndadevices, including partitions and other geom(4) providers that take their names from the disk's name. nvd devices will not, however, be reported in the kern.disks sysctl(8).
- kern.cam.nda.sort_io_queue
- This variable determines whether the software queued entries are sorted in LBA order or not. Sorting is almost always a waste of time. The default is to not sort.
- kern.cam.nda.enable_biospeedup
- This variable determines if the
ndadevices participate in the speedup protocol. When the device participates in the speedup, then when the upper layers send a BIO_SPEEDUP, all current BIO_DELETE requests not yet sent to the hardware are completed successfully immediate without sending them to the hardware. Used in low disk space scenarios when the filesystem encounters a critical shortage and needs blocks immediately. Since trims have maximum benefit when the LBA is unused for a long time, skipping the trim when space is needed for immediate writes results in little to no excess wear. When participation is disabled, BIO_SPEEDUP requests are ignored.
- kern.cam.nda.max_trim
- The maximum number of LBA ranges to be collected together for each DSM trims send to the hardware. Defaults to 256, which is the maximum number of ranges the protocol supports. Sometimes poor trim performance can be mitigated by limiting the number of ranges sent to the device. This value must be between 1 and 256 inclusive.
The following report per-device settings, and are read-only unless otherwise indicated. Replace N with the device unit number.
- kern.cam.nda.N.rotating
- This variable reports whether the storage volume is spinning or flash. Its value is hard coded to 0 indicating flash.
- kern.cam.nda.N.unmapped_io
- This variable reports whether the
ndadriver accepts unmapped I/O for this unit.
- kern.cam.nda.N.flags
- This variable reports the current flags.
- kern.cam.nda.N.sort_io_queue
- Same as the kern.cam.nda.sort_io_queue tunable.
- kern.cam.nda.N.trim_ticks
- Writable. When greater than zero, hold trims for up to this many ticks before sending to the drive. Sometimes waiting a little bit to collect more trims to send at one time improves trim performance. When 0, no delaying of trims are done.
- kern.cam.nda.N.trim_goal
- Writable. When delaying a bit to collect multiple trims, send the accumulated DSM TRIM to the drive.
- kern.cam.nda.N.trim_lbas
- Total number of LBAs that have been trimmed.
- kern.cam.nda.N.trim_ranges
- Total number of LBA ranges that have been trimmed.
- kern.cam.nda.N.trim_count
- Total number of trims sent to the hardware.
- kern.cam.nda.N.deletes
- Total number of BIO_DELETE requests queued to the device.
NAMESPACE MAPPING¶
Each nvme(4) drive has one or more namespaces
associated with it. One instance of the
nda driver
will be created for each of the namespaces on the drive. All the
nda nodes for a nvme(4) device are
at target 0. However, the namespace ID maps to the CAM lun, as reported in
kernel messages and in the devlist sub command of
camcontrol(8).
Namespaces are managed with the ns sub
command of nvmecontrol(8). Not all drives support
namespace management, but all drives support at least one namespace. Device
nodes for
nda will be created and destroyed
dynamically as namespaces are activated or detached.
FILES¶
- /dev/nda*
- NVMe storage device nodes
SEE ALSO¶
cam(4), geom(4), nvd(4), nvme(4), gpart(8)
HISTORY¶
The
nda driver first appeared in
FreeBSD 12.0.
AUTHORS¶
Warner Losh <imp@FreeBSD.org> | https://manpages.debian.org/testing/freebsd-manpages/nda.4freebsd.en.html | CC-MAIN-2022-05 | refinedweb | 698 | 56.96 |
- Cookie policy
- Advertise with us
© Future Publishing Limited, Quay House, The Ambury, Bath BA1 1UA. All rights reserved. England and Wales company registration number 2008885.
In depth:..
All the other pieces you find in a Linux distribution - the Bash shell, the KDE window manager, web browsers, the X server, Tux Racer and everything else - are just applications that happen to run on Linux and are emphatically not part of the operating system itself. To give some sense of scale, a fresh installation of RHEL5 occupies about 2.5GB of disk space (depending, obviously, on what you choose to include). Of this, the kernel, including all of its modules, occupies 47MB, or about 2%.
But what does the kernel actually do? The diagram below shows the big picture. The kernel makes its services available to the application programs that run on it through a large collection of entry points, known technically as system calls.
The kernel uses system calls such as 'read' and 'write' to provide an abstraction of your hardware.. By way of example, here's a short program (written in C) that opens a file and copies its contents to standard output:
#include <fcntl.h> int main() { int fd, count; char buf[1000]; fd=open("mydata", O_RDONLY); count = read(fd, buf, 1000); write(1, buf, count); close(fd); }
Here, you see examples of four system calls - open, read, write and close. Don't fret over the details of the syntax; that's not important right now. The point is this: through these system calls (and a few others) the Linux kernel provides the illusion of a 'file' - a sequence of bytes of data that has a name - and protects you from the underlying details of tracks and sectors and heads and free block lists that you'd have to get into if you wanted to talk to the hardware directly. That's what we mean by an abstraction.
As you'll see from the picture above, the kernel has to work hard to maintain this same abstraction when the filesystem itself might be stored in any of several formats, on local storage devices such as hard disks, CDs or USB memory sticks - or might even be on a remote system and accessed through a network protocol such as NFS or CIFS.
There may even be an additional device mapper layer to support logical volumes or RAID. The virtual filesystem layer within the kernel enables it to present these underlying forms of storage as a collection of files within a single hierarchical filesystem.
The filesystem is one of the more obvious abstractions provided by the kernel.. Here's another little C program:
#include <stdlib.h> main() { if (fork()) { write(1, "Parent\n", 7); wait(0); exit(0); } else { write(1, "Child\n", 6); exit(0); } }
This program creates a new process; the original process (the parent) and the new process (the child) each write a message to standard output, then terminate. Again, don't stress about the syntax. Just notice that the system calls fork(), exit() and wait() perform process creation, termination and synchronisation respectively. These are elegantly simple calls that hide the underlying compexities of process management and scheduling..
Another aspect of memory management is that it prevents one process from accessing the address space of another - a necessary precaution to preserve the integrity of a multi-processing operating system..
Finally (apologies to the many programmers who've written pieces of the kernel that do things that aren't on this brief list), the kernel provides a large collection of modules that know how to handle the low-level details of talking to hardware devices - how to read a sector from a disk, how to retrieve a packet from a network interface card and so on. These are sometimes called device drivers.
Now we have some idea of what the kernel does, let's look briefly at its physical organisation. Early versions of the Linux kernel were monolithic - that is, all the bits and pieces were statically linked into one (rather large) executable file.
In contrast, modern Linux kernels are modular: a lot of the functionality is contained in modules that are loaded into the kernel dynamically. This keeps the core of the kernel small and makes it possible to load or replace modules in a running kernel without rebooting.
The core of the kernel is loaded into memory at boot time from a file in the /boot directory called something like vmlinuz-KERNELVERSION, where KERNELVERSION is, of course, the kernel version. (To find out what kernel version you have, run the command uname -r.) The kernel's modules are under the directory /lib/modules/KERNELVERSION. All of these pieces were copied into place when the kernel was installed.
For the most part, Linux manages its modules without your help, but there are commands to examine and manage the modules manually, should the need arise. For example, to find out which modules are currently loaded into the kernel, use lsmod. Here's a sample of the output:
# lsmod pcspkr 4224 0 hci_usb 18204 2 psmouse 38920 0 bluetooth 55908 7 rfcomm,l2cap,hci_usb yenta_socket 27532 5 rsrc_nonstatic 14080 1 yenta_socket isofs 36284 0
The fields in this output are the module's name, its size, its usage count and a list of the modules that are dependent on it. The usage count is important to prevent unloading a module that's currently active. Linux will only enable a module to be removed if its usage count is zero.
You can manually load and unload modules using modprobe. (There are two lower-level commands called insmod and rmmod that do the job, but modprobe is easier to use because it automatically resolves module dependencies.) For example, the output of lsmod on our machine shows a loaded module called isofs, which has a usage count of zero and no dependent modules. (isofs is the module that supports the ISO filesystem format used on CDs.) The kernel is happy to let us unload the module, like this:
# modprobe -r isofs
Now isofs doesn't show up on the output of lsmod and, for what it's worth, the kernel is using 36,284 bytes less memory. If you put in a CD and let it automount, the kernel will automatically reload the isofs module and its usage count will rise to 1. If you try to remove the module now, you won't succeed because it's in use:
# modprobe -r isofs FATAL: Module isofs is in use.
Whereas lsmod just lists the modules that are currently loaded, modprobe -l will list all the available modules. The output essentially shows all the modules living under /lib/modules/KERNELVERSION; be prepared for a long list!
In reality, it would be unusual to load a module manually with modprobe, but if you did you could pass parameters to the module via the modprobe command line. Here's an example:
# modprobe usbcore blinkenlights=1
No, we haven't just invented blinkenlights - it's a real parameter for the usbcore module.
The tricky bit is knowing what parameters a module accepts. You could phone a friend or even ask the audience, but a better approach is to use the modinfo command, which lists a variety of information about the module.
Here's an example for the module snd-hda-intel. We've pruned the output somewhat in the interests of brevity:
# modinfo snd-hda-intel filename: /lib/modules/2.6.20-16-generic/kernel/sound/pci/hda/snd-hda-intel.ko description: Intel HDA driver license: GPL srcversion: A3552B2DF3A932D88FFC00C alias: pci:v000010DEd0000055Dsv*sd*bc*sc*i* alias: pci:v000010DEd0000055Csv*sd*bc*sc*i* depends: snd-pcm,snd-page-alloc,snd-hda-codec,snd vermagic: 2.6.20-16-generic SMP mod_unload 586 parm: index:Index value for Intel HD audio interface. (int) parm: id:ID string for Intel HD audio interface. (charp) parm: model:Use the given board model. (charp) parm: position_fix:Fix DMA pointer (0 = auto, 1 = none, 2 = POSBUF, 3 = FIFO size). (int) parm: probe_mask:Bitmask to probe codecs (default = -1). (int) parm: single_cmd:Use single command to communicate with codecs (for debugging only). (bool) parm: enable_msi:Enable Message Signaled Interrupt (MSI) (int) parm: enable:bool
The lines of interest to us here are those starting with parm: - these show the parameters accepted by that module. These descriptions are terse, to say the least. To go hunting for further documentation, install the kernel source code. Then you'll find a directory called something like /usr/src/KERNELVERSION/Documentation.
There's some interesting stuff under here; for example, the file /usr/src/KERNELVERSION/Documentation/sound/alsa/ALSA-Configuration.txt describes the parameters recognised by many of the ALSA sound modules. The file /usr/src/KERNELVERSION/Documentation/kernel-parameters.txt is also helpful.
An example of needing to pass parameters to a module came up quite recently on one of the Ubuntu forums (see). Essentially the point was that the snd-hda-intel module needed a little help in driving the sound hardware correctly and would sometimes hang when it loaded at boot time. Part of the fix was to supply the option probe_mask=1 to the module. So, if you were loading the module manually, you'd type:
# modprobe snd-hda-intel probe_mask=1
More likely, you'd place a line in the file /etc/modprobe.conf like this:
options snd-hda-intel probe_mask=1
This tells modprobe to include the probe_mask=1 option every time it loads the snd-hda-intel module. Some recent Linux distrubutions split this information up into multiple files under /etc/modprobe.d rather than putting it all in modprobe.conf..
In a similar vein, the contents of /proc/meminfo provides more detail about the current status of the virtual memory system than you could shake a stick at, whereas tools such as vmstat and top provide some of this information in a (marginally) more accessible format. As another example, /proc/net/arp shows the current contents of the system's ARP cache; from the command line, arp -a shows the same information.
Of particular interest are the 'files' under /proc/sys. As an example, the setting under /proc/sys/net/ipv4/ip_forward says whether the kernel will forward IP datagrams - that is, whether it will function as a gateway. Right now, the kernel is telling us that this is turned off:
# cat /proc/sys/net/ipv4/ip_forward 0
It gets much more interesting when you discover that you can write to these files, too. Continuing our example:
# echo 1 > /proc/sys/net/ipv4/ip_forward
...will turn on IP forwarding in the running kernel.
Instead of using cat and echo to examine and modify the settings under /proc/sys, you can also use the sysctl command:
# sysctl net.ipv4.ip_forward net.ipv4.ip_forward = 0
Which is equivalent to:
# cat /proc/sys/net/ipv4/ip_forward 0
And:
# sysctl -w net.ipv4.ip_forward=1 net.ipv4.ip_forward = 1
...is the same as
# echo 1 > /proc/sys/net/ipv4/ip_forward
Notice that the pathnames you supply to sysctl use a full stop (.) to separate the components instead of the usual forward slash (/), and that the paths are all relative to /proc/sys.
Be aware that settings you change in this way only affect the current running kernel - they will not survive a reboot. To make settings permanent, put them into the file /etc/sysctl.conf. At boot time, sysctl will automatically re-establish any settings it finds in this file.
A line in /etc/sysctl.conf might look like this:
net.ipv4.ip_forward=1
The writeable parameters under /proc/sys have spawned a whole sub-culture of Linux performance tuning. Personally, I think this is overrated, but here are a few examples should you wish to try it.
The installation instructions for Oracle 10g () ask you to set a number of parameters, including:
kernel.shmmax=2147483648
...which sets the maximum shared memory segment size to 2GB. (Shared memory is an inter-process communication mechanism that enables a memory segment to be visible within the address space of multiple processes.)
The IBM 'Redpaper' on Linux performance and tuning guidelines () makes many suggestions for adjusting parameters under /proc/sys, including this:
vm.swappiness=100
This parameter controls how aggressively memory pages are swapped to disk.
Some parameters may be adjusted to improve security. Bob Cromwell's website () has some good examples, including this:
net.ipv4.icmp_echo_ignore_broadcasts=1
...which tells the kernel not to respond to broadcast ICMP ping requests, making your network less vulnerable to a type of denial-of-service attack known as a Smurf attack.
Here's another example:
net.ipv4.conf.all.rp_filter=1
That tells the kernel to enforce sanity checking, also called ingress filtering or egress filtering. The point is to drop a packet if the source and destination IP addresses in the IP header don't make sense when considered in light of the physical interface on which it arrived.
So, is there any documentation on all these parameters? Well, the command
# sysctl -a
will show you all their names and current values. It's a long list, but it gives you no clue what any of them actually do. So what else is there? As it turns out, O'Reilly has published a book, written by Olivier Daudel and called /proc et /sys. Oui, mes amis, it's in French, and we're not aware of an English translation.
Another useful reference is the Red Hat Enterprise Linux Reference Guide, which devotes an entire chapter to the subject. You can download it from. The definitive book about the Linux kernel is Understanding the Linux Kernel by Bovet and Cesati (O'Reilly), but be aware that this is mainly about kernel internals and is probably more of interest to wannabe kernel developers and computer science students rather than system administrators.
It's also possible to configure and build your own kernel. For this, you might try Greg Kroah-Hartman's Linux Kernel in a Nutshell, an O'Reilly title that makes a delightful but presumably unintended play on words. But, of course, you have to be nuts to make a kernel. position. They know there are all kinds of parameters they can tweak that might improve performance, but have little idea of what most of them do and no good way to measure performance. So, our advice is: unless you know what you're doing, and/or have a way to measure performance, leave these settings alone!
First published in Linux Format magazine
You should follow us on Identi.ca or Twitter
Your comments
Great Article.
Anonymous Penguin! (not verified) - November. | http://www.tuxradar.com/content/how-linux-kernel-works?page=1 | CC-MAIN-2016-36 | refinedweb | 2,442 | 53.21 |
close - close a file descriptor
Synopsis
Description
Return Value
Errors
Examples
Reassigning a File Descriptor
Closing a File Descriptor
Application Usage
Rationale
Future Directions
See Also
#include <unistd.h>
int close(int fildes); unspecified. modules process, may be canceled. An I/O operation that is not canceled completes as if the close() operation had not yet occurred. All operations that are not canceled shall complete as if the close() blocked until the operations contents, -1 shall be returned and errno set to indicate the error.
The close() function shall fail if:The following sections are informative. had used the stdio routine fopen() to open a file should use the corresponding fclose() routine rather than close(). Once a file is closed, the file descriptor no longer exists, since the integer corresponding to it no longer refers to a file.() , . | http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/share/man/man3p/close.3p | crawl-003 | refinedweb | 138 | 54.42 |
What does it take to add UI tests in your CICD pipelines?
On March 12, Angie Jones, Senior Developer Advocate at Applitools, sat down with Jessica Deen, Senior Cloud Advocate for Microsoft, held a webinar to discuss their approaches to automated testing and CI.
Angie loves to share her experiences with test automation. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blog posts on angiejones.tech..
Jessica’s work at Microsoft focuses on Azure, Containers, OSS, and, of course, DevOps. Prior to joining Microsoft, she spent over a decade as an IT Consultant / Systems Administrator for various corporate and enterprise environments, catering to end users and IT professionals in the San Francisco Bay Area.
Jessica holds two Microsoft Certifications (MCP, MSTS), 3 CompTIA certifications (A+, Network+, and Security+), 4 Apple Certifications, and is a former 4-year Microsoft Most Valuable Professional for Windows and Devices for IT.
The TalkAngie and Jessica broke the talk into three parts. First, Angie would discuss factors anyone should consider in creating automated tests. Second, Angie and Jessica would demonstrate writing UI tests for a test application. Finally, they would work on adding UI tests to a CI/CD pipeline.
Let’s get into the meat of it.
Four Factors to Consider in Automated Tests
Angie first introduced the four factors you need to consider when creating test automation:
- Speed
- Reliability
- Quantity
- Maintenance
She went through each in turn.
Speed
Angie started off by making this point:
“When your team checks in code, they want to know if the check-in is good as quickly as possible. Meaning, not overnight, not hours from now.”
Angie points out that the talk covers UI tests primarily because lots of engineers struggle with UI testing. However, most of your check-in tests should not be UI tests because they run relatively slowly. From this she referred to the testing pyramid idea
- Most of your tests are unit tests – they run the fastest and should pass (especially if written by the same team that wrote the code)
- The next largest group is either system-level or business-layer tests. These tests don’t require a user interface and show the functionality of units working together
- UI tests have the smallest number of total tests and should provide sufficient coverage to give you confidence in the user-level behavior.
While UI tests take time, Angie points out that they are the only tests showing user experience of your application. So, don’t skimp on UI tests.
Having said that, when UI tests become part of your build, you need to make sure that your build time doesn’t become bogged down with your UI tests. If all your conditions run over 15 minutes, that’s way too long.
To keep your testing to a minimum, Angie suggests running UI tests in parallel. To determine whether or not you need to split up one test into several parallel tests, give yourself a time limit. Let’s say your build needs to complete in five minutes. Once you have a time limit, you can figure out how many parallel tests to set up. Like – with the 15 minute example, you might need to divide into three or more parallel tests.
Reliability
Next, you need reliable tests. Dependable. Consistent.
Unreliable tests interfere with CI processes. False negatives, said Angie, plague your team by making them waste time tracking down errors that don’t exist. False positives, she continues, corrupt your product by permitting the check-in of defective code. And, false positives corrupt your team because bugs found later in the process interfere with team cohesion and team trust.
For every successful CICD team, check-in success serves as the standard for writing quality code. You need reliable tests.
How do you make your tests reliable?
Angie has a suggestion that you make sure your app includes testability – which involves you leaning on your team. If you develop code, grab one of your test counterparts. If you test, sit down with your development team. Take the opportunity to discuss app testability.
What makes an app testable? Identifiers. Any test runner uses identifiers to control the application. And, you can also use identifiers to validate outputs. So, a consistent regime to create identifiers helps you deliver consistency.
If you lack identifiers, you get stuck with CSS Selectors or Xpath selectors. Those can get messy – especially over time.
Another way to make your app testable, Angie says, requires code that lets your test set initial conditions. If your UI tests depend on certain data values, then you need code to set those values prior to running those tests. Your developers need to create that code – via API or stored procedure – to ensure that the tests always begin with the proper conditions. This setup code can help you create the parallel tests that help your tests run more quickly.
You can also use code to restore conditions after your tests run – leaving the app in the proper state for another test.
Quantity
Next, Angie said, you need to consider the number of tests you run.
There is a common misconception that you need to automate every possible test condition you can think about, she said. People get into trouble trying to do this in practice.
First, lots of tests increase your test time. And, as Angie said already, you don’t want longer test times.
Second, you end up with low value as well as high-value UI tests. Angie asks a question to help triage her tests:
“Which test would I want to stop an integration or deployment? If I don’t want this test to stop a deployment, it doesn’t get automated. Or maybe it’s automated, but it’s run like once a day on some other cycle, not on my CICD.”
Angie also asks about the value of the functionality:
“Which test exercises critical, core functionality? Those are the ones you want in there. Which tests cover areas of my application that have a history of failing? You’re nervous anytime you have to touch that code. You want some tests around that area, too.”
Lastly, Angie asks, which tests provide information already covered by other tests in the pipeline? So many people forget to think about total coverage. They create repetitive tests and leave them in the pipeline. And, as many developers know, a single check-in that triggers multiple failures can do so because it was a single code error that had been tested, and failed, multiple times.
“Don’t be afraid to delete tests,” Angie said. If it’s redundant, get rid of it, and reduce your overall test code maintenance. She talked about how long it took her to become comfortable with deleting tests, but she appreciates the exercise now.
Maintenance
“Test code is code,” Angie said. “You need to write it with the same rules, the same guidelines, the same care that you would any production code.”
Angie continued, saying that people ask, “‘Well, Angie, why do I need to be so rigorous with my test code?’”
Angie made the point that test code monitors production code. In your CICD development, the state of the build depends on test acceptance. If you build sloppy test code, you run the risk of false positives and false negatives.
As your production code changes, your test code must change as well. The sloppier your test code, the more difficult time you will have in test maintenance.
Writing test code with the same care as you write production gives you the best chance to keep your CICD pipeline in fast, consistent delivery. Alternatively, Angie said, if your test code stays a mess, you will have a tendency to avoid code maintenance. Avoiding maintenance will lead to untrustworthy builds.
Writing UI Tests – Introduction
Next, Angie introduced the application she and Jessica were using for their coding demonstration. The app – a chat app, looks like this:
The welcome screen asks you to enter your username and click “Start Chatting” – the red button. Once you have done so, you’re in the app. Going forward, you enter text and click the “Send” button and it shows up on a chat screen along with your username. Other users can do the same thing.
With this as a starting point, Angie and Jessica began the process of test writing.
Writing UI Tests – Coding Tests
Angie and Jessica were on a LiveShare of code, which looked like this:
From here, Angie started building her UI tests for the sign-in functionality. And, because she likes to code in Java, she coded in Java.
All the objects she used were identified in the BaseTests class she inherited.
Her full code to sign-in looked like this:
public class ChattyBotTests extends BaseTests { private ChatPage chatPage: @Test public void newSession(){ driver.get(appUrl); homePage.enterUsername("angie"); chatPage = homePage.clickStartChatting(); validateWindow(); }
The test code gest the URL previously defined in the BaseTests class, fills in the username box with “angie”, and clicks the “Start Chatting” button. Finally, Angie added the validateWindow() method inherited from BaseTests, which uses Applitools visual testing to validate the new screen after the Start Chatting button has been clicked.
Next, Angie wrote the code to enter a message, click send message, and validate that the message was on the screen.
@Test public void enterMessage(){ chatPage.sendMessage("hello world"); validateWindow(): }
The inherited chatPage.sendMessage method both enters the text and clicks the Send Message button. validateWindow() again checks the screen using Applitools.
Are these usable as-is for CICD? Nope.
Coding Pre-Test Setup
If we want to run tests in parallel, these tests, as written, block parallel operation, since the enterMessage() depends on the newSession() being run previously.
So solve this, Angie creates a pre-test startSession() that runs before all tests. It includes the first three lines of newSession() which go to the app URL, enter “angie” as the username, and click the “Start Chatting” button. Next, Angie modifies her newSession() test so all it does is the validation.
@Before public void startSession(){ driver.get(appUrl); homePage.enterUsername("angie"); chatPage = homePage.clickStartChatting(); } @Test public void newSession(){ validateWindow(); }
With this @Before setup, Angie can create independent tests.
Adding Multi-User Test
Finally, Angie added a multi-user test. In this test, she assumed the @Before gest run, and her new test looked like this:
@Test public void multiPersonChat(){ //Angie sends a message chatPage.sendMessage(“hello world”); //Jessica sends a message WindowUtils.openNewTab(driver, appUrl); homePage.enterUsername("jessica"); chatPage = homePage.clickStartChatting(); chatPage.sendMessage("goodbye world"); WindowUtils.switchToTab(driver, 1); validateWindow(); }
Here, user “angie” sends the message “hello world”. Then, Angie codes the browser to:
- open a new tab for the app URL,
- create a new chat session for “jessica”
- has “jessica” send the message “goodbye world”
- Switch back to the original tab
- Validate the window
Integrating UI Tests Into CICD
Now, it was Jessica’s turn to control the code.
Before she got started coding, Jessica shared her screen from Visual Studio Code, to demonstrate the LiveShare feature of VS Code:
Angie and Jessica were working on the same file using LiveShare. LiveShare highlights Angie’s cursor on Jessica’s screen.
When Angie selects a block of text, the text gets highlighted on Jessica’s screen.
This extension to Visual Studio Code makes it easy to collaborate on coding projects remotely. It’s available for download on the Visual Studio Code Marketplace. It’s great for pair programming when compared with remote screen share.
To begin the discussion of using these tests in CICD, Jessica started describing the environment for running the tests from a developer perspective versus a CICD perspective. A developer might imagine running locally, with IntelliJ or command line opening up browser windows. In contrast, CICD needs to run unattended. So, we need to consider headless.
Jessica showed how she coded for different environments in which she might run her tests.
Her code explains that the environment gets defined by a variable called runWhere, which can equal one of three values:
- local – uses a ChromeDriver
- pipeline – uses a dedicated build server and sets the options –headless and –no-sandbox for ChromeDriver (note: for Windows you add the option “–disable-gui”)
- container – instructs the driver to be a remote web driver based on the selenium hub remote URL and passes the –headless and –no-sandbox chromeOptions
Testing Locally
First, Jessica needed to verify that the testa ran using the local settings.
Jessica set the RUNWHERE variable to ‘local’ using the command
export RUNWHERE=local
She had already exported other settings, such as her Applitools API Key, so she can use Applitools.
Since Jessica was already in her visual test folder, she run her standard maven command:
mvn -f visual_tests/pom.xml clean test
The tests ran as expected with no errors. The test opened up a local browser window and she showed the tests running.
Testing Pipeline
Next, Jessica set up to test her pipeline environment settings.
She changed the RUNWHERE variable using the command:
export RUNWHERE=pipeline
Again, she executed the same maven tests
mvn -f visual_tests/pom.xml clean test
With the expectation that the tests would run as expected using her pipeline server, meaning that the tests run without opening a browser window on her local machine.
This is important because whatever CICD pipeline you use – Azure DevOps, Github Actions, Travis CI, or any traditional non-container-based CICD system – will want to use this headless interaction with the browser that keeps the GUI from opening up and possibly throwing an error.
Once these passed, Jessica moved on to testing with containers.
Testing Containers
Looking back, the container-based tests used a call to RemoteWebDriver, which in turns called selenium_hub:
Selenium_hub let Jessica spin up whatever browser she wanted. To specify what she wanted, she used a docker-compose file, docker-compose.yaml:
These container-based approaches align with the current use of cloud-native pipelines for CICD. Jessica noted you can use Jenkins, Jenkins X for Kubernetes native, and CodeFresh, among others. Jessica decided to show CodeFresh. It’s a CICD pipeline dedicated to Kubernetes and microservices. Every task runs in a container.
Selenium_Hub let Jessica choose to run tests on both a chorme_node and a firefox_node in her container setup.
She simply needed to modify her RUNWHERE variable
export RUNWHERE=container
However, before running her tests, she needed to spin up her docker-compose on her local system. And, because selenium_hub wasn’t something that her system could identify by DNS at that moment (it was running on her local system), she ensured that the selenium_hub running locally would port forward onto her local system’s 127.0.0.1 connection. Once she made these changes, and changed the container definition to use 127.0.0.1:4444, she was ready to run her maven pom.xml file.
When the tests ran successfully, her local validation confirmed that her tests should run in her pipeline of choice.
Jessica pointed out that CICD really comes down to a collection of tasks you would run manually.
After that, Jessica said, we need to automate those tasks in a definition file. Typically, that’s Yaml, unless you really like pain and choose Groovy in Jenkins… (no judgement, she said).
Looking at Azure DevOps
Next, Jessica did a quick look into Azure DevOps.
Inside Azure DevOps, Jessica showed that she had a number of pipelines already written, and she chose the one she had set aside for the project. This pipeline already had three separate stages:
- Build Stage
- Deploy to Dev
- Deploy to Prod
Opening up the build stage shows all the steps contained just within that stage in its 74 seconds of runtime:
Jessica pointed out that this little ChattyBot application is running on a large cluster in Azure. It’s running in Kubernetes, and it’s deployed with Helm. The whole build stage includes:
- using JFrog to package up all the maven dependencies and run the maven build
- jfrog xray to make sure that the dependencies don’t result in security errors,
- Creating a helm chart and packaging that,
- Sending Slack notifications
This is a pretty extensive pipeline. Jessica wondered how hard it would be to integrate Angie’s tests into an existing environment.
But, because of the work Jessica had done to make Angie’s tests ready for CICD, it was really easy to add those tests into the deploy workflow.
First, Jessica reviewed the Deploy to Dev stage.
Adding UI Tests in Your CICD Pipeline
Now, Jessica started doing the work to add Angie’s tests into her existing CICD pipeline.
After the RUNWHERE=container tests finished successfully, Jessica went back into VS Code, where she started inspecting her azure-pipelines.yml file.
Jessica made it clear that she wanted to add the tests everywhere that it made sense prior to promoting code to production:
- Dev
- Test
- QA
- Canary
Jessica reinforced Angie’s earlier points – these UI tests were critical and needed to pass. So, in order to include them in her pipeline, she needed to add them in an order that makes sense.
In her Deploy to Dev pipeline, she added the following:
-
This script checks to see if the url $hostname is available and gives up if not available after five tries after sleeping 20 seconds. Each try it displays a “.” to show it is working. And, the name “HTTP Check” shows what it is doing.
Now, to add the tests, Jessica needed to capture the environment variable declarations and then run the maven commands. And, as Jessica pointed out, this is where things can become challenging, especially when writing the tests from scratch, because people may not know the syntax.
Editing the azure-pipelines.yml in Azure DevOps
Now, Jessica moved back from Visual Studio Code to Azure DevOps, where she could also edit an azure-pipelines.yml file directly in the browser.
And, here, on the right side of her screen (I captured it separately) are tasks she can add to her pipeline. The ability to add tasks makes this process really, really simple and eliminates a lot of the errors that can happen when you code by hand.
One of those tasks is an Applitools Build Task that she was able to add by installing an extension.
Just clicking on this Applitools Build Task adds it to the azure_pipelines.yml file.
And, now Jessica wanted to add her maven build task – but instead of doing a bash script, she wanted to use the maven task in Azure DevOps. Finding the task and clicking on it shows all the options for the task.
The values are all defaults. Jessica changed the address for her pom.xml file to visual_tests/pom.xml (the file location for the test file), set her goal as ‘test’ and options as ‘clean test’. She checked everything else, and since it looked okay, she clicked the “Add” button. The following code got added to her azure-pipelines.yml file.
- task: Maven
inputs:
mavenPomFile: ‘visual_tests/pom.xml’
goals: 'test'
options: 'clean test'
publishJUnitResults: true
testResultsFiles: '**/surefire-report/TEST-*.xml'
javaHomeOption: 'JDKVersion'
mavenVersionOption: 'Default'
mavenAuthenticationFeed: false
effectivePomSkip: false
sonarQubeRunAnalysis: false
Going Back To The Test Code
Jessica copied the Applitools Built Task and Maven task code file back into the azure-pipelines.yml file she was already editing in Visual Studio Code.
Then, she added the environment variables needed to run the tests. This included the Applitools API Key, which is a secret value from Applitools. In this case, Jessica defined this variable in Azure DevOps and could call it by the variable name.
Beyond the Applitools API Key, Jessica also set the RUNWHERE environment variable to ‘pipeline’ and the TEST_START_PAGE environment variable to the $hostname – same as used elsewhere in her code. All this made her tests dynamic.
The added code reads:
env:
APPLITOOLSAPIKEY: $APPLITOOLS_API_KEY
RUNWHERE: pipeline
TEST_START_PAGE:
So, now, the tests are ready to commit.
One thing Jessica noted is that LiveShare automatically adds the co-author’s id to the commit whenever two people have jointly worked on code. It’s a cool feature of LiveShare.
Verifying That UI Tests Work In CICD
So, now that the pipeline code had been added, Jessica wanted to demonstrate that the visual validation with Applitools worked as expected and found visual differences.
Jessica modified the ChattyBot application so that, instead of reading:
“Hello, DevOps Days Madrid 2020!!!”
it read:
“Hello, awesome webinar attendees!”
She saved the change, double-checked the test code, saw that everything looked right, and pushed the commit.
This kicked off a new build in Azure DevOps. Jessica showed the build underway. She said that, with the visual difference, we expect the Deploy to Dev pipeline to fail.
Since we had time to wait, she showed what happened on an earlier build that she had done just before the webinar. During that build, the Deploy to Dev passed. She was able to show how Azure DevOps seamlessly linked the Applitools dashboard – and, assuming you were logged in, you would see the dashboard screen just by clicking on the Applitools tab.
Here, the green boxes on the Status column show that the tests passed.
Jessica drilled into the enterMessage test to show how the baseline and the new checkpoint compared (even though the comparison passed), just to show the Applitools UI.
As Jessica said, were any part of this test to be visually different due to color, sizing, text, or any other visual artifact, she could select the region and give it a thumbs-up to approve it as a change (and cause the test to pass), or give it a thumbs-down and inform the dev team of the unexpected difference.
And, she has all this information from within her Azure DevOps build.
What If I Don’t Use Azure DevOps?
Jessica said she gets this question all the time, because not everyone uses AzureDevOps.
You could be using Azure DevOps, TeamCity CI, Octopus Deploy, Jenkins – it doesn’t matter. You’re still going to be organizing tasks that make sense. You will need to run an HTTP check to make sure your site is up and running. You will need to make sure you have access to your environment variables. And, then, finally, you will need to run your maven command-line test.
Jessica jumped into Github Actions, where she had an existing pipeline, and she showed that her deploy step looked identical.
It had an http check, an Applitools Build Task, and a call for Visual Testing. The only difference was that the Applitools Build Task included several lines of bash to export Applitools environment variables.
The one extra step she added, just as a sanity check, was to set the JDK version.
And, while she was in Github Actions, she referred back to the container scenario. She noted the challenges with spinning up Docker Compose and services. For this reason, when looking at container tests, she pointed to CodeFresh, which is Kubernetes-native.
Inside her CodeFresh pipelines, everything runs in a container.
As she pointed out, by running on CodeFresh, she didn’t need a huge server to handle everything. Each container handled just what it needed to handle. Spinning up Docker Compose just requires docker. She needed just jFrog for her Artifactory image. Helm lint – again, just what she needed.
The image above shows the pipelines before adding the visual tests. The below image shows the Deploy Dev pipeline with the same three additions.
There’s the HTTP check, the Applitools Build Task, and Running Visual Tests.
The only difference really is that the visual tests ran alongside services that were spinning up alongside the test.
This is really easy to do in your codefresh.yml file, and the syntax looks a lot like Docker Compose.
Seeing the Visual Failure
Back in Azure DevOps, Jessica checked in on her Deploy to Dev step. She already knew there was a problem from her Slack notifications.
The error report showed that the visual tests all failed.
Clicking on the Applitools tab, she saw the following.
All three tests showed as unresolved. Clicking in to the multiPersonChat test, Jessica saw this:
Sure enough, the text change from “Hello, DevOps Days Madrid 2020!!!” to “Hello, awesome webinar attendees!” caused a difference. We totally expected this difference, and we would find that this difference had also shown up in the other tests.
The change may not have been a behavioral change expected in your tests, so you may or may not have thought to test for the “Hello…” text or check for its modification. Applitools makes it easy to capture any visual difference.
Jessica didn’t go through this, but one feature in Applitools is the ability to use Auto Maintenance. With Auto Maintenance, if Jessica had approved the change on this first page, she could automatically approve identical changes on other pages. So, if this was an intended change, it would go from “Unresolved” to “Passed” on all the pages where the change had been observed.
Summing Up
Jessica handed back presentation to Angie, who shared Jessica’s link for code from the webinar:
All the code from Angie and Jessica’s demo can be downloaded from:
Happy Testing!
For More Information
- Read Ask 288 Of Your Peers About Visual AI
- Read How I ran 100 UI tests in just 20 seconds
- Take Angie Jones’s course on Removing Visual Blind Spots
- Sign up for Test Automation University and start taking classes
- Take Raja Rao’s Course on Modern Functional Testing
- Request an Applitools demo | https://applitools.com/blog/ui-tests-in-cicd/ | CC-MAIN-2022-05 | refinedweb | 4,279 | 63.49 |
Python main characteristics:
*.pycfiles)
I assume that you have some programming experience:
I also ().
Required Python packages (libraries) to run this notebook:
Python
Jupyter/
IPython- interactive Python shell
numpy- linear algebra library
matplotlib- plotting library
Scientific Python Distributions (available for almost every platform):
def quicksort(arr): if len(arr) <= 1: return arr pivot = arr[int(len(arr) / 2)] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quicksort(left) + middle + quicksort(right)
quicksort([3,6,8,10,1,2,1])
[1, 1, 2, 3, 6, 8, 10]
Note: You can check your Python version at the command line by running
python --version.
x = 3 print(x, type(x))
3 <class 'int'>
print(x + 1) # Addition; print(x - 1) # Subtraction; print(x * 2) # Multiplication; print(x ** 2) # Exponentiation.
4 2 6 9
Autoincrements:
x += 1 print(x) # Prints "4" x *= 2 print(x) # Prints "8"
4 8
Note that unlike many languages, Python does not have unary increment (
x++) or decrement (
x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
Real numbers (
float)
y = 2.5 print(type(y)) # Prints "<class 'float'>" print(y, y + 1, y * 2, y ** 2) # Prints "2.5 3.5 5.0 6.25"
<class 'float'> 2.5 3.5 5.0 6.25
Python implements all of the usual operators for Boolean algebra, but uses English words rather than symbols (
&&,
||,
!, etc.):
t, f, aa, bb = True, False, True, False # Hmm... yes you can do this in Python print(t, f, type(t))
True False <class 'bool'>
Now we let's look at the operations:
print(t and f) # Logical AND; print(t or f) # Logical OR; print(not t) # Logical NOT; print(t != f) # Logical XOR;
False True False True
day = "Sunday" if day == 'Sunday': print('Sleep!!!') else: print('Go to work')
Sleep!!!
There is no
switch or
case statements
if day == 'Monday': print('Week end is over!') elif day == 'Sunday' or day =='Saturday': print('Sleep!') else: print('Meeeh')
Week end is over!
var_1 = 234 if var_1: print('Do something with', var_1) else: print('Nothing to do')
Do something with 234
1 == 2
False
50 == 2*25
True
There is another boolean operator
is, that tests whether two objects are the same type:
1 is 1
True
But not...
1 is int
False
print(type(1), type(int))
<class 'int'> <class 'type'>
1 is 1.0
False
hello = 'hello' world = "world" print(hello, len(hello))
hello 5
hw = hello + ' ' + world # String concatenation print(hw)
hello world
String formatting
hw12 = '%s %s! your number is: %d' % (hello, world, 12) # sprintf style string formatting print(hw12)
hello world! your number is: 12
Note: Checkout for string formatting specs.:
xs = [3, 1, 2] # Create a list print(xs, xs[2]) print(xs[-1]) # Negative indices count from the end of the list; prints "2"
[3, 1, 2] 2 2
xs[2] = 'foo' # Lists can contain elements of different types print(xs)
[3, 1, 'foo']
xs.append('bar') # Add a new element to the end of the list print(xs)
[3, 1, 'foo', 'bar']
xs = xs + ['thing1', 'thing2'] # Adding lists (the += op works too) print(xs)
[3, 1, 'foo', 'bar', 'thing1', 'thing2']
x = xs.pop() # Remove and return the last element of the list print(x, xs)
thing2 [3, 1, 'foo', 'bar', 'thing1']
As usual, you can find all the gory details about lists in the documentation.
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:
nums = list(range(5)) # range is a built-in function (more on this later) print(nums)
, 9, 4]"
[2, 3] [2, 3, 4] [0, 1] [0, 1, 2, 3, 4] [0, 1, 2, 3] [0, 1, 8, 9, 4]
You can loop over the elements of a list like this:
animals = ['cat', 'dog', 'monkey'] for animal in animals: aa = animal + ' :)' print(aa)
cat :) dog :) monkey :)
If you want access to the index of each element within the body of a loop, use the built-in
enumerate function:
animals = ['cat', 'dog', 'monkey'] for idx, animal in enumerate(animals): print('Item number %d is a %s' % (idx + 1, animal))
Item number 1 is a cat Item number 2 is a dog Item number 3 is a monkey
i = 0 while i < 3: print(i) i += 1
0 1 2
When programming, frequently we want to transform one type of data into another.
For example, consider the following code that computes square numbers:
nums = [0, 1, 2, 3, 4] squares = [] for x in nums: squares.append(x ** 2) print(squares)
[0, 1, 4, 9, 16]
You can make this code simpler using a list comprehension:
nums = [0, 1, 2, 3, 4] squares = [x ** 2 for x in nums] print(squares)
[0, 1, 4, 9, 16]
List comprehensions can also contain conditions:
nums = [0, 1, 2, 3, 4] even_squares = [x ** 2 for x in nums if x % 2 == 0] print(even_squares)
[0, 4, 16]"
cute True
d['fish'] = 'wet' # Set an entry in a dictionary print(d['fish']) # Prints "wet"
wet
Handling not found keys:
print(d['monkey']) # KeyError: 'monkey' not a key of d
--------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-38-e3ac4f3aa8c2> in <module>() ----> 1 print(d['monkey']) # KeyError: 'monkey' not a key of d KeyError: 'monkey'
print(d.get('monkey', 'N/A')) # Get an element with a default; prints "N/A" print(d.get('fish', 'N/A')) # Get an element with a default; prints "wet"
N/A wet
del d['fish'] # Remove an element from a dictionary print(d.get('fish', 'N/A')) # "fish" is no longer a key; prints "N/A"
N/A
It is easy to iterate over the keys in a dictionary:
d = {'person': 2, 'cat': 4, 'spider': 8} for animal in d: legs = d[animal] print('A %s has %d legs' % (animal, legs))
A person has 2 legs A cat has 4 legs A spider has 8 legs:
animals = {'cat', 'dog'} print(animals)
{'dog', 'cat'}
print('cat' in animals) # Check if an element is in a set; prints "True" print('fish' in animals) # prints "False"
True False
animals.add('fish') # Add an element to a set print('fish' in animals) print(len(animals)) # Number of elements in a set;
True 3
animals.add('cat') # Adding an element that is already in the set does nothing print(animals, len(animals)) animals.remove('cat') # Remove an element from a set print(animals,len(animals))
{'dog', 'cat', 'fish'} 3 {'dog', 'fish'} 2
animals = {'cat', 'dog', 'fish'} for idx, animal in enumerate(animals): print('#%d: %s' % (idx + 1, animal)) # Prints "#1: fish", "#2: dog", "#3: cat"
#1: dog #2: cat #3: fish
from math import sqrt print('Set:', {int(sqrt(x)) for x in range(30)}) print('List:', [int(sqrt(x)) for x in range(30)])
Set: {0, 1, 2, 3, 4, 5} List: [0, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5]
Here is a trivial example:
t = (5, 6) print(t, type(t))
(5, 6) <class 'tuple'> 5 6
d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys print(d)
{(0, 1): 0, (1, 2): 1, (2, 3): 2, (3, 4): 3, (4, 5): 4, (5, 6): 5, (6, 7): 6, (7, 8): 7, (8, 9): 8, (9, 10): 9}
print(d[t]) print(d[(1, 2)])
5 1
t[0] = 1 # Produces TypeError 'tuple' object does not support item assignment
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-52-5757bc7a932f> in <module>() ----> 1 t[0] = 1 # Produces TypeError 'tuple' object does not support item assignment TypeError: 'tuple' object does not support item assignment
Python functions are defined using the
def keyword. For example:
def sign(x): if x > 0: return 'positive' elif x < 0: return 'negative' else: return 'zero'
for x in [-1, 0, 1]: print(sign(x))
negative zero positive
We will often define functions to take optional keyword arguments, like this:
def hello(name, loud=False): if loud: print('HELLO, %s' % name.upper()) else: print('Hello, %s!' % name)
hello('Bob') hello('Fred', loud=True)
Hello, Bob! HELLO, FRED
func = lambda x: x**x + 2 print(type(func))
<class 'function'>
func(3)
29
x = [1, 5, 3, 2, 7, 8, 3, 6] print([func(elem) for elem in x]) list(map(lambda x: x**x + 2, x)) # note the map() function
[3, 3127, 29, 6, 823545, 16777218, 29, 46658]
[3, 3127, 29, 6, 823545, 16777218, 29, 46658]
selfas the first parameter of the class methods.!
def my_func(a): return a * a
print(my_func(2))
4
my_func('AAAAA')
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-27-c43b8d3fab9a> in <module>() ----> 1 my_func('AAAAA') <ipython-input-25-77fcce2d5fdd> in my_func(a) 1 def my_func(a): ----> 2 return a * a TypeError: can't multiply sequence by non-int of type 'str'
try: print(my_func("A")) except TypeError as e: print('Caught exception:', e)
Caught exception: can't multiply sequence by non-int of type 'str'
groups related code into a module makes the code easier to understand and use.
A module is a file consisting of Python code, can define functions, classes and variables.
Note: A module is also a Python object with arbitrarily named attributes that you can bind and reference.
import scipy.spatial.distance
scipy.spatial.distance.euclidean((1,1), (2,2))
1.4142135623730951
import scipy.spatial.distance as dists
dists.euclidean((1,1), (2,2))
1.4142135623730951
from math import sqrt
and then
sqrt(81)
9.0
arrayobject, and tools for working with these arrays.
To use
numpy, we first need to
import numpy package:
import numpy as np
A numpy
array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers.
scipy.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets:
a = np.array([1, 2, 3]) # Create a rank 1 array print(type(a), a.shape, a[0], a[1], a[2]) a[0] = 5 # Change an element of the array print(a)
<class 'numpy.ndarray'> (3,) 1 2 3 [5 2 3]
b = np.array([[1, 2, 3], [4, 5, 6]]) # Create a rank 2 array print(b)
[[1 2 3] [4 5 6]]
print(b.shape) print(b[0, 0], b[0, 1], b[1, 0])
(2, 3) 1 2 4
Numpy provides many shortcut functions to create arrays:
a = np.zeros((2,2)) # Create an array of all zeros print(a)
[[ 0. 0.] [ 0. 0.]]
b = np.ones((1,2)) # Create an array of all ones print(b)
[[ 1. 1.]]
c = np.full((2,2), 7) print(c)
[[7 7] [7 7]]
d = np.eye(2) # Create a 2x2 identity matrix print(d)
[[ 1. 0.] [ 0. 1.]]
e = np.random.random((2,2)) # Create an array filled with random values print(e)
[[ 0.09845823 0.09568912] [ 0.18341987 0.72051223]]
Numpy offers several ways to index into arrays.
Slicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:
# Create the following rank 2 array with shape (3, 4) a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]]) a
array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12]])
Use slicing to pull out the subarray consisting of the first 2 rows and columns 1 and 2; b is the following array of shape
(2, 2):
b = a[:2, 1:3] print(b)
[[2 3] [6 7]]
A slice of an array is a view into the same data, so modifying it will modify the original array.: This is quite different from the way that MATLAB handles array slicing.
#:)
[5 6 7 8] (4,) [[5 6 7 8]] (1, 4) [[5 6 7 8]] (1, 4)
We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1] col_r2 = a[:, 1:2] print(col_r1, col_r1.shape) print(col_r2, col_r2.shape)
[ 2 6 10] (3,) [[ 2] [ 6] [10]] (3, 1)
Integer array indexing
Here is an example:
a = np.array([[1,2], [3, 4], [5, 6]]) print(a)
[[1 2] [3 4] [5 6]]
An example of integer array indexing. The returned array will have shape
(3,).
a[[0, 1, 2], [0, 1, 0]]
array([1, 4, 5])
The above example of integer array indexing is equivalent to this:
np.array([a[0, 0], a[1, 1], a[2, 0]])
array([1, 4, 5])
When using integer array indexing, you can reuse the same element from the source array:
a[[0, 0], [1, 1]]
array([2, 2])
Equivalent to the previous integer array indexing example
np.array([a[0, 1], a[0, 1]])
array([2, 2])
One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:
Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]]) print(a)
[[ 1 2 3] [ 4 5 6] [ 7 8 9] [10 11 12]]
Create an array of indices
b = np.array([0, 2, 0, 1])
Select one element from each row of a using the indices in
b.
print(a[np.arange(4), b]) # Prints "[ 1 6 7 11]"
[ 1 6 7 11]
Mutate one element from each row of a using the indices in
b.
a[np.arange(4), b] += 10 print(a)
[:
a = np.array([[1,2], [3, 4], [5, 6]])
Find the elements of a that are bigger than 2:
bool_idx = (a > 2) print(bool_idx)
[[False False] [ True True] [ True True]]
This returns a numpy array of Booleans of the same shape as a, where each slot of bool_idx tells whether that element of a is > 2.
We use boolean array indexing to construct a rank 1 array consisting of the elements of a corresponding to the
True values of
bool_idx.
print(a[bool_idx])
[3 4 5 6]
We can do all of the above in a single concise statement:
print(a[a > 2])
:
import numpy as np x = np.array([1, 2]) # Let numpy choose the datatype y = np.array([1.0, 2.0]) # Let numpy choose the datatype z = np.array([1, 2], dtype=np.float32) # Force a particular datatype print(x.dtype, y.dtype, z.dtype)
int64 float64 float32
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:
x = np.array([[1,2],[3,4]], dtype=np.float64) y = np.array([[5,6],[7,8]], dtype=np.float64)
print(x + y)
[[ 6. 8.] [ 10. 12.]]
print(np.add(x, y))
[[ 6. 8.] [ 10. 12.]]
Elementwise difference
print(x - y) print(np.subtract(x, y))
[[-4. -4.] [-4. -4.]] [[-4. -4.] [-4. -4.]]
Elementwise product
print(x * y) print(np.multiply(x, y))
[[ 5. 12.] [ 21. 32.]] [[ 5. 12.] [ 21. 32.]]
Elementwise division - both produce the same result.
print(x / y) print(np.divide(x, y))
[[ 0.2 0.33333333] [ 0.42857143 0.5 ]] [[ 0.2 0.33333333] [ 0.42857143 0.5 ]]
Elementwise square root;
print(np.sqrt(x))
[[ 1. 1.41421356] [ 1.73205081 2. ]]
*is elementwise multiplication, not matrix multiplication.
numpy.dot()function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices.
dot()is available both as a function in the
numpymodule and as an instance method of array objects.
x = np.array([[1,2],[3,4]]) y = np.array([[5,6],[7,8]]) v = np.array([9,10]) w = np.array([11, 12]) # Inner product of vectors; both produce 219 print(v.dot(w)) print(np.dot(v, w))
219 219
# Matrix / vector product; both produce the rank 1 array [29 67] print(x.dot(v)) print(np.dot(x, v))
[29 67] [29 67]
# Matrix / matrix product; both produce the rank 2 array # [[19 22] # [43 50]] print(x.dot(y)) print(np.dot(x, y))
[[19 22] [43 50]] [[19 22] [43 50]]
Numpy provides many useful functions for performing computations on arrays; one of the most useful is
sum:
x = np.array([[1,2],[3,4]]) print(x)
[[1 2] [3 4]]
np.sum(x) # Compute sum of all elements; prints "10"
10
np.sum(x, axis=0) # Compute sum of each column; prints "[4 6]"
array([4, 6])
np.sum(x, axis=1) # Compute sum of each row; prints "[3 7]"
array(:
print(x) print(x.T)
[[1 2] [3 4]] [[1 3] [2 4]]
v = np.array([1,2,3]) print(v) print(v.T)
[1 2 3] [1 2 3]
Is
numpy actually faster than 'plain' Python?
import random import numpy as np
%timeit sum([random.random() for _ in range(random.randint(10000,10100))])
2.1 ms ± 93 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np.sum(np.random.random(random.randint(10000,10100)))
177 µs ± 3.67 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]]) x
array([[ 1, 2, 3], [ 4, 5, 6], [ 7, 8, 9], [10, 11, 12]])
v = np.array([1, 0, 1]) v
array([1, 0, 1])
y = np.empty_like(x) # Creates an empty matrix with the same shape as x # Add the vector v to each row of the matrix x with an explicit loop for i in range(4): y[i, :] = x[i, :] + v
y
array([[:
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other vv
array([[1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1]])
y = x + vv # Add x and vv elementwise y
array([[ 2, 2, 4], [ 5, 5, 7], [ 8, 8, 10], [11, 11, 13]])
Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]]) v = np.array([1, 0, 1]) y = x + v # Add v to each row of x using broadcasting y
array([[.
Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,) w = np.array([4,5]) # w has shape (2,)
To compute an outer product, we
vto be a column vector of shape
(3, 1), and
wto yield an output of shape
(3, 2), which is the outer product of
vand
w:
print(np.reshape(v, (3, 1)) * w)
[[ 4 5] [ 8 10] [12 15]]
Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
x has shape
(2, 3) and
v has shape
(3,) so they broadcast to
(2, 3), producing the following matrix:
print(x + v)
[[2 4 6] [5 7 9]]
Add a vector to each column of a matrix
x has shape
(2, 3) and
w has shape
(2,).
xthen it has shape
(3, 2)and can be broadcast against
wto yield a result of shape
(3, 2);
(2, 3)which is the matrix
xwith the vector
wadded to each column.
This gives the following matrix:
print((x.T + w).T)
[[ 5 6 7] [ 9 10 11]]
Another solution is to reshape
w to be a row vector of shape
(2, 1); we can then broadcast it directly against
x to produce the same output.
print(x + np.reshape(w, (2, 1)))
[[ 5 6 7] [ 9 10 11]]
Multiply a matrix by a constant
xhas shape
(2, 3).
numpytreats scalars as arrays of shape
();
(2, 3),
print(x * 2)
[[ 2 4 6] [ 8 10 12]]
Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
So far, we touched on many of the important things that you need to know about numpy, but is far from complete.
numpy.
import matplotlib.pyplot as plt
By running this special IPython notebook magic command, we will be displaying plots inline and with appropiate resolutions for retina displays. Check the docs for more options.
%matplotlib inline %config InlineBackend.figure_format = 'retina'
The most important function in
matplotlib is
plot, which allows you to plot 2D data.
Here is a simple example:
# Compute the x and y coordinates for points on a sine curve x = np.arange(0, 3 * np.pi, 0.1) y_sin = np.sin(x) # Plot the points using matplotlib plt.plot(x, y_sin);
With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:
y_cos = np.cos(x) plt.plot(x, y_sin) plt.plot(x, y_cos) plt.xlabel('x axis label') plt.ylabel('y axis label') plt.title('Sine and Cosine') plt.legend(['Sine', 'Cosine']);
You can plot different things in the same figure using the subplot function. Here is an example:
plt.subplot(2, 1, 1) plt.plot(x, y_sin) plt.title('Sine') plt.subplot(2, 1, 2) plt.plot(x, y_cos) plt.title('Cosine') plt.tight_layout()
CS231nPython tutorial by Justin Johnson ().
%load_ext version_information %version_information scipy, numpy, matplotlib
# this code is here for cosmetic reasons from IPython.core.display import HTML from urllib.request import urlopen HTML(urlopen('').read().decode('utf-8')) | https://nbviewer.jupyter.org/github/lmarti/machine-learning/blob/master/00.%20Python%20Tutorial.ipynb | CC-MAIN-2020-40 | refinedweb | 3,630 | 69.21 |
Details
- Type:
Wish
- Status: Closed
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: Jena 2.10.1
-
- Labels:None
Description
Unlike RDF/XML* serializers, N3 and TURTLE ignore the base URI in their output.
val turtle =
"""
@prefix foaf: <> .
@prefix rdf: <> .
<#JL>
a foaf:Person ;
foaf:homepage </2007/wiki/people/JoeLambda> ;
foaf:img <images/me.jpg> ;
foaf:name "Joe Lambda" .
"""
val base = ""
val model ={ val m = ModelFactory.createDefaultModel() m.getReader("TURTLE").read(m, new StringReader(turtle), base) m }
model.getWriter("TTL").write(model, System.out, base) // doesn't work as expected
model.getWriter("RDF/XML-ABBREV").write(model, System.out, base) // this one is ok
Activity
- All
- Work Log
- History
- Activity
- Transitions
PLEA FOR RELATIVE URIs
Many systems, including most of the ones I have built and use day to day,
have many internal relative links but really are better
built without a knowledge of their own base URI, in that stored URIs
are always relative. Even if the software absolutizes them
in processing them, as there are no absolute URIs for the local identifiers
one can take a system such as a bug tracker and clone it easily to make
a new copy of the same system.
For example, issue tracking system, a calendar system, build with RDF
are interesting to clone.
This is not to say all systems are like this, but some are and they are an important
class.
For example one project I have a bunch RDF and rules, where there are
locally defined instances and local ontologies, and I process it
in file:// space much of the time, and browse it in http://
space though a web server, but when I edit it the http space
is actually proxies to a read-write-linked-data server on a different port.
With Jena serializes, suddenly the URI of the slave server behind the proxy
crops up in the files.
All the URIs within the system are relative.
Attempting to introduce Jena-based code to this system is currently blocked
on the need for this bug to be resolved in Jena.
For this reason, for example cwm's serializers use relative URIs, and even the
RDF/XML one has an option to put relative URIs in namespaces.
The rdflib.js library from Tabulator uses relative URIs by default.
Yes, yjere are cases when people want to design systems without this properties,
so absolute URIs should be an option.
Note other reasons for relative URIs include readability, and storage space and transmission length.
There is a classic failure mode for RDF systems in which
developers bring up a system on test.acme.com and then move it to production.acme.com
and everything breaking. I know there are cases for absolute URIs but the relative URIs are
very important best practices.
Tim
PS: This sort of RDF system is like writing program or set of programs
Imagine writing a program
pi = 3.14159265359;
print (2 * pi);
an it being saved in circles.py as
<> = 3.14159265359;
print (2 * <>);
Not practical, not the sort of thing you can copy and move around.
Tim - this is not a bug. A bug would be where it does something wrong and the writer writes something that can't be parsed or isn't the RDF given as input.
The writer writes correct and legal Turtle for the test case given.
The relevant file is probably N3JenaWriterCommon and unused code controlled by doAbbreviatedBaseURIref. It needs enabling and testing, and test cases written.
I look forward to a contribution to Jena.
Thanks. I didn't find N3JenaWriterCommon but did find
hmm no doAbbreviatedBaseURIref though ... wrong repo? wrong branch?
In that file it looks like it is at
Line 652 return "<"">" ;
Where is thing hosted nowadays? Excuse my ignorance.
Alexandre, the N3JenaWriterCommon.java file which Andy pointed you at is here:
Here is how you can check out Jena sources, run the tests and produce a patch:
svn co jena
cd jena
mvn test
... here you make your changes (and add tests) ...
mvn test
svn diff >
JENA-132.patch
Use More Actions > Attach Files to attach the patch.
I wonder why the functionality you want was there and it has always been commented code (since 2009).
If I remember correctly because it needs finishing and testing.
Also, from memory, all the <#> stuff need removing because the base is not going to have a fragment if used with HTTP GET.
But it was a while ago.
Jena RIOT writes output RDF.
if the data has relative IRIs, then relative IRIs are output.
The parsers can be used to output relative IRIs by adding a processing stage (StreamRDF).
Relative URIs are also used in output when the base is set in the write operation with
a @base directive as the first line and can be removed.
The output from the TTL writer is correct RDF - it just has not used the base URI to abbreviate the data. It's the same triples.
The output files from the TTL writer are a failthful representation of the triples and are fully portable (i.e. the RDF will be read the same where ever the file is read from). Some people would argue this is the better behaviour to the relative URIs generated by the RDF/XML-ABBREV writer.
A contribution to enable the writing of relative URIs by the TTL writer would be good.
Reclassified as a feature request. | https://issues.apache.org/jira/browse/JENA-132 | CC-MAIN-2016-30 | refinedweb | 902 | 64.91 |
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago
#12843 closed (worksforme)
add user in admin causes template exception: maximum recursion depth exceeded
Description
Running Django 1.1.1, set up basic model structure and activated admin site. Turned on auth as described in tutorial, using 1.1.1 urls.py instructions. My models work fine in the admin interface, but I get an error when adding a user.
From Home > Auth I choose to add a user, fill in the username and password, hit Save, and:
Template error In template /usr/lib64/python2.6/site-packages/django/contrib/admin/templates/admin/includes/fieldset.html, error at line 12 Caught an exception while rendering: maximum recursion depth exceeded 2 {% if fieldset.name %}<h2>{{ fieldset.name }}</h2>{% endif %} 3 {% if fieldset.description %}<div class="description">{{ fieldset.description|safe }}</div>{% endif %} 4 {% for line in fieldset %} 5 <div class="form-row{% if line.errors %} errors{% endif %} {% for field in line %}{{ field.field.name }} {% endfor %} "> 6 {{ line.errors }} 7 {% for field in line %} 8 <div{% if not line.fields|length_is:"1" %}{{ field.field.field.help_text|safe }}</p>{% endif %} 15 </div>
Change History (5)
comment:1 Changed 6 years ago by mark@…
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 6 years ago by mark@…
Installed latest svn code (12407), problem goes away.
comment:3 Changed 6 years ago by kmtracey
- Resolution set to worksforme
- Status changed from new to closed
I can't recreate with 1.1.1 and a machine running Python 2.6. It would be nice to know if you can recreate the problem with the latest 1.1.X branch level (I assume by latest you mean latest trunk), so we'd know whether 1.1.2, when released, might be subject to the same problem. If so we could investigate more, but we'll need more debug info. A hint of where the recursion is happening in Python code would help -- is there really no Python traceback at all in the debug information?
comment:4 follow-up: ↓ 5 Changed 6 years ago by mark@…
It was 1.1.1 from the distribution tarball (Django version 1.1.1 SVN-12407). There was a full traceback, I just didn't paste all of it.
Here's the debug output:
... and the error from the server:
[11/Feb/2010 14:53:23] "GET /admin/auth/user/ HTTP/1.1" 200 4963 Traceback (most recent call last): File "/usr/lib64/python2.6/site-packages/django/core/servers/basehttp.py", line 280, in run self.finish_response() File "/usr/lib64/python2.6/site-packages/django/core/servers/basehttp.py", line 320, in finish_response self.write(data) File "/usr/lib64/python2.6/site-packages/django/core/servers/basehttp.py", line 416, in write self._write(data) File "/usr/lib64/python2.6/socket.py", line 300, in write self.flush() File "/usr/lib64/python2.6/socket.py", line 286, in flush self._sock.sendall(buffer) error: [Errno 32] Broken pipe
Maybe something screwy in my settings, but I was pretty much following the instructions, and the current 1.2 works for me.
comment:5 in reply to: ↑ 4 Changed 6 years ago by kmtracey
Replying to mark@tranchant.co.uk:
It was 1.1.1 from the distribution tarball (Django version 1.1.1 SVN-12407).
I'm confused. 1.1.1 from the distribution tarball would report itself as simply 1.1.1, not 1.1.1 SVN-something. Having the SVN there in the reported level implies you are running from an SVN checkout. Further the [12407] level is less than a day old, so it appears to be a fairly new checkout. However the 1.1.X branch reports itself as, for example, 1.1.2 pre-alpha SVN-12416, not 1.1.1 SVN-something. Based on that reported version, it sounds like you are running from an SVN checkout of.
If you could switch to an SVN checkout of the 1.1.X branch () then see if you can recreate the problem at that level, it would tell us whether whatever has fixed the problem for current trunk has also been applied to 1.1.X, and thus will be in 1.1.2 when it is released. Unfortunately I can't recreate the problem with either 1.1.1 or the current 1.1.X branch, but then I don't have the same type of machine as you do, and it may be a problem that is specific to a certain build of Python.
If you can recreate on current 1.1.X branch code, then it would also help if you could make the change shown here:
to the django/template/debug.py file. That will reveal where the recursion is happening. Unfortunately due to the way TemplateSyntaxErrors wrap other exceptions, and a change made to Python 2.6, the real traceback where the problem is occurring is currently omitted from the debug information.
At this stage, the user is created, but the admin site does not allow me to edit the user object further. I can successfully edit and use the user object via the shell. | https://code.djangoproject.com/ticket/12843 | CC-MAIN-2016-30 | refinedweb | 869 | 68.57 |
Abstract base class for all ImageViz engines.
More...
#include <ImageViz/Engines/SoImageVizEngine.h>
Compute Mode This enum specifies whether the main input will be interpreted as a 3D volume or a stack of 2D images for processing.
Neighborhood Connectivity 3D.
Abort current processing as soon as possible.
If no processing is currently running, do nothing.
Returns the type identifier for this specific instance.
Implements SoTypedObject.
Returns true if the engine evaluation is in progress.
Wait for the end of engine evaluation.
This method blocks the current thread.
Event raised when the processing begins.
Event raised when processing ends and the result is available.
Important note: Do not use waitEvaluate() method in this event because this can cause a dead-lock during evaluation.
Event raised while processing is running.
Nb: This event is regularly called during processing. It can be used to observe progression of the computing (e.g. to display a progress bar). | https://developer.openinventor.com/refmans/9.9/RefManCpp/class_so_image_viz_engine.html | CC-MAIN-2020-10 | refinedweb | 153 | 62.04 |
I’m trying to replicate the methodology from this article, 538 Post about Most Repetitive Phrases, in which the author mined US presidential debate transcripts to determine the most repetitive phrases for each candidate.
I'm trying to implement this methodology with another dataset in R with the
tm
prune_substrings()
def prune_substrings(tfidf_dicts, prune_thru=1000):
pruned = tfidf_dicts
for candidate in range(len(candidates)):
# growing list of n-grams in list form
so_far = []
ngrams_sorted = sorted(tfidf_dicts[candidate].items(), key=operator.itemgetter(1), reverse=True)[:prune_thru]
for ngram in ngrams_sorted:
# contained in a previous aka 'better' phrase
for better_ngram in so_far:
if overlap(list(better_ngram), list(ngram[0])):
#print "PRUNING!! "
#print list(better_ngram)
#print list(ngram[0])
pruned[candidate][ngram[0]] = 0
# not contained, so add to so_far to prevent future subphrases
else:
so_far += [list(ngram[0])]
return pruned
tfidf_dicts
trump.tfidf.dict = {'we don't win': 83.2, 'you have to': 72.8, ... }
tfidf_dicts = {trump.tfidf.dict, rubio.tfidf.dict, etc }
prune_substrings
else if
A. create list : pruned as tfidf_dicts; a list of tfidf dicts for each candidate
B loop through each candidate:
- so_far = start an empty list of ngrams gone through so so_far
- ngrams_sorted = sorted member's tf-idf dict from smallest to biggest
- loop through each ngram in sorted
- loop through each better_ngram in so_far
- IF overlap b/w (below) == TRUE:
- better_ngram (from so_far) and
- ngram (from ngrams_sorted)
- THEN zero out tf-idf for ngram
- ELSE if (WHAT?!?)
- add ngram to list, so_far
C. return pruned, i.e. list of unique ngrams sorted in order
4 months later but here's my solution. I'm sure there is a more efficient solution, but for my purposes, it worked. The pythonic for-else doesn't translate to R. So the steps are different.
nngrams.
t, where each element of the list is a logical vector of length
nthat says whether ngram in question overlaps all other ngrams (but fix 1:x to be false automatically)
tinto a table,
t2
t2row sum is zero set elements 1:n to FALSE (i.e. no overlap)
Ouala!
#' GetPrunedList #' #' takes a word freq df with columns Words and LenNorm, returns df of nonoverlapping strings GetPrunedList <- function(wordfreqdf, prune_thru = 100) { #take only first n items in list tmp <- head(wordfreqdf, n = prune_thru) %>% select(ngrams = Words, tfidfXlength = LenNorm) #for each ngram in list: t <- (lapply(1:nrow(tmp), function(x) { #find overlap between ngram and all items in list (overlap = TRUE) idx <- overlap(tmp[x, "ngrams"], tmp$ngrams) #set overlap as false for itself and higher-scoring ngrams idx[1:x] <- FALSE idx })) #bind each ngram's overlap vector together to make a matrix t2 <- do.call(cbind, t) #find rows(i.e. ngrams) that do not overlap with those below idx <- rowSums(t2) == 0 pruned <- tmp[idx,] rownames(pruned) <- NULL pruned }
#' overlap #' OBJ: takes two ngrams (as strings) and to see if they overlap #' INPUT: a,b ngrams as strings #' OUTPUT: TRUE if overlap overlap <- function(a, b) { max_overlap <- min(3, CountWords(a), CountWords(b)) a.beg <- word(a, start = 1L, end = max_overlap) a.end <- word(a, start = -max_overlap, end = -1L) b.beg <- word(b, start = 1L, end = max_overlap) b.end <- word(b, start = -max_overlap, end = -1L) # b contains a's beginning w <- str_detect(b, coll(a.beg, TRUE)) # b contains a's end x <- str_detect(b, coll(a.end, TRUE)) # a contains b's beginning y <- str_detect(a, coll(b.beg, TRUE)) # a contains b's end z <- str_detect(a, coll(b.end, TRUE)) #return TRUE if any of above are true (w | x | y | z) } | https://codedump.io/share/TuRR6ZFmI9mr/1/understanding-another39s-text-mining-function-that-removes-similar-strings | CC-MAIN-2017-30 | refinedweb | 592 | 61.36 |
The view consists of the Html page or the jsp components that provide the user interface. View is what we see. It is mainly for presentation.
Java Bean: - It is a reusable component. It contains the setter and getter methods to set and get the values from the jsp page or database.
In session.
In this program we are using the java Beans to set the
values from the html page and set it to the jsp page which will works like a
controller.
setAttribute(): is used to set the object. A key is set to look up the object.
getAttribute(): is used to get the object.
The code of the program is given below:
<html> <head> <title>View and Session</title> </head> <body> <form method = "get" action = "JspAndSessionJsp.jsp"> Enter your first Name : <input type = "text" name = "firstName"><br> Enter your last Name : <input type = "text" name = "lastName"><br> Enter your email : <input type = "text" name = "email"><br> Enter the address : <input type = "text" name = "address"><br> <input type = "submit" name = "submit" value ="submit"><br> </form> </body> </html>
package Mybean; public class JspAndSession{ private String firstName; private String lastName; private String email; private String address;Email() { return email; } public void setEmail(String email) { this.email = email; } }
<%@ pageClick Here to go forward</a> </body> </html>
The output of the program is given below:: View Session View All Comments
Post your Comment | http://www.roseindia.net/jsp/jspsession/JspSessionView.shtml | CC-MAIN-2016-50 | refinedweb | 231 | 69.82 |
If you work with data in Python, you probably use Pandas. Pandas provides nearly instant gratification: sophisticated data processing routines can be implemented in a few lines of code. However, if you have used Pandas on large projects over many years, you may have had some challenges. Complex Pandas applications can produce Python code that is hard to maintain and error-prone. This happens because Pandas provides many ways to do the same thing, has inconsistent interfaces, and broadly supports in-place mutation. For those coming from Pandas, StaticFrame offers a more consistent interface and reduces opportunities for error. This article demonstrates ten reasons you might use StaticFrame instead of Pandas.
Why StaticFrame
After years of using Pandas to develop back-end financial systems, it became clear to me that Pandas was not the right tool for the job. Pandas's handling of labeled data and missing values, with performance close to NumPy, certainly accelerated my productivity. And yet, the numerous inconsistencies in Pandas's API led to hard-to-maintain code. Further, Pandas's support for in-place mutation led to serious opportunities for error. So in May of 2017 I began implementing a library more suitable for critical production systems.
Now, after years of development and refinement, we are seeing excellent results in our production systems by replacing Pandas with StaticFrame. Libraries and applications written with StaticFrame are easier to maintain and test. And we often see StaticFrame out-perform Pandas in large-scale, real-world use cases, even though, for many isolated operations, StaticFrame is not yet as fast as Pandas.
What follows are ten reasons to favor using StaticFrame over Pandas. As the primary author of StaticFrame, I am certainly biased in this presentation. However, having worked with Pandas since 2013, I hope to have some perspective to share.
All examples use Pandas 1.0.3 and StaticFrame 0.6.20. Imports use the following convention:
>>> import pandas as pd >>> import static_frame as sf
No. 1: Consistent and Discoverable Interfaces
An application programming interface (API) can be consistent in where functions are located, how functions are named, and the name and types of arguments those functions accept. StaticFrame deviates from Pandas's API to support greater consistency in all of these areas.
To create a
sf.Series or a
sf.Frame, you need constructors. Pandas places its
pd.DataFrame constructors in two places: on the root namespace (
pd, as commonly imported) and on the
pd.DataFrame class.
For example, JSON data is loaded from a function on the
pd namespace, while record data (an iterable of Python sequences) is loaded from the
pd.DataFrame class.
>>> pd.read_json('[{"name":"muon", "mass":0.106}, {"name":"tau", "mass":1.777}]') name mass 0 muon 0.106 1 tau 1.777 >>> pd.DataFrame.from_records([{"name":"muon", "mass":0.106}, {"name":"tau", "mass":1.777}]) name mass 0 muon 0.106 1 tau 1.777
Even though Pandas has specialized constructors, the default
pd.DataFrame constructor accepts a staggering diversity of inputs, including many of the same inputs as
pd.DataFrame.from_records().
>>> pd.DataFrame([{"name":"muon", "mass":0.106}, {"name":"tau", "mass":1.777}]) name mass 0 muon 0.106 1 tau 1.777
For the user, there is little benefit to this diversity and redundancy. StaticFrame places all constructors on the class they construct, and as much as possible, narrowly focuses their functionality. As they are easier to maintain, explicit, specialized constructors are common in StaticFrame. For example,
sf.Frame.from_json() and
sf.Frame.from_dict_records():
>>> sf.Frame.from_json('[{"name":"muon", "mass":0.106}, {"name":"tau", "mass":1.777}]') <Frame> <Index> name mass <<U4> <Index> 0 muon 0.106 1 tau 1.777 <int64> <<U4> <float64> >>> sf.Frame.from_dict_records([{"name":"muon", "mass":0.106}, {"name":"tau", "mass":1.777}]) <Frame> <Index> name mass <<U4> <Index> 0 muon 0.106 1 tau 1.777 <int64> <<U4> <float64>
Being explicit leads to lots of constructors. To help you find what you are looking for, StaticFrame containers expose an
interface attribute that provides the entire public interface of the calling class or instance as a
sf.Frame. We can filter this table to show only constructors by using a
sf.Frame.loc[] selection.
>>> sf.Frame.interface.loc[sf.Frame.interface['group'] == 'Constructor'] <Frame: Frame> <Index> cls_name group doc <<U18> <Index: signature> __init__(data, *, index, columns,... Frame Constructor from_arrow(value, *, index_depth,... Frame Constructor Convert an Arrow ... from_concat(frames, *, axis, unio... Frame Constructor Concatenate multi... from_concat_items(items, *, axis,... Frame Constructor Produce a Frame w... from_csv(fp, *, index_depth, inde... Frame Constructor Specialized versi... from_delimited(fp, *, delimiter, ... Frame Constructor Create a Frame fr... from_dict(mapping, *, index, fill... Frame Constructor Create a Frame fr... from_dict_records(records, *, ind... Frame Constructor Frame constructor... from_dict_records_items(items, *,... Frame Constructor Frame constructor... from_element(element, *, index, c... Frame Constructor Create a Frame fr... from_element_iloc_items(items, *,... Frame Constructor Given an iterable... from_element_loc_items(items, *, ... Frame Constructor This function is ... from_elements(elements, *, index,... Frame Constructor Create a Frame fr... from_hdf5(fp, *, label, index_dep... Frame Constructor Load Frame from t... from_items(pairs, *, index, fill_... Frame Constructor Frame constructor... from_json(json_data, *, dtypes, n... Frame Constructor Frame constructor... from_json_url(url, *, dtypes, nam... Frame Constructor Frame constructor... from_pandas(value, *, index_const... Frame Constructor Given a Pandas Da... from_parquet(fp, *, index_depth, ... Frame Constructor Realize a Frame f... from_records(records, *, index, c... Frame Constructor Construct a Frame... from_records_items(items, *, colu... Frame Constructor Frame constructor... from_series(series, *, name, colu... Frame Constructor Frame constructor... from_sql(query, *, connection, in... Frame Constructor Frame constructor... from_sqlite(fp, *, label, index_d... Frame Constructor Load Frame from t... from_structured_array(array, *, i... Frame Constructor Convert a NumPy s... from_tsv(fp, *, index_depth, inde... Frame Constructor Specialized versi... from_xlsx(fp, *, label, index_dep... Frame Constructor Load Frame from t... <<U94> <<U5> <<U17> <<U83>
No. 2: Consistent and Colorful Display
Pandas displays its containers in diverse ways. For example, a
pd.Series is shown with its name and type, while a
pd.DataFrame shows neither of those attributes. If you display a
pd.Index or
pd.MultiIndex, you get a third approach: a string suitable for
eval() which is inscrutable when large.
>>> df = pd.DataFrame.from_records([{'symbol':'c', 'mass':1.3}, {'symbol':'s', 'mass':0.1}], index=('charm', 'strange')) >>> df symbol mass charm c 1.3 strange s 0.1 >>> df['mass'] charm 1.3 strange 0.1 Name: mass, dtype: float64 >>> df.index Index(['charm', 'strange'], dtype='object')
StaticFrame offers a consistent, configurable display for all containers. The display of
sf.Series,
sf.Frame,
sf.Index, and
sf.IndexHierarchy all share a common implementation and design. A priority of that design is to always make explicit container classes and underlying array types.
>>> f = sf.Frame.from_dict_records_items((('charm', {'symbol':'c', 'mass':1.3}), ('strange', {'symbol':'s', 'mass':0.1}))) >>> f <Frame> <Index> symbol mass <<U6> <Index> charm c 1.3 strange s 0.1 <<U7> <<U1> <float64> >>> f['mass'] <Series: mass> <Index> charm 1.3 strange 0.1 <<U7> <float64> >>> f.columns <Index> symbol mass <<U6>
As much time is spent visually exploring the contents of these containers, StaticFrame offers numerous display configuration options, all exposed through the
sf.DisplayConfig class. For persistent changes,
sf.DisplayConfig instances can be passed to
sf.DisplayActive.set(); for one-off changes,
sf.DisplayConfig instances can be passed to the container's
display() method.
While
pd.set_option() can similarly be used to set Pandas display characteristics, StaticFrame provides more extensive options for making types discoverable. As shown in the following terminal animation, specific types can be colored or type annotations can be removed entirely.
No. 3: Immutable Data: Efficient Memory Management without Defensive Copies
Pandas displays inconsistent behavior in regard to ownership of data inputs and data exposed from within containers. In some cases, it is possible to mutate NumPy arrays "behind-the-back" of Pandas, exposing opportunities for undesirable side-effects and coding errors.
For example, if we supply a 2D array to a
pd.DataFrame, the original reference to the array can be used to "remotely" change the values within the
pd.DataFrame. In this case, the
pd.DataFrame does not protect access to its data, serving only as a wrapper of a shared, mutable array.
>>> a1 = np.array([[0.106, -1], [1.777, -1]]) >>> df = pd.DataFrame(a1, index=('muon', 'tau'), columns=('mass', 'charge')) >>> df mass charge muon 0.106 -1.0 tau 1.777 -1.0 >>> a1[0, 0] = np.nan # Mutating the original array. >>> df # Mutation reflected in the DataFrame created from that array. mass charge muon NaN -1.0 tau 1.777 -1.0
Similarly, sometimes NumPy arrays exposed from the
values attribute of a
pd.Series or a
pd.DataFrame can be mutated, changing the values within the
DataFrame.
>>> a2 = df['charge'].values >>> a2 array([-1., -1.]) >>> a2[1] = np.nan # Mutating the array from .values. >>> df # Mutation is reflected in the DataFrame. mass charge muon NaN -1.0 tau 1.777 NaN
With StaticFrame, there is no vulnerability of "behind the back" mutation: as StaticFrame manages immutable NumPy arrays, references are only held to immutable arrays. If a mutable array is given at initialization, an immutable copy will be made. Immutable arrays cannot be mutated from containers or from direct access to underlying arrays.
>>> a1 = np.array([[0.106, -1], [1.777, -1]]) >>> f = sf.Frame(a1, index=('muon', 'tau'), columns=('mass', 'charge')) >>> a1[0, 0] = np.nan # Mutating the original array has no affect on the Frame >>> f <Frame> <Index> mass charge <<U6> <Index> muon 0.106 -1.0 tau 1.777 -1.0 <<U4> <float64> <float64> >>> f['charge'].values[1] = np.nan # An immutable array cannot be mutated Traceback (most recent call last): File "<console>", line 1, in <module> ValueError: assignment destination is read-only
While immutable data reduces opportunities for error, it also offers performance advantages. For example, when replacing column labels with
sf.Frame.relabel(), underlying data is not copied. Instead, references to the same immutable arrays are shared between the old and new containers. Such "no-copy" operations are thus fast and light-weight. This is in contrast to what happens when doing the same thing in Pandas: the corresponding Pandas method,
df.DataFrame.rename(), is forced to make a defensive copy of all underlying data.
>>> f.relabel(columns=lambda x: x.upper()) # Underlying arrays are not copied <Frame> <Index> MASS CHARGE <<U6> <Index> muon 0.106 -1.0 tau 1.777 -1.0 <<U4> <float64> <float64>
No. 4: Assignment is a Function
While Pandas permits in-place assignment, sometimes such operations cannot provide an appropriate derived type, resulting in undesirable behavior. For example, a float assigned into an integer
pd.Series will have its floating-point components truncated without warning or error.
>>> s = pd.Series((-1, -1), index=('tau', 'down')) >>> s tau -1 down -1 dtype: int64 >>> s['down'] = -0.333 # Assigning a float. >>> s # The -0.333 value was truncated to 0 tau -1 down 0 dtype: int64
With StaticFrame's immutable data model, assignment is a function that returns a new container. This permits evaluating types to insure that the resultant array can completely contain the assigned value.
>>> s = sf.Series((-1, -1), index=('tau', 'down')) >>> s <Series> <Index> tau -1 down -1 <<U4> <int64> >>> s.assign['down'](-0.333) # The float is assigned without truncation <Series> <Index> tau -1.0 down -0.333 <<U4> <float64>
StaticFrame uses a special
assign interface for performing assignment function calls. On a
sf.Frame, this interface exposes a
sf.Frame.assign.loc[] interface that can be used to select the target of assignment. Following this selection, the value to be assigned is passed through a function call.
>>> f = sf.Frame.from_dict_records_items((('charm', {'charge':0.666, 'mass':1.3}), ('strange', {'charge':-0.333, 'mass':0.1}))) >>> f <Frame> <Index> charge mass <<U6> <Index> charm 0.666 1.3 strange -0.333 0.1 <<U7> <float64> <float64> >>> f.assign.loc['charm', 'charge'](Fraction(2, 3)) # Assigning to a loc-style selection <Frame> <Index> charge mass <<U6> <Index> charm 2/3 1.3 strange -0.333 0.1 <<U7> <object> <float64>
No. 5: Iterators are for Iterating and Function Application
Pandas has separate functions for iteration and function application. For iteration on a
pd.DataFrame there is
pd.DataFrame.iteritems(),
pd.DataFrame.iterrows(),
pd.DataFrame.itertuples(), and
pd.DataFrame.groupby(); for function application on a
pd.DataFrame there is
pd.DataFrame.apply() and
pd.DataFrame.applymap().
But since function application requires iteration, it is sensible for function application to be built on iteration. StaticFrame organizes iteration and function application by providing families of iterators (such as
Frame.iter_array() or
Frame.iter_group_items()) that, with a chained call to
apply(), can also be used for function application. Functions for applying mapping types (such as
map_any() and
map_fill()) are also available on iterators. This means that once you know how you want to iterate, function application is a just a method away.
For example, we can create a
sf.Frame with
sf.Frame.from_records():
>>> f = sf.Frame.from_records(((0.106, -1.0, 'lepton'), (1.777, -1.0, 'lepton'), (1.3, 0.666, 'quark'), (0.1, -0.333, 'quark')), columns=('mass', 'charge', 'type'), index=('muon', 'tau', 'charm', 'strange')) >>> f <Frame> <Index> mass charge type <<U6> <Index> muon 0.106 -1.0 lepton tau 1.777 -1.0 lepton charm 1.3 0.666 quark strange 0.1 -0.333 quark
We can iterate over a columns values with
sf.Series.iter_element(). We can use the same iterator to do function application by using the
apply() method found on the object returned from
sf.Series.iter_element(). The same interface is found on both
sf.Series and
sf.Frame.
>>> tuple(f['type'].iter_element()) ('lepton', 'lepton', 'quark', 'quark') >>> f['type'].iter_element().apply(lambda e: e.upper()) <Series> <Index> muon LEPTON tau LEPTON charm QUARK strange QUARK <<U7> <<U6> >>> f[['mass', 'charge']].iter_element().apply(lambda e: format(e, '.2e')) <Frame> <Index> mass charge <<U6> <Index> muon 1.06e-01 -1.00e+00 tau 1.78e+00 -1.00e+00 charm 1.30e+00 6.66e-01 strange 1.00e-01 -3.33e-01 <<U7> <object> <object>
For row or column iteration on a
sf.Frame, a family of methods allows specifying the type of container to be used for the iterated rows or columns, i.e, with an array, with a
NamedTuple, or with a
sf.Series (
iter_array(),
iter_tuple(),
iter_series(), respectively). These methods take an axis argument to determine whether iteration is by row or by column, and similarly expose an
apply() method for function application. To apply a function to columns, we can do the following.
>>> f[['mass', 'charge']].iter_array(axis=0).apply(np.sum) <Series> <Index> mass 3.283 charge -1.667 <<U6> <float64>
Applying a function to a row instead of a column simply requires changing the axis argument.
>>> f.iter_series(axis=1).apply(lambda s: s['mass'] > 1 and s['type'] == 'quark') <Series> <Index> muon False tau False charm True strange False <<U7> <bool>
Group-by operations are just another form of iteration, with an identical interface for iteration and function application.
>>> f.iter_group('type').apply(lambda f: f['mass'].mean()) <Series> <Index> lepton 0.9415 quark 0.7000000000000001 <<U6> <float64>
No. 6: Strict, Grow-Only Frames
An efficient use of a
pd.DataFrame is to load initial data, then produce derived data by adding additional columns. This approach leverages the columnar organization of types and underlying arrays: adding new columns does not require re-allocating old columns.
StaticFrame makes this approach less vulnerable to error by offering a strict, grow-only version of a
sf.Frame called a
sf.FrameGO. For example, once a
sf.FrameGO is created, new columns can be added while existing columns cannot be overwritten or mutated in-place.
>>> f = sf.FrameGO.from_records(((0.106, -1.0, 'lepton'), (1.777, -1.0, 'lepton'), (1.3, 0.666, 'quark'), (0.1, -0.333, 'quark')), columns=('mass', 'charge', 'type'), index=('muon', 'tau', 'charm', 'strange')) >>> f['positive'] = f['charge'] > 0 >>> f <FrameGO> <IndexGO> mass charge type positive <<U8> <Index> muon 0.106 -1.0 lepton False tau 1.777 -1.0 lepton False charm 1.3 0.666 quark True strange 0.1 -0.333 quark False
This limited form of mutation meets a practical need. Further, converting back and forth from a
sf.Frame to a
sf.FrameGO (using
Frame.to_frame_go() and
FrameGO.to_frame()) is a no-copy operation: underlying immutable arrays can be shared between the two containers.
No. 7: Dates are not Nanoseconds
Pandas models all date or timestamp values as NumPy
datetime64[ns] (nanosecond) arrays, regardless of if nanosecond-level resolution is practical or appropriate. This creates a "Y2262 problem" for Pandas: dates beyond 2262-04-11 cannot be expressed. While I can create a
pd.DatetimeIndex up to 2262-04-11, one day further and Pandas raises an error.
>>> pd.date_range('1980', '2262-04-11') DatetimeIndex(['1980-01-01', '1980-01-02', '1980-01-03', '1980-01-04', '1980-01-05', '1980-01-06', '1980-01-07', '1980-01-08', '1980-01-09', '1980-01-10', ... '2262-04-02', '2262-04-03', '2262-04-04', '2262-04-05', '2262-04-06', '2262-04-07', '2262-04-08', '2262-04-09', '2262-04-10', '2262-04-11'], dtype='datetime64[ns]', length=103100, freq='D') >>> pd.date_range('1980', '2262-04-12') Traceback (most recent call last): pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 2262-04-12 00:00:00
As indices are often used for date-time values far less granular than nanoseconds (such as dates, months, or years), StaticFrame offers the full range of NumPy typed
datetime64 indices. This permits exact date-time type specification, and avoids the limits of nanosecond-based units.
While not possible with Pandas, creating an index of years or dates extending to 3000 is simple with StaticFrame.
>>> sf.IndexYear.from_year_range(1980, 3000).tail() <IndexYear> 2996 2997 2998 2999 3000 <datetime64[Y]> >>> sf.IndexDate.from_year_range(1980, 3000).tail() <IndexDate> 3000-12-27 3000-12-28 3000-12-29 3000-12-30 3000-12-31 <datetime64[D]>
No. 8: Consistent Interfaces for Hierarchical Indices
Hierarchical indices permit fitting many dimensions into one. Using hierarchical indices, n-dimensional data can be encoded into a single
sf.Series or
sf.Frame.
A key feature of hierarchical indices is partial selection at arbitrary depths, whereby a selection can be composed from the intersection of selections at each depth level. Pandas offers numerous ways to express those inner depth selections.
One way is by overloading
pd.DataFrame.loc[]. When using Pandas's hierarchical index (
pd.MultiIndex), the meaning of positional arguments in a
pd.DataFrame.loc[] selection becomes dynamic. It is this that makes Pandas code using hierarchical indices hard to maintain. We can see this by creating a
pd.DataFrame and setting a
pd.MultiIndex.
>>> df = pd.DataFrame.from_records([('muon', 0.106, -1.0, 'lepton'), ('tau', 1.777, -1.0, 'lepton'), ('charm', 1.3, 0.666, 'quark'), ('strange', 0.1, -0.333, 'quark')], columns=('name', 'mass', 'charge', 'type')) >>> df.set_index(['type', 'name'], inplace=True) >>> df mass charge type name lepton muon 0.106 -1.000 tau 1.777 -1.000 quark charm 1.300 0.666 strange 0.100 -0.333
Similar to 2D arrays in NumPy, when two arguments are given to
pd.DataFrame.loc[], the first argument is a row selector, the second argument is a column selector.
>>> df.loc['lepton', 'mass'] # Selects "lepton" from row, "mass" from columns name muon 0.106 tau 1.777 Name: mass, dtype: float64
Yet, in violation of that expectation, sometimes Pandas will not use the second argument as a column selection, but instead as a row selection in an inner depth of the
pd.MultiIndex.
>>> df.loc['lepton', 'tau'] # Selects lepton and tau from rows mass 1.777 charge -1.000 Name: (lepton, tau), dtype: float64
To handle this ambiguity, Pandas offers two alternatives. If a row and a column selection is required, the expected behavior can be restored by wrapping the hierarchical row selection within a
pd.IndexSlice[] selection modifier. Or, if an inner-depth selection is desired without using a
pd.IndexSlice[], the
pd.DataFrame.xs() method can be used.
>>> df.loc[pd.IndexSlice['lepton', 'tau'], 'charge'] -1.0 >>> df.xs(level=1, key='tau') mass charge type lepton 1.777 -1.0
This inconsistency in the meaning of the positional arguments given to
pd.DataFrame.loc[] is unnecessary and makes Pandas code harder to maintain: what is intended from the usage of
pd.DataFrame.loc[] becomes ambiguous without a
pd.IndexSlice[]. Further, providing multiple ways to solve this problem is also a shortcoming, as it is preferable to have one obvious way to do things in Python.
StaticFrame's
sf.IndexHierarchy offers more consistent behavior. We will create an equivalent
sf.Frame and set a
sf.IndexHierarchy.
>>> f = sf.Frame.from_records((('muon', 0.106, -1.0, 'lepton'), ('tau', 1.777, -1.0, 'lepton'), ('charm', 1.3, 0.666, 'quark'), ('strange', 0.1, -0.333, 'quark')), columns=('name', 'mass', 'charge', 'type')) >>> f = f.set_index_hierarchy(('type', 'name'), drop=True) >>> f <Frame> <Index> mass charge <<U6> <IndexHierarchy: ('type', 'name')> lepton muon 0.106 -1.0 lepton tau 1.777 -1.0 quark charm 1.3 0.666 quark strange 0.1 -0.333 <<U6> <<U7> <float64> <float64>
Unlike Pandas, StaticFrame is consistent in what positional
sf.Frame.loc[] arguments mean: the first argument is always a row selector, the second argument is always a column selector. For selection within a
sf.IndexHierarchy, the
sf.HLoc[] selection modifier is required to specify selection at arbitrary depths within the hierarchy. There is one obvious way to select inner depths. This approach makes StaticFrame code easier to understand and maintain.
>>> f.loc[sf.HLoc['lepton']] <Frame> <Index> mass charge <<U6> <IndexHierarchy: ('type', 'name')> lepton muon 0.106 -1.0 lepton tau 1.777 -1.0 <<U6> <<U4> <float64> <float64> >>> f.loc[sf.HLoc[:, ['muon', 'strange']], 'mass'] <Series: mass> <IndexHierarchy: ('type', 'name')> lepton muon 0.106 quark strange 0.1 <<U6> <<U7> <float64>
No. 9: Indices are Always Unique
It is natural to think index and column labels on a
pd.DataFrame are unique identifiers: their interfaces suggest that they are like Python dictionaries, where keys are always unique. Pandas indices, however, are not constrained to unique values. Creating an index on a
pd.Frame with duplicates means that, for some single-label selections, a
pd.Series will be returned, but for other single-label selections, a
pd.DataFrame will be returned.
>>> df = pd.DataFrame.from_records([('muon', 0.106, -1.0, 'lepton'), ('tau', 1.777, -1.0, 'lepton'), ('charm', 1.3, 0.666, 'quark'), ('strange', 0.1, -0.333, 'quark')], columns=('name', 'mass', 'charge', 'type')) >>> df.set_index('charge', inplace=True) # Creating an index with duplicated labels >>> df name mass type charge -1.000 muon 0.106 lepton -1.000 tau 1.777 lepton 0.666 charm 1.300 quark -0.333 strange 0.100 quark >>> df.loc[-1.0] # Selecting a non-unique label results in a pd.DataFrame name mass type charge -1.0 muon 0.106 lepton -1.0 tau 1.777 lepton >>> df.loc[0.666] # Selecting a unique label results in a pd.Series name charm mass 1.3 type quark Name: 0.666, dtype: object
Pandas support of non-unique indices makes client code more complicated by having to handle selections that sometimes return a
pd.Series and other times return a
pd.DataFrame. Further, uniqueness of indices is often a simple and effective check of data coherency.
Some Pandas interfaces, such as
pd.concat() and
pd.DataFrame.set_index(), provide an optional check of uniqueness with a parameter named
verify_integrity. Surprisingly, by default Pandas disables
verify_integrity.
>>> df.set_index('type', verify_integrity=True) Traceback (most recent call last): ValueError: Index has duplicate keys: Index(['lepton', 'quark'], dtype='object', name='type')
In StaticFrame, indices are always unique. Attempting to set a non-unique index will raise an exception. This constraint eliminates opportunities for mistakenly introducing duplicates in indices.
>>> f = sf.Frame.from_records((('muon', 0.106, -1.0, 'lepton'), ('tau', 1.777, -1.0, 'lepton'), ('charm', 1.3, 0.666, 'quark'), ('strange', 0.1, -0.333, 'quark')), columns=('name', 'mass', 'charge', 'type')) >>> f <Frame> <Index> name mass charge type <<U6> <Index> 0 muon 0.106 -1.0 lepton 1 tau 1.777 -1.0 lepton 2 charm 1.3 0.666 quark 3 strange 0.1 -0.333 quark <int64> <<U7> <float64> <float64> <<U6> >>> f.set_index('type') Traceback (most recent call last): static_frame.core.exception.ErrorInitIndex: labels (4) have non-unique values (2)
No. 10: There and Back Again to Pandas
StaticFrame is designed to work in environments side-by-side with Pandas. Going back and forth is made possible with specialized constructors and exporters, such as
Frame.from_pandas() or
Series.to_pandas().
>>> df = pd.DataFrame.from_records([('muon', 0.106, -1.0, 'lepton'), ('tau', 1.777, -1.0, 'lepton'), ('charm', 1.3, 0.666, 'quark'), ('strange', 0.1, -0.333, 'quark')], columns=('name', 'mass', 'charge', 'type')) >>> df name mass charge type 0 muon 0.106 -1.000 lepton 1 tau 1.777 -1.000 lepton 2 charm 1.300 0.666 quark 3 strange 0.100 -0.333 quark >>> sf.Frame.from_pandas(df) <Frame> <Index> name mass charge type <object> <Index> 0 muon 0.106 -1.0 lepton 1 tau 1.777 -1.0 lepton 2 charm 1.3 0.666 quark 3 strange 0.1 -0.333 quark <int64> <object> <float64> <float64> <object>
Conclusion
The concept of a "data frame" object came long before Pandas 0.1 release in 2009: the first implementation of a data frame may have been as early as 1991 in the S language, a predecessor of R. Today, the data frame finds realization in a wide variety of languages and implementations. Pandas will continue to provide an excellent resource to a broad community of users. However, for situations where correctness and code maintainability are critical, StaticFrame offers an alternative designed to be more consistent and reduce opportunities for error.
For more information about StaticFrame, see the documentation () or project site ().
Posted on by:
Christopher Ariza
Christopher Ariza is Partner and Head of Investment Systems, the core software engineering team at Research Affiliates, a global leader in investment strategies and research.
Read Next
Advance your Python skills by Building a Whatsapp Chat Analyser: a Guided Project
Nityesh Agarwal -
#100DaysOfPython Day 1: Hello World, Data Types & Strings
Tae'lur Alexis 🦄⚛ -
#100DaysOfPython Day 2: Functions, Scope and Best Practices
Tae'lur Alexis 🦄⚛ -
Discussion
Really interesting read. This is the 1st I've heard of static-frame but intend to adopt it into my work-flow. My biggest challenge with Pandas has always been dates: the hours I've lost trying to manipulate dates and create date differences in Pandas are innumerable and it seems like static-frame will vastly improve on this.
Thank you for your comments. Please post to the GitHub site any issues or questions that might come up: we are always looking for feedback!
Thanks, that was a very interesting read, and well done on
static-frame, I'm excited to try it out :) One thing that has repeatedly been a stumbling block for me in
pandasis the somewhat inconsistent/confusing semantics of axis arguments. Is that something
static-framehas also improved upon, or maybe is planning to?
Many thanks for your comments and question. I agree that the
axisargument is confusing: while
axis0 generally means rows and
axis1 generally means columns, there are plenty of places where this association becomes stretched or seems inverted. However, as much as possible, StaticFrame follows NumPy to resolve such potentially arbitrary choices: thus
axis0 and 1 have the same meaning in
np.sum()as in
sf.Frame.sum(). Given NumPy's long history, following NumPy rather than inventing a new notation seems practical.
Note that in some cases StaticFrame offers more numerous and narrow interfaces that remove the need for an
axisargument entirely. For example, there is no
axisargument in
sf.Frame.drop: instead,
__getitem__,
loc, and
ilocinterfaces are exposed on
sf.Frame.drop. So a column named "a" can be dropped with
sf.Frame.drop['a'], or all columns including and after "a", and all rows including and after "b", can be dropped with
sf.Frame.drop.loc['b':, 'a':].
Thank you for the reply! I understand the value of not breaking away from the
numpytradition. Coming up with alternative interfaces which eliminate the need for an
axisargument in the first place (when possible) seems like a good way to improve ergonomics while being reasonably conservative about this.
The different semantics of
axisin
pandas
.dropvs. e.g.
.sumis my go-to example of this inconsistency, so I did notice when playing around with
static-framethat its version of
.dropelegantly avoids
axisaltogether :) Well done! | https://dev.to/flexatone/ten-reasons-to-use-staticframe-instead-of-pandas-4aad | CC-MAIN-2020-34 | refinedweb | 4,765 | 62.14 |
15 February 2011 17:37 [Source: ICIS news]
TORONTO (ICIS)--?xml:namespace>
Austria-based biofuels engineering firm BDI BioEnergy said it won a €3.1m ($4.2m) contract to expand the plant's capacity and upgrade technologies so that the unit could use “difficult waste products” from the meat and food sectors as feedstock for biodiesel.
The Motherwell plant would use the same fat preparation technology BDI BioEnergy had installed at a biodiesel plant in
BDI BioEnergy expected to complete its work in less than 10 months, it said.
A BDI spokeswoman would not disclose capacity details. Argent officials were not immediately available for additional comment.
According to Argent’s website, the biodiesel plant at Motherwell is currently producing a weekly average of at least 875 tonnes, or 1m litres, of biodiesel.
($1 = €0.74) | http://www.icis.com/Articles/2011/02/15/9435651/argent-energy-to-expand-biodiesel-plant-in-scotland.html | CC-MAIN-2015-06 | refinedweb | 135 | 53.21 |
Microsoft Office 2000 uses animated characters, which it
calls Office Assistants, to help users and provide assistance. With Microsoft
Agent, you can add such characters to your own applications. You can even use
the Office Assistants themselves – like The Genius or F1. Characters designed
for use with Agent can do a variety of tasks – like speech synthesis and
recognition, in addition to displaying a variety of animations.
To be able to use this technology, you must have:
All these components are available from.
In addition, to use the Microsoft Office characters, you must have Office 2000.
Office 97 characters are not compatible with Agent.
Microsoft Agent is provided as an ActiveX control DLL. To
use it within .NET, you can use the AxImp.exe utility provided with the .NET
Framework SDK. Create a folder where you want to keep the imported libraries,
and then use the command:
aximp %systemroot%\msagent\agentctl.dll
The makefile included in the download does this for you. This command should create two files: AxAgentObjects.dll
and AgentObjects.dll. You are now ready to use Agent with
.NET.
Agent programming is easy because it
relies completely on COM, and we don’t have to mess around with PInvoke or
something. The first step to using Agent is to add the Agent ActiveX control in
AxAgentObjects.dll to your project. Put it anywhere on a form; it is invisible
at runtime. Next, you should add a member variable of type
AgentObjects.IAgentCtlCharacterEx (from AgentObjects.dll) to your class. You
should import the AgentObjects namespace into your application; this simplifies
several tasks. Essentially, you must, at a minimum:
These steps shall simply load and
display the character.
The mandatory Hello World sample requires
the Genie character. The sample simply loads the character and displays him.
You can also make him say something. To make the character talk, the character’s
speak method is called, like this:
if(TextToSpeak.Text.Length == 0) // Don't speak a zero-length
return; // string.
//Make the character speak.
Character.Speak(TextToSpeak.Text, null);
As you can see, it’s very simple to use Agent.
Unlike the office assistants, you can make Agent characters
talk and also respond to voice commands. Any commands you add shall also be
available in the character’s context menu. Speech recognition is, by default,
enabled only when you hold down the scroll lock key. All you have to do is to add
new commands to the character object’s Commands collection. Then, you must add an
event handler for the Agent control’s Command event. A boxed IAgentCtlUserInput
object is supplied to the event handler as a parameter. You can access the
command recognised by the IAgentCtlUserInput object’s Name property.
The Interactive Hello World sample demonstrates basic speech
recognition, as well as playing animation. First, the control and the character
(Robby) are loaded and initialised, and then two commands are added:
protected void LoadButton_Click(object sender, System.EventArgs e)
{
// Load the Robby character.
AxAgent.Characters.Load("Robby", (object)"ROBBY.ACS");
Character = AxAgent.Characters["Robby"];
StatusLabel.Text = Character.Name + " loaded.";
// Set the language to US English.
Character.LanguageID = 0x409;
// Display the character.
Character.Show(null);
LoadButton.Enabled = false;
// Display name for the commands.
Character.Commands.Caption = "Hello World";
Character.Commands.Add("Hello", // Command name
(object)"Say Hello", // Display name
(object)"([say](hello | hi) | good (day | morning | evening))", // SR String
(object)true, // Enabled
(object)true); // Visible
Character.Commands.Add("Goodbye", // Command name
(object)"Goodbye", // Display name
(object)"(bye | goodbye | exit | close | quit)", // SR String
(object)true, // Enabled
(object)true); // Visible
PromptLabel.Visible = true;
}
The speech recognition strings offer several options that
the user can say; for example, saying ‘goodbye’, ‘bye’ and ‘close’ have the
same effect. In all cases, the command name is passed to the event handler, so
you don’t have to account for the various variations of the recognised string.
In the event handler, a test is done for the command:
protected void AxAgent_Command(object sender, AxAgentObjects._AgentEvents_CommandEvent e)
{
IAgentCtlUserInput ui;
ui = (IAgentCtlUserInput)e.userInput;
if(ui.Name == "Hello")
{
Character.Speak((object)"Hello. My name is Robby." +
" Pleased to meet you.", null);
PromptLabel.Text = "Say goodbye to dismiss Robby.";
}
if(ui.Name == "Goodbye")
{
Character.Speak((object)"It was nice talking to" +
" you. Goodbye.", null);
Character.Play("Wave");
Character.Play("Hide");
}
}
Animations are played with the character object’s play
method.
If you see the
Microsoft Agent site, you will see a Visual Basic sample that enumerates all
animations present in a character. The AgentDemo application also does the same
thing, only in C#. You can load all Agent and Microsoft Office characters and
see a list of animations. Here’s a screenshot (with the Office 2000 F1
character loaded):
This app is kinda redundant now, because Microsoft ships just such a sample with Beta 2.
I just converted what I had done for Beta 1 to Beta 2 code.
Many people think that the Office Assistants are irritating.
You must take care not to make your characters irritating, but instead helpful,
and always provide an easily accessible option to turn them off.
15 Mar 2002 -. | https://www.codeproject.com/Articles/1138/Creating-Cool-Agent-User-Interfaces?fid=2202&df=90&mpp=25&prof=True&sort=Position&view=Normal&spc=Relaxed&select=483685&fr=26 | CC-MAIN-2019-09 | refinedweb | 849 | 51.65 |
50894/can-t-use-read-for-html2text
I'm making a Python program that searches a webpage for a word. Although, when I try
website = urllib.request.urlopen(url)
content = website.read()
website.close()
test = html2text.html2text(content)
print(test)
I get this error :
test = html2text.html2text(content)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site- packages/html2text/__init__.py", line 840, in html2text
return h.handle(html)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site- packages/html2text/__init__.py", line 129, in handle
self.feed(data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/html2text/__init__.py", line 125, in feed
data = data.replace("</' + 'script>", "</ignore>")
TypeError: a bytes-like object is required, not 'str'
I'm new to Python, so I'm not sure how to deal with this error.
Python 3.5, Mac.
decode() the content with the charset sent inside the Charset header:
resource = urllib.request.urlopen(url)
content = resource.read()
charset = resource.headers.get_content_charset()
content = content.decode(charset)
your programme is fine until you start ...READ MORE
Your code is good until you get ...READ MORE
Hi, nice question.
So what I daily use ...READ MORE
Hi, good question.
It is a very simple ...READ MORE
You missed a few login data forms, ...READ MORE
You want to avoid interfering with this ...READ MORE
raw_input() was renamed to input() so now input() returns the exact string ...READ MORE
You can try the following:
from xml.dom.minidom import ...READ MORE
The print() is getting called multiple times ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/50894/can-t-use-read-for-html2text | CC-MAIN-2021-49 | refinedweb | 291 | 55.3 |
Opened 5 months ago
Closed 5 months ago
Last modified 5 months ago
#13440 closed bug (duplicate)
putStr doesn't have the same behavior in GHCi and after compilation with GHC
Description
Hi,
main :: IO () main = do { putStr "hello"; x <- getChar; -- or x <- getLine; putStr "x = "; print x; }
the program works in GHCi and after compilation with GHC. in GHCi it works fine but after compilation it does not work fine. in GHCi the first line of code is evaluated the first while after compilation is the second line which is evaluated first. so in GHCi we have as a result:
hellom x = 'm'
and after compilation we have as a result:
m hellox = 'm'
After compilation, putStr is not evaluated the first. It is getChar or getLine. In GHCi the behavior is correct. But after compilation the behavior is strange.
Change History (5)
comment:1 Changed 5 months ago by
comment:2 Changed 5 months ago by
for RyanGIScott
yes, I am on Windows. I use a standard Windows console (cmd.exe)
I also use Linux Debian8 under virtualBox.
I have just tested this program on Linux and here are the results.
With GHCi the result is:
*Main> main hellomx = 'm' *Main>
It seems much better than GHCi on Windows.
After compiling, the result is the same as Windows.
vanto@debian:~/sourcehs$./test m hellox = 'm'
On the other hand, I used runghc to test the program on Windows and Linux and here are the results:
with Windows
the result is the same as compilation on Windows
c:\sourcehs>runghc test.hs m hellox = 'm'
with Linux
the result is the same as GHCi on Windows
vanto@debian:~/sourcehs$ runghc test.hs hellom x = 'm'
Hope this help!
comment:3 Changed 5 months ago by
On Linux I use GHC version: 8.0.2
comment:4 Changed 5 months ago by
As it turns out, this is a duplicate of a long-standing bug in GHCi, #2189. Fixing that bug will probably require rewriting the whole IO manager to use native Win32 IO (see #11394), but luckily, someone is working on this.
Until then, I can offer you two workarounds.
- If you want to have a stronger guarantee that
"hello"will be printed first, you can use
hFlush stdoutto force this:
import System.IO main :: IO () main = do { putStr "hello"; hFlush stdout; x <- getChar; -- or x <- getLine; putStr "x = "; print x; }
- Alternatively, you can try a different buffering strategy. By default,
stdout's buffering mode is
NoBuffering(which should, in theory, mean that all output is immediately printed to the screen, were it not for #2189). But you can change the buffering mode to something else:
import System.IO main :: IO () main = do { hSetBuffering stdout $ BlockBuffering $ Just 1; putStr "hello"; x <- getChar; -- or x <- getLine; putStr "x = "; print x; }
This does buffer the output, but only 1 character at a time.
I've experimentally verified that both of these workaround work on my Windows machine, on both
cmd.exe and MSYS2.
Are you on Windows? If so, what console are you using? I know there are several issues regarding input/output buffering which might explain the discrepancies you're seeing. | https://ghc.haskell.org/trac/ghc/ticket/13440 | CC-MAIN-2017-34 | refinedweb | 531 | 62.88 |
In this blog, I’ll be telling on how SUSI admins can access list of all the registered users from SUSI-server. Following this, they may modify/edit user role of any registered user..
All the users who are not logged in but interacting with SUSI are anonymous users. These are only subject to chat with SUSI, login, signup or may use forgot password service. Once a user login to the server, a token is generated and sent back to client to maintain the identity, hence acknowledging them. Privileged users are those who have special rights with them. These are more like moderators with much special rights than any other user. At the top level of the hierarchy are the admins. These users have more rights than anyone. They can change role of any other user, override decision of any privileged user as well.
Let us now look at the control flow of this.
First things first, make a component of User List in the project. Let us name it ListUsers and since it has to be accessible by those users who possess ADMIN rights, you will find it enclosed in Admin package in components folder. Open up
index.js file, import Listusers component and add route to it in the following way :
...//other import statements import ListUser from "./components/Admin/ListUser/ListUser"; ...//class definition and other methods <Route path="/listUser" component={ListUser}/> …//other routes defined
Find a suitable image for “List Users” option and add the option for List Users in static appbar component along with the image. We have used Material UI’s List image in our project.
...// other imports import List from 'material-ui/svg-icons/action/list'; …Class and method definition <MenuItem primaryText="List Users" onTouchTap={this.handleClose} containerElement={<Link to="/listUser" />} rightIcon={<List/>} /> ...//other options in top right corner menu
Above code snippet will add an option to redirect admins to ‘/listUsers’ route. Let us now have a closer look at functionality of both client and server. By now you must have known what ComponentDidMount does. {If not, I’ll tell you. This is a method which is given first execution after the page is rendered. For more information, visit this link}. As mentioned earlier as well that this list will be available only for admins and may be even extended for privileged users but not for anonymous or any other user, an AJAX call is made to server in ComponentDidMount of ‘listuser’ route which returns the base user role of current user. If user is an Admin, another method, fetchUsers() is called.
let url; url = ""; $.ajax({ url: url, dataType: 'jsonp', jsonpCallback: 'py', jsonp: 'callback', crossDomain: true, success: function (response) { console.log(response.userRole) if (response.userRole !== "admin") { console.log("Not an admin") } else { this.fetchUsers(); console.log("Admin") } }.bind(this), });
In fetchUsers method, an AJAX call is made to server which returns username in JSONArray. The response looks something likes this :
{ "users" : { "email:""[email protected]", ... }, "Username":["[email protected]", "[email protected]"...] }
Now, only rendering this data in a systematic form is left. To give it a proper look, we have used material-ui’s table. Import Table, TableBody, TableHeader,
TableHeaderColumn, TableRow, TableRowColumn from material-ui/table.
In fetchUsers method, response is catched in data Oblect. Now the keys are extracted from the JSON response and mapped with an array. Iterating through array received as username array, we get list of all the registered users. Now, popuulate the data in the table you generated.
return ( <TableRow key={i}> <TableRowColumn>{++i}> <TableRowColumn>{name}</TableRowColumn> <TableRowColumn> </TableRowColumn> <TableRowColumn> </TableRowColumn> <TableRowColumn> </TableRowColumn> <TableRowColumn> </TableRowColumn> </TableRow> )
Above piece of code may help you while populating the table. These details are returned from susi server which gets a list of all the registered in the following manner. First, it checks if base url of this user is something apart from admin. If not, it returns error which may look like this :
Failed to load resource: the server responded with a status of 401 (Base user role not sufficient. Your base user role is 'ANONYMOUS', your user role is 'anonymous')
Otherwise, it will generate a client identity, use to to get an authorization object which will loop through authorization.json file and return all the users encoded as JSONArray.
Additional Resources
- Official Material UI Documentation on Tables from marterial-ui
- Answer by Marco Bonelli on Stackoverflow on How to map JSON response in JavaScript?
- Answer by janpieter_z on Stackoverflow – on Render JSON data in ReactJS table | https://blog.fossasia.org/list-all-the-users-registered-on-susi-ai/?utm_source=rss&utm_medium=rss&utm_campaign=list-all-the-users-registered-on-susi-ai | CC-MAIN-2020-16 | refinedweb | 744 | 57.27 |
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources.
Compared to some other core language topics we’ve met in this book, exceptions are a fairly lightweight tool in Python. Because they are so simple, let’s jump right into some code.
Suppose we write the following function:
>>>
def fetcher(obj, index):...
return obj[index]...
There’s not much to this function—it simply indexes an object on a passed-in index. In normal operation, it returns the result of a legal index:
>>>>>
fetcher(x, 3)# Like x[3] 'm'
However, if we ask this function to index off the end of the
string, an exception will be triggered when the function tries to
run
obj[index]. Python detects
out-of-bounds indexing for sequences and reports it by
raising (triggering) the built-in
IndexError exception: | http://my.safaribooksonline.com/book/programming/python/9780596805395/32dot-exception-basics/id3635720?bookview=contents | CC-MAIN-2014-15 | refinedweb | 144 | 52.6 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Fri, 12 Sep 2014, Jakub Jelinek wrote: > Because GCC supports even another flavor, which presumably the patches > implement. The two you are mentioning are for compatibility with existing > math libraries. The third one is used by #pragma omp declare simd > functions and Cilk+ elemental functions. So, to use those you don't > need any extra gcc support, glibc headers could just add > #if defined(__OPENMP) && __OPENMP >= 201307 > #pragma omp declare simd > #endif > on the prototypes (of course maybe with some clauses if needed). That's what this patch does - but we need a way for the headers to declare to GCC which vector ISAs have such versions of a given function available, in a way that works both for (new GCC, old glibc) (GCC knows about newer vector ISAs without function versions in that glibc, and mustn't try to generate calls to functions that glibc doesn't have) and (old GCC, new glibc) (glibc is declaring availability of vector versions the old GCC doesn't know how to use, so the references to those vector versions need to be quietly ignored or conditional on GCC version). -- Joseph S. Myers joseph@codesourcery.com | https://sourceware.org/legacy-ml/libc-alpha/2014-09/msg00282.html | CC-MAIN-2021-43 | refinedweb | 211 | 59.37 |
15 October 2012 21:48 [Source: ICIS news]
SAO PAULO (ICIS)--Brazilian imports of chemical products in September fell by 5.8% month on month, trade group Abiquim said on Monday.
Imports for September totalled $3.8bn (€2.9bn), down 14.7% compared with the previous month.
Fertilizers were the most imported item in September with sales of $764m, 22% lower compared with the same month of 2011, Abiquim added.
Exports reached $1.2bn in September, down by 18.8% year on year and 12.3% lower than the previous month, Abiquim added.
In the January-September period, imports reached $31.1bn, down 0.5% year on year, while exports totalled $11.2bn, down by 6.3% compared with the same period of 2011, Abiquim said.
?xml:namespace>
Abiquim foreign commerce director Denise Mazzaro Naranjo said the industry is concerned with the growing chemical trade deficit in
Such measures will not be widely felt until next year, Naranjo | http://www.icis.com/Articles/2012/10/15/9604160/brazil-september-chemical-imports-drop-5.8-month-on.html | CC-MAIN-2015-22 | refinedweb | 158 | 79.36 |
1. Does a static method gets inherited to it's sub classes ? Yes
Originally posted by rathi ji: No, static method doesn't get inherited to it's sub classes.
Originally posted by Arvind Sampath: 2. Does the private method gets inherited to the sub classes ? No. Private methods are private to the classes in which they are defined. They are not accessible anywhere outside. 3. If the private method is not overridden in the sub class, can we invoke the private method of the Base class thru sub class object ? ie sub.supA(); // is it allowed ? No. Private methods are not inherited by a sub class. Hence, they cannot be overridden neither can they can be accessed using an object of the subclass. Infact, they cannot be accessed outside the class in which they are defined even by using the object of the corresponding class Arvind[/QB]
If the private methods are not inherited to the subclasses, why does the concept of overriding rules comes into picture like "we can broaden the method scope in the sub classes" ie if "private" in super class, we can make it "public" in the sub classes ? In fact we are not overriding the method, right ?
static methods cannot be overridden to a lesser level of visibility
A.
Static method can be inherited example below public class Test1 { public static void testMethod() { System.out.println("Testing"); } } public class Test2 extends Test1 { public static void main(String[] args) { Test2 test2 = new Test2(); test2.testMethod(); } }
Originally posted by Sasikanth Malladi: is that static methods cannot participate in polymorphism. Am I right? Sashi
class Super { public static void testMe() { System.out.println("I am in Super"); } } class Sub extends Super { public static void testMe() { System.out.println("I am in Sub"); } } public class Test{ public static void main(String args[]) { Super sup = new Sub(); sup.testMe(); } }
Originally posted by Jeff Kumar: Kicky San: 1. static methods can be inherited >> Right
Inheritance means, getting super class's method in sub class, adding specific behaviour in sub class if we need, calling super class's method from that with super keyword...
Originally posted by Kicky San: From this thread and by the following example, I am jotting down my observations. Please correct me if i am wrong. 1. static methods can be inherited 2. static methods can be called both by the class name and the class instance 3. static methods cant be overridden. static means one in a class. 4. Even if there is a static method in the child class same as the one in the parent class, it is not the overriden method. It is a seperate method of the child class. Hope this example will clear all these points. I have modified the previous example. Here is the modified version class InheritStaticBase { public static void print() { System.out.println("I am in base class"); } } class InheritStaticChild extends InheritStaticBase { public static void print() { System.out.println("I am in child class"); } } class Main { public static void main(String args[]) { System.out.println("using parent class name"); InheritStaticBase.print(); System.out.println("using child class name"); InheritStaticChild.print(); System.out.println("befre instantiating child class"); InheritStaticChild a = new InheritStaticChild(); a.print(); System.out.println("after instantiating child class"); System.out.println("befre instantiating parent class"); InheritStaticBase b = new InheritStaticBase(); b.print(); System.out.println("befre instantiating parent class"); } }
Originally posted by Enge Chall: Hi Kicky, thank you for the explanation. But according to your theory : Does it mean that if I have a main() method( ie entry point for execution ) in the Base class, don't I need to write one more main() method in the sub classes (say I need same logic of main() method for sub classes also) ??? ie If I run the prog say: java Sub It must invoke the Superclass main() method if I don't have a main() in the sub class... am I right ?
you need to pass the scjp again or improve yourself to know that the static method can be called by both the class name and instance name.
Originally posted by Sasikanth Malladi: Tim, here's what you said:
public class Test{
static void print(){ System.out.println("Test"); }
static Test doNothing(){ return null;}
public static void main(String[] args)
{
doNothing().print();
}
Originally posted by Ravisekhar Kovuru: Does it mean that even if we call static methods on instance, the JVM calls them on Class name??? Can someone clarify this please.. [/QB] | http://www.coderanch.com/t/251985/java-programmer-SCJP/certification/static-methods-inherit-private | CC-MAIN-2015-18 | refinedweb | 745 | 57.27 |
���t
j
GDX-Proto is a lightweight 3d engine built with two main objectives:
While the current version is implemented as a First Person Shooter (FPS) demo, the code is highly adaptable for other uses, without too much work..
Programming
Java LibGDX
In this part of the LibGDX tutorial series we are going to take a look at using GLSL shaders. GLSL standards for OpenGL Shader Language and since the move from a fixed to programmable graphics pipeline, Shader programming has become incredibly important. In fact, every single thing rendered with OpenGL has at least a pair of shaders attached to it. It’s been pretty transparent to you till this point because LibGDX mostly takes care of everything for you. When you create a SpriteBatch object in LibGDX, it automatically creates a default vertex and fragment shader for you. If you want more information on working with GLSL I put together the OpenGL Shader Programming Resource Round-up back in May. It has all the information you should need to get up to speed with GLSL. For more information on OpenGL in general, I also created this guide.
To better understand the role of GL shaders, it’s good to have a basic understanding of how the modern graphics pipeline works. This is the high level description I gave in PlayStation Mobile book, it’s not plagiarism because I’m the author. :).
So basically shaders are little programs that run over and over again on the data in your scene. A vertex shader works on the vertices in your scene ( predictably enough… ) and are responsible for positioning each vertex in the world. Generally this is a matter of transforming them using some kind of Matrix passed in from your program. The output of the Vertex shader is ultimately passed to a Fragment shader. Fragment shaders are basically, as I said above, prospective pixels. These are the actual coloured dots that are going to be drawn on the users screen. In the fragment shader you determine how this pixel will appear. So basically a vertex shader is a little C-like program that is run for each vertex in your scene, while a fragment shader is run for each potential pixel.
There is one very important point to pause on here… Fragment and Vertex shaders aren’t the only shaders in the modern graphics pipeline. There are also Geometry shaders. While vertex shaders can modify geometry ( vertices ), Geometry shaders actually create new geometry. Geometry shaders were added in OpenGL 3.2 and D3D10. Then in OpenGL4/D3D11 Tessellation shaders were added. Tessellation is the process of sub-dividing a surface to add more detail, moving this process to silicon makes it viable to create much lower detailed meshes and tessellate them on the fly. So, why are we only talking about Fragment and Vertex shaders? Portability. Right now OpenGL ES and WebGL do not support any other shaders. So if you want to support mobile or WebGL, you can’t use these other shader types.
As I said earlier, when you use SpriteBatch, it provides a default Vertex and Fragment shader for you. Let’s take a look at each of them now. Let’s do it in the order they occur, so let’s take a look at the vertex shader first:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main()
{
v_color = a_color;
v_color.a = v_color.a * (256.0/255.0);
v_texCoords = a_texCoord + 0;
gl_Position = u_projTrans * a_position;
}
As I said, GLSL is a very C-like language, right down to including a main() function as the program entry point. There are a few things to be aware of here. First are attribute and uniform variables. These are variables that are passed in from your source code. LibGDX takes care of most of these for you, but if you are going to write your own default shader, LibGDX expects all of them to exist. So then, what is the difference between a uniform and attribute variable? A uniform stays the same for every single vertex. Attributes on the other hand can vary from vertex to vertex. Obviously this can have performance implications, so if it makes sense, prefer using a uniform. A varying value on the other hand can be thought of as the return value, these values will be passed on down the rendering pipeline ( meaning the fragment shader has access to them ). As you can see from the use of gl_Position, OpenGL also has some built in values. For vertex shaders there are gl_Position and gl_PointSize. Think of these as uniform variables provided by OpenGL itself. gl_Position is ultimately the position of your vertex in the world.
As to what this script does, it mostly just prepares a number of variables for the fragment shader, the color, the normalized ( 0 to 1 ) alpha value and the texture to bind to, in this case texture unit 0. This is set by calling Texture.Bind() in your code, or is called by LibGDX for you. Finally it positions the vertex in 3D space by multiplying the vertices position by the transformation you passed in as u_projTrans.
Now let’s take a quick look at the default fragment shader:
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
varying LOWP vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main()
{
gl_FragColor = v_color * texture2D(u_texture, v_texCoords);
}
As you can see, the format is very similar. The ugly #ifdef allows this code to work on both mobile and higher end desktop machines. Essentially if you are running OpenGL ES then the value of LOWP is defined as lowp, and precision is set to medium. In real world terms, this means that GL ES will run at a lower level of precision for internal calculations, both speeding things up and slightly degrading the result.
The values v_color and v_texCoords were provided by the vertex shader. A sampler2D on the other hand is a special glsl datatype for accessing the texture bound to the shader. gl_FragColor is another special built in variable ( like vertex shaders, fragment shaders have some GL provided variables, many more than Vertex shaders in fact ), this one represents the output color of the pixel the fragment shader is evaluating. texture2D essentially returns a vec4 value representing the pixel at UV coordinate v_texCoords in texture u_texture. The vec4 represents the RGBA values of the pixel, so for example (1.0,0.0,0.0,0.5) is a 50% transparent red pixel. The value assigned to gl_FragColor is ultimately the color value of the pixel displayed on your screen.
Of course a full discussion on GLSL shaders is wayyy beyond the scope of this document. Again if you need more information I suggest you start here. I am also no expert on GLSL, so you are much better off learning the details from someone else! :) This does however give you a peek behind the curtain at what LibGDX is doing each frame and is going to be important to us in just a moment.
There comes a time where you might want to alter the default shader and replace it with one of your own. This process is actually quite simple, let’s take a look. Let’s say for some reason you wanted to render your game entirely in black and white? Here are a simple vertex and fragment shader combo that will do exactly this:
Vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
v_texCoords = a_texCoord0;
gl_Position = u_projTrans * a_position;
}
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform mat4 u_projTrans;
void main() {
vec3 color = texture2D(u_texture, v_texCoords).rgb;
float gray = (color.r + color.g + color.b) / 3.0;
vec3 grayscale = vec3(gray);
gl_FragColor = vec4(grayscale, 1.0);
}
I saved each file as vertex.glsl and shader.glsl respectively, to the project assets directory. The shaders are extremely straight forward. The Vertex is in fact just the default vertex shader from LibGDX. Once again remember you need to provide certain values for SpriteBatch to work… don’t worry, things will blow up and tell you if they are missing from your shader! :) The fragment shader is simply sampling the RGB value of the current texture pixel, getting the “average” value of the RGB values and using that as the output value.
Enough with shader code, let’s take a look at the LibGDX code now:App extends ApplicationAdapter {
SpriteBatch batch;
Texture img;
Sprite sprite;
String vertexShader;
String fragmentShader;
ShaderProgram shaderProgram;
@Override
public void create () {
batch = new SpriteBatch();
img = new Texture("badlogic.jpg");
sprite = new Sprite(img);
sprite.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());.begin();
batch.setShader(shaderProgram);
batch.draw(sprite,sprite.getX(),sprite.getY(),sprite.getWidth(),sprite.getHeight());
batch.end();
}
}
And when you run it:
Tada, your output is grayscale!
As to what we are doing in that code, we load each shader file as a string. When then create a new ShaderProgram passing in a vertex and fragment shader. The ShaderProgram is the class the populates all the various variables that your shaders expect, bridging the divide between the Java world and the GLSL world. Then in render() we set our ShaderProgram as active by calling setShader(). Truth is, we could have done this just once in the create method instead of once per frame.
In the above example, when we set the shader program, it applied to all of the output. That’s nice if you want to render the entire world in black and white, but what if you just wanted to render a single sprite using your shader? Well fortunately that is pretty easy, you simply change the shader again. Consider:2 extends ApplicationAdapter {
SpriteBatch batch;
Texture img;
Sprite leftSprite;
Sprite rightSprite;
String vertexShader;
String fragmentShader;
ShaderProgram shaderProgram;
@Override
public void create () {
batch = new SpriteBatch();
img = new Texture("badlogic.jpg");
leftSprite = new Sprite(img);
rightSprite = new Sprite(img);
leftSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
leftSprite.setPosition(0,0);
rightSprite.setSize(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight());
rightSprite.setPosition(Gdx.graphics.getWidth()/2,0);.setShader(null);
batch.begin();
batch.draw(leftSprite, leftSprite.getX(), leftSprite.getY(), leftSprite.getWidth(), leftSprite.getHeight());
batch.end();
batch.setShader(shaderProgram);
batch.begin();
batch.draw(rightSprite, rightSprite.getX(), rightSprite.getY(), rightSprite.getWidth(), rightSprite.getHeight());
batch.end();
}
}
One using the default shader, one sprite rendered using the black and white shader. As you can see, it’s simply a matter of calling setShader() multiple times. Calling setShader() but passing in null restores the default built-in shader. However, each time you call setShader() there is a fair amount of setup done behind the scenes, so you want to minimize the number of times you want to call it. Or…
Each Mesh object in LibGDX has it’s own ShaderProgram. Behind the scenes SpriteBatch is actually creating a large single Mesh out of all the sprites in your screen, which are ultimately just textured quads. So if you have a game object that needs fine tune shader control, you may consider rolling your own Mesh object. Let’s take a look at such an example:
package com.gamefromscratch;
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;
public class MeshShaderApp extends ApplicationAdapter {
SpriteBatch batch;
Texture texture;
Sprite sprite;
Mesh mesh;
ShaderProgram shaderProgram;
@Override
public void create () {
batch = new SpriteBatch();
texture = new Texture("badlogic.jpg");
sprite = new Sprite(texture);
sprite.setSize(Gdx.graphics.getWidth(),Gdx.graphics.getHeight());
float[] verts = new float[30];
int i = 0;
float x,y; // Mesh location in the world
float width,height; // Mesh width and height
x = y = 50f;
width = height = 300f;
//Top Left Vertex Triangle 1
verts[i++] = x; //X
verts[i++] = y + height; //Y
verts[i++] = 0; //Z
verts[i++] = 0f; //U
verts[i++] = 0f; //V
//Top Right Vertex Triangle 1
verts[i++] = x + width;
verts[i++] = y + height;
verts[i++] = 0;
verts[i++] = 1f;
verts[i++] = 0f;
//Bottom Left Vertex Triangle 1
verts[i++] = x;
verts[i++] = y;
verts[i++] = 0;
verts[i++] = 0f;
verts[i++] = 1f;
//Top Right Vertex Triangle 2
verts[i++] = x + width;
verts[i++] = y + height;
verts[i++] = 0;
verts[i++] = 1f;
verts[i++] = 0f;
//Bottom Right Vertex Triangle 2
verts[i++] = x + width;
verts[i++] = y;
verts[i++] = 0;
verts[i++] = 1f;
verts[i++] = 1f;
//Bottom Left Vertex Triangle 2
verts[i++] = x;
verts[i++] = y;
verts[i++] = 0;
verts[i++] = 0f;
verts[i] = 1f;
// Create a mesh out of two triangles rendered clockwise without indices
mesh = new Mesh( true, 6, 0,
new VertexAttribute( VertexAttributes.Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE ),
new VertexAttribute( VertexAttributes.Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE+"0" ) );
mesh.setVertices(verts);
shaderProgram = new ShaderProgram(
Gdx.files.internal("vertex.glsl").readString(),
Gdx.files.internal("fragment.glsl").readString()
);
}
@Override
public void render () {
Gdx.gl20.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl20.glClearColor(0.2f, 0.2f, 0.2f, 1);
Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl20.glEnable(GL20.GL_TEXTURE_2D);
Gdx.gl20.glEnable(GL20.GL_BLEND);
Gdx.gl20.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
batch.begin();
sprite.draw(batch);
batch.end();
texture.bind();
shaderProgram.begin();
shaderProgram.setUniformMatrix("u_projTrans", batch.getProjectionMatrix());
shaderProgram.setUniformi("u_texture", 0);
mesh.render(shaderProgram, GL20.GL_TRIANGLES);
shaderProgram.end();
}
}
This sample is long but fairly simple. In create() we create the geometry for a quad by defining 2 triangles. We then load our ShaderProgram just like we did in the earlier example. You may notice in creating the Mesh we define two VertexAttribute values and bind them to values within our ShaderProgram. These are the input values into the shader. Unlike with SpriteBatch and the default shader, you need to do a bit more of the behind the scenes work when rolling your own Mesh.
Then in render() you see we work with the SpriteBatch normally but then draw our Mesh object using Mesh.render, passing in the ShaderProgram. Texture.bind() is what binds the texture from LibGDX to texture unit 0 in the GLSL shader. We then pass in our required uniform values using setUniformMatrix and setUniformi ( as in int ). This is how you set up uniform values from the Java side of the fence. u_texture is saying which texture unit to use, while u_projTrans is the transformation matrix for positioning items within our world. In this case we are simply using the projection matrix from the SpriteBatch.
Using a Mesh instead of a Sprite has some disadvantages however. When working with Sprites, all geometry is batched into a single object and this is good for performance. More importantly, with Mesh you need to roll all the functionality you need from Sprite as you need it. For example, if you want to support scaling or rotation, you need to provide that functionality.
Programming
LibGDX Java 2D.
Title kinda says it all, LibGDX 1.2 was released today. The update includes:
I don’t think the breaking changes will impact any of our tutorial series. Interesting to see the new AI extension, will be nice to see how that develops ( A* next? ). Improved IntelliJ build times are certainly welcome. You can get started here.
News
Java LibGDX | http://www.gamefromscratch.com/?tag=/Java&page=9 | CC-MAIN-2018-39 | refinedweb | 2,563 | 54.73 |
Java from JavaFX
The most powerful advantage of JavaFX is an easy use of Java classes. However, you can encounter an issue when calling some methods, for example, those that have the
insert and
delete names. The
<a href="">File</a> class contains the
delete method. How would you delete a file from JavaFX?
The issue here is that the
insert and
delete names are JavaFX keywords used for sequences. If you try to compile a code that uses the
delete method, you will get the following error message.
Sorry, I was trying to understand an expression
but I got confused when I saw 'delete' which is a keyword.
new java.io.File("temp").delete();
^
To solve this issue, use double angle brackets.
new java.io.File("temp").<<delete>>();
But that's the half of the story so far! This syntax allows any text line to be used as an identifier.
def <<english
variable>> = "any symbols can be used as identifier";
def <<русская переменная>> = <<english
variable>>;
println(<<русская переменная>>);
Note that CR/LF is a part of the identifier above. You can use a funny mug as a name for a debugging function:
function << o.O >> (error) {
println("ERROR: {error}")
}
<< o.O >> ("WTF!");
The code above prints ERROR: WTF!
- Login or register to post comments
- Printer-friendly version
- malenkov's blog
- 2065 reads
by malenkov - 2009-04-17 07:06I never tryed to access any databases. You can start from here: Seems it should work.
by ramnathganesan - 2009-04-17 06:54Hi Sergey, thanks for all the interesting topics you have discussed. I had a question "kinda" related to 'Java from JavaFX'. I am trying to connect a desktop JavaFX application to an Access database. I know that you can use Java to connect, however all the samples I have seen are for the preview SKD and do not work for the current release. Do you have any samples or insight regarding the best way to do this? Thanks! | http://weblogs.java.net/blog/malenkov/archive/2009/03/31/java-javafx | crawl-003 | refinedweb | 329 | 73.68 |
Flutter: Null Safety
As a developer, we always struggle to handle NullPointerException which causes a lot of time and money while developing software. It has been referred to as a billion-dollar mistake. (Wiki)
Flutter 2 supports null safety. You can now migrate your Flutter packages to use non-nullable types. Follow the below link to perform flutter migration for old projects.
Note: To use it it’s necessary to have your all libs updated with a null safety feature, otherwise you can’t migrate your project to Flutter 2.
Dart distinguishes between nullable and non-nullable references as a part of its type system which helps in eliminating the risk of NullPointerException.
In Dart, all variables are non-nullable by default. We cannot assign a null value to a variable because it’ll throw a compilation error:
final String country = null; // Cannot be null.
final String? country = null; // Can be null.
Null safety is a major new productivity feature that helps you avoid null exceptions, a class of bugs that are often hard to spot. As an added bonus, this feature also enables a range of performance improvements.
— — — — — — — — — — — — — — — — — — — — — — — — — —
Compile Time Error For Nullable Type.
Variables are non-nullable by default which means every variable should be assigned and cannot be null.
void main() {
String country;
print(country);
}
Error: The non-nullable local variable 'country' must be assigned before it can be used.
Values must be assigned at the time of declaration otherwise we have to mark it as nullable using ‘?’ Also, it is not possible to assign a value null to the variable, it will give an error “A country of type ‘Null’ can’t be assigned to a variable of type ‘String’.”
void main() {
String country = null;
}
The below code will work, as the variable country is not used before assigning a value.
void main() {
String country;
country = "Welcome To Flutter";
}
Below code gives us an error as we are using a variable before it got assigned value, and the error which thrown “The non-nullable local variable ‘value’ must be assigned before it can be used.”
void main() {
String country;
print(country);
country = "USA";
}
— — — — — — — — — — — — — — — — — — — — — — — — — —
Nullable types (Use Question Mark ‘?’)
To force variable nullable use ‘?’ like below, which tells the system that it can be null, and assigned letter. This will through null exception which results in a red screen on mobile.
void main() {
String? country;
print(country);
}
Output:
null
— — — — — — — — — — — — — — — — — — — — — — — — — —
Non Nullable Type — Exclamation mark (
!)
Appending
! to any variable tells the compiler that the variable is not null, and can be used safely.
void main() {
String? country = "USA";
String myCountry = country!; // myCountry is non-nullable; would throw an error if country is null
— — — — — — — — — — — — — — — — — — — — — — — — — —
late variables
The keyword
late can be used to mark variables that will be initialized later, i.e. not when they are declared but when they are accessed. This also means that we can have non-nullable instance fields that are initialized later: Accessing
value before it is initialized will throw a runtime error.
void main() {
late String country; // non-nullable
// print(value) here would throw a runtime error
country = "USA";
}
— — — — — — — — — — — — — — — — — — — — — — — — — —
late final
Wow!!, isn’t these amazing, Final variable can be assigned a letter.
You can declare a
late final without an initializer, which is the same as having just a
late variable, but it can only be assigned once.
late final String country;
country = "USA";
print(country); // Working
country = "India";//Error: The late final local variable is already assigned.
If the late final is declared as static, then it will be evaluated at runtime.
class MyClass {
static late final country;
}
void main() {
MyClass.country = "USA";
MyClass.country = "USA";
print(MyClass.values.toString());
}Output:
[VERBOSE-2:ui_dart_state.cc(186)] Unhandled Exception: LateInitializationError: Field 'values' has already been initialized.
This error will be thrown at runtime, not at the compile-time, so don't declare final late values again, this will cause a runtime error.
— — — — — — — — — — — — — — — — — — — — — — — — — —
Conditional operator(
?)
The cascade operator helps us to verify whether the given object/variable is null or not if it's null then we can use another value, it’s more like if, else
class MyClass {
String? country;
}
void main() {
var myClass = MyClass();
print(myClass.country?? "Japan");
/// myClass.value is returning null, to avoid null exception we can /// pass placeholder value like above("Japan"), this must be
/// utilized whenever possible inside the code base.
}
Output: Japan
I have tried to put everything below the class file, respectively wrote test cases for it.
Example:
import 'package:flutter/material.dart';
class NullSafety {
int? nullCheckForCountry() {
String country = "USA";
// country = null; //compilation error
return country != null ? country.length : null;
}
int? safeCallForCountry() {
String? country;
return country?.length;
}
int? nullCheckForCity() {
String? city = "Kolkata";
city = null;
return city != null ? city.length : null;
}
int? safeCallForCity() {
String? city;
return city?.length;
}
String? safeCallChainForValue() {
Country? country = Country(City("Kolkata", "003"));
return country.city?.code;
}
List<String?> safeCallUsingList() {
List<String?> cities = ["Kolkata", null, "Mumbai"];
List<String?> name = [];
for (int i = 0; i < cities.length; i++) {
String? city = cities[i];
name.add(city);
}
return name;
}
String getDefaultValueIfObjectNull() {
Country? country = Country(City("New Delhi", null));
return country.city?.code ?? "Not available";
}
int? notNullAssertionForException() {
String? country;
return country!.length;
}
}
class Country {
City? city;
Country(City? city) {
this.city = city;
}
}
class City {
String? name;
String? code;
City(String? name, String? code) {
this.name = name;
this.code = code;
}
}
Class File:
Test File: | https://medium.com/flutterworld/flutter-null-safety-5d20012c2441?source=user_profile---------3---------------------------- | CC-MAIN-2022-27 | refinedweb | 886 | 59.09 |
Reacting to Blob storage events.
See the Blob storage events schema article to view the full list of the events that Blob storage supports. blob storage events, see any of these quickstart articles:
To view in-depth examples of reacting to Blob storage events by using Azure functions, see these articles:
- Tutorial: Use Azure Data Lake Storage Gen2 events to update a Databricks Delta table.
- Tutorial: Automate resizing uploaded images using Event Grid
Note
Storage (general after some delay, use the etag fields to understand if your information about objects is still up-to-date. To learn how to use the etag field, see Managing concurrency in Blob storage.
- As messages can arrive out of order, use the sequencer fields to understand the order of events on any particular object. The sequencer field is a string value that represents the logical sequence of events for any particular blob name. You can use standard string comparison to understand the relative sequence of two events on the same blob name.
- Storage events guarantees at-least-once delivery to subscribers, which ensures that all messages are outputted. However due to retries between backend nodes and services or availability of subscriptions, duplicate messages may occur. To learn more about message delivery and retry, see Event Grid message delivery and retry.
-.
Feature support
This table shows how this feature is supported in your account and the impact on support when you enable certain capabilities.
1 Data Lake Storage Gen2 and the Network File System (NFS) 3.0 protocol both require a storage account with a hierarchical namespace enabled.
1 Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
Next steps
Learn more about Event Grid and give Blob storage events a try: | https://docs.microsoft.com/en-au/azure/storage/blobs/storage-blob-event-overview | CC-MAIN-2022-05 | refinedweb | 306 | 53.61 |
Hi all,
i am new to java and this forum. I am hoping i can get some help with the below program:
Write a program that reads integers, finds the smallest of them and counts its occurrences. Assume that the input ends with number 0. Suppose that you entered 6 2 5 2 2 3 0; the program finds that the smallest is 2 and the occurrence count for 2 is 3.
Hint: maintain two variables min and count. Min stores the current min number and count stores its occurrences. Initially assign the first number to min and 1 to count. Compare each subsequent number with min. If the number is less than min, assign it to min and reset count to 1. If the number is equal to min increment count to 1.
Below is what i have started:
import java.util.Scanner;
public class Integers {
/** Main method */
public static void main(String[] args) {
// Create a Scanner
Scanner input = new Scanner(System.in);
int times = 1;
int count = 1;
int min; // The smallest of the integers entered
int minNumber = 6;
// Read an initial data
System.out.print(
"Enter an int value (the program exits if the input is 0): ");
int data = input.nextInt();
int minimum = data;
// Keep reading data until the input is 0
int sum = 0;
while (data != 0) {
sum += data;
// Read the next data
System.out.print(
"Enter an int value (the program exits if the input is 0): ");
data = input.nextInt();
}
//i am not sure where in the program to put in the check for each number against the min number and reset the count as per question above??
if (data < minNumber)
{
minNumber = data;
}
//keep getting 0 for smallest number but do not want to include the 0 as this is just for exiting the input of integers??
System.out.printf( "Smallest Number is %d\n", minNumber );
//not sure if below correct either but i think i am close!!!
for (int i = 0; i < 100; i++)
count++;
count++;
if (minNumber > 1){times++;}
System.out.println("Number " + minNumber + " occurs " + times + " Times ");
}
} | http://www.javaprogrammingforums.com/%20member-introductions/14266-integer-program-help-printingthethread.html | CC-MAIN-2017-51 | refinedweb | 344 | 73.37 |
Crate rustc_resolvesource · [−]
Expand description
This crate is responsible for the part of name resolution that doesn’t require type checker.
Module structure of the crate is built here. Paths in macros, imports, expressions, types, patterns are resolved here. Label and lifetime names are resolved here as well.
Type-relative name resolution (methods, fields, associated items) happens in
rustc_typeck.
Modules
After we obtain a fresh AST fragment from a macro, code in this module helps to integrate that fragment into the module structures that are already partially built.
“Late resolution” is the pass that resolves most of names in a crate beside imports and macros. It runs when the crate is fully expanded and its module structure is fully built. So it just walks through the crate and resolves all the expressions, types, etc.
Structs
A key that identifies a binding in a given
Module.
One node in the tree of modules.
Records a possibly-private value, type, or module definition.
Everything you need to know about a name’s location to resolve it. Serves as a starting point for the scope visitor. This struct is currently used only for early resolution (imports and macros), but not for late resolution yet.
Nothing really interesting here; it just provides memory for the rest of the crate.
A minimal representation of a path segment. We use this in resolve because we synthesize ‘path
segments’ which don’t have the rest of an AST or HIR
PathSegment.
Enums
Miscellaneous bits of metadata for better ambiguity error reporting.
Used for better errors for E0773
An intermediate resolution result.
Different kinds of symbols can coexist even if they share the same textual name. Therefore, they each have a separate universe (known as a “namespace”).
A specific scope in which a name can be looked up. This enum is currently used only for early resolution (imports and macros), but not for late resolution yet.
Names from different contexts may want to visit different subsets of all specific scopes with different restrictions when looking up the resolution. This enum is currently used only for early resolution (imports and macros), but not for late resolution yet.
Traits
Functions
A somewhat inefficient routine to obtain the name of a module. | https://doc.rust-lang.org/nightly/nightly-rustc/rustc_resolve/index.html | CC-MAIN-2022-05 | refinedweb | 370 | 57.77 |
Note: This Windows desktop application needs a display with a width of at least 1200 pixels. More is better. Less will be frustrating.
There seems to be no real consensus where fractal articles, or Mandelbrot articles in particular, belong. I think General Programming >> Algorithms & Recipes is fair territory and there are some fine neighbors here. So being here, I think its time we consider the Bad Mandelbrot set.
Benoit Mandelbrot discovered the 'good' set now named after him in 1979. This set is approximated by an amazingly simple iterative process where, starting from Z = (0, 0i), for any given point C in the complex plane:
Z' = Z2 + C (or = C + Z2)
Once a new point Z' is found, you repeat the process by adding the original point C plus the square of the last Z found. It is an iterative process but at some point (less than infinity) you have to decide whether Z' is greater than 2 in magnitude or if you consider the point C to be in the set. Typically, if a point is in the set, you color it black.
In 1991, I had a PC with a real 80387 co-processor and of course, one of the things I looked at was the Mandelbrot set. You know, the interesting product in complex math is i * i = -1. And for any real numbers n and m, n * mi = nmi (magnitude nm times imaginary i).
Complex math is a little tricky and mistakes are sometimes made. What if you decided n * mi = -nmi. It's a mistake. It's bad. You would never do this, but if you did, you would see something like this:
Now I'm not saying this is the first light of day for the Bad Mandelbrot set, but stranger timelines have happened. Since 1991 I haven't seen or read anything depicting the "Bad" Mandelbrot set. If you have a reference, let us know. UPDATE: see Jeremy Thomson's comment below.
This badness is a little like quaternions which are super complex numbers where an imaginary part j multiplied by an imaginary part i has a product -k. i, j, and k are such that:
i2 = j2 = k2 = ijk = −1
But quarternion multiplication is not commutative. j * i = -k, but i * j = k. Bad Mandelbrot math is commutative so that:
1*i = i*1 = -i
Personally, I haven't found quaternions to be all that productive in creating 3D or 4D fractals. Maybe messing with some of the rules might show something interesting, but I haven't had that much time.
The downloadable solution can be built with Visual Studio Community 2015. Other versions are usable with maybe a little futzing. I should have placed the following in a help screen, but you are reading it here:
The title bar shows the current magnification scale. The application starts out at Scale = 1.0.
The coordinates of the main left tile's center are displayed under it. The application starts with center = (-0.6, 0.0i). You will also see the complex coordinates of the cursor when it moves over the main left tile.
Left clicking on a point in the main left tile centers that point. This is how you move around.
'z' and Shift-'Z' zoom out and in on the set. You can also click the 'z'/Shift-'Z' button.
z
Z
Turning the Map on suspends displaying the zoomed area in the main left tile to show the set at Scale = 1.0 and the location of the Mtiles (described next). Turnning Map off reloads any zoomed display.
Map
Mtile
One of the amazing things (to me) about the Bad Mandelbrot set is its perfect tri-axis symmetry. The Good Mandelbrot set is only symmetric about the (real) X-axis. The Bad Mandelbrot set is symmetric about three rays: the ray from the origin along the negative X axis, the ray from the origin inclined from the positive X axis by 60 degrees, and the ray from the origin declined by 60 degrees.
How did the Mandelbrot gods do this? There is nothing obvious in Bad Mandelbrot math that says, "find and use the angles +/- 60 degrees."
For any center chosen that is close to, but above the negative X axis, there is a reflected point with reflected topography on the other side of the negative X axis. AND there are two points along the 60 degree inclined ray with the same and reflected topography, and the same along the declined ray.
The button labeled 'Symm. Rotate' shows this in the six Mtiles (or 'sub tiles') displayed to the right of the main left tile. These tiles map as indicated in the following diagram:
Symm. Rotate
When you click a center in the main left tile, you are also setting the same center for tUL. All the other sub tiles are set relative to that. If you click a center in a different quadrant, all the other sub tile centers are set relative to that point.
tUL
Clicking Symm. Rotate to On does the following;
On
In any Mandelbrot set viewer, good or bad, you will eventually get to a zoom factor where you aren't sure if what you are looking at is a miniature of the full-scale approximation (the sine qua non of fractals is self-similarity at finer detail) or a region with something more complicated. To resolve this you have to repeat the iterative process more times than currently chosen. '-' or Shift-'+' decrements or increments this limit. You can also click the 'dec iter limit' / Shift-'INC ITER LIMIT' button.
-
+
dec iter limit
INC ITER LIMIT
Of course the problem with increasing the limit is that it takes longer to determine black pixels from colored pixels.
As an aid, this application colors the background of the iteration limit TextBox first GoldenRod while updating the main left tile, then Khaki while the six Mtiles are updated, returning to near-white when the display is completely updated.
GoldenRod
Khaki
You can always hit z or Z or - or Shift-+ while updating is in progress. This application will always try to complete the display for the last action taken.
If you just want to get back to a starting position, click Restart. Everything but the iteration limit will reset to initial conditions.
Restart
If you right-click a point in the main left tile you can watch how that point jumps through iterations to its way out of the magnitude 2 circle or dances around within the circle for all iterations.
Click the Stop button to cancel an active dance.
Stop
Finally, jumping and dancing leaves tracks. You can click the Restore button to regenerate the main left tile's display.
Restore
In 1991, double precision was really something. Boy, you could zoom forever (with an incredible 20MHz clock speed - oooh) and never see the last detail (can't anyway but that's not the point). Now, using C# doubles with a super-duper i7 quad core processor running at 3GHz, you really can't zoom forever without hitting a wall. I found that I ran out of mantissa room rather quickly with this application. What to do? There are free math libraries that buy you more than double's 52-bit mantissas but I took the easy way out and chose to retrofit Microsoft's decimal type (96-bit mantissa). It's in the language, it has way more precision than 8087, and it's terrifically ... slow (can't have everything).
double
decimal
As an aid, this application emboldens and colors the iteration limit Maroon whenever decimal type is chosen by the application for accuracy in displaying the main left tile. Zooming out (-) will revert back to double (Normal, Black iteration limit).
What I didn't expect was how pervasive different processing for decimal carried. I had to find a square root routine for decimal (thank you Google and stackoverflow):
// From stackoverflow with one change
public static decimal Sqrt(decimal x, decimal epsilon = 0.0M)
{
if (x < 0) throw new OverflowException("Cannot calculate square root from a negative number");
decimal current = (decimal)Math.Sqrt((double)x), previous;
do {
if (current == 0.0M)
return current; // could just break
previous = current;
current = (previous + x / previous) / 2;
} while (Math.Abs(previous - current) > epsilon);
return current;
}
I had to duplicate tile center Point's with a new class, DecimalPoint (not to mention DecimalVector):
Point
DecimalPoint
DecimalVector
public class DecimalPoint // bit of a retrofit here. 96 vs. 52 mantissa bits. Use faster double unless decimal necessary
{
public decimal X;
public decimal Y;
public DecimalPoint(decimal x, decimal y)
{ X = x; Y = y; }
public DecimalPoint(Point p)
{ X = (decimal)p.X; Y = (decimal)p.Y; }
public static DecimalPoint Add(DecimalPoint dp, DecimalVector dv)
{ return new DecimalPoint(dp.X + dv.X, dp.Y + dv.Y); }
public static DecimalPoint Subtract(DecimalPoint dp, DecimalVector dv)
{ return new DecimalPoint(dp.X - dv.X, dp.Y - dv.Y); }
public static DecimalVector operator -(DecimalPoint m, DecimalPoint s) // p-p returns v
{ return new DecimalVector(m.X - s.X, m.Y = s.Y); }
public Point ToPoint()
{ return new Point((double)X, (double)Y); }
}
Later I abandoned dual precision centers for decimal only centers. It took me a while to understand that any user interactions can be performed slowly and precisely while only the iterative process needs as-fast-as-you-can math.
Even with a fair amount of refactoring, centering the sub tiles gets a little wonky near the limits of double precision. The problem is most apparent when the main left tile has switched to decimal while the sub tiles are still at double. Switching on Symm. Rotate also shows some unwanted displacements.
When the Scale factor gets a bit above 1014, you know further zooming will take the main left tile over to decimal. The sub tiles, covering the same extents in the complex plane, but with fewer pixels, take longer to go into decimal territory.
If the main left tile is using decimal but a sub tile is not, the sub tile will be outlined with White/Black while it is being updated. If both main left tile and sub tile are using decimal, the sub tile is outlined in White/Maroon during update. Not all sub tiles go over to decimal at the same increased Scale, even if they are not 'angled' by Symm. Rotate On. If you look at the methods SameCoord and CheckZoom in the Mtile class you would think all not-angled sub tiles should switch at the same increased Scale.
Symm. Rotate On
SameCoord
CheckZoom
I would also like to mention creating the Clipper project (part of the downloadable solution, not set as the Startup project). There were two reasons for Clipper. One was to see if the DrawLineAa method in the WriteableBitmapEx package handled clipping. This is needed to draw sub tile boundaries on the Map where sub tile corners can be off the main left tile's display. Clipping is also needed when dancing a right-clicked point. I am happy to report, the package handles clipping, mostly.
Clipper
DrawLineAa
WriteableBitmapEx
The other reason for inventing Clipper was to prove who was making AccessViolationExceptions when I turned Map on right after the initial starting display. I am sad to report the bug is in version 1.50 of the WriteableBitmapEx package. This bug can be avoided by drawing anti-aliased lines with a strokeThickness of 1 (vs. > 1). This bug is avoided in the Startup project by calling WriteableBitmapEx1_50Bug. I have submitted an issue for this. You can cause this condition in the Clipper program by setting it as the Startup project, starting it and clicking 'Next' until a line is displayed above, beneath or inside (just not to the sides of) the inner box, and entering 384 into the 2nd and 4th TextBoxes from the left at the bottom of the window.
AccessViolationExceptions
strokeThickness
WriteableBitmapEx1_50Bug
384
One problem I haven't solved is generating OverflowExceptions in the method itemizeDecimalPoints. This is used to find DrawLineAa coordinates of two Z' points in the iteration process. The math works fine in the double-based itemizePoints. I know the problem is not the use of the + .5M round-to-nearest adjustments. In decimal space I just return the two Z' points if OverflowException is thrown. Little blips are drawn on the main left tile, but I'm not sure their locations are valid. I hope to solve this in an update. But for now, I didn't want to hold up article submission. Maybe I can use the 'left as an exercise' ploy.
OverflowExceptions
itemizeDecimalPoints
itemizePoints
+ .5M
OverflowException
I hate it when an article says 'left as an exercise' so here goes: Is a single sign mistake the whole difference between Bad and Good sets?
I'll say mostly. Try changing methods DecimalTravel and Travel to generate goodness. Do we see the familiar Good Mandelbrot set?
DecimalTravel
Travel
Another exercise. Good Julia sets based on the Good Mandelbrot set are pretty. They are point symmetric unless C is some value r + 0i. Then they are symmetric at the origin and across both the real and imaginary axes. Do Bad Julia sets exist? What do Bad Julia sets look like? Is there anything different about their symmetry?
Also, have we noticed something strange about the Bad Mandelbrot set? When you zoom in a bit you see something like:
Perfectly expected. Self-similarity shows the Badness continues down the rabbit hole. But what of this region:
Zoom in a bit and you see:
Basically the Mandelbrot gods are so good, they turn areas of the Bad set into the Good set. No? Come on. That's a bit surprising. Now for the next exercise. Try doing this by thinking it through without exploring deeper. In an area of the Bad Mandelbrot set that has turned good, should you see any miniature Bad knots when you zoom in?
BONUS exercise. Imagine there is a line of demarcation, a curve, between the Bad areas and the Good areas. We know the Good Mandelbrot set is connected.
I suspect the Bad Mandelbrot set is also connected. Prove which of the following is true about the area outside the Bad Mandelbrot set:
Bad regions and Good regions are connected. There is only one Bad region and one Good region.
There is more than one Bad region or Good region.
Submitted December, 2015
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
case FT_MbarTricorn:
for (/**/; count < MaxIters && zrsqr + zisqr < OverFlowPoint; count++)
{
zi = zr * zi * -2.0 + JuliaI;
zr = zrsqr - zisqr + JuliaR;
zisqr = zi * zi;
zrsqr = zr * zr;
}
you have to decide whether Z' is greater than 2 in magnitude or if you consider the point C to be in the set.
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/1063025/The-Bad-Mandelbrot-Set | CC-MAIN-2017-47 | refinedweb | 2,503 | 64 |
Hide Forgot
+++ This bug was initially created as a clone of Bug #586544 +++
Description of problem:
The init() call variant that uses ConnectionSettings was deprecated from the C++ QMF Agent API. Because of this, the application no longer has access to certain connection parameters (i.e. security-strength-factors, heartbeat intervals, tcp-nodelay, etc.).
A suitable replacement is needed to re-provide access to these features.
The replacement needs to be API-extensible and boost-free.
The fix for this bug will unblock the building of libvirt.
Note that one change is needed: The namespace changed for qpid::client::ConnectionSettings. It is now qpid::management::ConnectionSettings..
Setting devel_ack... This bug fixes a regression where access to functionality was lost in the API.
The restored settings for connection setup are:
- heartbeats
- maxChannels
- maxFrameSize
- tcpNoDelay
- SASL service name
- Min and Max Security Strength Factor
Fixed in build qpid-cpp-0_7_946106-2.
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you. | https://bugzilla.redhat.com/show_bug.cgi?id=595710 | CC-MAIN-2021-31 | refinedweb | 193 | 50.12 |
Using CodeMirror to add C# syntax highlighting to an editable HTML Textarea
I wanted to display some C# code in an html <textarea> control that was displayed in an ASP MVC 3 application using @Html.TextArea():
@using (Html.BeginForm()) { @Html.TextArea("sampleCode", (string)ViewBag.sampleCode, new { }.
Demo
Here's an idea of how it works. This is an editable textbox with C# code syntax highlighting, which dynamically updates as you type.
Adding CodeMirror to a textbox
Adding CodeMirror is really easy:
- Grab the CodeMirror package
- Add the necessary CSS and JavaScript references in your page
- Call CodeMirror.fromTextArea() on your textarea element
Getting the CodeMirror package
Normally you'd grab that from, but my c# syntax changes were just merged in and aren't in the download package yet, so you can just grab the latest zip from github:.
Add the necessary CSS and JavaScript references in your page first script reference brings in the main CodeMirror library
- The second script refernce uses one of the common supported modes - in this case I'm using the c-like syntax mode
- The next is the main codemirror CSS reference
- The last reference is for the default CodeMirror theme - there are 4 included color themes, and you can add your own with your own custom CSS file
Call CodeMirror.fromTextArea() on your textarea element>
Styling the CodeMirror box
CodeMirror creates a new element with the class "CodeMirror" so you need to apply your CSS styles to the CodeMirror class, not the ID or class of your original element.
Putting it all together>
Adding new syntax
I mentioned earlier that when I first found CodeMirror, I noticed that it didn't have C# support. Fortunately, I noticed that it did have a C-Like syntax mode, and adding C# syntax support was really easy.
- I looked up the C# 4 keywords on MSDN here.
- I did some find-replace-fu to turn it into a space delimited list.
- I added a few lines to the end of clike.js, following the format of the previous C++ and Java samples.
CodeMirror.defineMIME("text/x-java", { name: "clike", atAnnotations: true, keywords: keywords(") }); CodeMirror.defineMIME("text/x-csharp", { name: "clike", atAnnotations: true, atStrings: true, keywords: keywords("abstract as base bool break byte case catch char checked class const continue decimal" + " default delegate do double else enum event explicit extern false finally fixed float for" + " foreach goto if implicit in int interface internal is lock long namespace new null object" + " operator out override params private protected public readonly ref return sbyte sealed short" + " sizeof stackalloc static string struct switch this throw true try typeof uint ulong unchecked" + " unsafe ushort using virtual void volatile while add alias ascending descending dynamic from get" + " global group into join let orderby partial remove select set value var yield") });
The key is to find a similar language which is already supported and modify it. | http://weblogs.asp.net/jongalloway/using-codemirror-to-add-c-syntax-highlighting-to-an-editable-html-textarea | CC-MAIN-2016-07 | refinedweb | 482 | 51.11 |
Introduction
Prerequisites
Requirements
Components Used
Conventions
Problem
Solution
NetPro Discussion Forums - Featured Conversations
Related Information
When new subscribers are added in Cisco Unity, the option to add a new
Microsoft Exchange user is not available. This document explains an attempt is made to add new subscribers on a Cisco Unity server,
the Exchange option does not appear in the Add
Subscriber:Type of Subscriber:New Subscriber drop-down menu. The
drop-down menu for New Subscriber is grayed out, and the only options are to
import an existing user or create an internet user, as shown in this
diagram:
This issue can occur if you select Import Existing subscribers
Only in the Message Store Configuration Wizard (MSCW). In order to
verify this, complete these steps:
Log into Cisco Unity with the same account you used to run the MSCW
procedure.
Open Windows Explorer, and navigate to C:\Documents and
Settings\User from item1\Local settings\Temp.
Open the Tempu.log file.
Scroll down to the date that you ran the MSCW procedure. Search for
Set SystemParameters\1.0\DisableNewExchSub. If the value is 1,
this means that when you ran the MSCW procedure, you chose to only import users
from Exchange.
When the Exchange option does not appear when a new Cisco Unity
subscriber is added, re-run the Message Store Configuration Wizard in order to
resolve this issue.
Note: It is recommended to run the MSCW during off-peak hours, since the
wizard requires a restart of the Cisco Unity services.
MSCW can be run from the Control Panel on the Cisco Unity server.
Choose the Add/Remove Programs option, and then choose the
Message Store Configuration Wizard.
Note: Be sure to use an account that has Domain Admin
privileges.
When the Select how subscribers will be created
screen appears, make sure to click the Create new accounts and import
existing accounts radio button. | http://www.cisco.com/en/US/products/sw/voicesw/ps2237/products_tech_note09186a0080904f31.shtml | crawl-002 | refinedweb | 315 | 51.18 |
In our earlier post we have learned about creating and writing data to a .txt file in Java.In this tutorial we shall learn about creating and writing data to a excel file.There are two ways in creating an Excel file in Java.We can create it using File class or by using POI library methods.Today, we shall learn about creating the Excel file using File classes.
While creating a .txt file, we have first created an object to the FileOutputStream class and we have used f1.txt as the file path.To create an Excel we need to change the extension to .xls.
FileOutputStream fos = new FileOutputStream("store.xls");
In the above described example we are creating an Excel file as store which stores the data of a super market.To save the data in the file we need to use the sequence character \t.we use \t because the width between two columns in Excel sheet is 5 spaces or a single tab.To write the data into the next row of the spread sheet we use the sequence character \n.Now we shall write the program with above information.
import java.io.*; class fileswritedemoxls { public static void main(String arg[]) { try { String header="Item\tQuantity(kg)\tPrice"; byte b[] = header.getBytes(); FileOutputStream fos = new FileOutputStream("store.xls"); for(int i=0;i<header.length;i++) { fos.write(b[i]); } String row1="\nSugar\t2.5\t60"; byte b1[] = row1.getBytes(); for(int i=0;i<b1.length;i++) { fos.write(b1[i]); } fos.close(); } catch(Exception e) { System.out.print(e); } } }
Description:In the above program we are inserting two rows…..one as heading for the column and other row is the items present in the store.
Output:
| https://letusprogram.com/2013/12/24/creating-and-writing-excel-file-in-java/ | CC-MAIN-2021-17 | refinedweb | 292 | 70.19 |
I have found a strange bug that only occurs in IE7.
First and foremost, my version of IE7 is the one I downloaded from Tredsoft. But I've also check this issue over at ipinfo.info/netrenderer/index.php.
This is the site I'm making,
In IE6, IE8 and all other normal "normal" browsers, everything is fine. But in IE7 there is a problem with the way those DVD images display in the right column.
Basically, the first DVD image is chopped in half (ie, only the upper part is shown); the image below that is rendered correctly; the one below that is again chopped in half; and the one below that is rendered correctly.
Does anyone know why this happens?
Thanks in advance for any help.
... and here's what I came up with
As with all my examples, the directory is unlocked for easy access to all the bits and pieces:
It took me longer than planned because I ended up writing a new fader script for it. The one you were using was rebuilding the DOM so much using innerHTML and making the browser check the cache so much it was driving lesser computers insane with the CPU use. I'm not as rabidly anti innerHTML as some other developers out there, but really changing the markup for every fade was pushing it. The script ended up growing to 1.6k from the original 1k, but that extra 600 bytes include 'onload' collision prevention, a smoother onload check and timer system. (yours was starting and aborting the timer 20 times per 'is it loaded' check. It also directly manipulates the IE FILTER instead of using their pre-built animations so the timing could be adjusted identical to the non-IE version. It's also all self-contained in a object so the only addition to the top level namespace is the 'slides' variable.
Said variable has to be built as a self-extending object so that our timeouts can call it. This removes the ability to run more than one instance at once, but at least we get the namespace reduction.
HTML/CSS itself tested working 100% in IE 5.5, 6, 7, 8 & 9 beta, Opera 10.62, FF 2 and 3.53, and the latest flavors each of Safari and Chrome. Valid XHTML 1.0 Strict, would be valid CSS 2.1 if not for a couple -moz properties and of course the IE filter.
I documented all the places that needed haslayout triggers too.
The only 'bug' is that the script doesn't work in IE5.5... which is ok, your original doesn't either both throwing that wonderful "library not registered" bug. Big deal, it's IE 5.5 - like we give a ****! (it's just fun to have as much working there as possible for so little effort)
Luckily I already had a yourCurrencyConverter account pointing at that domain from another page I rewrote for someone -- be sure if you copypasta my markup to swap out the ID code in the SCRIPT for that.
I did add a few .first classes, one extra div, and some other minor tweaks from my previously posted markup -- it's really nothing major just the normal stuff you often add once doing the layout.
The biggest changes are the throwing away of the heading sprite image as pointless bloat, some tweaking of the margins so the ones between the columns are even, and biggest of all changing as much of the content fonts to 80% instead of 12px as possible. Anything less than 14px is a miserable accessibility /FAIL/ for Large font/120dpi users, so that's the smallest fixed size I used in the cases where I was forced to make px fonts -- but wherever possible I made them dynamic (%/em) instead.
Hope this helps -- Hope it doesn't annoy you since you asked a small question and I ripped it to shreds... Just trying to be helpful.
IE8 doesn't really have the same "engine" has IE6/7 and although "haslayout" is buried somewhere in the code it is not required for elements to take care of themselves properly (99.9% of the time).
IE6 does need "haslayout" almost all day long and most complicated components of as design will break if they are not in haslayout mode.
The trigger and circumstances are many and varied and while IE7 is much much better than IE6 it does occasionally get caught out. The reason it is more stable is possibly due to differences in its coding algorithms but this also means that it may exhibit problems at different times compared to IE6.
There is no ryhme or reason to it except that you just have to remember that any container that holds more than simple text content will need to be in haslayout mode to be secure.
I'm actually not a CSS dunce. A long time ago someone was talking about my Zen Garden design on this forum. And the guy was giving out my URL incorrectly. So I corrected him. And then you wrote to me to talk about self promotion. If I remember correctly, you gave me a warning!
Sorry, about that
That's a great link, thanks.
I still don't fully understand why "haslayout" is needed for IE7 but not IE6 or IE8.
Never mind!
So -- you see -- I do know a bit about CSS. But I tend to view CSS as a kind of glue to stick all the various bits and pieces together. You are clearly much more knowledgeable about this glue than I am. Obviously I need to read more and learn, learn, learn!
Hi,
The min-height:0 works because min-height is a "haslayout" trigger in IE7 and once in haslayout mode an element will take more care of itself and its boundaries. You can read a full description in the link above.
We use min-height:0 because it has no effect to the layout unlike a dimension and leaves the elements presentation untouched.
Haslayout problems can turn up anywhere but are often caused when an element holds more than simple content. If you have an image inside a div that is maybe floated or have positioned elements inside then the parent must have a layout (haslayout).
IE6 has similar (and usually much worse) issues with haslayout but they don't necessarily correspond with IE7 although at times both will display problems for the same element.
Note that min-height is not a haslayout trigger in IE6 as it doesn't understand min-height anyway and for IE6 you would need a dimension or perhaps the proprietary zoom property.
My editor is Scintilla. And the only BOMS I'm aware of are those that get dumped around Bangkok shopping malls from time to time. Why would a BOM in my code affect things? I don't even see this BOM in my code. And I'm not aware of any browsers being affected by this.
Anyway -- thanks for your help. Really great!
The bom (byte order mark) should nt be save din your css file because it can upset some browers and indeed breaks the files when viewed locally. The browser sees it as a selector () and then looks for the opening bracket and gets confused when it doesn't find one.
Just use Firefox and click the edit css tab from the web developer toolbar and you will see the BOM displayed as the first character in the file.
Just set your editor not to save the bom for css files (assuming its capable of doing that - If not use another editor such as notepad ++) .
Uhm, your layout seems to still have issues here in IE8 and Opera. The right column is float dropping, the footer is not clearing properly, and you seem to have padding issues around the menus.
Taking a peek under the hood, you've got some markup issues and as I always say: "CSS is only as good as the markup it's applied to"
Biggest thing I'm noticing is the non-semantic markup with endless unnecessary classes. Just because something is text doesn't mean it should have a P around it... and if every tag inside a block level container is getting the exact same class, there is NO reason to be putting a class on it!
For example:
<div id="nav-1">
<p class="top-nav"><a href="courses.php" class="nav courses "><span class="link-text">Courses</span></a></p>
<p class="top-nav"><a href="promotions.php" class="nav promotions "><span class="link-text">Promotions</span></a></p>
<p class="top-nav top-nav-right"><a href="compare-courses.php" class="nav editors-tips "><span class="link-text">Editors Tips</span></a></p>
</div>
<div id="nav-2">
<p class="top-nav"><a href="about.php" class="nav about "><span class="link-text">About</span></a></p>
<p class="top-nav"><a href="information.php" class="nav information "><span class="link-text">Information</span></a></p>
<p class="top-nav"><a href="faq.php" class="nav faq "><span class="link-text">FAQ</span></a></p>
<p class="top-nav top-nav-right"><a href="contact.php" class="nav contact "><span class="link-text">contact</span></a></p>
</div>
Those are not paragraphs of text, NONE of the classes you have there are neccessary, and that should be a LIST. As in ONE list. Given your styling there is NO reason for the code there to be more than:
>
It's a list of options, treat it as a list. You aren't doing anything special with them, so they don't need the unique class, they're all getting that same 'nav' class which means they don't need that class as they can inherit off the parent.
You've got an ID on body for no good reason (there's NO reason to EVER put an ID on body!), you have redundant/pointless meta's basically saying the DEFAULT values on things like robots and re-visit, and to top it off you've got presentational classes like "delineation-2 margin-25" kinda defeating one of the reasons to use CSS in the first place; separation of presentation from content.
Much less the image replacement method you are using that doesn't work as a image replacement method with CSS on images off... defeating the point of wasting your time on them -- and for what, just to get some pointless fancy font on things? (that I can't tell apart from Arial bold?)
Of course, it's also got a HTML5 doctype meaning you can't even check that your completely following the STRICT rules, and are deploying something not even out of DRAFT yet.
Which is why if I wrote that same page, the markup
<meta
name="keywords"
content="ISO 9001, Course, Training, auditor, instructors, video, dvd, online, learning"
/>
<link
type="text/css"
rel="stylesheet"
href="screen.css"
media="screen,projection,tv"
/>
<link
type="image/x-icon"
rel="shortcut icon"
href=""
/>
<title>
ISO9001 Courses - Effective ISO 9001 Training
</title>
<script type="text/javascript" src="javascript/core.js"></script>
<script type="text/javascript">
var slide_count=4
</script>
</head><body>
<div class="topBar"></div>
<div id="pageWrapper">
<h1>
<a href="">
ISO 9001 Courses
<span></span>
</a>
</h1>
>
<a id="viewCart" href="">
<img src="graphics/cart.png" alt="View Cart"/>
</a>
<p>
ISO9001 Courses provides a complete range of bestselling ISO 9001 training courses that address your different training needs
</p>
<div id="slideshow">
<img id="slide" width="920" height="235" alt="" src="graphics/slides/slide-1.jpg" />
<div id="next"></div>
</div>
<div class="column1of3">
<div class="section">
<h2>Course Recommendations</h2>
<p>
ISO 9001:2008 requires training, and most auditors will check if proper ISO 9001 training was provided. Basically, all management and employees need to receive some form of training in ISO 9001.
</p>
<a class="more" href="recommendations.php">view our editor's recommendations</a> »
</div>
<h2>Promotions</h2>
<div class="subSection">
<h3>e-Learning Online Promotion</h3>
<p>
Save 25% with our 2-course package!
</p>
<ul>
<li>ISO 9001:2008 Management Overview</li>
<li>ISO 9001:2008 Internal Auditor</li>
</ul>
<a class="more" href="promotions.php">read more</a> »
<!-- .subSection --></div>
<div class="subSection">
<h3>DVD Courses Promotion</h3>
<p>Save 24% with our 3-course package!</p>
<ul>
<li>Quality Basics</li>
<li>How to Deal with External Auditors</li>
</ul>
<a class="more" href="promotions.php">read more</a> »
<!-- .subSection --></div>
<div class="section">
<h2>About Us</h2>
<p>
ISO9001Courses.com specializes in providing a complete range of ISO 9001 training in the most efficient and most cost-effective way.
</p>
<ul>
<li>Foremost Experts in ISO 9001</li>
<li>IACET Approved Courses</li>
<li>Highly Qualified Instructors</li>
<li>Effective and Efficient Training</li>
<li>Tests Included</li>
<li>Convenient – Learn in your Office or Home</li>
<li>Compliant with ISO 9001:2008</li>
<li>Applicable to all Organizations Worldwide</li>
<li>Excellent Customer Service</li>
<li>Best Price Guarantee</li>
<li>Money Back Guarantee</li>
</ul>
<a class="more" href="about.php">learn more about the training we offer</a> »
<!-- .section --></div>
<!-- .column1of3 --></div>
<div class="column2of3">
<h2>e-Learning Online</h2>
<p>
With our e-Learning Online courses you'll get interactive ISO 9001:2008 training that you can take at your own pace and which is available instantly.
</p>
<div class="subSection">
<img src="graphics/monitor.png" alt="A Monitor"/>
<h3>ISO 9001:2008 Management Overview</h3>
<p>
Concise overview of the ISO 9001:2008 standard and its implementation – specifically designed for the needs of executive management.
</p>
<div class="currency">
<span>2 hours</span> $89
</div>
<a class="more" href="e-learning-management-overview.php">More info</a> »
<!-- .subSection --></div>
<div class="subSection">
<img src="graphics/monitor.png" alt="A Monitor"/>
<h3>ISO 9001:2008 – Benefits and QMS Requirements</h3>
<p>
Interactive course that provides in-depth understanding of ISO 9001:2008, including its benefits to the company, the principles behind it, and all the various ISO 9001:2008 requirements.
</p>
<div class="currency">
<span>6 hours</span> $159
</div>
<a class="more" href="e-learning-benefits-qms.php">More info</a> »
<!-- .subSection --></div>
<div class="delineation-2 margin-25">
<img src="graphics/monitor.png" alt="A Monitor"/>
<h3>
ISO 9001:2008 Internal Auditor
</h3>
<p>
Interactive course that provides comprehensive training in the ISO 9001:2008 standard and its requirements, as well as basic skills necessary to complete an internal audit.
</p>
<div class="currency">
<span>6 hours</span> $159
</div>
<a class="more" href="e-learning-internal-auditor.php">More info</a> »
<!-- .subSection --></div>
<!-- column2of3 --></div>
<div class="column3of3">
<h2>DVD Courses</h2>
<p>
Our DVD Courses are cost-effective ISO 9001:2008 training in the form of video presentations (with workbooks) for individuals and groups.
</p>
<div class="subSection">
<img src="graphics/dvd.png" alt="DVD" />
<h3>
ISO 9001:2008 Basics
<em>What Employees Need to Know</em>
</h3>
<p>
Introduction of the ISO 9001 standard for a wide range of employees to create employee "buy-in".
</p>
<div class="currency">
<span>35 minutes</span> $245
</div>
<a class="more" href="dvd-employees.php">More info</a> »
<p class="languages">
Also available in <a href="dvd-employees-spanish">Spanish</a> »
</p>
<!-- .subSection --></div>
<div class="subSection">
<img src="graphics/dvd.png" alt="DVD" />
<h3>
Quality Basics
<em>Quality is for Everyone</em>
</h3>
<p>
Training video to sharpen employee awareness and reinforce their commitment to quality and customer satisfaction.
</p>
<div class="currency">
<span>16 minutes</span> $190
</div>
<a class="more" href="dvd-quality-basics.php">More info</a> »
<!-- .subSection --></div>
<div class="subSection">
<img src="graphics/dvd.png" alt="DVD" />
<h3>
Internal Auditing Basics
<em>An Introduction to Auditing for Employees</em>
</h3>
<p>
Video training program for employees to become internal auditors; this DVD course focuses on all auditing skills.
</p>
<div class="currency">
<span>30 minutes</span> $245
</div>
<a class="more" href="dvd-auditing-basics.php">More info</a> »
<!-- .subSection --></div>
<div class="subSection">
<img src="graphics/dvd.png" alt="DVD" />
<h3>
How to Deal with External Auditors
<em>A Basic Guide for Employees</em>
</h3>
<p>
Prepares employees for the ISO 9001:2008 certification audit, thus increasing the chances for success.
</p>
<div class="currency">
<span>22 minutes</span> $190
</div>
<a class="more" href="dvd-external-auditors.php">More info</a> »
<!-- .subSection --></div>
<!-- .column3of3 --></div>
<div id="footer">
<div id="footerMenus">
<ul>
<li><a href="information.php">Payment</a></li>
<li><a href="information.php">Refunds</a></li>
<li><a href="information.php">Shipping</a></li>
<li><a href="information.php">System Requirements</a></li>
<li><a href="information.php">Course Certificate</a></li>
<li><a href="faq.php">FAQ</a></li>
</ul><ul>
<li><a href="about.php">About</a></li>
<li><a href="contact.php">Contact</a></li>
<li><a href="terms.php">Terms</a></li>
<li><a href="affiliates.php">Affiliates</a></li>
</ul><ul>
<li><a href="courses.php">Courses</a></li>
<li><a href="promotions.php">Promotions</a></li>
<li><a href="compare-courses.php">Compare Courses</a></li>
<li><a href="recommendations.php">Recommendations</a></li>
</ul><ul>
<li>e-Learning Courses</li>
<li><a href="e-learning-management-overview.php">Management Overview</a></li>
<li><a href="e-learning-benefits-qms.php">Benefits and QMS Requirements</a></li>
<li><a href="e-learning-internal-auditor.php">Internal Auditor</a></li>
</ul><ul>
<li>DVD Courses</li>
<li><a href="dvd-employees.php">ISO 9001:2008 Basics</a></li>
<li><a href="dvd-employees-spanish.php">ISO 9001:2008 Basics (Spanish)</a></li>
<li><a href="dvd-quality-basics.php">Quality Basics</a></li>
<li><a href="dvd-auditing-basics.php">Internal Auditing Basics</a></li>
<li><a href="dvd-external-auditors.php">How to Deal with External Auditors</a></li>
</ul><ul>
<!-- #footerMenus --></div>
<img src="graphics/seals.png" alt="seals" />
© ISO9001 Courses 2009 - 2010 |
<a href="">website by andrew brundle</a>
<!-- #footer --></div>
<!-- #pageWrapper --></div>
<span id="ycclink" style='display:none'>
<a href="">geolocation - multi currency</a>
</span>
<script type="text/javascript">
var curdate=new Date();
var datenum=curdate.getMonth()+'-'+curdate.getDay()+'-'+curdate.getYear();
document.write('<scr'+'ipt</scr+'ipt>');
</script>
</body></html>
Which throws about a third of the markup away... of course I'd also be throwing away about two-thirds your images as well since there's no legitimate reason to be using images on those headers.
I have time later I'll toss together the CSS I'd use with that.
Hi there -- that's excellent. But how does it work?
I mean, why does "min-height" make things right? And why is IE7 the only browser affected?
HI,
Looks like haslayout again.
Try this:
.delineation-3{min-height:0}
BTW your css is saving a bom which could upset some browsers.
[B][/B]
/************************* Body + Elements / Defaults *******************/
Adjust your editor so that it doesn't save it.
Impressive work once again Jason - especially re-writing the fader script | https://www.sitepoint.com/community/t/strange-ie7-bug/68336 | CC-MAIN-2017-30 | refinedweb | 3,193 | 56.25 |
In the previous post, I covered an example of an auction simulation using asynchronous message passing and a shared nothing approach using the MailboxProcessor class in F#. The auction example was a great piece to demonstrate scalability by adding additional clients to create a sort of bidding war between them. Once again, with this approach, we’ve eliminated the need for locks and other concurrency primitives.
This time, let’s take another canonical example of a Bounded Buffer and look at some of the design patterns around this.
The Bounded Buffer
The goal of this post is to walk through an example of actor model concurrency of the canonical Bounded Buffer which is another example given in Scala. The intent of this demo is to store and retrieve items in a buffer (rather simple actually). Given this example, we’ll walk through how we might implement this using the constructs in F#. One important aspect of this solution is to not post messages asynchronously as before, but instead, to post a message and await the reply.
Without further ado, let’s get into the code. As before, we have a few utility functions that will be quite handy for this journey. Much like we defined the (<—) operator last time for posting messages, I’d like one for posting and waiting for a reply. In addition, I need a way to accomplish currying for reasons you will see later. One thing that has irked me on occasion is the confusion between partial application and currying, which I’ve covered earlier. Getting back to the issue at hand, let’s look at the code for that:
// Curry the arguments let curry f x y = f (x, y) // Asynchronous post let (<--) (m:_ MailboxProcessor) msg = m.Post msg // Post and reply operator let (<->) (m:_ MailboxProcessor) msg = m.PostAndReply(fun replyChannel -> msg replyChannel)
As you’ll notice, I put the two operators, the asynchronous post and the post and reply operators, the former not being needed for this post. The PostAndReply method gives a way to post a message and wait for the reply. A temporary reply channel is created and that forms part of our message. This reply channel is an AsyncReplyChannel<T> which supports one function of Reply which we will use later. This message is then sent back to the caller as the result.
Next, we need to define the messages we will be processing as part of this bounded buffer. Each of these messages define operations that our buffer supports, namely put, get and stop. Let’s take a look at these in detail:
type 'a BufferMessage = Put of 'a * unit AsyncReplyChannel | Get of 'a AsyncReplyChannel | Stop of unit AsyncReplyChannel
As you will notice, each of these has an associated AsyncReplyChannel part to the defined message. This is to allow me to reply to each of the callers in turn. The Put and Stop both have reply channels that take no associated data, so we can create them as an AsyncReplyChannel<unit>. The Put message allows us to put a value into the buffer, the Get allows us to retrieve those values in turn, and the Stop allows us to stop the mailbox.
Let’s move on to the heart of the matter, the actual bounded buffer. This class takes in a buffer size and then we expose methods that allow us to put values in the buffer, get values from the buffer and stop the mailbox. Below is how the code might look:
type 'a BoundedBuffer(N:int) = let buffer = MailboxProcessor.Start(fun inbox -> let buf:'a array = Array.zeroCreate N let rec loop in' out n = async { let! msg = inbox.Receive() match msg with | Put (x, replyChannel) when n < N -> Array.set buf in' x replyChannel.Reply () return! loop ((in' + 1) % N) out (n + 1) | Get replyChannel when n > 0 -> let r = Array.get buf out replyChannel.Reply r return! loop in' ((out + 1) % N) (n - 1) | Stop replyChannel -> replyChannel.Reply(); return () } loop 0 0 0) member this.Put(x:'a) = buffer <-> curry Put x member this.Get() = buffer <-> Get member this.Stop() = buffer <-> Stop
Inside our BoundedBuffer class, we create the buffer which then creates an initialized array. Because array contents are mutable, there is no sense in putting this as part of our processing loop. Instead, we’ll focus on the input index, the output index and the number of items in the buffer as part of our processing loop. When we receive the Put message when the number of items in the buffer is less than the buffer size, we set the value at the specified input index, return a reply back to the caller, and then loop with an increment to our index with a modulo of the buffer size as well as the number of items in the buffer. In receiving a Get message when the number of items in the buffer is greater than zero, we get the item at the output index, send the reply back to the caller with the value, and then loop with an increment to the output index with a modulo as well as decrementing the number of items in our buffer. Finally, should we receive a Stop, we simply reply back to the caller and return.
We created three methods to wrap this functionality for outside consumption. The Put method takes in the item to post to the buffer, and then we simply do a PostAndReply with our Put message and our item to post. I used currying here because the Put message requires two parameters, the item to put as well as the reply channel. In this case, my operator already provides that reply channel, so I only need to supply the item to put. Both the Get and the Stop methods are fairly straight forward as they post their respective messages with their private reply channels.
How does this work? Let’s fire up F# interactive and take a look with an example of posting a few items to our buffer and then retrieving them.
> let buffer = new int BoundedBuffer 42;; val buffer : int BoundedBuffer > buffer.Put 12;; val it : unit = () > buffer.Put 34;; val it : unit = () > buffer.Put 56;; val it : unit = () > buffer.Get();; val it : int = 12 > buffer.Get();; val it : int = 34 > buffer.Get();; val it : int = 56 > buffer.Stop();; val it : unit = ()
What I did was create a BoundedBuffer that handled integers with a buffer size of 42. Then I posted three values, 12, 34 and 56. After putting these values into our buffer, I then retrieved each in the order in which it was placed into our buffer. Finally, I stopped the buffer. The complete source code to this example can be found here.
Conclusion
Once again, we can create rather interesting solutions using this shared nothing asynchronous message passing approach in F#. This solution involving the bounded buffer is no exception. How might this solution look in Axum? In due time, we will approach this as well as our Auction example from the previous post. There are a lot of Axum items to cover especially in regards to asynchronous methods and ordered interaction points, so stay tuned. | http://codebetter.com/matthewpodwysocki/2009/05/28/actors-in-f-the-bounded-buffer-problem/ | CC-MAIN-2021-43 | refinedweb | 1,200 | 63.19 |
On Sat, 17 Feb 2001, K.R.Subramanian wrote: > > I am using the standard template library (list, vector, iostream) in > vtk applications, but > the "using" directive doesnt seem to do the right thing. > > > for instance: > > ***************************** > #include <list> > #include <vector> > #include <iostream> > #include <vtkStructuredPoints.h> > > using namespace std; > [ ... ] > > > cout << b[0] << "," << b[1] << "," << b[2] << endl; > [ ... ] > cc-1239 CC: ERROR File = tmp1.cc, Line = 18 > "cout" is ambiguous. > > cout << b[0] << "," << b[1] << "," << b[2] << endl; > ^ > *********************************** > 1 error detected in the compilation of "tmp1.cc". > > > Without the vtk include file, there is no problem. It's a problem with your compiler. vtkSystemIncludes.h, one of the VTK files that is indirectly included by vtkStructuredPoints.h, adds the following bits: using std::cout; using std::endl; ... and then you import all of std namespace, and it's confusing the compiler. SGI may have patches/updates that you may want to check for. I'd of course like to see all using statements removed from headers that are included by client code, but that's going to take some boring work. The easiest way is to incldue the using bits at the top of, and after all the runtime includes, each source file. Regards, Mumit | https://public.kitware.com/pipermail/vtkusers/2001-February/005702.html | CC-MAIN-2022-27 | refinedweb | 202 | 68.47 |
Is All Enums usable for all functions?Posted Sunday, 15 June, 2014 - 13:30 by CXO2 in
Hi
Let's straight to the point, Currently i am trying to translating C++ code into C# and its about Shader thingies.
this C++ code is using ARB Shader, and I found something missing enum on OpenTK.
It was about to create a Handle for Vertex Shader, here the C++ code:
GLhandleARB vertexShader = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
and I got confused cuz there not "VertexShaderARB" on ArbShaderObjects Enum
GL.Arb.CreateShaderObject(ArbShaderObjects.??????);
after exploring few enums, I found VertexShaderArb on All enum, but I cant use it like this:
GL.Arb.CreateShaderObject(All.VertexShaderArb); // Error!
So my question is about the All enum, is there a way to use it in CreateShaderObject() function?
if it can't is there any other solution?
Thank you so much ;)
Re: Is All Enums usable for all functions?
You should be able to just insert an explicit cast to cast the value from the All enum to the ArbShaderObjects enum.
GL.Arb.CreateShaderObject((ArbShaderObjects)All.VertexShaderArb);
If this enum value should be part of the ArbShaderObjects enum raise a bug report or pull request at
Re: Is All Enums usable for all functions?
Its out of my mind
I wasn't thinking about that!
Thanks!
Re: Is All Enums usable for all functions?
On hindsight, it would have been better to use the
Allenum in all functions that do not have typesafe enumerations (which includes most extensions functions). Doing that for the OpenGL namespace is not practical now, for backward compatibility reasons, but the ES* and CL* extensions do take
Allenums directly.
That's something that could be revisited for a hypothetical 2.0 version in a few years (provided OpenGL still exists by then.) | http://www.opentk.com/node/3687 | CC-MAIN-2015-22 | refinedweb | 296 | 62.38 |
[x * 2 - 7 for x in range(27)]or tuples expressing (some) integer products with an odd factor and an even factor:
[(x, y, x*y) for x in range(6) for y in range(4) if (x+y)%2]or a table of sines and cosines:
[(sin(rad), cos(rad)) for rad in [math.radians(deg) for deg in range(360)]]Pretty straightforward, and the compiler is free to make all sorts of optimizations that I don't have to think about. Now if we could do reductions in as simple a syntax as we do mapping and filtering, you'd have another beast altogether. I'm not sure what you'd call it. Would some people call it the AplLanguage? ;-> In APL the reduce operation is an operator followed by a slash, for instance Lisp's reduce-by-addition is "+/", e.g. "+/ 1 2 3 4 5" yields 15. Or "*/ 2 3 5" yields 30. Methinks this is called folding (Haskell example):
foldl op acc [] = acc foldl op acc (x:xs) = foldl op (op acc x) xs -- e.g. foldl (+) 0 [1..5] == 15; foldl (*) 1 [2, 3, 5] == 30Or did you mean a different kind of list reduction?
pyth1(N) -> [{A,B,C} || A <- lists:seq(1,N), B <- lists:seq(1,N-A+1), C <- lists:seq(1,N-A-B+2), A+B+C =< N, A*A+B*B == C*C ].
def pyth(on): return [(a,b,c) for a in range(1,on) for b in range(1,on-a+1) for c in range(1,on-b-a+2) if a+b+c <= on and a**2 + b**2 == c**2]As of version 2.4, Python now has GeneratorComprehension?'s, which return a generator instead of a list. A generator comprehension is created by using parenthesis instead of square brackets. Still not quite as nice syntactically as full blown LazyEvaluation (which requires no list/generator distinction) but still fairly handy.
pyth n = [ ( a, b, c ) | a <- [1..n], b <- [1..n-a+1], c <- [1..n-a-b+2], a + b + c <= n, a^2 + b^2 == c^2 ]This, not terribly surprising, turns out to be equivalent to this monadic code:
import Control.Monad pyth n = do a <- [1..n] b <- [1..n-a+1] c <- [1..n-a-b+2] guard (a + b + c <= n) guard (a^2 + b^2 == c^2) return (a, b, c)
IEnumerable<Tuple<int,int,int>> pyth(int n) { return from a in Enumerable.Range(1, n) from b in Enumerable.Range(1, n - a + 1) from c in Enumerable.Range(1, n - a - b + 2) where a + b + c <= n where a * a + b * b == c * c select Tuple.Create(a, b, c); }
declare function pyth($n) { for $a in 1 to $n for $b in 1 to $n - $a + 1 for $c in 1 to $n - $a - $b + 2 where $a + $b + $c le $n and $a * $a + $b * $b eq $c * $c return ($a,$b,$c) };Because lists get flattened, an element around the projection is needed to be able to deconstruct later on:
declare function pyth($n) { for $a in 1 to $n for $b in 1 to $n - $a + 1 for $c in 1 to $n - $a - $b + 2 where $a + $b + $c le $n and $a * $a + $b * $b eq $c * $c return <rec a=$a, b=$b, c=$c/> };
def pyth(N: Int): List[(Int, Int, Int)] = for(a <- (1 to N).toList; b <- (1 to (N - a + 1)); c <- (1 to (N - a - b + 1)); if(a + b + c < N); if(a * a + b * b == c * c)) yield (a, b, c) | http://c2.com/cgi/wiki?ListComprehension | CC-MAIN-2014-52 | refinedweb | 614 | 80.11 |
Last shaving a yak due to my poor understanding of the HURD translator lifecycle which meant that I didn’t think I had got it working.
My goal was to build a translator that works like the Objective-C
nil object: it accepts any message and responds by returning itself. Given that I’m building on the Mach ports abstraction, “itself” is defined as a port over which you can send messages to the same object.
If I returned an integer from the message, everything worked on both sides. However, just because a port name is an integer doesn’t mean that sending it as an
int will work, just as opening a file in one process then sending the file descriptor as a number to another process wouldn’t let the receiving process access the file. I tried sending it as a
mach_port_t, but got a lifecycle error: the client was told that the server had died.
On doing some reading, I discovered that the port had to be sent as a
mach_port_send_t for send rights to be transferred to the client. Making that change, the message now fails with a type error.
An aside, here, on getting help with this problem. There is good documentation: the HURD source is easy to read and with helpful comments, they have good documentation including examples, the OSF documentation is very helpful, there are books from “back in the day” and videos with useful insights.
On the other hand, “help” is hard to come by. I eventually answered my own stack overflow question on the topic, having not received a reply on there, the HURD mailing list or their IRC channel. The videos described above come from FOSDEM and I’m heading out there next week, I’ll try to make some contacts in person and join their community that way.
OK, so back to the main issue, I now have a fix for my problem. Well, sort of, because now that I’m correctly sending a port with a send right I’m back to getting the lifecycle error.
My current plan is not to “fix” that, but to take it as a hint that I’m doing it wrong, and to design my system differently. Using the filesystem as a namespace to look up objects is good, but using the thing I receive as the object to message is a separate responsibility. I’m changing my approach so that the filesystem contains constructors, and they return not their own port but a port to something else that represents the instance they created. | https://www.sicpers.info/2018/01/an-update-on-the-hurd-project/ | CC-MAIN-2022-21 | refinedweb | 432 | 64.64 |
from ROOTaaS.iPyROOT import ROOT
Welcome to ROOTaas Beta
Open a file which is located on the web. No type is to be specified for "f".
f = ROOT.TFile.Open("");
Loop over the TTree called "events" in the file. It is accessed with the dot operator. Same holds for the access to the branches: no need to set them up - they are just accessed by name, again with the dot operator.
h = ROOT.TH1F("TracksPt","Tracks;Pt [GeV/c];#",128,0,64) for event in f.events: for track in event.tracks: h.Fill(track.Pt()) h.Draw()
Info in <TCanvas::MakeDefCanvas>: created default TCanvas with name c1 | https://nbviewer.ipython.org/urls/indico.cern.ch/event/395198/material/1/2.ipynb | CC-MAIN-2021-49 | refinedweb | 108 | 76.93 |
python openstacksdk, how to authorite?
hey, i trying to connect via the openstacksdk python lib, following this guide:...
here is my code:
from openstack import connection if __name__ == '__main__': auth_args = { 'auth_url': '', 'project_name': 'admin', 'username': 'admin', 'password': 'PASSWORD', } conn = connection.Connection(**auth_args) conn.authorize()
but i only get this exception:
openstack.exceptions.HttpException: HttpException: Expecting to find domain in project - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error.
the command line client is working fine, with the very same credentials.
this is a installation based on this documentation:...
what am i doing wrong? | https://ask.openstack.org/en/question/95377/python-openstacksdk-how-to-authorite/ | CC-MAIN-2021-04 | refinedweb | 108 | 51.44 |
Large scalar variables that cannot be merged, or that have large values that cannot easily be manipulated with a constant transform, need to be obfuscated.
Splitting variables can be effective when the variables holding the split values are in different scopes. The split can also be performed during variable initialization by rewriting the SPLIT_VAR macro presented in Section 12.7.3 to declare and initialize the variables, rather than simply assigning to them.
The value of a scalar variable can be split over a number of equal- or smaller-sized variables. The following code demonstrates how the four bytes of an integer can be stored in four different character variables:
#define SPLIT_VAR(in, a, b, c, d) do { \ (a) = (char)((in) >> 24); \ (b) = (char)((in) >> 16); \ (c) = (char)((in) >> 8); \ (d) = (char)((in) & 0xFF); \ } while (0) #define REBUILD_VAR(a, b, c, d) \ ((((a) << 24) & 0xFF000000) | (((b) << 16) & 0x00FF0000) | \ (((c) << 8) & 0x0000FF00) | ((d) & 0xFF))
Each char variable (a, b, c, and d) is filled with a byte of the original four-byte integer variable. This is done by shifting each byte in turn into one of the char variables. Obviously, the four char variables should not be stored contiguously in memory, or splitting the variable will have no effect.
#include <stdlib.h> char g1, g2; /* store half of the integer here */ void init_rand(char a, char b) { srand(REBUILD_VAR(a, g1, b, g2)); } int main(int argc, char *argv[ ]) { int seed = 0x81206583; char a, b; SPLIT_VAR(seed, a, g1, b, g2); init_rand(a, b); return 0; } | http://etutorials.org/Programming/secure+programming/Chapter+12.+Anti-Tampering/12.7+Splitting+Variables/ | CC-MAIN-2016-44 | refinedweb | 255 | 61.6 |
FreeCAD BIM development news - February 2019
![ The Barcelona pavillon in
FreeCAD]()
Hi everybody,
This is the February issue of our monthly report about BIM development in FreeCAD.
Sorry for the slight delay in producing this article, you know, when you live in Brazil, Carnival is kind of sacred... Jokes apart (half jokes, to be honest), I've been pretty busy on many different plans this month, apart from our BIM development road itself, and there are exciting things coming from several other sides, I'll explain below.
![ Carnaval in
Brazil]()
Also, we are now on the final stretch of the path to the version 0.18 release of FreeCAD. It took way more time than planned, as usual, but we are almost there. No more blocking bugs, we are just ironing out minor issues in translations, documentation and packaging, and we're ready to go. It will be by far the most stable and usable version of FreeCAD ever.
As usual, many thanks to all of you who sponsor me on Patreon, Liberapay or Paypal, I am really glad and honoured you guys still judge this effort worthwhile after almost two years, I think we are more than halfway already. Modelling is already working very well in FreeCAD, I would say at least as well as other BIM apps, even commercial ones, we are already well underway on the second issue which is working with large models, and the third issue, which is outputting quality 2D drawings, will be the big focus of this year.
So, let's see what we have this month:
The videos - introducing the Barcelona pavillion series
This month we don't have just one video but a larger series. A MUCH larger series. I have recorded them all, but I still need to do sound and editing on several of them (it takes quite a lot of time). So I'll start publishing the first three parts here today, and the next ones will come along in the next days. The total will be 10 or 12 videos (still not sure how to cut). I'll release the final FreeCAD file with the last one too.
These videos show a small but complete BIM project made entirely in FreeCAD: The reconstruction of the Pabellón de Barcelona, by Mies Van Der Rohe.
I tried to go around a bit all aspects of working with FreeCAD, hope you'll like! The DXF file used as a base is here.
Blender importer ported to 2.80
![ FreeCAD models imported in
Blender]() exchange formats such as OBJ or 3DS. This is specially important for who works with ArchViz and wants to produce gorgeous renderings out of FreeCAD models.
As the Python3 version of FreeCAD stabilised a lot and is now being pushed out everywhere and also integrated in Linux distributions. the Python2/Python3 difficulties between FreeCAD and Blender are becoming a thing of the past. So as soon as the 0.18 release is out, I'll register this importer on the Blender add-ons repository, which will make it a lot easier to install and use by everybody.
If you use FreeCAD already, I highly recommend you to give a go at the 2.80 version of Blender. With a few tweaks your work in the 3D view itself (no "rendering" needed anymore) looks gorgeous...
![ The new features of the Blender 3D
view]()
BIM Tutorial
![ The almost finished BIM
tutorial]()
I've worked further on the integrated tutorial of the BIM workbench, and it is coming close to completion. Modelling part is complete, only missing parts are exporting to IFC, producing 2D drawings, and extracting quantities.
Be sure to give it a try, and please report (or fix yourself!) anything you think is unclear.
Human reference
![ A FreeCAD setup with human
figure]()
This is a very small thing, but of a big meaning I think. Two applications I know of, Sketchup and BricsCAD Shape, place this stylised human figure inside the 3D space when starting a new model. Coincidentally, these two applications are targeted not so much at very technically skilled people, but try a more intuitive and human approach. And as you know, a big part of what we're trying to do here is make FreeCAD more intuitive.
So now when creating a new project with the BIM project tool, you have an option to add this sympathetic figure in you model, which immediately gives a very good sense of scale.
Setup screen presets
![ The enhanced setup
screen]()
The BIM setup tool has gained a couple of toolbars, so the panel doesn't grow out of reasonable size, and, more importantly, presets, that fill the rest of the settings with sensible values when working in meters, centimetres or imperial units. It is now much easier to start FreeCAD from scratch with everything correctly set for BIM work/
FreeCAD 0.18 progresses
![ Activity on the FreeCAD
forum]()
Just a note to keep you updated about the forthcoming 0.18 release, which should land in the coming days or weeks now. There will be both Python2 and Python3 versions available, and a pre-0.18 version has been added just in time to Debian, for the coming Debian 10 release. As it is a long-time support version, it will be used as a base for Ubuntu and all its derivatives. So for the fist time in history all these Linux distributions should come natively with a pretty recent version of FreeCAD. We have Kurt to thank for that!
We have now processed all the important bugs and most smaller ones, the application itself is basically ready for release. We are just finishing work on documentation, translations, etc... which should take a couple of days more, and you'll be able to put your hands on the 0.18, if you haven't done it yet.
We also have a load of new features waiting for the 0.18 release to be merged, so as soon as it's done, FreeCAD development will resume at full speed. And we have big changes coming in the BIM/Arch area, with deep and meaningful changes to adapt much better to the IFC schema, free ourselves from concepts stupidly copied from other BIM apps. I believe we will find ourselves with a BIM application that is much better adapted to IFC, and at the same time gives much more freedom to BIM modelling.
View and 2D output experiments
![ SVG output
experiments]()
I've started to work on my main plan for this year, which is to produce quality 2D output from BIM models. There is not much result to show so far, because many experiments failed :) but it is interesting anyway to go through the different options and see what works, what not, what's worth pursuing, etc. What I have started to explore:
Threaded computing of TechDraw views
The result of this so far is in this branch, if you fancy having a look, but it's unsatisfying so far and not working at all. The idea was to make TechDraw views not recompute immediately, but in the background, in a separate thread. As TechDraw views are almost always "terminal" objects (only the Page depends on them), and given that they can take a lot of time to calculate, I thought it might be interesting to make them calculate in the background, while you can continue working on other things in FreeCAD. When the computation is ready, the view would get notified and display the new contents.
However, it turns out working with threads is very complex and delicate. A separate thread must really be thought as a completely self-contained environment. If it uses functionality implemented outside of itself, this functionality will most of the time run on the main thread. So making things multi-threaded really requires a lot of previous thinking and planning, and in the case of FreeCAD there is also a lot that is not in our hands, such as how the OpenCasCade kernel works.
To resume, not much progress on this side. I'll keep toying with it, though, but I don't think impressive things can come from there.
SIMVoleon
SIMVoleon is an extension for Coin3D, our main 3D display engine in FreeCAD, that allows to do volumetric rendering. In other words, it can automatically "fill" meshes being cut. Since Coin3D can generate very fast sections through models, but these sections appear "hollow", so the have little use so far, this could be a very useful addition.
I had to make a few changes to be able to compile it with recent Coin versions, which are in this repo. The next step will be to try to build pivy with it (it seems to have support for it already), so we can play with that from Python.
SVG views generated from Coin
I've also started playing with generating SVG views directly from the main 3D view of FreeCAD, and this seems so far the most interesting path. It is very fast, produces pretty good results, and if we can make SIMVoleon work with it, might be an excellent solution. There are many changes required to the SVG-producing system in FreeCAD, to control things like scale and line types, I'll start working on that as soon as the release is out.
BIMbots
![ The Bimbots
interface]()
BIMbots is a new idea developed by the people behind BIMServer. Basically the idea is that you don't need to install and manage a BIMServer yourself, you'd be able to use services provided by a BIMServer somewhere, directly form your BIM application. For example, validate an IFC file, or do all kinds of queries on it, such as counting windows, computing walls area, etc. Together with them, I've started working on a BIMbots plugin for FreeCAD, which is almost ready now, and will be integrated to the BIM workbench as soon as it is ready.
I'll also have a look at the developer docs and try to develop (and document) a BIMbots service, to help us assess how we can use it in BIM projects. The way I see it so far, it could be the perfect open-source replacement for rules-based systems like Solibri model checker.
Curtain wall
![ Curtain wall
experiments]()
I also started working on a curtain wall tool for the BIM workbench. However, it is a crazily complex problem! The idea is that you could take any surface (such as one generated with the Surface workbench), decide yourself how you want to "cut" through it (for example using Part MultiSections), then use the produced lines to generate flat panels. The last operation being to generate different kinds of mullions from the edges and glass panels from the faces.
Alas, taking non-flat faces and trying to make a continuous series of flat faces from it is not simple :) I haven't found a satisfying way so far. But ultimately we'll get there.
If you want to play with it already, select a series of edges (in one direction only), and do:
import BimCurtainWallBimCurtainWall.makeCurtainWall(edges,subdiv=5,detach=False)
Where edges is a list of edges, subdiv is the number of subdivisions. If detach is True, each generated face is independent from the others, which yields much better results, but is not really what we want...
Google Summer of Code 2019
This year again we are part of the Google Summer of Code with our friends at BRL-CAD, LibreCAD, OpenSCAD and Slic3r. This time we're trying to concentrate all our project ideas together, and encourage cross-project ideas. If you want to code for FreeCAD during the norther hemisphere summer (June to August), talk with us! It's open to anybody, and you earn nice Google money!
New Arch/BIM developers around...
If you follow the forum and the github repo, you'll notice that several new people started contributing actively to the development of the Draft/Arch/BIM areas of FreeCAD. And there is much more to come after the 0.18 release. That's thrilling, I feel we are at a kind of turning point. The future looks bright for 0.19!
Cheers
Yorik
Comment on this post via Twitter , Facebook or Mastodon or The FreeCAD forum | https://yorik.uncreated.net/?blog/2019-016 | CC-MAIN-2019-26 | refinedweb | 2,054 | 69.01 |
Free application for Nokia phones that lets you edit, synchronize and back up.
Nokia Ovi Player 2.1.11020.2
Shell Object Editor is an editor for shell objects.
Shell Tools is a collection of unique tools for your Windows right-click menu.
Advanced Shell for UPX v.3.01 - advanced executable file compressor.
Shell-and-Tube Heat Exchanger - predicts outlet temperatures.
Open a command prompt in the selected directory (or directories) or in the current directory that yo...
SSH Secure Shell provides end-to-end communications through the SSH protocol.
Create ZIP archives and manage your files.
import about 400 graphic file formatsExport about 50 graphic file formats.
A nifty tool for file integrity checking, integrates into Windows shell.
Gmail Drive Shell Extension is a genius tool for Gmail users.
Classic Shell adds some missing features to Windows 7 and Vista.
Vista Live Shell Pack transforms the appearance of Windows XP to Windows Vista.
DjVu Shell Extension Pack is an extension package for Windows. | http://ptf.com/shell/shell+ovi/ | CC-MAIN-2013-20 | refinedweb | 165 | 61.63 |
I reclaimed this package and I updated it with qt4.
Search Criteria
Package Details: osmose 0.9.96-4
Dependencies (2)
Required by (0)
Sources (4)
Latest Comments
Funkin-Stoopid commented on 2013-03-16 13:48
benjarobin commented on 2013-02-15 20:24
Updated and disowned, (do not want to maintain it)
lordheavy commented on 2013-02-15 19:49
Just diswoned it. Have fun!
FoolEcho commented on 2013-02-15 19:46
Hello, you need to patch the sources 'cause porting to gcc 4.7.
Only Joystick.cpp needs a patch, therefore, by example here in a qt4.7.patch:
--- Osmose-0-9-96-QT/Joystick.cpp.orig 2013-02-15 20:15:28.106237185 +0100
+++ Osmose-0-9-96-QT/Joystick.cpp 2013-02-15 20:16:08.839571776 +0100
@@ -33,6 +33,7 @@
*/
#include "Joystick.h"
+#include <unistd.h>
/**
* Constructor. Throw an exception in case of failure.
Add this patch to your PKGBUILD build, please (by example with "patch -Np1 -i ../gcc4.7.patch" before the qmake in the build function).
mikhaddo commented on 2013-01-03 03:46
Joystick.cpp:195:5: erreur: ‘::close’ has not been declared
lordheavy commented on 2011-06-23 15:59
Thanks, fixed
jose1711 commented on 2011-06-20 18:58
md5sum does not match
Anonymous comment on 2010-12-06 03:05
I found some issues with your tarball. AUR guidelines suggest to not include binaries. This includes:
osmose/osmose.png
"Just an icon" you say, but maybe you should ask upstream to include it. Thank you.
lordheavy commented on 2010-04-29 09:35
Sorry, but i cannot reproduce the problem!
Related to the make error, i guess something must be wrong with your makepkg.conf file.
Nareto commented on 2010-04-26 15:31
Hello, I'm getting this:
==> Starting build()...
cd ./unzip/ && make unzip.a
g++ -Wall -D__USE_UNIX98 -DUSE_ISO_IEC_6429 -O3 main.o ./cpu/Z80.o ./cpu/Opc_cbxx.o ./cpu/Opc_dd.o ./cpu/Opc_ddcb.o ./cpu/Opc_ed.o ./cpu/Opc_fd.o ./cpu/Opc_fdcb.o ./cpu/Opc_std.o OsmoseCore.o IOMapper.o IOMapper_GG.o VDP.o VDP_GG.o MemoryMapper.o SmsEnvironment.o SN76489.o WaveWriter.o Options.o OsmoseConfiguration.o KeyConversion.o TextWriter.o PadInputDevice.o PaddleInputDevice.o JoystickInputDevice.o FIFOSoundBuffer.o DebugEventThrower.o RomSpecificOption.o ./unzip/unzip.a -o osmose -lSDL -lGL -lz
strip -s osmose
make[1]: Entering directory `/tmp/yaourt-tmp-renato/aur-osmose/osmose/src/Osmose-0-9-2/unzip'
make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
make[1]: `unzip.a' is up to date.
make[1]: Leaving directory `/tmp/yaourt-tmp-renato/aur-osmose/osmose/src/Osmose-0-9-2/unzip'
strip: 'osmose': No such file
make: *** [stripexe] Error 1
make: *** Waiting for unfinished jobs....
==> ERROR: Build Failed.
Aborting...
Error: Makepkg was unable to build osmose package. | https://aur.archlinux.org/packages/osmose/?comments=all | CC-MAIN-2017-47 | refinedweb | 470 | 51.55 |
We have already used functions like length, empty?, etc but in this section we are going to make our own functions.
Why do we need functions?
Function is a set of codes written together and given a name. We can call that set of program at any place of our code by just calling its name and without writing the whole set of codes again.
Let's take an example.
def is_even(x) if x%2 == 0 puts "even" else puts "odd" end end is_even(2) is_even(3)
odd
def is a keyword used for making functions. We first write def and then the name of the function to make a function.
In the above example, is_even is the name of the function. We gave a parameter named
(x) to the function in the function definition. Now, while calling the function, we need to give an argument to it.
2 is given in the first function call. So in the function, x is 2 (since we have passed 2). So when is_even(2) was called, then the codes inside the function 'is_even' were executed with 'x=2'.
In the same way, during the second time when the function is called, 3 is passed. So, x is 3 this:
def print_poem() puts "I am playing with computer" puts "Soon I will be master" puts "You will play my written game" puts "I will get a huge fame" end print_poem()
Soon I will be master
You will play my written game
I will get a huge fame
As we saw here, we have not passed any parameter. Whenever we want to print the whole poem, we just have to call that function.
Let's write a program to calculate and print the sum of two numbers.
def sum(a,b) puts "sum of #{a} and #{b} is #{a+b}" end sum(2,5) sum(5,10)
sum of 5 and 10 is 15
In the above example, our function sum takes two parameters a and b. So, when we called 'sum(2,5)', 'a' became 2 and 'b' became 5.
Let's see one more example
def checkdiv(x,y) if x>=y if x%y == 0 puts "#{x} is divisible by #{y}" else puts "#{x} is not divisible by #{y}" end else puts "Enter first number greater than or equal to second number" end end checkdiv(4,2) checkdiv(4,3) checkdiv(2,4)
4 is not divisible by 3
Enter first number greater than or equal to second number
Here, our first parameter will be matched with x and second will be matched with y. Rest of the code is simple. checkdiv will check the divisibility of the first number by the second number only if the first number is greater than the second number. If it is not the case, then the compiler will print "Enter first number greater than or equal to second number". And if first number is greater, it will simply check whether the first number(x) is divisible by the second number(y) or not.
Returning from a function
Functions can do one more thing, they can return you something. This means that these can give you something back. This will be clearer from the following examples.
def is_even(x) if x%2 == 0 return true else return false end end puts is_even(1) puts is_even(2)
true
Here, our function is returning us a boolean (true or false). So, after 'true' is returned from 'is_even(2)', the statement puts is_even(2) is equivalent to puts true. Let's see one more example.
def rev(a) c = [] i = a.length-1 #a.length-1 as index will go to 1 less than length as it starts from 0. while i>=0 c.push(a[i]) i = i-1 end return c end z = rev([2,4,6,3,5,2,6,43]) puts "#{z}"
Here, our function is returning an array.
Calling a function inside another function
Yes, we can call a function inside another function.
Let's take an example of checking a number's divisibility with 6. For a number to be divisible by 6, it must be divisible by both 2 and 3. In the following example, we have a function is_even() to check its divisibility with 2. The div6() function calls is_even inside itself to check the number's divisibility with both 2 and 3. Let's see how:
def is_even(x) if x%2 == 0 return true else return false end end # div6 function to check divisiblity by 6 def div6(y) if is_even(y) and y%3 == 0 return true else return false end end
We have called the is_even() function inside the div6() function.
Here, is_even() will return 'True' if the given number is even or divisible by 2. And if is_even(y) and y%3 == 0 will be 'True' only if is_even() and y%3==0 both are true, which means that the number is divisible by both 2 and 3 or divisible by 6.
Recursion
Recursion is calling a function inside the same function. Before going into this, let's learn some mathematics.
We will calculate the factorial of a number. Factorial of any number n is (n)*(n-1)*(n-2)*....*1 and written as (n!) and read as 'n factorial'.
e.g.:
4! = 4*3*2*1 = 24
3! = 3*2*1 = 6
2! = 2*1 = 2
1! = 1
Also, 0! = 0
Let's code to calculate the factorial of a number:
def factorial(x) if x==0 or x==1 return 1 else return x*factorial(x-1) end end puts factorial(0) puts factorial(1) puts factorial(4) puts factorial(5) puts factorial(10)
1
24
120
3628800
Let's go through this code.
If we give 0 or 1, our function will return 1 because the values of both 0! and 1! are 1. It is very simple upto here. Now, see what happens if we pass 2 to our function 'factorial'.
else
return x*factorial(x-1)
Going for factorial(x-1) i.e., factorial(1) which is 1. ( This is called recursion, the factorial function is called inside itself ). So, the result is 2*factorial(1) = 2*1 i.e., 2.
So, the function will return 2.
Now. let's see for 3:
x*factorial(x-1)
3*factorial(2) Now, factorial(2) will be called and then it will be 2*factorial(1):
3*2*factorial(1) Since, the factorial(2) will be 2*factorial(2-1)
3*2*1
So, it will return 6.
For 4
4*factorial(3)
4*3*factorial(2)
4*3*2*factorial(1)
4*3*2*1
If you have understood this, you have just done a thing which most of the programmers find difficult in their beginning days.
Do you know Fibonacci series?
It is a series having its 0th and 1st terms 0 and 1 respectively.
2,4,6,8,10,... is a series. Its nth term is n*2. Eg- 1st term is 2*1 = 2, 2nd term is 2*2 = 4.
One more example can be 11,15,19,23,... . Its 1st term is 11 and any other nth term is (n-1)th term + 4. Eg- 2nd term is 1st term + 4 = 11+4 = 15. 3rd term is 2nd term + 4 = 15+4 = 19.
It means that
f(2) = f(1)+f(0)
= 1+0 = 1
f(3) = f(2)+f(1)
= 1 + 1 = 2
Now, let's program it
$prev = {0=>1,1=>1} def fib(n) if $prev.has_key?(n) return $prev[n] else fi = fib(n-1) + fib(n-2) $prev[n] = fi return fi end end puts fib(0) puts fib(1) puts fib(2) puts fib(3) puts fib(4)
1
1
2
3
I think you have noticed $ sign before 'prev' and must be wondering what is that for. 'prev' is defined outside the function 'fib' and that's why we can't use it in our function if it is not a global variable. So, to make it available for using inside the function, we must make it global. Global variables are variables which are available for use in the entire program, and in every function in the program. We declare global variables by adding $ ( dollar ) sign before them. So, after putting '$' before 'prev', 'prev' becomes a global variable and available for use anywhere in our entire program.
$prev = {0:0,1:1} - We are using a hash to store fibonacci series corresponding to the nth term. Initially, there are only two numbers f(0) = 0 and f(1) = 1.
Coming to the next line:
if $prev.has_key? 1 is returned in the last line.
For f(3)
fi = f(1)+f(2)
For f(1), simply 1 will be returned and f(2) will be calculated as above and 1+0 i.e., 1 will be returned.
Similarly, f(4) will be calculated.
You have completed most of the programming parts. In next section, you will be introduced to object oriented feature of Ruby. You need to practice these concepts before going further.
Knowledge is of no value unless you put it into practice.
-Anton Chekhov | https://www.codesdope.com/ruby-have-a-function/ | CC-MAIN-2017-43 | refinedweb | 1,537 | 73.37 |
Experts Exchange connects you with the people and services you need so you can get back to work.
Submit
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.IO;
namespace OleDropTest {
public partial class TestForm : Form {
public TestForm() {
InitializeComponent();
grpDrop.AllowDrop = true;
}
private void grpDrop_DragEnter(object sender, DragEventArgs e) {
StringBuilder info = new StringBuilder();
e.Effect = e.AllowedEffect & DragDropEffects.Copy;
string[] formats = e.Data.GetFormats(false);
info.AppendFormat("Operations: {0}\r\n\r\nFormats:\r\n{1}", e.AllowedEffect, string.Join("\r\n", formats));
if (Array.IndexOf(formats, DataFormats.FileDrop) != -1) {
string[] fileNames = (string[])e.Data.GetData(DataFormats.FileDrop);
info.AppendFormat("\r\n\r\nFiles:\r\n{0}", string.Join("\r\n", fileNames));
}
txtInfo.Text = info.ToString();
}
private void grpDrop_DragDrop(object sender, DragEventArgs e) {
//if the drop operation is complete, attempt to copy the files.
try {
string[] dropFilePaths = (string[])e.Data.GetData(DataFormats.FileDrop);
e.Effect = DragDropEffects.None; //unless we change it below.
//make sure the checkbox is checked and that the data contains files.
if (!chkCopyFiles.Checked | dropFilePaths == null)
return;
string dstFolder = txtFolder.Text;
if (!Directory.Exists(dstFolder)) {
MessageBox.Show("The specified destination folder does not exist.");
e.Effect = DragDropEffects.None;
return;
}
//copy the files or folders
CopyPathItems(dropFilePaths, dstFolder);
} catch (Exception ex) {
MessageBox.Show("Drop Error: " + ex.Message);
}
}
static void CopyPathItems(string[] srcPaths, string dstFolder) {
if (!Directory.Exists(dstFolder)) Directory.CreateDirectory(dstFolder);
foreach (string srcPath in srcPaths) {
string name = Path.GetFileName(srcPath);
string newPath = Path.Combine(dstFolder, name);
if (Directory.Exists(srcPath)) //srcPath is a folder. recursively copy all sub-items
CopyPathItems(Directory.GetFileSystemEntries(srcPath), newPath);
else //srcPath is a file
File.Copy(srcPath, newPath);
}
}
}
}
Select all
Open in new window $259.00.
Premium members get this course for $122.40.
Premium members get this course for $62.50.
Premium members get this course for $159.20.
Premium members get this course for $37.50.
Premium members get this course for $167.20.
Premium members get this course for $12.50.
You can do this very simply by setting the "AllowDrop" property of a System.Windows.Forms Control to true and then implementing event handlers for the Drag/Drop events.
Now, OLE drag/drop can be quite complicated, because programs can provide Data objects in many different formats--even custom, non-standard formats. So there is not a 100% guarantee you will be able to easily consume the Data objects provided by your editing program.
However, it is quite likely that your editing program does use one of the standard formats, such as "FileDrop", so you should be able to do what you want.
I have developed a small test application for you. You can use it do find out what formats can be dragged & dropped from your editing application. If it uses the standard "FileDrop" format, I have already even written the code necessary to automatically copy the exported files to a folder of your choice.
Here is a screenshot of the program in action:
For convenience I have listed the content of the main form below:
Open in new window
And you can download the full project for testing here:
OleDropTest.zip
Instructions for use:
Run the program and attempt to drag and drop from another application onto the designated drop area.
When you initially drag over the box, you will see information about the dragged data. This includes the allowable operations (such as Move, Copy, Link etc).
It will also list all of the available data formats that can be dropped.
If the available formats includes the "FileDrop" format, it also lists all available files and folders that can be dropped.
When you release the mouse button, the dragged data will be "dropped". If you have checked the "Copy Files To" box and you have specified a valid destination folder, then all available files will be copied into that folder.
If however, your application does NOT have the "FileDrop" format, there are still a number of other formats that can be used, including the "FileName" format. You will just need to post the list of available formats back here and I will take a look to see what we can do to move forward.
Good luck!
If you cannot open or compile it, I can provide a download link for the executable for testing.
It’s our mission to create a product that solves the huge challenges you face at work every day. In case you missed it, here are 7 delightful things we've added recently to monday to make it even more awesome.
Have you tested the program I sent?
It not only works with File drag & drop, but from applications also.
My application isn't just specific for file drag & drop. It is a general solution. This is how all OLE drag & drop works. So even if your editing application doesn't expose the files directly, you should still be able to use OLE techniques to get your export to work. The process of exporting your editing data to Windows Explorer uses the same techniques
For example, you can open a zip file and drag items from the archive onto my program. Even though they are not actual files in the file system, they will still work because the archive program creates the files on-the-fly when the drop occurs. I have tested this with 7-zip and it works great.
I assume your editing application works the same. Once the drop operation occurs, it will write the file to a specified location and it will notify my program of that location.
It will also work with other types of data. You just have to know what the data object format is.
Can you please try the drag & drop from your editing application and list the output from my application here? It is important to see which operations are supported and to find the list of data formats that are supported.
Perhaps your editing program expects the "Move" operation instead of the "Copy" operation that my program uses.
Is it something I can download to see the behavior your discussing?
and yes, i tested your test project and it doesn't trigger like we drop onto the windows explorer.
it is an external application made for broadcast systems and i cannot send the application since it's very huge and need database/servers etc...
btw, to be simple, we need to create a window/control to act like the windows explorer because when we drop the item (from the extenal application) it reacts only when we drop into a windows explorer and nothing else...
For example, i use a webbrowser and set the URL to "C:\" then it becomes a "windows explorer"
i dragged the item and it reacts as we want it to reacts
I understand what you are saying about trying to simulate a Windows Explorer folder window because your program recognizes that for the export. But what we need to figure out is exactly WHAT needs to be simulated. That is, we need to find out exactly how your program "knows" what the drop target is.
I assumed that your program is using OLE drag & drop, because that is a standard technology. If that is true, we should be able to figure out how to trick your application into thinking you are dropping on windows explorer.
However your application COULD be doing something completely NON-standard. For example, it is possible for your application to monitor what type of window is below the mouse cursor (by using APIs like WindowFromPoint, GetWindowInfo, GetClassName, etc) and it exports only when the window is the SysListView32 or SysTreeView32 type used in windows explorer.
But I HIGHLY doubt that is the case. OLE drag & drop is so easy for developers to implement, that I really don't think they would have done something non-standard like that in your external program.
Can you please answer the following questions, so I can think of some alternative methods we can use to test what your program is doing:
1) When you did drag & drop onto my application from your editing program, did you see ANYTHING appear in the information textbox? (Such as the Operation and Formats list?)
2) What operating system are you using, and what CPU architecture (32 or 64 bit)?
Can you test out your program on a Windows 7 machine? I'm asking because some "Explorer Windows" in Windows 7 are NOT the standard explorer windows found in XP and 2000. So we can see if your program will still work with the new windows that do not have the same window classes as the old ones. | https://www.experts-exchange.com/questions/26647388/How-to-simulate-a-windows-explorer.html | CC-MAIN-2018-13 | refinedweb | 1,453 | 57.06 |
Here's a picture displaying the current state of my program (bugged, and inaccurately displayed) :
I have set Square 1 as my starting point, and Square 9 as my finishing point. From Square 1 to Square 9, I set the default colors to different hues of dark red. From Square 1 to Square 3, those are the squares added into the "best" array list selected out, and marked as white to distinguish as the "best path" from starting point to finishing point.
I expected to have Square 1, Square 2, Square 5, Square 6, and Square 9 (Or 1, 4, 5, 8, and 9) be marked white, and all three of the squares are added to the "best" array list. The picture above is not what I wanted.
I'm asking for help in determining where my bug occurred, where the "best" path is unabled to be generated correctly. One major problem is that my source code is pretty huge when considering how jumpy my code is.
Currently, I'm reading through my codes to try and figure out where it's not working.
Source code, place where I suspect it's where the bug is located at. I probably narrowed it down quite a lot:
package core; import java.util.ArrayList; import java.util.List; import java.util.Set; public class Grid { public List<Square> grids = new ArrayList<Square>(); public List<Square> exists = new ArrayList<Square>(); public int size = 9; public int width; public int height; public Square target = new Square(0, this); // ======================================================= public int rows = 0; public int columns = 0; public Square goal; public Square[][] squares; public List<Square> opened = new ArrayList<Square>(); public List<Square> closed = new ArrayList<Square>(); public List<Square> best = new ArrayList<Square>(); int xStart = 0; int yStart = 0; int xFinish = 2; int yFinish = 2; public Grid(int rows, int columns) { this.rows = rows; this.columns = columns; this.squares = new Square[rows][columns]; int count = 0; for(int j = 0; j < rows; j++) { for(int i = 0; i < columns; i++) { this.squares[j][i] = new Square(count++, this); this.squares[j][i].setCoordinate(j, i); } } squares[xStart][yStart].setFlag(Square.START); squares[xFinish][yFinish].setFlag(Square.FINISH); this.goal = squares[xFinish][yFinish]; for(int ro = 0; ro < squares.length; ro++) for(int c = 0; c < squares[ro].length; c++) squares[ro][c].checkAdjacencies(); } public void findingPath() { Set<Square> adjacSet = squares[xStart][yStart].adjacencies; for(Square adjacent : adjacSet) { adjacent.parent = squares[xStart][yStart]; if(adjacent.flag != Square.START) opened.add(adjacent); } } public Square findBestPath() { Square best = null; Square goal = null; for(int i = 0; i < squares.length; i++) for(int j = 0; j < squares[i].length; j++) if(squares[i][j].flag == Square.FINISH) goal = squares[i][j]; for(Square square : opened) { if(best == null || square.getCost(goal) < best.getCost(goal)) { best = square; } } return best; } // ============================================ private void populateBestList(Square square) { best.add(square); if(square.parent.flag != Square.START) populateBestList(square.parent); return; } boolean testFlag = false; public void tick() { if(testFlag == false) { findingPath(); testFlag = true; } if(opened.size() > 0) { Square best = findBestPath(); opened.remove(best); closed.add(best); if(best.flag == Square.FINISH) { populateBestList(goal); return; } else { Set<Square> neighbors = best.adjacencies; for(Square neighbor : neighbors) { if(opened.contains(neighbor)) { Square temp = new Square(neighbor.id, this); temp.setCoordinate(neighbor.x, neighbor.y); temp.parent = best; if(temp.getCost(goal) >= neighbor.getCost(goal)) continue; } if(closed.contains(neighbor)) { Square temp = new Square(neighbor.id, this); temp.setCoordinate(neighbor.x, neighbor.y); temp.parent = best; if(temp.getCost(goal) >= neighbor.getCost(goal)) continue; } neighbor.parent = best; opened.remove(neighbor); closed.remove(neighbor); opened.add(0, neighbor); } } } if(opened.size() <= 0) { Square temp = null; for(Square t : best) if(t.flag == Square.FINISH) temp = t; while(temp != null) { temp = temp.parent; } } } public void render(int[] pixels) { for(Square object : closed) { pixels[object.x + object.y * width] = object.color; } } }
Thanks for your help in reading this thread in advance. | http://www.gamedev.net/topic/620612-a-algorithm-by-pixel-failed-to-create-diagonal-path-in-3x3-grid/page__pid__4923011#entry4923011 | crawl-003 | refinedweb | 651 | 61.43 |
Chapter I
The QUARREL ON THE CAPM: A LITERATURE SURVEY
Abstract
The current chapter has attempted to do three things. First it presents an overview on the capital asset pricing model and the results from its application throughout a narrative literature review. Second the chapter has argued that to claim whether the CAPM is dead or alive, some improvements on the model must be considered. Rather than take the view that one theory is right and the other is wrong, it is probably more accurate to say that each applies in somewhat different circumstances (assumptions). Finally the chapter has argued that even the examination of the CAPM's variants is unable to solve the debate into the model. Rather than asserting the death or the survival of the CAPM, we conclude that there is no consensus in the literature as to what suitable measure of risk is, and consequently as to what extent the model is valid or not since the evidence is very mixed. So the debate on the validity of the CAPM remains a questionable issue.
Keywords:
CAPM, CAPM's variants, circumstances, literature survey.
1. INTRODUCTION
The traditional capital assets pricing model (CAPM), always the most widespread model of the financial theory, was prone to harsh criticisms not only by the academicians but also by the experts in finance. Indeed, in the last few decades an enormous body of empirical researches has gathered evidences against the model. These evidences tackle directly the model's assumptions and suggest the dead of the beta (Fama and French, 1992); the systematic risk of the CAPM.
If the world does not obey to the model's predictions, it is maybe because the model needs some improvements. It is maybe because also the world is wrong, or that some shares are not correctly priced. Perhaps and most notably the parameters that determine the prices are not observed such as information or even the returns' distribution. Of course the theory, the evidence and even the unexplained movements have all been subject to much debate. But the cumulative effect has been to put a new look on asset pricing. Financial Researchers have provided both theory and evidence which suggest from where the deviations of securities' prices from fundamentals are likely to come, and why could not be explained by the traditional CAPM.
Understanding security valuation is a parsimonious as well as a lucrative end in its self. Nevertheless, research on valuation has many additional benefits. Among them the crucial and relatively neglected issues have to do with the real consequences of the model's failure. How are securities priced? What are the pricing factors and when? Once it is recognized that the model's failure has real consequences, important issues arise. For instance the conception of an adequate pricing model that accounts for all the missing aspects.
The objective of this chapter is to look at different approaches to the CAPM, how these have arisen, and the importance of recognizing that there's no single ‘'right model'' which is adequate for all shares and for all circumstances, i.e. assumptions. We will, so move on to explore the research task, discuss the goodness and the weakness of the CAPM, and look at how different versions are introduced and developed in the literature. We will, finally, go on to explore whether these recent developments on the CAPM could solve the quarrel behind its failure.
For this end, the recent chapter is organized as follows: the second section presents the theoretical bases of the model. The third one discusses the problematic issues on the model. The fourth section presents a literature survey on the classic version of the model. The five section sheds light on the recent developments of the CAPM together with a literature review on these versions. The next one raises the quarrel on the model and its modified versions. Section seven concludes the paper.
2. THEORETICAL BASES OF THE CAPITAL ASSET PRICING MODEL
In the field of finance, the CAPM is used to determine, theoretically, the required return of an asset; if this asset is associated to a well diversified market portfolio while taking into account the non diversified risk of the asset its self. This model, introduced by Jack Treynor, William Sharpe and Jan Mossin (1964, 1965) took its roots of the Harry Markowitz's work (1952) which is interested in diversification and the modern theory of the portfolio. The modern theory of portfolio was introduced by Harry Markowitz in his article entitled “Portfolio Selection'', appeared in 1952 in the Journal of Finance.
Well before the work of Markowitz, the investors, for the construction of their portfolios, are interested in the risk and the return. Thus, the standard advice of the investment decision was to choose the stocks that offer the best return with the minimum of risk, and by there, they build their portfolios.
On the basis of this point, Markowitz formulated this intuition by resorting to the diversification's mathematics. Indeed, he claims that the investors must in general choose the portfolios while getting based on the risk criterion rather than to choose those made up only of stocks which offer each one the best risk-reward criterion. In other words, the investors must choose portfolios rather than individual stocks. Thus, the modern theory of portfolio explains how rational investors use diversification to optimize their portfolio and what should be the price of an asset while knowing its systematic risk.
Such investors are so-called to meet only one source of risk inherent to the total performance of the market; more clearly, they support only the market risk. Thus, the return on a risky asset is determined by its systematic risk. Consequently, an investor who chooses a less diversified portfolio, generally, supports the market risk together with the uncertainty's risk which is not related to the market and which would remain even if the market return is known.
Sharpe (1964) and Linter (1965), while basing on the work of Harry Markowitz (1952), suggest, in their model, that the value of an asset depends on the investors' anticipations. They claim, in their model that if the investors have homogeneous anticipations (their optimal behavior is summarized in the fact of having an efficient portfolio based on the mean-variance criterion), the market portfolio will have to be the efficient one while referring to the mean-variance criterion (Hawawini 1984, Campbell, Lo and MacKinlay 1997).
The CAPM offer an estimate of a financial asset on the market. Indeed, it tries to explain this value while taking into account the risk aversion, more particularly; this model supposes that the investors seek, either to maximize their profit for a given level of risk, or to minimize the risk taking into account a given level of profit.
The simplest mean-variance model (CAPM) concludes that in equilibrium, the investors choose a combination of the market portfolio and to lend or to borrow with proportions determined by their capacity to support the risk with an aim of obtaining a higher return.
2.1. Tested Hypothesis
The CAPM is based on a certain number of simplifying assumptions making it applicable. These assumptions are presented as follows:
- The markets are perfect and there are neither taxes nor expenses or commissions of any kind;
- All the investors are risk averse and maximize the mean-variance criterion;
- The investors have homogeneous anticipations concerning the distributions of the returns' probabilities (Gaussian distribution); and
- The investors can lend and borrow unlimited sums with the same interest rate (the risk free rate).
The aphorism behind this model is as follows: the return of an asset is equal to the risk free rate raised with a risk premium which is the risk premium average multiplied by the systematic risk coefficient of the considered asset. Thus the expression is a function of:
- The systematic risk coefficient which is noted as;
- The market return noted;
- The risk free rate (Treasury bills), noted
This model is the following:
Where:
; represents the risk premium, in other words it represents the return required by the investors when they rather place their money on the market than in a risk free asset, and;
; corresponds to the systematic risk coefficient of the asset considered.
From a mathematical point of view, this one corresponds to the ratio of the covariance of the asset's return and that of the market return and the variance of the market return.
Where:
; represents the standard deviation of the market return (market risk), and
; is the standard deviation of the asset's return. Subsequently, if an asset has the same characteristics as those of the market (representative asset), then, its equivalent will be equal to 1. Conversely, for a risk free asset, this coefficient will be equal to 0.
The beta coefficient is the back bone of the CAPM. Indeed, the beta is an indicator of profitability since it is the relationship between the asset's volatility and that of the market, and volatility is related to the return's variations which are an essential element of profitability. Moreover, it is an indicator of risk, since if this asset has a beta coefficient which is higher than 1, this means that if the market is in recession, the return on the asset drops more than that of the market and less than it if this coefficient is lower than 1.
The portfolio risk includes the systematic risk or also the non diversified risk as well as the non systematic risk which is known also under the name of diversified risk. The systematic risk is a risk which is common for all stocks, in other words it is the market risk. However the non systematic risk is the risk related to each asset. This risk can be reduced by integrating a significant number of stocks in the market portfolio, i.e. by diversifying well in advantage (Markowitz, 1985). Thus, a rational investor should not take a diversified risk since it is only the non diversified risk (risk of the market) which is rewarded in this model. This is equivalent to say that the market beta is the factor which rewards the investor's exposure to the risk.
In fact, the CAPM supposes that the market risk can be optimized i.e. can be minimized the maximum. Thus, an optimal portfolio implies the weakest risk for a given level of return. Moreover, since the inclusion of stocks diversifies in advantage the portfolio, the optimal one must contain the whole stocks on the market, with the equivalent proportions so as to achieve this goal of optimization. All these optimal portfolios, each one for a given level of return, build the efficient frontier. Here is the graph of the efficient frontier:
The (Markowitz) efficient frontier
The efficient frontier
Lastly, since the non systematic risk is diversifiable, the total risk of the portfolio can be regarded as being the beta (the market risk).
3. Problematic issues on the CAPM
Since its conception as a model to value assets by Sharpe (1964), the CAPM has been prone to several discussions by both academicians and experts. Among them the most known issues concerning the mean variance market portfolio, the efficient frontier, and the risk premium puzzle.
3.1 The mean-variance market portfolio
The modern portfolio theory was introduced for the first time by Harry Markowitz (1952). The contribution of Markowitz constitutes an epistemological shatter with the traditional finance. Indeed, it constitutes a passageway from an intuitive finance which is limited to advices related to financial balance or to tax and legal nature advices, to a positive science which is based on coherent and fundamental theories. One allots to Markowitz the first rigorous treatment of the investor dilemma, namely how obtaining larger profits while minimizing the risks.
3.2 The efficient frontier
3.3 The equity premium puzzle
4. Background on the CAPM
. The CAPM's empirical problems may reflect theoretical failings, the result of many simplifying assumptions.”
Fama and French, 2003, “The Capital Asset Pricing Model: Theory and Evidence”, Tuck Business School, Working Paper No. 03-26
Being a theory, the CAPM found the welcome thanks to its circumspect elegance and its concept of good sense which supposes that a risk averse investor would require a higher return to compensate for supported the back-up risk. It seems that a more pragmatic approach carries out to conclude that there are enough limits resulting from the empirical tests of the CAPM.
Tests of the CAPM were based, mainly, on three various implications of the relation between the expected return and the market beta. Firstly, the expected return on any asset is linearly associated to its beta, and no other variable will be able to contribute to the increase of the explanatory power. Secondly, the beta premium is positive which means that the market expected return exceeds that of individual stocks, whose return is not correlated with that of the market. Lastly, according to the Sharpe and Lintner model (1964, 1965), stocks whose return is not correlated with that of the market, have an expected return equal to the risk free rate and a risk premium equal to the difference between the market return and the risk free rate return. In what follows, we are going to examine whether the CAPM's assumptions are respected or not through the empirical literature.
Starting with Jensen (1968), this author wants to test for the relationship between the securities' expected return and the market beta. For this reason, he uses the time series regression to estimate for the CAPM´ s coefficients. The results reject the CAPM as for the moment when the relationship between the expected return on assets is positive but that this relation is too flat. In fact, Jensen (1968) finds that the intercept in the time series regression is higher than the risk free rate. Furthermore, the results indicate that the beta coefficient is lower than the average excess return on the market portfolio.
In order to test for the CAPM, Black et al. (1972) work on a sample made of all securities listed on the New York Stock Exchange for the period of 1926-1966. The authors classify the securities into ten portfolios on the basis of their betas.They claim that grouping the securities with reference to their betas may offer biased estimates of the portfolio beta which may lead to a selection bias into the tests. Hence, so as to get rid of this bias, they use an instrumental variable which consists of taking the previous period's estimated beta to select a security's portfolio grouping for the next year.
For the estimate of the equation, the authors use the time series regression. The results indicate, firstly, that the securities associated to high beta had significantly negative intercepts, whereas those with low beta had significantly positive intercepts. It was proved, also, that this effect persists overtime. Hence, these evidences reject the traditional CAPM. Secondly, it is found that the relation between the mean excess return and beta is linear which is consistent with the CAPM.
Nevertheless, the results point out that the slopes and intercepts in the regression are not reliable. In fact, during the prewar period, the slope was sharper than that predicted by the CAPM for the first sub period, and it was flatter during the second sub period. However, after that, the slope was flatter. Basing on these results, Black, Fischer, Michael C. Jensen and Myron Scholes (1972) conclude that the traditional CAPM is inconsistent with the data.
Fama and MacBeth (1973) propose another regression method so as to overcome the problem related to the residues correlation in a simple linear regression. Indeed, instead of estimating only one regression for the monthly average returns on the betas, they propose to estimate regressions of these returns month by month on the betas. They include all common stocks traded in NYSE from 1926 to 1968 in their analysis.
The monthly averages of the slopes and intercepts, with the standard errors of the averages, thus, are used to check, initially, if the beta premium is positive, then to test if the averages return of assets which are not correlated with the market return is from now on equal to the average of the risk free rate. In this way, the errors observed on the slopes and intercepts are directly given by the variation for each month of the regression coefficients, which detect the effects of the residues correlation over the variation of the regression.
Their study led to three main results. At first, the relationship between assets return and their betas in an efficient portfolio is linear. At second, the beta coefficient is an appropriate measure of the security's risk and no other measure of risk can be a better estimator. Finally, the higher the risk is, the higher the return should be.
Blume and Friend (1973) in their paper try to examine theoretically and empirically the reasons beyond the failure of the market line to explain excess return on financial assets. The authors estimate the beta coefficients for each common stock listed in the New York Stock Exchange over the period of January 1950 to December 1954. Then, they form 12 portfolios on the basis of their estimated beta. They afterwards, calculate the monthly return for each portfolio. Third, they calculate the monthly average return for portfolios from 1955 to 1959. These averaged returns were regressed to obtain the value of the beta portfolios. Finally, these arithmetic average returns were regressed on the beta coefficient and the square of beta as well.
Through, this study, the authors point out that the failure of the capital assets pricing model in explaining returns maybe due to the simplifying assumption according to which the functioning of the short-selling mechanism is perfect. They defend their point of view while resorting to the fact that, generally, in short sales the seller cannot use the profits for purchasing other securities.
Moreover, they state that the seller should make a margin of roughly 65% of the sales market value unless the securities he owns had a value three times higher than the cash margin. This makes a severe constraint on his short sales. In addition to that, the authors reveal that it is more appropriate and theoretically more possible to remove the restriction on the short sales than that of the risk free rate assumption (i.e., to borrow and to lend on a unique risk free rate).
The results show that the relationship between the average realized returns of the NYSE listed common stocks and their correspondent betas is almost linear which is consistent with the CAPM assumptions. Nevertheless, they advance that the capital assets pricing model is more adequate for the estimates of the NYSE stocks rather than other financial assets. They mention that this latter conclusion is may be owed to the fact that the market of common stocks is well segmented from markets of other assets such as bonds.
Finally, the authors come out with the two following conclusions: Firstly, the tests of the CAPM suggest the segmentation of the markets between stocks and bonds. Secondly, in absence of this segmentation, the best way to estimate the risk return tradeoff is to do it over the class of assets and the period of interest.
The study of Stambaugh (1982) is interested in testing the CAPM while taking into account, in addition to the US common stocks, other assets such as, corporate and government bonds, preferred stocks, real estate, and other consumer durables. The results indicate that testing the CAPM is independent on whether we expand or not the market portfolio to these additional assets.
Kothari Shanken and Sloan (1995), show that the annual betas are statistically significant for a variety of portfolios. These results were astonishing since not very early, Fama and French (1992), found that the monthly and the annual betas are nearly the same and are not statistically significant. The authors work on a sample which covers all AMEX firms for the period 1927-1990. Portfolios are formed in five different ways. Firstly, they from 20 portfolios while basing only on beta. Secondly, 20 portfolios by grouping on size alone. Thirdly, they take the intersection of 10 independent beta or size to obtain 100 portfolios. Then, they classify stocks into 10 portfolios on beta, and after that into 10 portfolios on size within each beta group. They, finally, classify stocks into 10 portfolios on size and then into 10 portfolios on beta within each size group. They use the GRSP equal weighted portfolio as a proxy for the whole market return market.
The cross-sectional regression of monthly return on beta and size has led to the following conclusions: On the one hand, when taking into account only the beta, it is found that the parameter coefficient is positive and statistically significant for both the sub periods studied. On the other hand, it is demonstrated that the ability of beta and size to explain cross sectional variation of the returns on the 100 portfolios ranked on beta given the size, is statistically significant. However, the incremental economic benefit of size given beta is relatively small.
Fama and French published in 1992 a famous study putting into question the CAPM, called since then the "Beta is dead" paper (the article announcing the death of Beta). The authors use a sample which covers all the stocks of the non-financial firms of the NYSE, AMEX and NASDAQ for the period of the end of December 1962 until June 1990. For the estimate of the betas; they use the same test as that of Fama and Macbeth (1973) and the cross-sectional regression.
The results indicate that when paying attention only to the betas variations which are not related to the size, it is found that the relation between the betas and the expected return is too flat, and this even if the beta is the only explanatory variable. Moreover, they show that this relationship tend to disappear overtime.
In order to verify the validity of the CAPM in the Hungarian stock market, Andor et al. (1999) work on daily and monthly data on 17 Hungarian stocks between the end of July 1991 and the beginning of June 1999. To proxy for the market portfolio the authors use three different indexes which are the BUX index, the NYSE index, and the MSCI world index.
The regression of the stocks' return against the different indexes' return indicates that the CAPM holds. Indeed, in all cases it is found that the return is positively associated to the betas and that the R-squared value is not bad at all. They conclude, hence, that the CAPM is appropriate for the description of the Hungarian stock market.
For the aim of testing the validity of the CAPM, Kothari and Jay Shanken (1999), study the one factor model with reference to the size anomaly and the book to market anomaly. The sample used in their study contains annual return on portfolios from the GRSP universe of stocks. The portfolios are formed every July from 1927 to 1992. The formation procedure is the following; every year stocks are sorted on the basis of their market capitalization and then on their betas while regressing the past returns on the GRSP equal weighted index return. They obtain, hence, ten portfolios on the basis of the size. Then, the stocks in each size portfolio are grouped into ten portfolios based on their betas. They repeat the same procedure to obtain the book-to-market portfolios.
Using the Fama and MacBeth cross-sectional regression, the authors find those annual betas perform well since they are significantly associated to the average stock returns especially for the period 1941-1990 and 1927-1990. Moreover, the ability of the beta to predict return with reference to the size and the book to market is higher. In a conclusion, this study is a support for the traditional CAPM.
Khoon et al. (1999), while comparing two assets pricing models in the Malaysian stock exchange, examine the validity of the CAPM. The data contains monthly returns of 231 stocks listed in the Kuala Lumpur stock exchange over the period of September 1988 to June 1997. Using the cross section regression (two pass regression) and the market index as the market portfolio, the authors find that the beta coefficient is sometimes positive and some others negative, but they do not provide any further tests.
In order to extract the factors that may affect the returns of stocks listed in the Istanbul stock exchange, Akdeniz et al. (2000)make use of monthly return of all non financial firms listed in the up mentioned stock market for the period that spans from January 1992 to December 1998. They estimate the beta coefficient in two stages using the ISE composite index as the market portfolio.
First, they employ the OLS regression and estimate for the betas each month for each stock. Then, once the betas are estimated for the previous 24 months (time series regression), they rank the stocks into five equal groups on the basis of the pre-ranking betas and the average portfolio beta is attributed to each stock in the portfolio. They, afterwards, divide the whole sample into two equal sub-periods and the estimation procedure is done for each sub-period and the whole period as well.
The results from the cross sectional regression, indicate that the return has no significant relationship with the market beta. This variable does not appear to influence cross section variation in all the periods studied (1992-1998, 1992-1995, and 1995-1998).
In a relatively larger study, Estrada (2002) investigates the CAPM with reference to the downside CAPM. The author works on a monthly sample covering the period that spans from 1988 to 2001 (varied periods are considered) on stocks of 27 emerging markets.
Using simple regression, the authors find that the downside beta outperforms the traditional CAPM beta. Nevertheless, the results do not support the rejection of the CAPM from two aspects. Firstly, it was found that the intercept from the regression is not statistically different from zero. Secondly, the beta coefficient is positive and statistically significant and the explanatory power of the model is about 40%. This result stems for the conclusion according to which the CAPM is still alive within the set of countries studied.
In order to check the validity of the CAPM, and the absence of anomalies that must be incorporated to the model, Andrew and Joseph (2003) try to investigate the ability of the model to predict book-to market portfolios. If it is the case, then the CAPM captures the Book-to-market anomaly and there's no need to further incorporate it in the model.
For this intention, the authors work on a sample that covers the period of 1927-2001 and contains monthly data on stocks listed in the NYSE, AMEX, and NASDAQ. So as to form the book-to-market portfolios, they use, alike Fama and French (1992), the size and the book-to-market ratio criterion. To estimate for the market return, they use the return on the value weighted portfolios on stocks listed in the pre-cited stock exchanges and to proxy for the risk free rate; they employ the one-month Treasury bill rate from Ibbotson Associates. They, afterwards, divide the whole period into two laps of time; the first one goes from July 1927 to June 1963, and the other one span from July 1963 to the end of 2001.
Using asymptotic distribution the results indicate that the CAPM do a great job over the whole period, since the intercept is found to be closed to zero, but there is no evidence for a value premium. Hence, they conclude that the CAPM cannot be rejected. However, for the pre-1963 period the book to market premium is not significant at all, whereas for the post-1963 period this premium is relatively high and statistically significant. Nevertheless, when accounting for the sample size effect, the authors find that there is an overall risk premium for the post-1963 period. The authors conclude then that, taken as a whole, the study fails to reject the null that the CAPM holds. This study points to the necessity to take into account the small sample bias.
Fama and French (2004), estimate the betas of stocks provided by the CRSP (Center for Research in Security Prices of the University of Chicago) of the NYSE (1928-2003), the AMEX (1963-2003) and the NASDAQ (1972-2003). They form, thereafter, 10 portfolios on the basis of the estimated betas and calculate their return for the eleven months which follow. They repeat this process for each year of 1928 up to 2003.
They claim that, the Sharpe and Lintner model, suppose that the portfolios move according to a linear line with an intercept equal to risk free rate and a slope which is equal to the difference between the expected return on the market portfolio and that of the risk free rate. However, their study, and in agreement with the previous ones, confirms that the relation between the expected return on assets and their betas is much flatter than the prediction of the CAPM.
Indeed, the results indicate that the expected return of portfolios having relatively lower beta are too high whereas expected return of those with higher beta is too low. Moreover, these authors indicate that even if the risk premium is lower than what the CAPM predicts, the relation between the expected return and beta is almost linear. This latter result, confirms the CAPM of Black which assumes that only the beta premium is positive. This means, analogically, that only the market risk is rewarded by a higher return.
In order to test for the consistency of the CAPM with the economic reality, Thierry and Pim (2004) use monthly return of stocks from the NYSE, NASDAQ, and AMEX for the period that spans from 1926-2002. The one -month US Treasury bill is used as a proxy for the risk free rate, The CRPS total return index which is a value-weighted average of all US stocks included in this study is used as a proxy for the market portfolio.
They sort stocks into ten deciles portfolios on the basis of historical 60 months. They afterwards, calculate for the following 12 months their value weighted returns. They obtain, subsequently, 100 beta-size portfolios. The results from the time series regression indicate, firstly, that the intercepts are statistically indifferent from zero. Secondly, it is found that the betas' coefficients are all positive. Furthermore, in order to check the robustness of the model, the authors split the whole sample into sub-samples of equal length (432 months). The results indicate, also, that for all the periods studied the intercepts are statistically not different from zero except for the last period.
In his empirical study, Blake T (2005) works on monthly stocks return on 20 stocks within the S&P 500 index during January 1995-December 2004. The S&P 500 index is used as the market portfolio and the 3-month Treasury bill in the Secondary Market as the risk free rate. His methodology can be summarized as follows; the excess return on each stock is regressed against the market excess return. The excess return is taken as the sample average of each stock and the market as well. After estimating of the betas, these values are used to verify the validity of the CAPM. The coefficient of beta is estimated by regressing estimated expected excess stock returns on the estimates of beta and the regression include intercept and the residual squared so as to measure the non systematic risk.
The results confirm the validity of the CAPM through its three major assumptions. In fact, the null hypothesis for the constant term is not rejected. Moreover, the systematic risk coefficient is positive and statistically significant. Finally, the null hypothesis for the residual squared coefficient is not rejected. Hence, he concludes that none of the three necessary conditions for a valid model were rejected at the 95% level.
So as to estimate for the validity of the CAPM, Don Galagedera (2005) works on a Sample period from January 1995 to December 2004. This sample contains monthly data from emerging markets represented by 27 countries which are 10 Asian, 7 Latin American and 10 African, Middle-Eastern and European (Argentina, Brazil, Chile, China, Columbia, Czech, Egypt, Hungary, India, Indonesia, Israel, Jordan, Korea, Malaysia, Mexico, Morocco, Pakistan, Peru, Philippines, Poland, Russia, South Africa, Sri Lanka, Taiwan, Thailand, Turkey, and Venezuela ).
To proxy for the market index, the world index available in the MSCI database is used and the proxy for the risk-free rate is the 10-year US Treasury bond rate. The results indicate that the CAPM, compared to the downside beta CAPM, offers roughly the same estimates and the same performance.
Fama and French (2005) investigate the ability of the CAPM to explain the value premium in the US context during the period of 1926 to 2004 and include in their sample the NYSE, the AMEX and the NASDAQ stocks. They construct portfolios on the basis of the size, and the book-to-market. The size premium is the simple average of the returns on the three small stock portfolios minus the average of the returns on the three big stock portfolios. The value premium is the simple average of the returns on the two value portfolios minus the average of the returns on the two growth portfolios. They afterwards divide the whole period into two sub-periods which are respectively; 1926-1963 and 1963-2004.
Then, at the end of June of each year, the authors form 25 portfolios through the intersection of independent sorts of NYSE, AMEX, and NASDAQ stocks into five size groups and five book-to-market groups or Earning to price groups. Finally the size premium or the value premium of each of the six portfolios sorted by size and book-to-market is regressed against the market excess return.
The results show that for the size portfolios, when considering the whole period, the F-statistics shows that the intercepts are jointly different from zero. Moreover, it's found that the market premium is for all portfolios positive and statistically significant except. However, the explanatory power is relatively too low. Then, for the first sub-period the intercepts are not statistically significant which means that the intercepts are close to zero.
Moreover the beta coefficient is for all portfolios positive and statistically significant with a relatively higher explanatory power. Finally, for the last sub-period, the results are not supportive for the CAPM. In fact, for almost all portfolios, the beta is found to be negative and statistically significant. For further rejection, all the intercepts from the regression are statistically different from zero.
Erie Febrian and Aldrin Herwany (2007) test the validity of the CAPM in Jakarta Stock Exhange. They work involve three different periods which are respectively; the pre-crisis period (1992-1997), the crisis period (1997-2001), and the post-crisis period (2001-2007), and contains monthly data of all listed stocks in the Indonesian stock exchange. To proxy for the risk free rate, the authors make use of the 1-month Indonesian central bank rate (BI rate).
In order to estimate for the model, the authors use two approaches documented in the literature; it is bout particularly the times series regression and the cross sectional regression. The first one tests the relationship between variables in one time period, and the latter focuses on the observation of the relationship in various periods. To check the validity of the CAPM, they use manifold models used by; Sharpe and Linter (1965), Black Jensen and scholes (1972), Fama and Macbeth (1973), Fama and French (1992), and Cheng Roll and Ross (1986).
Using the Fama and French approach (1992) together with the cross sectional regression, the authors find that for all the three periods studied tests of the CAPM lead to the same conclusions. Indeed, the results indicate that the intercept from the cross sectional regression is negative and statistically significant which is a violation to the CAPM's assumption which pretends that the risk premium is the only risk factor. Nevertheless, the beta coefficient is found to be positive and statistically significant and that the R-squared is relatively high, which is a great support for the CAPM.
However, when using the Fama and Macbeth method (1973), the results are somehow different. In fact, the intercept is almost always not significant, unless for the latter period which stems for a negative and a significant intercept. The associated beta coefficient is only significant during the last period and the related sign is negative which contradicts the CAPM.
Furthermore, the coefficient from the square of beta is neither positive nor significant at all but in the last period (negative coefficient) which means that the relation between the risk premium and the excess return is, in a way, linear. Finally, the results indicate that the coefficient from the residuals is negative and statistically significant for the first and for the last period. Hence, basing on the above results, it is difficult to infer whether the CAPM is valid or not since the results are very sensitive to the method used for the estimate of the model.
Ivo Welch (2007) in his test of the CAPM works on daily data during the period of 1962-2007. The Federal Funds overnight rate is used as the risk free rate, the short term rate divided by 255 is used as the daily rate of return, the market portfolio is estimated from the daily S&P500 data, and The Long-Term Rate is the 10-year Treasury bond.
Using the time series regression, the difference between the 10 years treasuries and the overnight return is regressed against the market excess return. The results indicate that the intercept is close to zero. In addition to that, the beta coefficient is positive and statistically different from zero. These results are consistent with the CAPM.
Working on daily data of stocks listed on the thirty components of the Dow Jones Index and the SP 500 index for five years, one year, or a half year (180 days) of daily returns. While using the cross section regression, the authors find that for all periods used, the results are far from rejecting the CAPM. In fact, the R-squared is 0.28, which indicates, subsequently that the marker beta is a good estimator for the expected return.
Michael. D (2008) studies the ability of the CAPM to explain the reward-risk relationship in the Australian context. The sample of the study covers monthly data on stocks listed in the Australian stock exchange for the period of January 1974- December 2004. The equally value weighted ASE indices is used to proxy for the market portfolio. He forms portfolios on the basis of the betas, i.e. stocks are ranged each month on the basis of their estimated betas.
The main interesting result from this study is that which show that when the highest beta portfolio and the lowest beta portfolio are removed, it's found that portfolios' returns tend to increase with beta. Hence, the author concludes that the beta is an appropriate measure of risk in the Australian stock exchange.
Simon.G.M and Ashley Olson (XXXX) try to investigate the reliability of the CAPM's beta as an indicator of risk. For this end, they work on a one year sample data from November 2005 to November 2006. Their sample includes 288 publicly traded companies and the S&P 500 index (used as the market portfolio). In order to look into the risk return relationship, the authors group stocks into three portfolios on the basis of their betas, i.e. portfolios respectively; with low beta (about 0.5), market beta (about 1), and high beta (around 2).
The results point to the rejection of the CAPM. In fact, it was found that the assumption according to which the beta is an appropriate measure of risk s rejected. For example, according to the results obtained, if an investor takes the highest beta portfolio, the chance that this risk be rewarded is about only 11%.
In order to test for the validity of the CAPM, Arduino Cagnetti (XXXX) works on monthly returns (end of the month returns) of 30 shares on the Italian stock market for during the period of January 1990 to June 2001. To proxy for the market portfolio, he uses the Mib30 market index and the Italian official discount rate as a proxy for the risk-free rate. Then, he divided the whole period into two sub-periods of 5 years each one.
In order to test for the CAPM, the author uses two step procedures. In fact, the first step consists of employing the time series to estimate for the betas of the shares. While the second step involves the regression of the sample mean return on the betas. Then he runs a cross sectional regression while using the average returns for each period as dependant variable and the estimated betas as independent variables.
The results indicate that for the second sub-period the beta displays a high significance in explaining returns with a relatively strong explanatory power. However, for the first sub-period and during the whole 10 years, this variable is not significant at all.
In their study, Pablo and Roberto (2007)examine the validity of the CAPM with reference to the reward beta model and the three factor model. It is praiseworthy to note, in this level, that our investigation for the article is limited only to the study of the CAPM.
To reach their objective, the authors work on a sample covering the period that goes from July 1967 and ends in December 2006 and consists of all data on North American stock markets. That is the NYSE, AMEX, and the NASDAQ stock exchanges with all available monthly data on stocks. The one month Treasury bond is used to proxy for the risk free rate and the CRSP index as a proxy for the market portfolio (portfolio built with all the NYSE, AMEX and NASDAQ stocks weighted with the market value). The methodology pursued along this study is the classic two step methodology, i.e. the times series regression to estimate for ex-ante CAPM's beta, and then the cross section regression using the betas and the sensibilities of the estimated factors as explanatory variables in explaining expected return. To examine the validity of the CAPM, tests were run on the Fama and French's portfolios (6 portfolios formed on the basis of the size and the book to market).
The results were represented through two aspects whether or not the intercept is taken into account. On the one hand, when considering the CAPM with intercept, it is found that the latter is statistically different from zero which is a violation to the CAPM's assumptions. Moreover, results indicate that the risk premium is negative which is in contradiction with the Sharpe and Linter model. On the other hand, when removing the intercept, the results are rather supportive for the model. In fact, the risk premium appears to be positive and highly significant which goes in line with the CAPM. Nevertheless, the explanatory power in both cases is too low.
In his comparative study, Bornholt (2007) investigates the validity of three assets pricing models, it is about particularly, the CAPM, the three factor model and the reward beta model. It is trustworthy to mention again that in this literature we will be interested in the CAPM.
Working on monthly portfolios' return constructed according to the Fama and French methodology (1992) for the period that spans from July 1963 to December 2003. The one month Treasury bill is used as a proxy for the risk free rate and the GRSP value weighted index of all NYSE, AMEX, and NASDAQ stocks as a proxy for the market portfolio. The author uses the time series regression in order to estimate for the portfolios' betas (from 1963 to 1990) and then the cross section regression using the betas estimated as explanatory variables in expected return (1991 to 2003).
The results are interpreted from two sides depending on whether the estimation is done with or without the intercept. From the one side, when the intercept is included, the latter is found to be closed to zero which is in accordance with the CAPM. However, the beta coefficient is negative and not significant at all, which is in contradiction with the CAPM. This latter conclusion is enhanced further with the weak and the negative value of the R squared.
From the other side, the removal of the intercept does improve the beta coefficient which is in this case positive and statistically significant. Nevertheless, the explanatory power is getting worse. Hence, this study cannot claim to the acceptance or the rejection of the CAPM, since the results are foggy and far from allowing a rigorous conclusion.
Working on the French context, Najet et al. (2007) try to investigate the validity of the capital asset pricing model at different time scales. Their study sample entails daily data on 26 stocks in the CAC 40 index and covers the period that spans from January 2002 to December 2005. The CAC40 index is used as the market portfolio and the EUROBOR as the risk free rate. They consider six scales for the estimation of the CAPM which are; 2-4 days, 4-8 days dynamics, 8-16 days, 16-32 days, 32-64 days, 64-128 days, and 128-256 days.
The results from the OLS regression show that the relationship between the excess return of each stock is positive and statistically significant at all scales. The explanatory power moves up when the scales get widened. Which means that the higher the frequency is the stronger the relationship will be.
To further investigate the changes of the relationship over different scales, the authors use also the following intervals; 2-6, 6-12, 12-24, 24-48, 48-96, and 96-192. The results indicate that the relationship between the two variables of the model is becoming stronger as the scale increases. Nevertheless, they claim that the results obtained cannot draw conclusion about the linearity which remains ambiguous.
Within the Turkish context, Gürsoy and Rejepova (2007) check the validity of the CAPM. Their sample covers the period that spans from January 1995 to December 2004 and consists of weekly data on all stocks traded in the Istanbul Stock Exchange. The whole period was divided into five six-year sub-periods. The sub-periods are divided, in their turn, into 3 sub-periods of 2 years each, which correspond to respectively; portfolio formation, beta estimation and testing periods. All periods are considered with one overlapping year. They afterwards, form 20 portfolios of 10 stocks each one on the basis of their pre-ranking betas. Then for the regression equation they use two different approaches which are; the Fama and MacBeth's traditional approach, and Pettengil et al's conditional approach (1995).
With reference to the first approach, the CAPM is rejected in all directions. In fact, the results indicate that the intercept is statistically different from zero for all sub-periods except for one. In addition to that, the beta coefficient is found to be significant different from zero only in 2 sub-periods and almost in all cases is negative. Finally the R-squared is only significant in two sub-periods and beyond that it's too weak. Hence, this approach claims to the rejection of the CAPM.
As for the second approach, the beta is almost always found to be significant but either positive in up-market or negative in down- market. Furthermore, the results point to a very high explanatory power for both cases. However, the assumption according to which the intercept must be close to zero is not verified for both cases, which means that the risk premium is not the only risk factor in the CAPM. The authors conclude, on the basis of the results, that the validity of the CAPM remains a questionable issue in the Turkish stock market.
Robert and Janmaat (2009) examine the ability of the cross sectional and the multivariate tests of the CAPM under ideal conditions. They work on a sample made of monthly return on portfolios provided by the Kenneth French's data library. While examining the intercepts, the slopes and the R-squared, the authors reveal that these parameters are unable to inform whether the CAPM holds or not. Moreover, they claim that the positive and the statistically significant value of the beta coefficient, doesn't indicate that the CAPM is valid at all. The results indicate, also, that the value of the tested parameters, i.e. the intercepts and the slopes, is roughly the same independently whether the CAPM is true or false.
In addition to that, the results from the cross sectional regression have two different implications. From the one side, they indicate that tests of the hypothesis that the slope is equal to zero are rejected in only 10% of 2000 replications, which means, subsequently, that the CAPM is dead. From the other side, it's found that tests of the hypothesis that the intercept is equal to zero are rejected in 9%. As consequent, one may think that the CAPM is well alive. As for the R squared, its value is relatively lower particularly when the CAPM is true. The authors find, also, that the tests of the hypothesis that the intercept and the slope are equal to zero differ on whether they make use of the market or the equal weighted portfolio which confirms the Roll's critique.
5. IS THE CAPM DEAD OR ALIVE? SOME RESCUE ATTEMPTS
After reviewing the literature on the CAPM, it is difficult if not impossible to reach a clear conclusion about whether the CAPM is still valid or not. The assaults that tackle the CAPM's assumptions are far from being standards and the researchers versed in this field are still between defenders and offenders.
Actually, while Fama and French (1992) have announced daringly the dead of the CAPM and its bare foundation, some others (see for example, Black, 1993; Kothari, Shanken, and Sloan, 1995; MacKinlay, 1995 and Conrad, Cooper, and Kaul, 2003) have attributed the findings of these authors as a result of the data mining (or snooping), the survivorship bias, and the beta estimation. Hence, the response to the above question remains a debate and one may think that the reports of the CAPM's death are somehow exaggerated since the empirical literature is very mixed. Nevertheless, this challenge in asset pricing has opened a fertile era to derive other versions of the CAPM and to test the ability of these new models in explaining returns.
Consequently, three main classes of the CAPM's extensions have appeared and turn around the following approaches; the Conditional CAPM, the Downside CAPM, and the Higher-Order Co-Moment Based CAPM.
5.1. The Conditional CAPM
The academic literature has mentioned two main approaches around the modeling of the conditional beta. The first approach stems for a conditional beta by allowing this latter to depend linearly on a set of pre-specified conditioning variables documented in the economic theory (see for example Shanken, 1990). There have been several evidences that go on this road of research form whose we mention explicitly among others (Jagannathan and Wang, 1996; Lewellen, 1999; Ferson and Harvey, 1999; Lettau and Ludvigson, 2001 and Avramov and Chordia, 2006).
In spite of its revolutionary idea, this approach suffers from noisy estimates when applied to a large number of stocks since many parameters need to be estimated (see Ghysels, 1998). Furthermore, this approach may lead to many pricing errors even bigger than those generated by the unconditional versions (Ghysels and Jacquier, 2006). These limits are further enhanced by the fact that the set of the conditioning information is unobservable.
The second non parametric approach to model the dynamic of betas is that based on purely data-driven filters. The approaches in this category include the modeling of betas as a latent autoregressive process (see Jostova and Philipov, 2005; Ang and Chen, 2007), or estimating short-window regressions (Lewellen and Nagel, 2006), or also estimating rolling regressions (Fama and French, 1997). Even if these approaches sustain to the need to specify conditioning variables, it is not clear enough through the literature how many factors are they in the cross-sectional and time variation in the market beta.
In order to model the beta variation, studies have tried different modeling strategies. For instance, Jagannathan and Wang (1996) and Lettau and Ludivigson (2001) treat beta as a function of several economic state variables in a conditional CAPM. Engle, Bollerslev, and Wooldridge (1988) model the beta variation in a GARCH model. Adrian and Franzoni (2004, 2005) suggest a time-varying parameter linear regression model and use the Kalman filter to estimate the model. The following section treats these findings one by one while showing their main results.
5.1.1. Conditional Beta on Economic States
‘'If one were to take seriously the criticism that the real world is inherently dynamic, then it may be necessary to model explicitly what is missing in a static model", Jagannathan and Wang (1996), p.36.
The conditional CAPM is that in which betas are allowed to vary and to be non stationary through time. This version is often used to measure risk and to predict return when the risk can change. The conditional CAPM asserts that the expected return is associated to its sensitivity to a set of changes in the state of the economy. For each state there is a market premium or a premium per unit of beta. These price factors are often the business cycle variables.
The authors, who are interested in the conditional version of the CAPM, demonstrate that stocks can show large pricing errors compared to unconditional asset pricing models even when a conditional version of the CAPM holds perfectly. In what follows, a summary of the main studies versed in this field of research is presented.
Jagannathan and Wang (1996) were the first to introduce the conditional CAPM in its original version. They claim that the static CAPM has been founded on two unrealistic assumptions. The first one is that according to which the betas are constant over time. While the second one is that in which the portfolio, containing all stocks, is assumed to be a good proxy for the market portfolio. The authors assert that it would be completely reasonable to allow for the betas to vary over time since the firm's betas may change depending on the economic states. Moreover, they state that the market portfolio must involve the human capital.
Consequently, their model includes three different betas; the ordinary beta, the premium beta based on the market risk premium which allows for conditionality, and the labor premium which is based on the growth in labor income. Their study includes stocks of non financial firms listed in the NYSE and AMEX during the period that spans from July 1963 to December 1990. They, after that, group stocks into size portfolios and use the GRSP value weighted index as proxy for the market portfolio.
For each size decile, they estimate the beta and, afterwards, class stocks into beta deciles on the basis of their pre-ranking betas. Hence, 100 portfolios are formed and their equivalent equally weighted return is calculated. The regressions are made using the Fama and Macbeth procedure (1973).
The authors find that the static version of the CAPM does not hold at all. In fact, the results show a small evidence against the beta which appears to be too weak and not statistically different from zero. Moreover, the R-squared is only about 1.35%. Then, the estimation of the CAPM when taking into account the beta variation shows that the beta premium is significantly different from zero. Furthermore, the R-squared moves up to nearly 30%. Nevertheless, the estimation of the conditional CAPM with human capital indicates that this variable improves the regression. Indeed, the results show a positive and a statistically significant beta labor. In addition to that, the beta premium remains significant and the adjusted R-squared goes up to reach 60%. This is a great support for the conditional version of the CAPM.
Meanwhile, one of the most important theoretical results which contributes to the survival of the CAPM is that of Dybvig and Ross (1985) and Hansen and Richard (1987) who find that the conditional version of the CAPM is adequate even if the static one is blamed.
Pettengill et al. (1995) find that the conditional CAPM can explain the weak relationship between the expected return and the beta in the US stock market. They argue that the relationship between the beta and the return is conditional and must be positive during up markets and negative during down markets.
These findings are further supported by the study of Fletcher (2000) who investigates the conditional relationship between beta and return in international stock markets. He finds that the relationship between the two variables is significantly positive during up markets months and significantly negative during down markets months.
Another study performed by Hodoshima, Gomez and Kunimura (2000) and supports the conditional relationship between the beta and the return in the Nikkei stock market.
Jean-Jacques. L, Helene Rainelli. L M, and Yannick. G (XXXX) run the same study as Jagannathan and Wang (1996) but in an international context. They work on monthly data for the period which goes from January 1995 to December 2004 related to six countries which are; Germany, Italy, France, Great Britain, United States, and Japan. For each country the MSCI index is retained as a proxy for the market portfolio and the 3 month Treasury bond as a proxy for the risk free rate.
Following the same methodology as for Fama and French (1992), the authors form portfolios on the basis of the size, the book to market. Hence, for each country 12 portfolios are formed and regressed on the explanatory variables using the cross-sectional regression.
The results indicate that the market beta is statistically significant only in one country (the Great Britain). However, the beta premium is found to be significant in four countries, i.e. Italy, Japan, Great Britain and the US. As for the premium labor, the results exhibit significance only in three cases (Germany, Great Britain and France). Moreover, the results stem to a very high explanatory power for all countries which is on average beyond 40%.
Meanwhile, Durack et al. (2004) run the same study as Jagannathan & Wang (1996) in the Australian context. Their sample contains monthly data of all listed Australian stocks over the period of January 1980- December 2001.
In order to estimate for the premium labor model, they use the value weighted stock index as the market portfolio and extract the macroeconomic data from the bureau of statistics in order to measure the beta premium and the beta labor (two variables; the first one captures the premium resulting from the change in the market premium and the second one captures the premium of the human capital). They, afterwards, sort the stocks into seven size portfolios, then, into seven further beta portfolios. Finally, 49 portfolios are formed and used for the estimate of the conditional CAPM.
Using the OLS technique, the authors find that the conditional CAPM does a great job in the Australian stock market. In fact, in both cases, i.e. conditional CAPM with and without human capital, the model accounts for nearly 70% of the explanatory power. Nevertheless, the results report a little evidence towards the beta premium which is found to be, in all cases, positive but not statistically significant at any significance level.
Furthermore, the beta of the premium variation is negative and statistically significant. But, unlike Jagannathan and Wang (1996), the authors find that the human capital does not improve the beta estimate which remains insignificant. Finally, they find that the intercept is found to be significantly different from zero which is a violation to the CAMP's assumptions.
Campbell R. Harvey (1989), tests the CAPM while assuming that both expected return and covariances are time varying. The author uses monthly data of the New York Stock Exchange from September 1941 to December 1987. Ten portfolios are sorted by market value and rebalanced each year on the basis of this criterion. The risk free rate is the return on Treasury bill that is closed to 30 days at the end of the year t-1 and the conditional information includes the first lag on the equally weighted NYSE portfolio, the junk bond premium, a dividend yield measure, a term premium, a constant and a dummy variable for January are included in the model as well.
The results indicate that the conditional covariance change over time. Moreover, it is found that the higher the return is the more important the conditional covariance will be. Nevertheless, it is found that the model with a time-varying reward to risk appears to be worse than the model with a fixed parameter. In fact, the intercepts vary so high to be able to explain the variance of the beta. Consequently, this rejects the CAPM even in this general formula.
Jonathan Lewelen and Stefan Nagel (2003) study the validity of the conditional CAPM in the US context. The authors work on a sample including the period that spans from 1964 to 2001, and contains different data frequencies, i.e. daily, weekly and monthly. The data includes all NYSE, and AMEX stocks on GRSP Compustat sorted on three portfolios' type on the basis of the size, the book-to-market, and the momentum effects.
The regression test is run on the excess return of all portfolios on the one month Treasury bill rate. The results indicate that the beta varies over time but that this variation is not enough to explain the pricing errors. In fact, the results show that beta cannot covary with the risk premium sufficiently in a way that can explain the alphas of the portfolios. Indeed, the alphas are found to be are high and statistically significant which is a violation to the CAPM.
Treerapot Kongtoranin (2007) applies the conditional CAPM to the stock exchange of Thailand. He works on monthly data of 170 individual stocks in SET during 2000 to 2006. The author uses the SET index and the three-month Treasury bill to proxy, respectively for the market portfolio and the risk free rate.
Before testing the validity of the conditional CAPM, the author classify the stocks into 10 portfolios of 17 stocks each one on the basis of their average return. They afterwards estimate for the conditional CAPM using the cross sectional regression. The beta is calculated using the ratio of covariance between the individual portfolio and the market portfolio, and the variance of the market portfolio. The covariance is determined by using the ARMA model and the variance of the market portfolio is determined by the GARCH (1, 1) model.
The results indicate that the relationship between the beta and the return is negative and not statistically significant. Meanwhile, when a year period is considered, it is found that in 2000, 2004, and 2005 the beta premium is negative and statistically significant. Consequently, these results reject the CAPM which assume that the risk premium is positive.
Abd.Ghafar Ismaila and Mohd Saharudin Shakranib (2003) study the ability of the conditional CAPM to generate the returns of the Islamic unit trusts in the Malaysian stock exchange. They work on weekly price data of 12 Islamic unit roots and the Sharjah index for the period that spans from 1 May 1999 until 31 July 2001.
They, firstly, estimate for each unit trust the equivalent beta is estimated for the whole sample and for the two following sub-samples; 1 May 1999 - 23 June 2000, and 24 June 2000 - 31 July 2001. Then, they estimate the average beta using the conditional CAPM. The results indicate that the betas coefficients are significant and have a positive value in up markets and negative value in down markets which supports the conditional relationship for all the period studied. Moreover, it is shown that the conditional relationship is better in the down market than in the up market.
In order to look at the ability of the conditional CAPM, M. Lettau and S. Ludvigson (2001) study the consumption CAPM within a conditional framework. The study sample includes the returns of 25 portfolios formed according to Fama and French (1992, 1993). These portfolios are considered as the value weighted returns for the intersection of five size portfolios and five books to market portfolios on the NYSE, AMEX, and NASDAQ stocks in COMPUSTAT. The period of study goes from July 1963 to June 1998 and contains quarterly data of all portfolios.
For the estimate of the conditional CAPM, the GRSP value weighted return is considered as proxy for the market portfolio, and the cross-sectional regression methodology proposed in Fama and MacBeth (1973) is used. The results indicate that the static CAPM fails in explaining the cross-sectional of stocks return motivating by a small R-squared that is only of 1%. However, for the conditional CAPM, it was found that the scaled variable is positive and statistically significant. Moreover, the adjusted R-squared moves up to reach the 31%. Subsequently, this means that the conditional version of the CAPM performs well the static version.
Goezmann et.al (2007), expand the results of the Jagannathan and Wang's study (1996). In their study, the authors assume that the market beta and the premium beta as well as the premium its self can vary across time. Their idea is based on the assumption according to which the expected return is conditional on the economic states. Furthermore, these economic states are determined through the investor expectations about the future prospect of the economy. Hence, they suppose that the expected real Gross Domestic Product (GDP) growth rate is considered as a predictive instrument to know the state of the economy.
To reach their objective, the authors form 25 portfolios from the intersection of 5 five book-to-market portfolios and five size portfolios. They, afterwards, measure the beta instability risk directly through the bad and good states of the GDP. Using a two beta model and the general method of moments, the authors find that the risk premium is increasing with the book-to-marker portfolios ranges.
In fact, the results indicate that the conditional market risk premium is not priced at all. Besides, when considering the conditional beta (on bad and good states), it is found that the good time sensitivity are positively associated to book to market portfolios. In addition to that, it's found that stocks whose returns are positively related to the GDP earn negative premium. Finally, the results point to a positive beta premium for value stocks which turns into negative for growth stocks.
In order to test for the risk-return relationship in the Karachi stock exchange Javid A and Ahmad E (2008) study the conditional CAPM. Their sample study includes daily and monthly return of 49 companies and the KSE 100 index during the period of July 1993 to December 2004 which is divided into five overlapping intervals (1993-1997, 1994-1998, 1995-1999, 1996-2000, and 1997-2001).
Using the time series regression (whole period) together with the cross section regression (sub-periods), the results point to weak evidence towards the unconditional version of the CAPM. In fact, it was found that the positive relationship between the expected return and the risk premium doesn't hold in any sub-period. Moreover, the intercepts are found to be statistically indifferent from zero in almost all cases. This weakness is further enhanced by the fact that the residues play a significant role in explaining the return. They, hence, assert that the return distribution must vary over time.
Having this in mind, the authors allow for the risk premium to vary along with macroeconomic variables which are supposed to contain the business cycle information. These conditioning variables are the following; market return, call money rate, term structure, inflation rate, foreign exchange rate, growth in industrial production, growth in real consumption, and growth in oil prices.
Applying the conditional version of the CAPM, the authors find that the risk premium is positive for roughly all sub-periods (1993-1995, 1993-1998, 1999-2004, 2002-2004), and for the overall sample period 1993-2004. In addition to that, beta coefficient is statistically significant which indicate that investors get a compensation for bearing risk. Nevertheless, the results show that for all sub-periods and for the whole period the intercepts are statistically different from zero.
Within an international context, Fujimoto A, and Watanabe M (2005) study the time variation in the risk premium of the value stocks and the growth ones as well. The authors assume that the CAPM holds each period allowing for the beta and for the market premium to vary through time.
Their sample includes data set of 13 countries which represent, roughly, 85% of the world's market capitalization. These countries are the following; Australia, Belgium, Canada, France, Germany, Italy, Japan, Netherlands, Singapore, Sweden, Switzerland, UK and the US. For each country, monthly stock returns, market capitalizations, market-to-book values, as well as value-weighted market returns for the non-US are used for the period that ends in 2004 and begin in 1963 for the US, 1965 for the UK, 1982 for Sweden, and in 1973 for all the other countries.
They, firstly, form portfolios as the intersection of the book-to-market portfolios and those of size. Then, the authors suppose that the market premium and the betas of the portfolios are presumed to be a function of the dividend yield, the short rate, the term spread, and the default spread. The cross section regression indicates, firstly, that the beta sensitivity is positive and statistically significant in 9 countries for value stocks.
Furthermore, it is found also that growth stocks exhibit negative beta premium sensitivity. Moreover, for the long-short portfolio and the size/book-to market portfolios the beta sensitivity is for almost all countries positive and statistically significant. Finally, it is found that the beta premium sensitivity cannot explain the whole variation in value premium in international markets.
From their part, Michael R. Gibbons and Wayne.Ferson (1985), also relax the assumption related to the stagnation of the risk premium. Hence, the expected return is conditional on a set of information variables. They use daily return of common stocks composing the Dow Jones 30 for the period that goes from1962 to 1980.
So the daily stocks returns are regressed against the lagged stock index, the Monday dummy, and an intercept. The results show that the Monday dummy and the lagged GRSP value-weighted index, are highly significant. Nevertheless, the coefficient of determination is beyond 5%. Consequently, they conclude that their study is robust to missing information.
Ferson and Harvey (1999), try to look into the conditioning variables and their impact on the cross section of stock returns. The sample of the study covers the period of 1963-1994 and contains monthly data of the US common stock portfolios. The lagged instrumental variables used in here are respectively; the difference between the one-month lagged returns of a three-month and a one-month treasury bills, the dividend yield of the standard and Poors 500 index, the spread between Moody's Baa and Aaa corporate bond yields, the spread between ten-year and one-year treasury bond yield.
The regression produces significant values for roughly all variables. The conditional version implies, also, that the intercepts are time varying which means that they are not zero. They conclude, hence, that the conditional version is not valid.
Wayne E. Ferson and Andrew F. Siegel (2007) in their trial to investigate the portfolio efficiency with conditioning information, test the conditional version of the CAPM. For this objective, they use a standard set of lagged variables to model the conditioning information and a sample that ranges from 1963 to 1994.
The instrumental variables used are the following; the lagged value of a one-month Treasury bill yield, the dividend yield of the market index, the spread between Moody's Baa and Aaa corporate bond yields, the spread between ten-year and one-year constant maturity Treasury bond yields, and the difference between the one-month lagged returns of a three-month and a one-month Treasury bill.
They, after that, group stocks into two classes. The former is a sorting according to the Twenty five value-weighted industry portfolios. The latter, is a classification with reference to their prior equity market capitalization, and separately into five groups on the basis of their ratios of book value to market value. The results indicate that the conditioning variables do not improve very well the estimation. Nevertheless, during all periods these variables exhibit high and statistically significant coefficients.
5.1.2. The Conditional CAPM: data-driven filters
Unlike the first approach which is based on pre-specified conditioning information, the data-driven filters approach is based on purely empirical bases. In fact, the data used is the source of factors and it is the only responsible of the beta variation. This means that one do not require a well defined variables, rather one allows for the data to define these variables. Some papers interested in this approach are discussed next.
Always in the same field of research, Nagel S and Singelton K (2009) try to investigate the conditional version of assets pricing models. Their methodology is somehow different from the others as for the moment when they use the Stochastic Discount Factor (SDF) as a conditionally affine function of a set of priced risk factors.
Their sample study includes quarterly data on three instruments variables over the period of 1952-2006. These variables are; the consumption-wealth ratio of Lettau and Ludvigson (2001a), the corporate bond spread as in Jagannathan and Wang (1996) or the labor income-consumption ratio of Santos and Veronesi (2006). They, next, form portfolios sorted by size and book-to-market ratio as well.
Applying the time varying SDF, the authors find that when the two conditioning information, i.e. the consumption-wealth ratio, and the corporate bond spread, is incorporated in the estimation; the model fails in explaining the cross-sectional of stocks returns. They conclude, hence, that the conditional asset pricing models do not play a good role in improving the pricing accuracy.
In the US context, Huang P and Hueng J (XXXX) investigate the risk-return relationship in a time-varying beta model according to the view of Pettengill, Sundaram, and Mathur (1995). For this objective, they use a sample which includes daily returns of all stocks listed in the S&P 500 index over the period of November 1987-December 2003. For the estimation of the time-varying beta model, they make use of the Adaptive Least Squares with Kalman Foundations (ALSKF) proposed by McCulloch (2006).
The results support the Pettengil et al. model (1995). In fact, it is found that the beta premium is positive and statistically significant in up markets, whereas the risk-return relationship is found to be negative and statistically significant in down markets. Moreover, the results exhibit that none of the intercept is statistically different from zero. Finally, while using the ALSKF, the authors find that the estimation is more precise than that obtained via the OLS regression.
The study of Demos A and Pariss S (1998) aims at investigating the validity of the validity of the conditional CAPM within the Athens Stock Exchange. For this end, the authors work on a sample covering fortnightly returns of the nine Athens Stock Exchange (ASE) sectorial indices and the Value weighted index from January 1985 to June 1997.
Then, to model the idiosyncratic conditional variances, the ARCH type process is used. The results from the OLS regression indicate that the static version of the CAPM is doing a good job. In fact, the beta coefficient is in all cases positive and statistically significant. This result does not depend on whether an intercept is included in the regression. This is because the intercepts only in three out of nine cases are found to be statistically different from zero.
They after that, consider that Value Weighted Index follows a GQARCH(1,1)-M process, the authors find similar results to the static CAPM. In fact, all the estimated betas are found to be positive and statistically different from zero. Nevertheless, the authors find that within the CAPM in both static and dynamic versions, the idiosyncratic risk is priced. Moreover, it's found that the intercepts are jointly statistically different from zero. Basing on the pre-mentioned result the authors claim to the invalidity of the model. They consider that the potential cause of failure is the use of the value weighted index rather than the equally weighted one.
Consequently, they repeat the same procedure while using the equally weighted index. Firstly, with respect to the static version, the results indicate that all intercepts are statistically indifferent from zero except for the bank sector and are jointly different from zero. Secondly, as for the dynamic version of the CAPM, the results point to a very supportive result. Indeed, the betas coefficients not only have the right sign but also are highly significant. Finally, the most supportive result is that which shows that the idiosyncratic risk is not priced.
In their study, Basu D and Stremme A (2007), study the conditional CAPM while assuming time variation in risk premium. This time variation is captured by a non linear function of asset of variables related to the business cycle.
In order to model the time-varying factor risk premium, the authors use the approach of Hansen and Richard (1987). This approach permits to construct a candidate stochastic discount factor given as an affine transformation of the market risk factor. Then, the coefficients from this transformation are non-linear of the conditioning variables sensed to capture the time-variation of the risk premium.
They work on monthly data of different kinds of portfolios, i.e. the portfolios sorted on CAPM beta (stocks relative to the S&P 500) over the period of 1980-2004, the momentum portfolios over the period of 1961-1999, the book-to-market portfolios over the 1961-1999 period, and finally the 25 and 100 portfolios sorted by size and book-to-market over the 1963-2004 and the 1963-1990 periods. As for the conditioning variables, the authors make use of the following instruments; the one-month Treasury Bill rate, the term spread (the difference in yield between the 10-year and the one-year Treasury bond), the credit spread (the difference in 10-year yield between the AAA-rated corporate and the corresponding government bond), and the convexity the of the yield curve (the 5-year yield minus the sum of the 10-year and 1-year yields).
The results point to a weak evidence towards the static CAPM. In fact, it is found that this latter account only for 1.6% of the returns cross sectional variation. Moreover, the expected return for all studied portfolios has the U-shape which, subsequently, contradicts the CAPM's assumptions.
However, when the conditional version is set into play, the results are somehow surprising. Definitely, the scaled version of the CAPM captures 60% of the cross-sectional variation. Furthermore, this model predicts relatively better the expected return, since it accounts for lower errors for the extreme as well as middle portfolios. Finally, the scaled version explains better the risk premium of all portfolios with comparison to the static version.
Tobias A and Franzoni F (2008) introduce the unobservable long-run changes to the conditional CAPM as a risk factor. They, hence, propose to model the conditional betas using the Kalman filter since investors are supposed to learn the long level of factor loading through the observation of the realized return. For this reason, they suppose that the betas change overtime following a mean-reverting process.
For this aim, they work on quarterly data of all stocks listed in the NYSE, AMEX, and NASDAQ for the period that spans from 1963 to 2004. They, afterwards, group stocks into 25portfolios basing on the size and the book-to-market criterion. As for the conditioning variables, they make use of value-weighted market portfolio, the term spread, the value spread, and the CAY variable documented Lettau and Ludvigson (2001). Then, in order to extract for the filtered betas, they apply the Kalman Filter. The times series test on size and book-to-market portfolios indicates that the introduction of the learning process into the conditional CAPM contributes to the decrease of the pricing errors. Moreover, when the learning is not included in the conditional CAPM, the results point to a rejection.
In the interim, Jon A. Christopherson, Wayne E. Ferson and Andrew L. Turner (1999), try to find out the effect of the conditional alphas and betas on the performance evaluation of portfolios. They assert that the betas and alphas move together with a set of conditioning information variables. They find that the excess returns are partially predictable through the information variables. The results point, also, to statistically insignificant alphas which is consistent with the CAPM's predictions.
In the same path of research, Wayne E. Ferson, Shmuel Kandel, and Robert F. Stambaugh (1987), apply the conditional version of the CAPM. They assume that the market beta and the risk premium vary overtime. They work on weekly data from the NYSE over the period of 1963-1982 which includes the return of ten common portfolios sorted on equity capitalization. The results suggest that the single factor model is not rejected when the risk premium is allowed to vary overtime and when the risk related to that risk premium is not constrained to be equal to market betas.
In the same way, Paskalis Glabadanidis (2008), studies the dynamic asset pricing models while assuming that both the factor loading and the idiosyncratic risk are time varying. In order to model the time variation in the idiosyncratic risk of the one factor model, he uses the multivariate GARCH model.
Their main objective relies on the fact that the risk return relationship should contain the proper adjustment which may account for serial autocorrelations in volatility and time variation in the return distribution. His study is run on monthly return of 25 size and book-to-market portfolios and 30 industry ones as well for the period of 1963-1993. The results indicate that the dynamic CAPM could reduce the pricing errors. However, the results indicate that the null intercept hypothesis cannot be rejected at any significance level.
5.2. The Downside Approach
“A man who seeks advice about his actions will not be grateful for the suggestion that he maximize his expected utility.”
Roy (1952)
5.2.1. From the Mean-Variance to the Downside Approach
The mean-variance approach lies on the fact that the variance is an appropriate measure of risk. This latter assumption is founded upon at least one of the following conditions. Either, the investor's utility function is quadratic, or the portfolios' returns are jointly normally distributed. Subsequently, the optimal portfolio chosen based on the criterion of the mean variance would be the same as that which maximizes the investor's utility function.
Nevertheless, the adequacy of the quadratic utility function is tackled as for the moment when the investor's risk aversion would be an increasing function of his wealth, whereas the opposite is completely possible. Furthermore, the normal distribution of the return is criticized since the data may exhibit high frequency such as skewness (Leland, 1999; Harvey and Siddique, 2000 and Chen, Hong, and Stein, 2001), or kurtosis (see for example Bekaert, Erb, Harvey, and Viskanta, 1998 and Estrada, 2001c).
Levy and Markowitz (1979) find that the mean-variance behavior is a good approximation to the expected utility. In fact, they show that the integration of the skewness or the kurtosis or even the both worsens the approximation to the expected utility.
The credibility of the variance as a measure of risk is valid only in the case of symmetric distribution of the return. Then, it is reliable only in the case of normal distribution. Moreover, the beta which is the measure of risk according to the mean-variance approach suffers from diverse critics (discussed in the first and the second section of the chapter).
Meanwhile, Brunel (2004), states that the mean-variance criterion is not able to generate a successful allocation of wealth given that investors, in this case, do not consider the higher statistical moment issues. For that reason, choices have to be made on other parameters of the return distribution such as skewness or kurtosis. Skewness preference indicates that investors allocate more importance to downside risk than to upside risk.
From these critics, one may think that the failure of the traditional CAPM comes from its ignorance of the extra reward prime required by investors in the bear markets. In fact, by intuition investors would require higher return for holding assets positively correlated with the market in distress periods and a lower return for holding assets negatively correlated with the market in bear periods. Consequently, upside and downside periods are not treated symmetrically, from where the birth of the semi-deviation or also the semi-variance approach.
The concept of the semi-variance was firstly introduced by Markowitz (1959) and was later refined by Hogan and Warren (1974) and Bawa and Lindenberg (1977). This approach preserves the same characteristics as the regular CAPM with the only difference in the risk measures. In fact, while the former uses the semi-variance and the downside beta, the latter uses the variance and the regular beta. In the particular case where the returns are symmetrically distributed, the downside beta is equal to the regular beta.
However, for asymmetrical distribution the two models diverge largely. The standard deviation identifies the risk related to the volatility of the return, but it does not make a distinction between upside changes and downside changes. In practice, the separation between these two aspects is though important. In fact, if the investor is risk averse, then he will be averse to downside volatility and accept gladly the upside volatility. So the risk occurs when the wrong scenario is put into play.
The semi-variance is more plausible than the variance as a measure of risk. Indeed, the semi-deviation accounts for the downside risk that investors want to prevent contrary to the upside risk which has the welcome. In a nutshell, the semi-variance is an adequate measure of risk for a risk averse investor.
5.2.2. A Brief History on the Downside Risk Measures
In order to understand the contribution and the concept of the downside risk, it is important to study the history of its development. The purpose of this section is to review the measures of the downside risk and to clarify its major innovation
Along with the academic literature, there have been two major measures that have been commonly used; it is about the semi-variance and the lower partial moment. All of these measures were tools to develop the portfolio theory and to determine the efficient choice among portfolios for a risk averse investor.
The first paper appeared in the field of finance and which was interested in the downside risk theory was that of Markowitz (1952). This author develops a theory that uses the mean return and the variance and covariance to construct the efficient frontier in where one may find all portfolios that maximizes the return for a given risk level or that minimizes the risk for a given return level. Hence, the investor should make a risk-return tradeoff according to his utility function. Nevertheless, due to the human being nature which is not obvious, it's difficult to determine a common utility function.
The second paper published in this field of research was that of Roy (1952).this latter states that the creation of mathematical utility function for an investor is very difficult. Consequently, an investor would not be satisfied for simply maximizing his expected utility. He suggests, for that reason another measure of risk which he calls the safety first technique. This measure suggests that an investor would prefer the safety of his wealth and chooses some minimum acceptable return that conserves the principal.
The minimum acceptable return is called, according to Roy (1952), the disaster level or also the target return beyond which we would not accept the risk. That is why; the optimal choice of an investor will be that which has the smallest probability of going under the disaster level. He develops, thus, the reward to variability ratio which, for a given investor, minimizes the probability of the portfolio to go under the target return level.
After that, Markowitz (1959) has admitted the Roy's approach and the importance of the downside risk. He states that the downside risk is crucial for portfolio choice for at least two reasons; first because the return distribution is far form being normal. Second, because only the downside risk is pertinent for the investor. He proposes, therefore, two measures of the downside risk; the below-mean semi variance and the below-target semi variance. Both of them use only the return below the mean or the target return.
Since that, many researchers have explored the downside measures in their study and have demonstrated the superiority of these risk measures over the variance (see for example; Quirk and Saposnik, 1962; Mao, 1970; Balzer, 1994 and Sortino and Price, 1994 among others). Meanwhile, Klemkosky (1973) and Ang and Chua (1979) have demonstrated the plausibility of the below-target semi variance approach as a tool in evaluating the mutual fund performance. Hogan and Warren (1974) have developed a below- target capital asset pricing model which is useful in the case where the return distribution is non normal and asymmetric.
The development of the downside risk is pulled along with the emergence of the low partial moment measure introduced by Bawa (1975) and Fishburn (1977).Then, Nantell and Price (1979), and Harlow and Rao (1989) have suggested another version of the downside CAPM known since as the lower partial moment CAPM. The lower partial moment as a risk measure developed for the first time by Bawa (1975) and Fishburn (1977) which moves the constraint from having only one utility function and provide a whole rainbow of utility functions. Moreover, it describes the risk in terms of risk tolerance.
Another support for the downside risk provided by Roy (1952), Markowitz (1959), Swalm (1966), and Mao (1970) who demonstrated that the investors are not concerned with the above-target returns. From this, the semi variance is more practical in evaluating risk in investment and financial strategies.
5.2.3. Background on the Downside CAPM
Over the last decade, extensive empirical literature had been carried out to investigate the downside approach as a risk measure. Indeed, taking as a starting point the failure of the CAPM's beta in representing risk, several researchers have tried to improve the relationship risk return and to fill the gaps of its limitations with reference to the market model. In order to obviate these limitations, Hogan and Warren (1974) and Bawa and Linderberg (1977) put forward the use of the downside risk rather than the variance as a risk measure and developed a MLPM-CAPM, which is a model that does not rely on the CAPM's assumptions.
Both studies sustained that the MLPM-CAPM model outperforms the CAPM at least on theoretical grounds. Harlow and Rao (1989) improved the MLPM-CAPM model and introduced a more general model, which is known as the Generalised Mean-Lower Partial Moment CAPM. This is a MLPM-CAPM model for any arbitrary benchmark return. Particularly, their empirical results suggest the use of the generalized MLPM-CAPM model, since no evidence goes in support for traditional CAPM. The other caveat from the latter study is that target return should equal to the mean of the assets' returns rather than the risk-free rate.
In this road of research, we note, particularly, Leland (1999) who investigates the risk and the performance measures for portfolios with asymmetrical return distribution. This author criticizes the plausibility of ‘'alpha'' and the ‘'Sharpe ratio'' to evaluate portfolios' performance, and suggest the use of the downside risk approach. He proposes, hence, another risk measure which differs from the CAPM beta, particularly, when the assets' return or that of the portfolio is assumed to be non linear in the market return.
Estrada (2002) in her seminal paper evaluates the mean-semi variance behavior in the sense that it yields a utility level similar to the investor's expected utility. She divided the whole world on three markets; all markets, developed markets and the emerging markets. She finds that the standard deviation is an implausible risk measure and suggests in turn the semi-deviation as a better alternative. The results indicate, also, that the mean-semi variance behavior outperforms the mean-variance behavior in emerging markets and in the whole sample of all markets.
Then, using the J-test of Davidson and MacKinnon (1981), the author finds that none of the two approximations does better than the other in explaining the variability of the expected utility. However, it is shown that the mean-semivariance approach outperforms the other in the case of the negative exponential utility function. The author reports, also, that MSB is not only consistent with the maximization of expected utility but also with the maximization of the utility of expected compound return.
She ended, finally, her article with an inquiry of whether the downside beta can be used in a one factor model as an alternative to the traditional beta. She prepares, hence, the area to the next paper treating the CAPM within a downside framework.
In the same year, Estrada investigates the downside CAPM within the emerging markets context. The downside CAPM replaces the original beta by the downside beta. This beta is defined as the ratio of the cosemivariance to the market's semivariance.
She works on a sample that covers the entire Morgan Stanley Capital Indices database of emerging markets. The data contains monthly returns on 27 emerging markets for various sample periods, some of the data begins at January 1988 and some others start later. The author demonstrates, while making use of the average monthly return for the whole sample, that both the beta and the downside beta are significant in generating returns), and that the latter explains better the average return witnessed by a high explanatory power. However, when considering the two risk measures into one model, only the downside beta is found to be significant (cross section regression). The results indicate, in addition to that, that the returns are more sensitive to the changes in the downside risk.
Then, the author compares the performance of the CAPM and the downside CAPM. The results support the downside CAPM since the downside beta explains roughly 55% of the returns variability in emerging markets. As a conclusion, the author mentions the plausibility of the mean-semivariance behavior in explaining the return on the sample of markets studied.
For the meantime, Thierry Post and Pim Vliet (2004) investigate the downside risk and the CAPM. In order to assign weights to the market's portfolios return, the authors use the pricing Kernel. Their samples includes the ordinary common US stocks listed on the New York Stock Exchange (NYSE), American Stock Exchange (AMEX), and NASDAQ markets and uses a monthly frequency for the period of 1926-2002. Portfolios are sorted according to their betas and downside betas for the previous 60 months and the average return is then computed for the following next 12 months.
For further discussion, the authors control also for the size and the momentum (portfolios formed on the basis of the size as in Fama and French (1992) and the momentum). The results indicate that the downside betas are higher than the regular betas. Furthermore, it is found that the downside betas decrease the pricing errors.
They, afterwards, apply a double sorting routine, in order to distinguish the effect of the two betas. Hence, they first sort stocks on quintile portfolios based on regular beta and then divide each of those portfolios into five portfolios based on downside beta and the opposite procedure is done also. Consequently, 25 portfolios is constructed and then regressed separately against the regular beta and the downside beta.
The results from the regression indicate that the average return is positively associated to the downside beta within each regular beta quintile. Nevertheless, the relation between the average return and the regular beta tends to fade up. Finally, Thierry Post and Pim Vliet (2004) assert that the mean-semivariance CAPM strongly outstrips the mean-variance CAPM and that the downside risk is a better risk measure both theoretically and empirically.
In the same way, Thierry Post, Pim Vliet, and Simon Lansdorp (2009) look into the downside beta and its ability to derive return. They use monthly stock returns from the GRSP at the University of Chicago and select the ordinary common US stocks listed on the NYSE, AMEX, and NASDAQ for the period that spans from January 1926 to December 2007. They, moreover, investigate the role of the downside beta for various sub-samples which are; 1931-1949, 1950-1969, 1970-1988, and 1989-2007 and use the regular beta portfolio and the downside beta portfolio as a benchmark.
After that, the authors carry out several portfolios' classification, first on the basis of the regular beta, then on downside beta, on regular beta first and then on three definitions of downside betas (semivariance beta, ARM beta, and downside covariance beta.), and even on one downside beta then on the other definitions of downside betas. They obtain in sum 12 sets of double sorted beta portfolios. They, also, control for the size, the book-to-market, the momentum, the co-skewness and the total volatility.
Thierry P et al. (2009) find that the downside beta is more pertinent to investors than regular beta and that the downside beta measured by the semivariance is more plausible than that determined through the other definitions. Furthermore, it is shown that the downside covariance beta does not yield a better performance than the regular beta (mean spread of only 5 or 6 basis points).
As for the other characteristics per se the size, the value and the momentum, the authors make use of only the semivariance measure since it has shown a great dominance over the other measures. In here, the results indicate that the difference between the regular beta and downside beta cannot account for the omitted stock characteristics. In a nutshell, the authors sustain, while including the other characteristics into the regression, that the significance of the downside beta remains higher and outperforms all the other betas in each of the sub-sample and notably in most recently years. This latter result corroborates the importance of the semi-variance as a risk measure and the superiority of the downside CAPM over the traditional one.
Ang, Chen and Xing (2005) in their attempt to investigate the downside risk, use data from the Center for Research in Security Prices (CRSP) to construct portfolios of stocks sorted by various characteristics of returns. It is about, particularly, ordinary common stocks listed on NYSE, AMEX and NASDAQ during the period that spans from July 3rd, 1962 to December 31st, 2001.
Using the Fama and Macbeth (1973) regression, the authors examine separately the downside and the upside component of beta and find that the downside risk is priced. In fact, the downside risk coefficient is found to be positive and highly significant. Nevertheless, the upside beta coefficient is negative which goes in the same current as the previous literature investigating the availability of the CAPM's beta. They show that the downside risk premium is always positive roughly about 6% per annum and statistically significant.
They find also that this positive significant premium remains even when controlling for other firm characteristics and risk characteristics. In opposite, the upside premium changes its sign to turn into negative when considering the other characteristics.
Likewise, Olmo (2007) finds, while performing his study on a number of UK sectoral indices, that both the CAPM beta and the downside beta are pricing factors for risky assets. He finds, also, that stocks which co-vary with the market in downturn periods generate higher return than that predicted by the CAPM. Contrary, stocks that are negatively correlated with the market in bad times are found to have lower return. The major result of this study is that the sectors which are indifferent to bad states changes and belong generally to the safe sectors seem to be not priced by downside beta.
Similarly, Diana Abu-Ghunmi (2008) explores the downside risk in a conditional framework within the U.K context. Her sample includes all common stocks traded on the London Stock Exchange and the FTSE index from July 1981 to December 2005. They also use the coincident index to split their sample into expansion and recession periods. The portfolio's formation is done upon monthly return and based on three main risk measures per se; the beta, the upside beta, and the downside beta. They, afterwards, run the Fama-MacBeth (1973) cross sectional regression of excess returns on realized betas to examine the downside risk- return relation using individual stocks.
The results point to a positive and a significant premium between the expected return and the unconditional downside risk. They note, as well, that conditioning the risk return relationship on the state of the world contributes to increase monotonically the relationship between the expected return and the downside beta during expansion periods.
However, in recession periods they find no evidence between the return and the downside beta or the CAPM's beta. They conclude that downside beta plays a major role in pricing in pricing small and value stocks but not large and growth stocks. But, although the downside approach has been basis for many academic papers and has had significant impact on academic and non academic financial community, it is still subject to severe critics.
5.2.4. The High Order Moment CAPM
5.2.4.1. Evidence from the existence of the skewness and the kurtosis in the returns' distribution
Literature on the CAPM has shown several evidences on favor of non normality and asymmetrical returns distribution. The attractive attribute of the CAPM is its pleasing and powerful explanation with a well built theoretical background about the risk-return relationship. This model is built on the basis of some assumptions but a critical one which imposes normality on the return distribution, so that the first two moments (mean and variance) are largely sufficient to describe the distribution.
Nevertheless, this latter assumption is far from being satisfied as demonstrated by Fama (1965), Arditti (1971), Singleton and Wingender (1986), and more recently, Chung, Johnson, and Schill (2006). These studies point that the higher moments of return distribution are crucial for the investors and, from that, must not be neglected. They suggest, hence, that not only the mean and the variance but also higher moments such as skewness and kurtosis should be included in the pricing function.
Consequently these attacks have led to the rejection of the CAPM within the Sharpe and Linter version and lead the way to the development of asset pricing models with higher moment than the variance like for instance; Fang and Lai (1997), Hwang and Satchell (1998) and Adcock and Shutes (1999) who introduced the kurtosis coefficient in the pricing function or also Kraus and Lizenberg (1974) who introduce the Skewness coefficient.
The skewness coefficient is a measure of the asymmetry in the distribution. Particularly it is a tool to check that the distribution does not look to be the same to the left and the right with reference to a center point. The negative value of the skewness indicates that the distribution is concentrated on the right or skewed left. However, the positive value of the skewness indicates analogically that the data are skewed right.
For more precision, we mean by skewed left that the left tail is longer than the right one and vice versa. Subsequently, for a normal distribution the skewness must be near to zero. The formula of the skewness is given by the ratio of the third moment around the mean divided by the third power of the standard deviation.
Similarly the kurtosis coefficient is a measure to check whether the data are peaked or flat with reference to a normal distribution. This means, subsequently, that data with high kurtosis tend to have a different peak around the mean, decreases rather speedily and have heavy tails. However, data with low kurtosis tend to have rather a flat top near the mean. From this, the negative kurtosis indicates that the distribution is flat contrary the positive kurtosis indicates peaked distribution. For a normal distribution the kurtosis must be equal to zero. The formula of the kurtosis is given by the fourth moment around the mean divided by the square of the variance minus three which one calls also the excess kurtosis with reference to the normal distribution.
The distribution that has zero excess Kurtosis is called “mesokurtotic” which is the case of all the normal distribution family. However, the distribution with positive excess Kurtosis is called “leptokurtotic” and means that this distribution has more than normal of values near the mean and a higher probability than normal of values in extreme (fatter tails). Finally, the distribution with negative excess Kurtosis is called “platykurtotic” and indicates that there is a lower probability than the normal to find values near to the mean and a lower probability than the normal to find extreme values (thinner tails).
The French mathematician Benoit Mandelbrot (2004) in his book entitled ‘'The Misbehavior of Markets: A Fractal view of risk, ruin and reward'' concluded that the failure of any model (for example the option model of Black and Sholes or also the CAPM of Sharpe and Linter) or any investment theory in the modern finance can be due to the wide reliance on the normal distribution assumption.
Generally speaking, investors who maximize their utility function have preferences which cannot be explained only as a straightforward comparison between the first two moments of the returns' distribution. In fact, the expected utility function of a given investor uses all the available information relating to the assets' returns and can be somehow linked to the other moments.
It is not strange then to see authors like Arditti (1967), Levy (1969), Arditti and Levy (1975) and Kraus and Litzenberger (1976) extending the standard version of the CAPM to incorporate the skewness in the pricing function. Or even to see others incorporating the kurtosis coefficient, we name among others Dittmar (2002) who extends the three moments CAPM and examines the co-kurtosis coefficient. All these works stem to the necessity of introducing the high moment to the distribution in order to ameliorate the assets pricing if the restriction of normality is moved away.
For example Dittmar(2002) finds that investors dislike co-kurtosis and prefer stocks with lower probability mass in the tails of the distribution rather than stocks with higher probability mass in tails of the distribution. He concludes, hence, that assets that increase the portfolios' kurtosis must earn higher return. Likewise, assets that decrease the portfolios' kurtosis should have lower expected return.
As for Arditti (1967), Levy (1969), Arditti and Levy (1975) and Kraus and Litzenberger (1976), their results imply a preference for a positive skewness. They find that investors prefer stocks that are right skewed to those which are left skewed. Hence, assets that decrease the portfolios' skewness are more risky and must earn higher return comparing to those which increase the portfolios' skewness. These findings are further supported by studies like that of Fisher and Lorie (1970) or also that of Ibbotson and Sinquefield (1976) who find that the return distribution is skewed to the right.
This has led Sears and Wei (1988) to derive the elasticity of substitution between the systematic risk and the systematic skewness. More recently, Harvey and Siddique (2000) show that this systematic skewness is highly significant with a positive premium of about 3.60 percent per year and therefore must be well admitted in pricing assets.
The empirical evidence on the high moment CAPM is very mixed and very rich not only by its contribution but also by the methodologies used and the moments introduced. That's why the following section is devoted to further understand these approaches and to summarize the most important papers that explore this model in their studies in a purely narrative review.
5.2.4.2. Literature review on the high order moments CAPM
Kraus & Litzenberger (1976) were the first to suggest that the higher co-moments should be priced. They claim that in the case where the return's distribution is not normal, investors are concerned about the skewness or the kurtosis.
Just like Kraus and Litzenberger (1976), Harvey and Siddique(2000), have studied non-normal asset pricing models related to co-skewness. The study sample covers the period of 1963-1993 and contains the NYSE, AMEX, and NASDAQ equity data. They define the expected return as a function of covariance and co-skewness with the market portfolio in a three-moment CAPM and find that this model is better in explaining return and report that coskewness is significant and commands on average a risk premium of 3.6 percent per annum.
From his part Dittmar (2002) uses the cubic function as a discount factor in a Stochastic Discount Factor framework. The model is a cubic function of return on the NYSE value-weighted stock index, and labor growth following Jagannathan and Wang (1996). He finds that the co-kurtosis must be included with labor growth so as to arrive to an admissible pricing kernel.
Within the American context, Giovanni Barone Adesi, Patrick Gagliardini, and Giovanni Urga (2002) investigate the co-skewness in a quadratic market model. The study sample includes monthly return of 10 stock portfolios of the NYSE, AMEX, and NASDAQ formed by size from July 1963 to December 2000. The results from the OLS regression indicate that the extension of the return generating process to the skewness is praiseworthy. In fact, it is found that portfolios of small firms have negative co-skewness with the market. The results show also that there's an additional component in portfolios' return which not explained neither by the covariance nor by the co-skewness.
Daniel Chi-Hsiou Hung, Mark Shackleton and Xinzhong Xu (2003) investigate the plausibility of the high co-moments CAPM (co-skewness and co-kurtosis) in explaining the cross section of stock returns in the UK context. They work on a 26 year period from January 1975 to December 2000 and incorporate monthly data of all listed stocks. For the portfolios formed on the basis of the beta, the authors find that the higher beta portfolio has the highest total skewness and kurtosis. Moreover, the higher co-moments show little significance in explaining cross section returns and do not increase the explanatory power of the model. However, the intercepts are found to be all insignificant.
For the size sorted portfolios, the higher co-moments seem to be somehow significant and increase the explanatory power of the model. Moreover, for these portfolios the betas are all insignificant. Overall, Hung et al. (2003) find that the beta is very significant in every model and the addition of the higher order co-moment terms do not improve the explanatory power of the model which remains roughly unchanged. Furthermore, in all these models, the intercepts are insignificant.
They conclude, finally, that in the UK stock exchange there exists a little evidence on favor of non-linear market models, since the higher order moments are found to be too weak.
From their part, Rocky Roland and George Xiang (2004) suggest an asset pricing model with higher moments than the variance and extend the traditional version to a three-moment CAPM and a four-moment CAPM. They conclude, through their theoretical study, that further tests must be conducted to check the accuracy of the model even if some of research is already done.
Angelo Ranaldo, and Laurent Favre (2005) put forward the extension of the two moments CAPM to a four moments one including the co-skewness and the co-kurtosis in pricing the hedge funds' returns. Their study is run on monthly returns 0f 60 hedge fund indices which are equally weighted. The market portfolio is constructed upon 70% of the Russell 3000 index and 30% of the Lehman US aggregate bond index and the risk free rate is the US 1 month Certificate of Deposit.
Including the skewness and considering the quadratic model (time series regression), the authors find, on the one hand, that the adjusted R squared increases with more than half comparing to that of the two-moment CAPM. On the other hand, it is found that the coefficient of the co-skewness is positive and statistically significant which supports the existence of the co-skewness. However, the results indicate that the betas from the quadratic model are smaller than those implied by the two-moment CAPM. This latter result may be explained by the fact that some of the explanatory power of the beta is taken away by the co-skewness.
Then, taking into account the co-kurtosis in a cubic model, Angelo Ranaldo, and Laurent Favre (2005) find that the additional coefficient has no major function in explaining the hedge fund return since it is only significant in four strategies. They point, hence, on the basis of their findings that the higher moments are more suited to represent the hedge fund industry return.
In 2007, Chi-Hsiou Hung investigates the ability of the higher co-moments in predicting returns of portfolios formed from combining stocks in international markets and portfolios invested locally in the UK and the US. His sample includes monthly US dollar denominated returns, market value of common shares and interest rates for the period that spans from January 1975 to December 2004 and contains 18 countries.
For this end, the author constructed portfolios on the basis of the momentum criterion. Hence, 10 equally weighted momentum sorted portfolios are obtained every six months based on past six months compounded returns. Then, portfolios are sorted on size every 12 months by ranking stocks on the basis of the market value at the time of the ranking. Finally, to dismantle the momentum effect and that of the size, the author builds 36 portfolios from the intersection of six size portfolios and six momentum portfolios. Overall, 100 equally weighted momentum, size and 36 double sorted momentum-size portfolios are obtained and regressed against the beta, the co-skewness and the co-kurtosis. He uses also a beta-gamma-delta sorts by firstly groups stocks into beta deciles, then into gamma and finally into delta deciles.
Applying the cross sectional regression and looking at the beta-gamma-delta sequential sorts, the author finds that the risk premium associated to the market, the skewness and the kurtosis are highly significant. This is further supported by a higher adjusted R-squared for the four moment model compared to the two moment model. As for the momentum portfolios, the beta of portfolios is found to be statistically significant and the co-skewness and the co-kurtosis are generally almost significant. In addition to that, the adjusted R-squared increases when including the third and the fourth moment.
For size portfolios, it is found that all portfolios' betas are negatively associated to the size and the co-skewness and co-kurtosis premiums are highly significant. However, unlike the other sorted portfolios, for the size portfolios all the intercepts are significant. Finally, for the two-way-sorts, it's found that the inclusion of the higher co-moments renders all intercepts insignificant and engenders a positive and a statistically significant premium for both the co-skewness and the co-kurtosis. For robustness check, Chi-Hsiou Hung (2007) investigates within a time series regression a cubic market model and finds that returns on the winner, loser and the smallest size deciles are cubic functions of the market model.
One year later, particularly in 2008, the same author i.e., Chi-Hsiou Hung, investigates the ability of the non linear market-models in predicting assets' returns. The sample of the study contains a set of nineteen countries including the Canada, the United States, Belgium, Denmark, Finland, France, Germany, Italy, Netherlands, Norway, Spain, Sweden, Switzerland, United Kingdom, Australia, Hong Kong, Japan, Singapore and Taiwan and covers the 954-weeks' returns from 22 September 1987 to 27 December 2005 for both listed and delisted firms. The analysis of the higher order moment models includes the three moments and the four moment framework and run on the momentum and size portfolios.
Using the times series regression, Chi-Hsiou Hung (2008) finds that the beta coefficient is highly significant in every model for both the winner and the loser portfolios. For the winner portfolios, adding the co-skewness to the CAPM increases the adjusted R-squared as the coefficient of the co-skewness is negative and statistically significant. However, for the loser portfolios, the addition of this coefficient does not contribute to the improvement of the explanatory power which is enhanced by an insignificant value of the coefficient. As for the fourth moment, the results do not provide any support on its favor. Though, in all models and for both the winner and the loser portfolios, the intercepts are found to be positive and highly significant which weakens the accuracy of these models in predicting returns.
Concerning the size portfolios, the results indicate in sum there major conclusions. First, the standard CAPM explains well the return on the biggest size portfolios. Then, adding the squared of the markets contributes obviously to the improvement of the estimation for the smallest size portfolios. Nevertheless, the inclusion of the cubed market term does not have any utility since the coefficient is insignificant in all models and for both the studied portfolios.
Gustavo M. de Athayde and Renato G. Flôres Jr (200?) extend the traditional version of the CAPM to include the higher moments and run tests on the Brazilian context. They use daily returns for the ten most liquid Brazilian stocks for the period that goes from January 2nd 1996 to October 23rd 1997. The results from the time series regression point toward the importance of the skewness coefficient. Indeed, adding the skewness to the classical CAPM generate a significant gain at the 1%level. Similarly, adding the skewness to the CAPM that contains already the kurtosis coefficient obviously contributes to the improvement of the latter since the coefficient is found to be significant. However, the addition of the kurtosis either to the classical CAPM or to that including skewness does not have any marginal gain. Gustavo M. and Renato G. (200?) conclude, therefore, that for the Brazilian context the addition of the skewness is appropriate while the gain in adding the kurtosis is irrelevant.
In a similar way, Daniel R. Smith (2007) tests whether the conditional co-skewness explains the cross section of expected return in the UK stock exchange. Their study is performed on 17 value-weighted industry portfolios, 25 portfolios formed by sorting stocks on their market capitalization and book-equity to market-equity ratio for the period that spans from July 1963 to December 1997. For the conditional version, he uses six conditioning information variables that are documented in the literature. The results indicate that the co-skewness is important determinant in explaining the cross section of equities' return. Moreover, it is found that the pricing relationship varies through time depending on whether the market is negatively or positively skewed. The author claims, through the results, that the conditional two-moment CAPM and the conditional three factor model are rejected. However, the inclusion of the co-skewness in both models cannot be rejected by the data at all.
Recently, Christophe Hurlin, Patrick Kouontchou, Bertrand Maillet(2009) try to include higher moments in the CAPM. As in Fama and French (1992), portfolios are formed with reference to their market capitalization and book-to-market ratio. The study sample consists of monthly data of listed stocks from the French stock market over the January 2002 through December 2006 sample period. Using three regression types, i.e. the cross section regression, the times series regression and the rolling regression, the authors find that, when co-skewness is taken into account, portfolios characteristics have no explanatory power in explaining returns, whereas, the ignorance of the co-skewness produces contradictory results.
The results point also to a relatively higher value of adjusted R-squared for the model with the skewness factor compared to the classical model and that the co-skewness is positively related to the size. In a sum, the authors assert that high frequency intra-day transaction prices for the studied portfolios and the underlying factor return produce more plausible measures and models of the realized co-variations.
Benoit Carmichael (2009) explores the effect of co-skewness and co-kurtosis on assets pricing. He finds that the skewness market premium is proportional to the standard market risk premium of the CAPM. This result supports that standard market risk is the most important determinant of the cross-sectional variations of asset returns.
Likewise, Jennifer Conrad, Robert F. Dittmar, and Eric Ghysels (2008) try to explore the effect of the volatility and the higher moments on the security returns. Using a sample made of option prices for the period that goes from 1996 to 2005, the authors estimate the risk moments for individual securities. The results point to a strong relationship between the third and the fourth moments and the subsequent returns. Indeed, it's found that the skewness is negatively associated to the subsequent returns. This is equivalent to say that stocks with a relatively lower negative or positive skewness earn lower return. It is found also that the kurtosis is positively and significantly associated to the returns. They claim also, that these relationships are robust when controlling for some firms' characteristics.
Then, using a stochastic discount factor and controlling for the higher co-moments, Conrad et al. (2008) find that idiosyncratic kurtosis is significant for short maturities whereas idiosyncratic skewness has significant residual predictive power for subsequent returns across maturities.
In the Pakistani context, Attiya Y. Javid (2009) looks at the extension of the CAPM to a mean-variance-skewness and a mean-variance-skewness-kurtosis model. He works on daily as well as monthly return of individual stocks traded in the Karachi stock exchange for the period from 1993 to 2004. Allowing for the covariance co-skewness and co-kurtosis to vary overtime, the results indicate that both the unconditional and the conditional three-moment CAPM performs relatively well compared to the classical version and the four moment model. Nevertheless, the results show that the systematic covariance and the systematic co-skewness have only insignificant role in explaining return.
6. THE QUARREL ON THE CAPM AND ITS MODIFIED VERSIONS
The CAPM developed by Sharpe (1964) and John Lintner (1965), and Mossin (1965) gave the birth to assets' valuation theories. For a long time, this model had always been the theoretical base of the financial assets valuation, the estimate of the cost of capital, and the evaluation of portfolios' performance.
Being a theory, the CAPM found the welcome thanks to its circumspect elegance and its concept of good sense which supposes that the risk-averse investors would require a higher return to compensate for supporting higher risk. It seems that a more pragmatic approach carries out to conclude than there are enough limits resulting from the empirical tests of the CAPM. In fact, since the CAPM is based on simplifying assumptions, it will be completely normal that the deviation from these assumptions generates, ineluctably, imperfections.
The most austere critic that was addressed to the CAPM, is that advanced by Roll (1977). In fact, in his paper, the author declares that the theory is not testable unless the market portfolio includes all assets in the market with the adequate proportions. Then, he blames the use of the market portfolio as a proxy, since the proxy should be mean/variance efficient even though the true market portfolio could not be. He, afterwards, passes judgment on the studies of both Fama and MacBeth (1973) and Blume and Friend (1973) in the sense that they present evidence of insignificant nonlinear beta terms.
In fact, he sustains that without verifying to how extent the proxy of the market portfolio is closer to the reality, these evidences won't serve to any conclusion at all. He concludes, then, that the most practical hypothesis in this theory is that the market portfolio is ex-ante efficient. He asserts, subsequently, that verifying whether the market proxy is a good estimator may allow verifying the testable hypothesis of the model.
With reference to Roll (1977), the results of empirical tests are dependent on the index chosen as a proxy of the market portfolio. If this portfolio is efficient, then we conclude that the CAPM is valid. If not, we will conclude that the model is not valid. But these tests do not allow us to ascertain whether the true market portfolio is really efficient.
The tests of the CAPM are based, mainly, on three various implications of the relationship between the return and the market beta. Initially, the expected return on any asset is linearly connected to its beta, and no other variable will be able to contribute to the increase of the explanatory power of the model.
Then, the premium related to beta is positive which means that the expected return of the market exceeds that of the individual stocks, whose return is not correlated with that of the market. Lastly, in the Sharpe and Lintner model (1964, 1965), the stocks whose returns are not correlated with that of the market, have an expected return equal to the risk free rate and a risk premium equal to the difference between the market return and that of the risk free rate.
Furthermore, the CAPM is based on the simplifying assumption that all investors behave in the same way, but this is not easily feasible. Indeed, this model is based on anticipations and since the individuals do not announce their beliefs concerning the future, the tests of the CAPM can only lead to the assumption that the future can present either less or more the past. As a conclusion, tests of the CAPM can be only partially conclusive.
Tests of the CAPM find evidences that are conflicting with the assumptions. For instance, many researchers (Jensen, 1968; Black, Jensen, and Scholes, 1972 among others) have found that the relationship between the beta and the expected return is weaker than the CAPM predicts. It is not bizarre, also, to find that low beta stocks earn higher returns than the CAPM suggests.
Moreover, the CAPM is based on the risk reward principle which says that investors who bear higher risk are compensated for higher return. However, sometimes the investors support higher risk but require only lower returns. It is, particularly, the case of the horse ... and the casino gamblers.
Several other assumptions are delicate tackling the validity of the model. For example, the CAPM assumes that the market beta is unchanged overtime. Yet, in a dynamic world this assumption remains a discussed issue. Since, the market is not static it would be preferable for a goodness of fit to model what is missing in a static model. In addition to that, the model supposes that the variance is an adequate measure of risk. Nevertheless, in the reality other risk measures such as the semi-variance may reflect more properly the investors' preferences.
Furthermore, while the CAPM assumes that the return's distribution is normal, it is though often observed that the returns in equities, hedge funds, and other markets are not normally distributed. It is even demonstrated that higher moments such as the skewness and the kurtosis occur in the market more frequently than the normal distribution assumption would expect. Consequently, one can find oscillations (deviations compared to the average) more perpetually than the predictions of the CAPM.
The reaction to these critics is converted into several attempts aiming at the conception of a well built pricing model. The Jagannathan and Wang (1996) conditional CAPM, for one, is an extension of the standard model. The conditional CAPM differ from the static CAPM in some assumptions about the market's state. In this model, the market is supposed to be conditioned on some state variables. Hence, the market beta is time varying reflecting the dynamic of the market. But, while the conditional CAPM is a good attempt to replace the static model, it had its limitations as well.
Indeed, within the conditional version there are various unanswered questions. Questions are of the type; how many conditioning variables must be included? Can we consider all information with the same weight? Should high quality information be heavily weighted? How can investors choose between all information available on the market?
For the first question, there is no consensus on the number of state variables included. Ghysels (1998) has criticized the conditional asset pricing models due to the fact that the incorporation of the conditioning information may lead to a great problem related to parameter instability. The problem is further enhanced when the model is used out-of-sample in corporate finance applications.
Then, since investors do not have the same investment perspectives, the set of information available on the markets is not treated in the same way by all of them. In fact, information may be judged as relevant by an investor and redundant by another. So, the former will attribute a great importance to it and subsequently assign it a heavy weight. The latter, whereas, neglect this information since it doesn't affect the decision making process.
To my own knowledge, the failure of the conditional CAPM may possibly come from the ignorance of the information weight. The beta of the model must be conditioning on the state of variables with the adequate weights. i.e., the contribution of each information variable in the market's risk must be proportional to its importance and relevancy in the decision making process for a given investor.
Furthermore, even investors are not certain about which information must be included and which is not. Investors are usually doubtful about the quality of these information sources. Shall they refer to announcements and disclosures, analyst reports, observed returns and so on? This is remains questionable since even the set of information is not observed and that investors are uncertain about these parameters.
A further limit associated to the conditional CAPM, is that they are prone to the underconditioning bias documented by Hansen and Richard (1987) and Jagannathan and Wang (1996). This means that there is a lack of the information included. Shanken (1990), and Lettau and Ludvigson (2001) suggest in order to overcome this problem to make the loadings depend on the observable state variables. Nevertheless, the knowledge of the ‘'real'' state variables clearly requires an expert.
With the intention of avoiding the use of ‘'unreal'' state variables, Lewellen and Nagel (2006) divide the whole sample into non-overlapping small windows (months, quarters, half-years) and estimate directly from the short window regressions the time series of the conditional alphas or betas. They find weak evidence for the conditional CAPM over the unconditional one. Nevertheless, the method of Lewellen and Nagel (2006) can lead to biases in alphas and betas known as the ‘'overconditioning bias'' (Boguth, Carlson, Fisher, and Simutin (2008)). This bias may occur when using a conditional risk proxy not fully included in the information set such as the contemporaneous realized betas.
The third contribution in the field of asset pricing models was the higher order moments CAPM. This model introduces the preferences about higher moments of asset return distributions such as skewness and kurtosis. The empirical literature highlighted a large discrepancy over the moments included in the model. For instance, Christie-David and Chaudry (2001) employ the four-moment CAPM on future markets. The result of their study indicates that the systematic co-skewness and co-kurtosis explain the return cross section variation.
Jurczenko and Maillet (2002), Galagedera, Henry and Silvapulle (2002) make use of the Cubic Model to test for coskewness and cokurtosis. Hwang and Satchell (1999) study the co-skewness and the co-kurtosis in emerging markets. They show that co-kurtosis is more plausible than the co-skewness in explaining the emerging markets return. Y. Peter Chung Herb Johnson Michael J. Schill (2004) show that adding a set of systematic co-moments of order 3 through 10 reduces the explanatory power of the Fama-French factors to insignificance in roughly every case.
Through these inconsistencies, one may think that modeling the non linear distribution suffers from the lack of a standard model to capture for the high moments. This problem is getting worse when cumulated together with the necessity to specify a utility function which is a faltering block in the application of higher moments CAPM-versions until now. Because only the investors utility function can determine their preferences.
The last extension of the CAPM is that related to the downside risk. The downside CAPM defines the investors risk as the risk to go below a defined goal. When calculating the downside risk, only a part of the return distribution is used and only the observations which are below he mean are considered, i.e. only losses. Hence the downside beta can be largely biased.
Moreover, the semi-variance is only useful when the return distribution is asymmetric. However, when the return distribution for a given portfolio is normal, then the semi-variance is only half the portfolio's variance. Consequently, the risk measure may be biased since the portfolio is mean-variance efficient.
Furthermore, the downside risk is always defined with reference to a target return such as the mean or the median or in some cases the risk-free rate which is supposed to be constant for a given lap of time. Nevertheless, investors change their objectives and preferences from time to time which modify, consequently, the accepted level of risk over time. So, a well defined downside risk should perhaps include a developing learning process about investors' accepted rate of risk.
7. CONCLUSION
The dispute over the CAPM has been for as much to answer the following question: ‘'is the CAPM dead or alive?''. Tests of the CAPM find evidences that are in some cases supportive and in some others aggressive. For instance, while Blume and Friend (1973), Fama and Macbeth (1973) accept the model, Jensen (1968), Black, Jensen, and Scholes (1972), and Fama and French (1992) reject it.
The above question raises various issues in asset pricing models that academicians must struggle in order to claim to the validity of the model or before drawing any conclusion. Issues are like for example; to know whether the CAPM's relationship is still valid or not? Or does this relationship change when the context of the study changes? Do statistical methods affect the validity of the model? Does the study sample impinge on the model?
Also since many improvements are joined to the model, the questions may be turned into types like for example; does conditional beta improve the CAPM? Does co-skewness or co-kurtosis or the both improve the risk-reward relationship? Does the Downside risk contribute to the survival of the CAPM?
So, to conclude whether the CAPM is dead or not, one may find various difficulties since the evidence is very mixed. Unfortunately, through the narrative literature review we do not come out with a clear conclusion about whether our answer is yes or not. It seems that this tool is inadequate for our study since a strong debate needs to be solved. In fact, to answer the question of interest, several issues, remain doubtful in the literature review, must be taken for granted. First, it is impossible to compare studies that do not have the same quality. Quality is measured through for example the statistical methods, the sample size, the data frequency …etc
Second, in order to reach a clear conclusion we must not rely only on studies that defend our point of view and neglect the opposite view. Hence, to defend the model, it is advisable to gather all positive studies, and to reject it is recommendable to accumulate only negative studies.
Finally, if many versions need to be examined, how can we draw conclusion about the validity of the version since in the version itself the evidence is mitigated. For instance, how shall we know whether the conditional version improves the model while the conditional version its self is contested.
In a nutshell, in order to move away the fogs on the CAPM and to brush away the hole inherent to the narrative literature review we recommend the use of the meta-analysis technique which is an instrument that affords accurate and pertinent conclusions.
References
Jensen, M., 1968. The Performance of Mutual Funds in the Period 1945-1964. Journal of Finance 23 (2), 389-416.
Black, F., Jensen, M., Scholes, M., 1972. The Capital Asset Pricing Model: Some Empirical Tests in Studies in the Theory of Capital Markets. Michael C. Jensen, Ed. New York: Praeger, 79-121.
Fama, Eugene F., and James D., MacBeth. 1973. Risk, Return, and Equilibrium: Empirical Tests. Journal of Political Economy 71, 607-636.
Blume, M., and Irwin, F., 1973. A New Look at the Capital Asset Pricing Model. Journal of Finance 28 (1), 19-33.
Stambaugh, R., 1982. On the Exclusion of Assets from Tests of the Two-Parameter Model: A Sensitivity Analysis. Journal of Financial Economics 10 (3), 237-268.
Kothari. S, Shanken, J., and Richard G., 1995. Another look at The Cross Section of Expected Returns. Journal of Finance 50, 185-224.
Fama, Eugene, F., and French, Kenneth, R., 1992. The Cross-Section of Expected Stock Returns. Journal of Finance 47 (2), 427-465.
György, A., Mihály, O., and Balázs, S., 1999. Empirical Tests of Capital Asset Pricing Model (CAPM) In the Hungarian Capital Market. Periodica Polytechnica Ser. Soc. Man. Sci 7, 47-61.
Kothari, S., and Shanken, J., 1999. Beta and Book-to-Market: Is the Glass Half Full or Half Empty?, Working paper available at
Huck, K., Ahmadu, S., and Gupta, S., 1999. CAPM or APT? A Comparison of Two Asset Pricing Models for Malaysia. Malaysian Management Journal 3, 49-72.
Levent, A., Altay-Salih, A., and Aydogan, K., 2000. Cross Section of Expected Stock Returns in ISE. Working paper available at
Estrada, J., 2002. Systematic Risk in Emerging Markets: the D-CAPM. Emerging Markets Review 3, 365-379.
Andrew, A., and Joseph, C., 2003. CAPM over the Long-Run: 1926-2001. NBER Working Paper No. W11903
Fama, Eugene, F. and French, Kenneth, R., 2004. The Capital Asset Pricing Model: Theory and Evidence. The Journal of Economic Perspectives 18 (3), 25-46.
Thierry, P., and Pim Van V., 2004. Conditional Downside Risk and the CAPM. ERIM Report Series No. ERS-2004-048-F&A.
Blake, T., 2005. An Empirical Evaluation of the Capital Asset Pricing Model. Working paper available at
Galagedera, D., 2005. Relationship between downside beta and CAPM beta. Working paper available at.
Fama, Eugene, F. and French, Kenneth, R., 2005. The Value Premium and the CAPM. The Journal of Finance 61 (5), 2163-2185.
Erie, Febrian and Aldrin, Herwany., 2007. CAPM and APT Validation Test Before, During, and After Financial Crisis in Emerging Market: Evidence from Indonesia. The Second Singapore International Conference on Finance.
Ivo Welch., 2007. A Different Way to Estimate the Equity Premium (for CAPM and One-Factor Model Usage Only). Working paper available at
Peter, Christo ersen, Kris Jacobs and Gregory Vainberg, 2008. Forward Looking Betas. Working paper available at.
Michael Dempsey, 2008., The significance of beta for stock returns in Australian markets. Investment Management and Financial Innovations 5 (3), 51-60.
Simon, G., Koo and Ashley, Olson, XXXX., Capital Asset Pricing Model Revisited: Empirical Studies on Beta Risks and Return. Working paper available at.
Arduino, Cagnetti, 2002. Capital Asset Pricing Model and Arbitrage Pricing Theory in the Italian Stock Market: an Empirical Study. Business and Management Research Publications. Available at.
Pablo, Rogers and José Roberto Securato, 2007. Reward Beta Approach: A Review. Working paper available at.
Bornholt, G. N., 2007. Extending the capital asset pricing model: the reward beta approach. Journal of Accounting and Finance 47, 69-83.
Najet, Rhaiem, Saloua, A., and Anouar Ben.Mabrouk., 2007. Estimation of Capital Asset Pricing Model at Different Time Scales: Application to French Stock Market. The International Journal of Applied Economics and Finance 2, 79-87.
Cudi Tuncer Gürsoy and Gulnara Rejepova., 2007. Test of Capital Asset Pricing Model in Turkey. Dogus Üniversitesi Dergisi 8, 47-58.
Robert, R. Grauer, and Johannus A. Janmaat., 2009. On the power of cross-sectional and multivariate tests of the CAPM. Journal of Banking and Finance 33,775-787.
Ravi, Jagannathan, and Zhenyu, Wang., 1996., The Conditional CAPM and the Cross-Section of Expected Returns. The Journal of Finance 51, 3-53.
Jean-Jacques Lilti, Helene Rainelli Le Montagner, and Yannick Gouzerh (….), Capital humain et CAPM conditionnel: une comparaison internationale des rentabilités d'actions.'
Martin Lettau & Sydney Ludvigson, 2001. Resurrecting the (C) CAPM: A Cross-Sectional Test When Risk Premia Are Time-Varying. Journal of Political Economy, University of Chicago Press 109, 1238-1287.
William N. Goetzmann, Akiko Watanabe, and Masahiro Watanabe, 2007. Investor Expectations, Business Conditions, and the Pricing of Beta-Instability Risk'',
Attiya Y. Javid and Eatzaz Ahmad, 2008. The Conditional Capital Asset Pricing Model: Evidence from Karachi Stock Exchange'', PIDE Working Papers, Vol.48
Akiko Fujimoto, and Masahiro Watanabe, 2005. Value Risk in International Equity Markets.
Stefan Nagel, and Kenneth J. Singleton, 2009. Estimation and Evaluation of Conditional Asset Pricing Models.
Peng Huang, and James Hueng, ‘' Conditional Risk-Return Relationship in a Time-Varying Beta Model.
Antonis Demos and Sofia Pariss., 1998. Testing Asset Pricing Models: The Case of The Athens Stock Exchange. Multinational Finance Journal 2, 189-223.
Devraj Basu and Alexander Stremme; 2007. CAPM and Time-Varying Beta: The Cross-Section of Expected Returns.
Tobias Adrian and Francesco Franzoni, 2008. Learning about Beta: Time-Varying Factor Loadings, Expected Returns, and the Conditional CAPM.
More from UK Essays
- Dissertation Examples Index - Return to the Dissertations Index
- Example Finance Dissertations - More Finance Dissertation Examples
- Free Finance Essays - Finance Essays (submitted by students)
- Dissertation Help - Free help guides for writing your dissertation | http://www.ukessays.com/dissertations/finance/capm.php | CC-MAIN-2014-35 | refinedweb | 23,957 | 50.87 |
.
The:
The payment service is also certified by following the guidance of the Payment Card Industry (PCI) Security Standards Council.
The areas that are relevant when describing the integration to the Payment Service can be described by the following scenarios:.
Enabling the payment integration does require a couple of steps both inside and outside NAV:
For more information please look at the following resources:
-Rikke Lassen
The
This post shows how to include an external .NET assembly in your report layout when designing reports for RTC. The focus here is NOT how to build a .NET assembly, but how you can include such an assembly in your report design. But still, we will begin by creating our own .NET assembly for use in the layout.
To keep it simple, let's make a .NET assembly to just add up two numbers:
The whole project should look like this now:
using System;using System.Collections.Generic;using System.Linq;using System.Text;
namespace MyReportAssembly{ public class AddNumbers { public int AddFunction(int i, int j) { return (i + j); } }}
That's all the functionality we need - as mentioned we want to keep it simple!
But we need to make a few more changes to the project before we can use it:
In Project Properties, on the Signing tab, select "Sign the assembly", select a New key (give it any name), and untick password protection.
To allow this assembly to be called from the report layout you must set this property in Assemblyinfo.cs. So, in Assemblyinfo.cs, add these lines:
using System.Security;
[assembly: AllowPartiallyTrustedCallers]
Full details of why you need those two lines here: "Asserting Permissions in Custom Assemblies"
When I built this project using Visual Studio 2010, I was not able to install the assembly. And when trying to include it in my report layout I got this error: "MyReportAssembly.dll does not contain an assembly.". So if you will be using the installation instructions below, and you are using Visual Studio 2010, then change "Target Framework" to ".NET Framework 3.5" under Project Properties on the Application tab. This is the default target framework in Visual Studio 2008. Visual Studio 2010 defaults to version 4.0. I'm sure there are better ways to instlal this and still build it for .NET 4, but that's outside of the current scope. Also if it later complains about reference to Microsoft.CSharp, then just remove that from the project under references.
When this is done, build your project to create MyReportAssembly.dll
Again, the focus here is not on buildign and installing .NET assemblies, and I am no expert in that, so this is probably NOT the recommended way to install a .NET assembly, but it works just for the purpose of being able to see it in the report layout:
Start an elevated Visual Studio Command prompt, and go to the folder where your net MyReportAssembly.dll is (C:\Users\[UserName]\Documents\visual studio 2010\Projects\MyReportAssembly\MyReportAssembly\bin\Debug\). Then run this command:
gacutil /i MyReportAssembly.dll
It can be uninstalled again with this command:
gacutil /uf MyReportAssembly
After installing it, open this folder in Windows Explorer:
c:\Windows\Assembly
and check that you have the MyReportAssembly there. If not, then check if the section above about compiling it to .NET 3.5 applies to you.
Finally - How to use an exterenal .NET assembly in report layout
So now we finally come to the point of this post: How do you use your new assembly in the report layout:
Public Function Addnumbers(Num1 as Integer, Num2 as Integer) as Integer Dim MyAssembly as MyReportAssembly.AddNumbers MyAssembly = new MyReportAssembly.AddNumbers() Return MyAssembly.AddFunction(Num1,Num2)End Function
Then call this function nby adding this Expression to a TextBox:
=Code.Addnumbers(1,2)
And, finally, if you run the report like this, you would get the error "xyz, which is not a trusted assembly.". So back in the classic report design, in report properties, you just have to set the property EnableExternalAssemblies = Yes, and the report should run.
That was a lot of work for just adding up two numbers, but hopefully is shows what steps are needed to open up your report layout to endless opportunities. Note: I have no ideas if this will work with visual assemblies or anything which contains any UI at all. Any experiences with this, feel free to add to the end of this post.
As always, and especially this time since I'm in no way a c# developer:
These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.
Additional Information
If you plan to look further into this, then here are some recommended links:
"Using Custom Assemblies with Reports"
"Deploying a Custom Assembly"
If your assembly calls other services, it is likely you need to consider passing on the user's credentials. For more information on that, here is a good place to begin:
"Asserting Permissions in Custom Assemblies"
Lars Lohndorf-Larsen
Microsoft Dynamics UK
Microsoft Customer Service and Support (CSS) EMEA
When a new contact is created in Dynamics NAV, you may want to synchronize that contact to Outlook. Sometimes it could happen that during the next synchronization attempt, this specific contact seems to duplicate somehow. This blog describe how this could happen and what you could do to prevent this. This blog will also describe how the synchronization works in detail. A future blog will describe what to do when this situation has occurred.
If you create a new contact in Dynamics NAV and synchronize that contact to Outlook via a normal synchronization, then a new contact will be created in the dedicated Outlook Synchronization folder in Outlook. Dynamics NAV must know that this process finished successfully, so what happens next is that a Unique Identifier is sent back from Outlook to Dynamics NAV. This Unique Identifier is stored in table 5302.
When the contact is created in Outlook but the Unique Identifier is not sent back to Dynamics NAV, during the next synchronization attempt, a duplication could occur.
Most of the time, this happens when the user does not know the synchronization is running in the background.
E.g.:
NOTE: using the “Schedule automatic synchronization every” in general is a bad idea because with a scheduled synchronization and with the current Outlook Synchronization solution, the progress bar and summary window will not be shown to the Outlook Synchronization user in Outlook!
With Office 2010 it is very easy to close Outlook –even when the synchronization is running! If the Unique Identifier is not sent back to Dynamics NAV, a previous attempt to synchronize new items to Outlook and the other way around will duplicate the synchronized data in Dynamics NAV or Outlook! The same scenario applies if the Outlook Synchronization User uses a laptop and closes the lid of the laptop (when he does not know the synchronization is running). Of course, this scenario could also happen if Outlook suddenly crashes; e.g. during a power failure, etc.
There are many reason why a duplication could occur, but in general an Outlook Synchronization User should know that the Outlook Add-In is running in the background and therefore, we now do not recommend to disable the “Show synchronization progress” option and the “Show synchronization summary” option. Enabling these options again would prevent the most common cause why a duplication could occur.
Regards,
Marco Mels CSS EMEA
This posting is provided "AS IS" with no warranties, and confers no rights ...
When moving to the RoleTailored client some people have experienced difficulties with grouping in RDLC reports in cases where the requirements are just a bit more complex than a basic grouping functionality.
I must admit that at the first glance it does not seem simple, but after a little research it does not look dramatically complex either. That motivated me to write this blog post and share my findings with all of you NAV report developers.
So let's take a look at the task we are trying to accomplish.
I have a master table, which contains a list of sales people - SalesPerson. The sales people sell software partially as on-premises software and partially as a subscription. There are two tables, which contain data for these two types of sale: OnPremisesSale and SubscriptionSale.
The example is artificial and is meant only to show different tricks on how to do grouping. The picture below shows the data for this report:
For each sales person I need to output both sales of on-premises software and subscription sales and show the total sales. Something that looks like the following:
Now we have all required information, let's start solving the task.
1. First, I create an NAV report. Add all necessary data items, establish data item links for the proper joining of the data, and place all needed fields on the section designer in order to get them inside RDLC report.See the picture below.
2. Next, I go to RDLC designer in Visual Studio. First I pick a list control, put the SalesPerson_Name field in the list, and set the grouping based on the SalesPerson_SalesPersonId field.3. After that, I place a row with column captions on top of the list.
Design in Visual Studio as shown below.
4. Now I need to place two tables inside the list- one for the on-premises software and one for the subscriptions. A list can display detail rows or a single grouping level, but not both. We can work that around this limitation by adding a nested data region. Here I place a rectangle inside the list and place two tables inside this rectangle, one for On-Premises and one for Subscriptions. In each table, I add header text and add the CustomerName and Amount fields.
5. I also add two text boxes for the sum of amount - one inside the rectangle to show total sales for the sales person and one outside to show the overall amount. Both contain the same formula: =SUM(Fields!SubscriptionSale_Amount.Value) + SUM(Fields!OnPremisesSale_Amount.Value)
The picture below shows the result of this design:
6. It looks more or less correct, but there are uneven strange empty spaces between rows. In order to detect the root cause of this problem let's add visible borders to our tables. Change the BorderStyle property to Solid for one of the tables and to Dashed for another.
So the result will look like this:
Not.
You can use links on your task pages to guide users to additional information. The links can be the URL addresses of web sites or links to documents on a computer.
In the following example, you can see how to add a link to a customer card task page. The link is then viewable both from the customer card and the customer list..
5. Fill in the Description field with information about the link.
6. Click Save.
7. In Links, click on the link in the Link Address field. The appropriate program, such as Microsoft Word or Microsoft Internet Explorer, opens and displays the link target..
Most.
With).
To summarize here is a list of tips to consider when defining creation functions:
These and some other patterns have also been used for the implementation of the creation functions included in the Application Test Toolset. | http://blogs.msdn.com/b/nav/archive/2010/11.aspx?PostSortBy=MostComments&PageIndex=1 | CC-MAIN-2015-11 | refinedweb | 1,899 | 62.17 |
Component Libraries with Stencil.js - Getting Started
John Woodruff
Updated on
・4 min read
Component Libraries with Stencil.js (5 Part Series)
This is the second in a series of posts about creating a web component library using Stencil.js - Check out the first post
Now that we've talked about the reasoning for choosing Stencil to build our web component library, let's jump right in. First things first, we need to get our base project structure set up. Thankfully, the Ionic team has handled this completely for us. Make sure you're running
npm at version 6 or later, and run the following command:
$ npm init stencil
You should get a prompt similar to the following:
? Pick a starter › - Use arrow-keys. Return to submit. ionic-pwa Everything you need to build fast, production ready PWAs app Minimal starter for building a Stencil app or website ❯ component Collection of web components that can be used anywhere
The Ionic team has provided a few starters for us. We're not interested in building an entire app or PWA using Stencil (although you definitely can!) so we're going to choose the
component option. This will structure it so we can build a reusable component library for distribution.
You'll be asked to name your collection, so go ahead and name that however you'd like. The component library I'll be building is one I'm calling
mountain-ui, because I live in Utah among the beautiful Wasatch mountain range.
Once the starter is created, you can
cd into your newly created directory and run
npm start. That'll open the project in your browser with a basic web component they put there as a start. We'll jump into writing a component in a minute, but let's first go over the project structure and figure out where things go.
Project Structure
You'll notice a few directories. The main one you'll want to concern yourself with is
src. Other important directories to note are
www, which is where your compiled components go when you're developing, and
dist, which is where your actual distribution is after running a production build.
There are a few other important files to look at before we get to the components. If you've written TypeScript before you'll recognize the
tsconfig.json which defines our TypeScript compiler options. The other important file is
stencil.config.ts. This file defines your Stencil build and its various options. We'll change a couple of things there later on in this post. Finally there's
src/components.d.ts which is a file you won't ever modify yourself. It's a file generated by stencil at build time that keeps all your library's TypeScript definitions up to date.
Components
The
src/components directory is where you'll be spending most of your time. We're going to be keeping each component in its own separate folder. You'll notice a
my-component directory in there by default, with three files inside. The main file is
my-component.tsx. This is the TSX file that keeps the component class, its render method, and other methods and props associated with your component. There is also a corresponding
my-component.spec.ts file that is used for testing your component. Finally there is
my-component.css, which is where your styles live. By default you have CSS files, but you can use CSS preprocessors such as SASS.
Configuring our Library
There are a few things we'll want to do right at the beginning before moving on to building components. Let's start in the
stencil.config.ts file. The first thing we'll change is the
namespace value. By default it is set to
mycomponent, but we're going to change this to the name of our library. In my case, I'll be changing the value to
mountain-ui. Because we changed the namespace, let's also make sure that name is reflected correctly in the
src/index.html file. Make sure you have the following script tags at the top of your file:
<script type="module" src="/build/mountain-ui.esm.js"></script> <script nomodule</script>
The next thing to do is set up a CSS preprocessor. This is not required, many people will prefer to use plain CSS and that's great. The available plugins for Stencil are listed on the Plugins page. Personally I love using SASS, so I'm going to be setting that up for my project.
First we'll need to install the appropriate plugin. In my case I'll be running the following command (note that I'm using yarn, but you're welcome to use whatever you prefer):
$ yarn add --dev @stencil/sass
Once the plugin is installed, I'll be importing it in the config file, and passing it in to the
plugins array in the config object. See the full file below:
import { Config } from '@stencil/core'; import { sass } from '@stencil/sass'; export const config: Config = { namespace: 'mountain-ui', outputTargets: [ { type: 'dist', esmLoaderPath: '../loader' }, { type: 'docs-readme' }, { type: 'www', serviceWorker: null // disable service workers } ], plugins: [ sass() ] };
That's all you need to do to get your project set up for SASS. If you want to use one of the other preprocessors, simply install the appropriate plugin and follow the same steps above. The last thing I need to do is change
my-component.css to
my-component.scss, I'll change the
styleUrl in
my-component.tsx, and any new components I create from now on will have an
scss file instead of a
css file.
Next Steps
Now that we've fully configured our project, we're free to build our components without worrying about configuration or build. In the next post we'll start to go into a detailed build of our first component!
Simply want to see the end result repo? Check it out here
Component Libraries with Stencil.js (5 Part Series)
(open source and free forever ❤️) | https://dev.to/johnbwoodruff/component-libraries-with-stenciljs---getting-started-4jej | CC-MAIN-2020-10 | refinedweb | 1,009 | 65.52 |
Recently I got an opportunity to work as technical reviewer of the programming guide wrt vSphere with Kubernetes Configuration and Management (pdf here). As part of that, I had contributed few key Java API samples around vSphere Supervisor cluster into open source vSphere Automation Java SDK. I thought it is good to brief you about the same and also share key getting started tips around vSphere Automation Java SDK. If you are new to vSphere with Tanzu (aka vSphere with K8s) capability, I suggest you to go through below articles.
1. Introduction to vSphere Supervisor Cluster REST APIs
2. Python scripts to configure Supervisor cluster & create namespaces
Basically there are 4 Java API samples. If you are just looking for samples, just refer below links.
- Enable Supervisor cluster,
- Create Supervisor namespace and
- upgrading Supervisor cluster to next available kubernetes version.
- Disabling Supervisor cluster
Getting started tips : Enabling Supervisor Cluster with vSphere Automation java SDK
- It is assumed that you already have a base environment with NSX-T stack as specified here . Note that there is vSphere with Tanzu through vSphere network stack introduced with vSphere 70U1 as well. It is just that the sample I had contributed around enabling workload management feature (i.e. Configuring your vSphere cluster as Supervisor cluster) is applicable to NSX-T stack. I will contribute vSphere network stack based sample as well.
- Before you start, please understand this general REST API doc around Supervisor cluster enable operation. I love using this brand new developer portal. As a beginner, I would just spend around few minutes. It is fine if you did not understand few things, just move on.
- Now it is time to build your vSphere automation Java SDK environment in Eclipse. Steps for building it are explained here
- Once your eclipse environment is ready and added all samples into your eclipse project, you would also get all Supervisor cluster related samples I contributed into your eclipse project under “namespace_management” package. You can optionally spend few minutes, its fine if you could not, lets move on.
- In order to automate anything using vSphere automation Java SDK , it is important that you understand Java specific vSphere Automation doc. Note that this doc is different from the one mentioned in step 2 above. Again just spend few minutes understanding how it is organized and move on. As you spend more time, you will be comfortable referring it.
- Once you got fair idea around java specific doc as mentioned in step 5, now you can start looking at Java specific API doc for enabling workload platform i.e. Enabling vSphere cluster as Supervisor cluster. This is the doc I used to write this sample, just spend 1-2 min on this and move on.
- While you are going through doc in step 6, start co-relating it in parallel with actual java code sample around the same. Co-relating with doc and code will make your understanding better.
- Please closely look at what params are passed and EnableSpec is built.
Creating Supervisor namespace
- Most of the steps mentioned above applies here also. You can simply move to next step and take a look at this sample directly.
- This is the sample you need to understand for creating Supervisor namespace. Good thing is that same sample applies to creating namespace with either NSX-T or vSphere networking stack. With vSphere networking stack, there is one new param introduced i.e. networks on which you want to create this Supervisor namespace. Here is the Java Specific documentation around this API
Upgrading Supervisor cluster to next available kubernetes version.
- Upgrading Supervisor cluster deserves one separate post, which I will do as I get time but meanwhile you can learn more about it here
- Java sample for the same is here.
- Good thing is that same sample applies to upgrading Supervisor cluster with either NSX-T or vSphere networking stack.
- Note that we have REST API for upgrading multiple clusters in single API call as well. Refer doc here (upgradeMultiple method)
Disabling Supervisor cluster.
- Disabling supervisor cluster does lot more than usual disabling as it removes/deletes vSphere pods, Guest clusters/TKG also. Basically it makes cluster in a state that was just before enabling it. Hence be careful before calling this API.
- Java sample for the same is here
- Good thing is that same sample applies to disabling/removing Supervisor cluster with either NSX-T or vSphere networking stack.
This is all I have in this post. I hope it was useful post. If Java SDK is not your thing, you can simply automate Supervisor cluster ops using either python as I did or use official vSphere Automation python SDK or go for DCLI also. In addition, it is important to note that VMware has decided to deprecate vSphere Automation .NET and Perl SDK (VMware KB) to more focus on Java, python and Go SDKs.
>. | https://vthinkbeyondvm.com/tag/vsphere-supervisor-cluster-using-api/ | CC-MAIN-2022-27 | refinedweb | 814 | 63.7 |
On Thu, 13 Jun 2002, Stephen Colebourne wrote:
> What you describe does indeed compile, but isn't the proposal. The following
> demonstrates (I hope) why they need to be package scoped.
>
> public class CollectionUtils {
> public static Collection predicatedCollection(Collection coll) {
> return new PredicatedCollection(coll);
> }
> static class PredicatedCollection {
> }
> }
> public class ListUtils {
> public static List predicatedList(List list) {
> return new PredicatedList(list);
> }
> static class PredicatedList extends PredicatedCollection {
> }
> }
got it... I thought I was missing something. :) didn't realize you were
referring to cross-xxxUtils inheritance. Yes, that would need to be
package private. I'm not sure that is such a big deal though.
> > I haven't fully digested this thread yet, but I'm inclined to agree with
> > you.
>
> It is quite a lot to digest, but given the number of ideas that are floating
> about, we do seem to need it. There could be some work for the committers to
> manage the patches etc. for any renaming however!
yup... believe me, I know... I'm still trying to manage the patches
Paul sent in for the new testing framework... :)
michael
--
To unsubscribe, e-mail: <mailto:commons-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:commons-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/commons-dev/200206.mbox/%3CPine.LNX.4.44.0206131705480.15051-100000@champion.sslsecure.com%3E | CC-MAIN-2015-35 | refinedweb | 203 | 50.94 |
What's New? Intel® Threading Building Blocks 4.4
By Kirill R. (Intel), Updated).
This example shows the use of composite_node to encapsulate two flow graph nodes (a join_node and a function_node). In the example the concept that the sum of the first n positive odd numbers is the same as n squared is demonstrated.
A class adder is defined. This class has a join_node j with two input ports and a function_node f. j receives a number at each of its input ports and sends a tuple of these numbers to f which adds the numbers. To encapsulate these two nodes, the adder inherits from a composite_node type with two input ports and one output port to match the two input ports of j and the one output port of f.
A split_node s is created to serve as the source of the positive odd numbers. The first four positive odd numbers 1, 3, 5 and 7 are used. Three adders a0, a1 and a2 are created. The first adder a0 receives 1 and 3 from the split_node. These are added and the sum forwarded to a1. The second adder a1 receives the sum of 1 and 3 on one input port and receives 5 on the other input port from the split_node. These are also added and the sum forwarded to a2. Likewise, the third adder a2 receives the sum of 1, 3 and 5 on one input port and receives 7 on the other input port from the split_node. Each adder reports the sum it computes which is the square of the count of numbers accumulated when that adder is reached in the graph.
#include "tbb/flow_graph.h" #include <iostream> #include <tuple> using namespace tbb::flow; class adder : public composite_node< tuple< int, int >, tuple< int > > { join_node< tuple< int, int >, queueing > j; function_node< tuple< int, int >, int > f; typedef composite_node< tuple< int, int >, tuple< int > > base_type; struct f_body { int operator()( const tuple< int, int > &t ) { int n = (get<1>(t)+1)/2; int sum = get<0>(t) + get<1>(t); std::cout << "Sum of the first " << n <<" positive odd numbers is " << n <<" squared: " << sum << std::endl; return sum; } }; public: adder( graph &g) : base_type(g), j(g), f(g, unlimited, f_body() ) { make_edge( j, f ); base_type::input_ports_type input_tuple(input_port<0>(j), input_port<1>(j)); base_type::output_ports_type output_tuple(f); base_type::set_external_ports(input_tuple, output_tuple); } }; int main() { graph g; split_node< tuple<int, int, int, int> > s(g); adder a0(g); adder a1(g); adder a2(g); make_edge(output_port<0>(s), input_port<0>(a0)); make_edge(output_port<1>(s), input_port<1>(a0)); make_edge(output_port<0>(a0),input_port<0>(a1)); make_edge(output_port<2>(s), input_port<1>(a1)); make_edge(output_port<0>(a1), input_port<0>(a2)); make_edge(output_port<3>(s), input_port<1>(a2)); s.try_put(std::make_tuple(1,3,5,7)); g.wait_for_all(); return 0; }:
· Removal of all edges of a graph (using reset(rf_clear_edges)).
· Reset of all function bodies of a graph (using reset(rf_reset_bodies)).
Additionally, the following operations with a flow graph node are available as preview functionality:
· Extraction of an individual node from a flow graph (preview feature).
· Retrieval of the number of predecessors and successors of a node (preview feature).
·. | https://software.intel.com/en-us/articles/whats-new-intel-threading-building-blocks-44 | CC-MAIN-2017-34 | refinedweb | 526 | 50.67 |
This is a somewhat large project, combining two ‘experiments’ I wanted to try: providing environmental monitoring with ESP8266-based sensors and using NET-SNMP’s extend facility to interface external data to SNMP.
Long ago, I managed a large international network with hundreds of routers. SNMP was used heavily to monitor many aspects of the network. Then I ended up managing a data center. The building monitoring systems were a hodge-podge so I figured out how to convert each system to an SNMP-based monitoring system. That allowed all building systems to be monitored from the same SNMP console that I knew exhaustively.
I would have liked to have placed temperature sensors in every rack in the data center, but it was cost prohibitive. The devices we were using at the time were hundreds of dollars each so there weren’t many of them (a quick check shows the cheapest current models to cost $200). So I have long wanted to come up with a system where I could place several DS18B20 temp sensors in each rack and tie perhaps a few racks together with an MCU and a network connections.
That desire is the basis for this project. Here is a diagram of the concept I’m implementing in this post.
While the broad idea would be to support multiple ESP8266’s with multiple temperature sensors on each ESP8266, for this experiment, I will implement one ESP8266 with one temperature sensor.
The flow of data is as follows:
- ESP8266 reads the DS18B20 temperature sensor every 10 seconds
- That temperature is transmitted, along with the ESP8266’s MAC address to the Raspberry Pi
- The Raspberry Pi receives the temperature update and writes it to a file. The file is named after the MAC address (allowing for multiple ESP8266’s).
- The SNMP server will then use the contents of that file if an SNMP request is made for the temperature.
Resources
Here are some of the resources I used when creating this project.
The ESP8266 code is based on the accumulation of projects I’ve done so far
Extending SNMP is described here
-
-
-
Install SNMP on the Raspberry Pi
For the Raspberry Pi to act as a SNMP server between the ESP8266 and the SNMP console, the SNMP service must be installed on the Raspberry Pi. I covered this some time ago here:
Installing SNMP onto a Raspberry Pi
After snmpd is installed and running, snmpwalk should work much like this:
rpi/snmp:snmpwalk -v 1 -c public localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux rpi 3.18.7+ #755 PREEMPT Thu Feb 12 17:14:31 GMT 2015 armv6l SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (852799) 2:22:07.99 SNMPv2-MIB::sysContact.0 = STRING: Me <me@example.org> SNMPv2-MIB::sysName.0 = STRING: rpi SNMPv2-MIB::sysLocation.0 = STRING: Sitting on the Dock of the Bay SNMPv2-MIB::sysServices.0 = INTEGER: 72
Testing Extended SNMP
The next step is to make sure Extended SNMP is working. The snmpd.conf file that was installed onto my RPI already has some test Extended SNMP calls:!
snmpwalk should give you 2 test OID (test1 and test2) and both will have the value ‘Hello World’.
If not, edit your snmpd.conf file and add these two lines:
extend test1 /bin/echo Hello, world! extend-sh test2 echo Hello, world! ; echo Hi there ; exit 35
Restart the snmpd service:
service snmpd restart
and the snmpwalk above should properly return the ‘hello world’ lines. If not, you need to troubleshoot until you resolve the problem, as the succeeding steps require extended SNMP.
Write ESP8266/nodeMCU Lua Code to Transmit the Temperature
This part of the project is based fairly closely on my prior blog:
ESP8266 and DS18B20: Transmitting Temperature Data
Every 10 seconds, I will read the temperature from the DS18B20, then transmit that and the ESP8266’s MAC address via UDP (port 9999) to the RPI.
Note the temperature transmitted is in Celsius * 10000 to get rid of the decimal point. The Raspberry Pi can handle floating point and will convert it to floating point Fahrenheit.
The ESP8266 program consists of the file getTemp.lua and init.lua.
getTemp.lua
function getTemp() local addr = nil local count = 0 local data = nil local pin = 4 -- pin connected to DS18B20 local s = '' -- setup gpio pin for oneWire access ow.setup(pin) -- do search until addr is returned repeat count = count + 1 addr = ow.reset_search(pin) addr = ow.search(pin) tmr.wdclr() until((addr ~= nil) or (count > 100)) -- if addr was never returned, abort if (addr == nil) then print('DS18B20 not found') return -999999 end -- validate addr checksum crc = ow.crc8(string.sub(addr,1,7)) if (crc ~= addr:byte(8)) then print('DS18B20 Addr CRC failed'); return -999999 end if not((addr:byte(1) == 0x10) or (addr:byte(1) == 0x28)) then print('DS18B20 not found') return -999999 end ow.reset(pin) -- reset onewire interface ow.select(pin, addr) -- select DS18B20 ow.write(pin, 0x44, 1) -- store temp in scratchpad tmr.delay(1000000) -- wait 1 sec present = ow.reset(pin) -- returns 1 if dev present if present ~= 1 then print('DS18B20 not present') return -999999 end ow.select(pin, addr) -- select DS18B20 again ow.write(pin,0xBE,1) -- read scratchpad -- rx data from DS18B20 data = nil data = string.char(ow.read(pin)) for i = 1, 8 do data = data .. string.char(ow.read(pin)) end -- validate data checksum crc = ow.crc8(string.sub(data,1,8)) if (crc ~= data:byte(9)) then print('DS18B20 data CRC failed') return -9999 end -- compute and return temp as 99V9999 (V is implied decimal-a little COBOL there) return (data:byte(1) + data:byte(2) * 256) * 625 end -- getTemp function xmitTemp() local temp = 0 temp = getTemp() if temp == -999999 then return end cu:send(wifi.sta.getmac() .. ':' .. tostring(temp)) end -- xmitTemp function initUDP() -- setup UDP port cu=net.createConnection(net.UDP) cu:connect(9999,"192.8.50.106") end -- initUDP function initWIFI() print("Setting up WIFI...") wifi.setmode(wifi.STATION) wifi.sta.config("SSID","PASSWORD") wifi.sta.connect() tmr.alarm(1, 1000, 1, function() if wifi.sta.getip()== nil then print("IP unavailable, Waiting...") else tmr.stop(1) print("Config done, IP is "..wifi.sta.getip()) end end -- function ) end -- initWIFI initWIFI() initUDP() tmr.alarm(0, 5000, 1, xmitTemp)
init.lua
function startup() if abort == true then print('startup aborted') return end print('Starting xmitTemp') dofile('xmitTemp.lua') end abort = false print('Startup in 5 seconds') tmr.alarm(0,5000,0,startup)
Once the code is installed onto the ESP8266, I run wireshark on the Raspberry Pi to verify I’m getting UDP packets to port 9999 AND that the data within the packet contains the MAC address and a reasonable temperature in Celsius:
Receiving the Data on the Raspberry Pi
If you don’t already have Lua setup on your Raspberry Pi, here are instructions:
Installing LUA on Raspberry Pi and Getting it Running
Again, I’m going to use another post as the basis for this code:
ESP8266 UDP to/from Raspberry Pi running LUA
I am going to modify that program slightly to receive the data packet from the ESP8266, split the MAC address from the temperature, convert the temperature to fahrenheit, and finally write the temperature to a file named after the MAC address.
Here is the code:
#!/usr/bin/lua -- Setup UDP socket. Bind to localhost, port 9999. local socket = require "socket" local udp = socket.udp() udp:settimeout(0) -- indicates not to wait. If no data, return immediately udp:setsockname('*', 9999) local/tmp/'..mac) elseif msg_or_ip ~= 'timeout' then error("Unknown network error: "..tostring(msg)) end socket.sleep(0.01) -- sleep .01 secs end
When you run this program on the RPI, it should see the data from the ESP8266 and display the current temp:
and if you look in the /tmp dir, you should see a file being updated:
rpi/tmp:cd /tmp rpi/tmp:ll total 376K -rw-r--r-- 1 danh danh 7 Apr 28 17:21 18-FE-34-A0-52-62 -rw------- 1 danh danh 0 Apr 28 13:39 hist8497 -rw------- 1 danh danh 0 Apr 28 13:41 hist8523 -rw------- 1 root root 0 Apr 28 14:21 hist8591 drwx------ 2 danh danh 4.0K Apr 27 14:42 ssh-JFZoo8p8BQDL/ -rw------- 1 danh danh 263K Apr 28 17:21 wireshark_eth0_20150428133554_hwJwTi -rw-r--r-- 1 danh danh 97K Apr 27 17:12 xx.txt rpi/tmp:cat 18-FE-34-A0-52-62 86.375 rpi/tmp:
Modifying SNMP to Read the File Data
Now that the temperature is being recorded properly into a file, we merely need to get SNMP to recognize this data. This is really quite easy to do.
Edit the /etc/snmp/snmpd.conf file and add the following line (I am using the file name based on my ESP8266’s MAC address. You will need to change that to your own MAC address):
extend-sh tempSensor01 cat /tmp/18-FE-34-A0-52-62; exit 35
The name of this OID will be ‘tempSensor01’. When that OID is retrieved, it will execute the shell command ‘cat /tmp/18-FE-34-A0-52-62; exit 35‘. The output of that command is sent back to the SNMP console.
Once you are done editing snmpd.conf, save it and restart snmpd:
service snmpd restart
Now do this snmpwalk command:! NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."tempSensor01" = STRING: 85.125
or to get just the tempSensor01 OID:
snmpget -v 1 -c public localhost 'NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."tempSensor01"' NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."tempSensor01" = STRING: 84.75
Conclusion
Putting together these various tools works quite well and the goal to allow an SNMP console the ability to read ESP8266-based is (very) roughly achieved. This is a long ways from a usable project, but I at least proved the concept to myself.
The one glaring issue that must be addressed to make this usable is the fact that the temperature is being returned as a string and not an integer. That makes it hard to apply tests and set alarms (for example, if the temp were > 100 I might want to set an alarm on the management console).
From what I’ve seen of the Net-SNMP EXTEND facility so far, it appears this is doable, but would require writing an actual MIB. Not hard, but beyond the scope of this little ‘experiment’.
Very good site.
I would like to read more than 5 sensors DS18b20 and send data to my mobile phone.
How could I modify Your program ?
Any help please ?
Thanks
Ambrogio
It should be pretty easy to find sites discussing using multiple DS18B20s such as
As far as being able to see it from a mobile phone, that is a major divergence from what I did here. If I were trying to do the same thing, I would either have the ESP8266 serve up a web page, or if it doesn’t have enough resources, have the Raspberry Pi do so. Then you just go to that device’s website using your phone. SNMP is great for data centers. Not so useful for individuals 🙂
Hello,
Nice job. I am doing the same project as yours.
The problem is my linux distortion is Openwrt for some reasons! And it don’t have net-snmp module in standard repository (instead it have snmpd)
the question here is
1. Is snmpd the same as net-snmp?
2. I did not get how you find OID. Can you describe more please
3. I heard we should make a MIB file for our device(sensor for example) in SNMP communication, but you did not. Is it correct?!
Thanks you.
1. The daemon, snmpd, is just part of the entire snmp package. Just looking around briefly it looks like it may be called snmp-static for openwrt. See.
2. If you want the OID for a specific datum, the easiest way to find it is to walk the entire mib table looking for what you want. google ‘walking mib’. Here is one example
3. I did not create my own MIB because I used the ‘extend’ option of net-snmp (look at). I’ve written a couple of MIBs, and while not overly difficult, it isn’t simple either – especially the first time. The extended MIB lets you get to device data without having to write your own MIB. For quick and dirty it’s a great option. For production work, or stuff you want to share with others (or if an instructor requires it), then write your own MIB.
Thanks Dan, it really works for me.
But as I see, you return temp value via snmp and the data type is string (look at this in your result ==> “tempSensor01” = STRING: 85.125)
the question is do all network management station (NMS) applications like cacti or nagios can understand numbers in string. I want to draw some chart in time, by sensor data in these application.
I tried coding in shell and c and … all of them return values in “string” or by some trick in “int” data type.
do you think we can some how return value in float data type?
Thanks again
Hi Javadgo,
You are correct, NMS systems are not going to be happy with a number embedded in a string if you intend to do any computations.
net-snmp-extend supports integers as discussed here:
I cannot ever recall seeing floating point in a MIB. I believe you end up having to use a smaller unit. So, perhaps, instead of using degrees F, you use tenth of degrees F. Here is a blurb about floating point in MIBs:
Hi Dan
Long time no see! Almost a year since the last time lol
Happy New Years to you.
I have one more question about SNMP.
Does the solution you have described (I mean extend snmp) work for snmp trap too?
or in other words, can we get trap from an executable file that is defined as extend snmp?
(again there is no MiB)
Thanks
I don’t know how to do this. I’m fairly certain the extend extension won’t support traps.
There is an SNMP protocol called AGENTX that might be able to, but it isn’t clear to me how.
If you are to the level of complexity of wanting to support traps, you may be better off writing and compiling a MIB for the traps. I believe you could then use snmptraps utility to transmit the traps to the net management station. | https://bigdanzblog.wordpress.com/2015/04/29/snmp-environmental-monitoring-using-esp8266-based-sensors/comment-page-1/ | CC-MAIN-2017-34 | refinedweb | 2,439 | 64.1 |
First Steps in Sencha Touch
Want to develop mobile applications across all platforms? Want to develop HTML5 mobile applications? Don’t know where to start?
This article aims to help you begin your journey with the Sencha Touch HTML5 framework.
What is Sencha Touch?
Sencha Touch is an HTML5 framework for developing mobile applications. It allows you to develop mobile applications that would have the same look and feel as a native application. Sencha Touch supports Android, iOS, Windows Phone, Microsoft Surface Pro and RT, and Blackberry devices.
Features
- UI components (Panels, Tab bar, Navigation view, buttons, pickers)
- Components can be themed depending on the target devices
- Access device capabilities like camera, accelerometer etc, with the help of PhoneGap frameworks.
How to Start
Download the free Sencha Touch SDK and Sencha Cmd from the Sencha website. Note that Sencha Cmd will also install Ant, Ruby, Sass, and Compass, all or some of which will be useful for building applications.
You will also need a web server running locally on your computer, for example XAMPP.
The Sencha website advises “If you are running the IIS web server on Windows, manually add
application/x-json as a MIME Type for Sencha Touch to work properly. For information on adding this MIME type see the following link:“.
Installation
Extract the SDK zip file to your projects directory. This folder should be accessible to your HTTP server, so you can navigate to in your browser and see the Sencha Touch documentation.
Run the Sencha Cmd installer. The installer adds the Sencha command line tool to your path, enabling you to generate a fresh application template, among other things.
Confirm that Sencha Cmd is correctly installed by changing to the Sencha Touch directory, and entering a
sencha command, for example:
$ cd ~/webroot/sencha-touch-2.n/ $ sencha Sencha Cmd v3.1.n ...
Note When using the
sencha command, you must be inside either the downloaded SDK directory or a generated Touch app. For further details see the Sencha Cmd documentation.
Your development and testing environment should now be ready.
Sencha Touch Project
- Index.html – page where your application will be hosted from.
- App Directory – the application in general is a collection of Models,Views, Controllers, Stores and Profiles
- Model: represents the type of data that should be used/stored in the application
- View: displays data to the user with the help of inbuilt Sencha UI components/custom components
- Controller: handles UI Interactions and the interaction between the model and the view.
- Store: responsible for loading data into the application
- Profile: helps in customizing the UI for various phone and tablets.
- Resources Directory – contains images, css and other media assets
- App.js
- Global settings of the application
- Contains the app name, references to all the models, views, controllers, profiles and stores
- Contains the app launch function that is called after the models, views, controllers, profiles and stores are loaded. App launch function is the starting point of the application wherein the first view gets instantiated and loaded.
- Touch directory – Contains the Sencha Touch framework files.
Try a Sample
Let us create a simple navigation view with a list in it.
Home.js
Ext.define('MyFirstApp.view.Home',{ extend:'Ext.NavigationView', xtype:'Home', config:{ items:[] } });
Let’s break that down.
The
Ext.define function helps us to define a class named
Home in the namespace
MyFirstApp/view. All view components are placed within this namespace as per Sencha Touch MVC standards.
Extend keyword specifies that
Home class is the subclass of
Ext.NavigationView. So, the
Home class inherits the base configuration and implementation of
Ext.NavigationView class.
xtype keyword is used to instantiate the class.
Config keyword helps us to initialize the variables/components used in that particular class. In this example we should initialize the navigation view with a list.
The content of
Items in the Home view is currently blank. Let us create a list view and place its reference inside the items field of the Home view.
First of all, present this home view in the app launch function
App.js
launch: function() { // Initialize the main view Ext.Viewport.add([{xtype:'Home'}]); },
Now, Let us create a simple model class for the data in the list.
MyModel.js
Ext.define('MyFirstApp.model.MyModel',{ extend:'Ext.data.Model', config:{ fields:['name'] } });
Let us create a data store and map it to the above model.
MyStore.js
Ext.define('MyFirstApp.store.MyStore',{ extend:'Ext.data.Store', config:{ model:'MyFirstApp.model.MyModel', autoLoad:true, data:[ {name:'t1'}, {name:'t2'} ], proxy:{ type:'localstorage' } } });
We have initialized the store with the list data as well.
Proxy is used to load the model with the data. The Local Storage object is an HTML5 feature for storing data locally in the browser. There are other proxies as well.
Ajax – Used for request within a particular domain
Local Database – Allows creating client side database.
We can omit the proxy for this sample as we will not be using it.
MyList.js
Ext.define('MyFirstApp.view.MyList',{ extend:'Ext.Panel', xtype:'MyList', requires:['Ext.dataview.List','Ext.data.Store'], config:{ title:'My List', layout:'fit', items:[ { xtype:'list', store:'MyStore', itemTpl:'<b>{name}<b>' } ] } });
In the code above,
MyList view is created in the namespace
MyFirstApp.view and it inherits the properties of
Ext.Panel.
When references to other classes are required in a particular class, those classes should be declared under the
requires field. Sencha Touch ensures that the required classes are loaded.
MyList view is initialized with the title as
My List, layout as
fit and contents as the
List component.
List component is mapped to the data store.
ItemTpl represents the data template as to how the list should be displayed to the user.
Now we will add the list to our home view
Ext.define('MyFirstApp.view.Home',{ extend:'Ext.NavigationView', xtype:'Home', config:{ items:[xtype:'MyList'] } });
The above example can be tested in Chrome browser by simulating various mobile resolutions. Right click on the browser and select ‘Inspect Element’. Select Settings icon in the right corner of the Inspect Element Window. Select any user agent and the resolution.
Conclusion
The aim of this article has been to help you take your first steps in Sencha Touch development. So, what are you waiting for?
Go ahead and improvise, adapt the code, bring in your ideas, then develop and publish your own HTML5 mobile applications.
- Prem
- Alex Barylski
- Manu | http://www.sitepoint.com/first-steps-in-sencha-touch/ | CC-MAIN-2014-23 | refinedweb | 1,069 | 57.27 |
table of contents
- buster 241-7~deb10u7
- buster-backports 247.3-5~bpo10+2
- testing 247.3-5
- unstable 247.3-6
- experimental 249-1
NAME¶libudev - API for enumerating and introspecting local devices
SYNOPSIS¶
#include <libudev.h>
pkg-config --cflags --libs libudev
DESCRIPTION¶libudev.h provides APIs to introspect and enumerate devices on the local system.
All functions require a libudev context to operate. This context can be create via udev_new(3). It is used to track library state and link objects together. No global state is used by libudev, everything is always linked to a udev context..
To introspect a local device on a system, a udev device object can be created via udev_device_new_from_syspath(3) and friends. The device object allows one to query current state, read and write attributes and lookup properties of the device in question.
To enumerate local devices on the system, an enumeration object can be created via udev_enumerate_new(3).
To monitor the local system for hotplugged or unplugged devices, a monitor can be created via udev_monitor_new_from_netlink(3).
Whenever libudev returns a list of objects, the udev_list_entry(3) API should be used to iterate, access and modify those lists.
Furthermore, libudev also exports legacy APIs that should not be used by new software (and as such are not documented as part of this manual). This includes the hardware database known as udev_hwdb (please use the new sd-hwdb(3) API instead) and the udev_queue object to query the udev daemon (which should not be used by new software at all). | https://dyn.manpages.debian.org/experimental/libudev-dev/libudev.3.en.html | CC-MAIN-2021-31 | refinedweb | 256 | 57.67 |
Building apps is one of the coolest things to be done as a software developer and tech enthusiast. Apps are not only more portable and user friendly, based on the requirements, but sometimes they are the only option for a particular application In this tutorial and the upcoming series, we will learn how to build a cool cross-platform mobile application using Flutter and Metamask.
The problem we are solving
To interact with the blockchain, the users must have an account (public and private key pair) on the blockchain that is used to sign the transactions. Sharing the private key with someone is equivalent to sharing access to the account. Because of this, users will be reluctant to provide their private key to the app since this would raise a lot of security concerns. The industry-trusted method is to use a “Non-custodial wallet” like Metamask.
Although there are numerous tutorials on how to use the browser extension of Metamask, using the mobile app version isn’t properly documented yet. In this tutorial series, we will be covering how to connect a Flutter App with Metamask for user login and in later articles, interacting with smart contracts will also be covered. We are using Flutter as our development framework of choice because we want to build a cross-platform application that is supported in both Android and IOS.
In this tutorial, we will be building an App that will be using Metamask for login. It will get the public key from Metamask. Metamask can also be used for signing messages and transactions but that will be covered in a different tutorial. The following GIF shows what we are going to build:
Pre-requisites
- Flutter is installed in your system. You can follow the official guide of Flutter here.
- Have an Android / Ios emulator or physical device connected that will be used for testing the application. I recommend using Android Emulator as there are some bugs while working with Ios.
- Install and set up Metamask Mobile App in your Android emulator.
💡 It is recommended to use VS Code with Flutter extension.
Known challenges
While writing this article, there are some challenges with building and running the app on IoS.
- IoS doesn’t support installing third-party applications from App Store for simulators. So we have to install Metamask directly from its Github repo.
- If you are using a MacBook with Apple Silicon, there are some extra steps needed for setting up Metamask in a simulator. You can read about it here.
- Deep Linking with Metamask is not working as expected. (For this tutorial it is recommended to use Android Emulator)
I will update this article if I find the solutions to the above problems. Till then your helpful comments are highly expected.
Project initiation
We start by running
flutter doctor to make sure we have everything set up properly for our development journey. You should see similar to :
Now we start by creating a new flutter project. To do this, open the location where you want to store your project folder in your terminal and type the following command:
flutter create my_app
Here
my_app is the name of the project we are going to build. This will create a folder with the same name where all our code will reside. You should see an output similar to:
Open this folder in a code editor of your choice. You can run your app by typing
flutter run or using the Flutter Debugger if you are using VS Code.
Installing dependencies
For this project we will require the following dependencies:
- url_launcher: This will be used for opening
Metamaskfrom our app using a URI.
- walletconnect_dart: This will be used for generating a URI that will be used to launch Metamask.
- google_fonts: Optional dependency for using Google Fonts in our app.
- slider_button: Optional dependency for using a Slider Button for login purposes.
To install these dependencies, type in the following command
flutter pub add url_launcher walletconnect_dart google_fonts slider_button
Adding assets folder
We want to use static images in our app’s UI. For that, we have to create a folder that will contain our assets and tell flutter to use them as assets for our project.
Create a folder called
assets inside the root
my_app folder. The name of the root folder will be whatever name you used for creating the flutter project. Inside the
assets folder, we will create an
images folder for storing our image assets. Finally, inside the
pubspec.yaml file we add this folder by adding the following lines in the
flutter section:
flutter: # The following line ensures that the Material Icons font is # included with your application, so that you can use the icons in # the material Icons class. uses-material-design: true # To add assets to your application, add an assets section, like this: assets: - assets/images/
Understanding the flow
Finally, before we start coding our app, it is important to understand the user flow. The following diagram represents the flow a user will go through starting from when he opens our app:
Code Along
We will start from the
main.dart inside the
lib folder. The
main.dart is the first file to be compiled and executed by Flutter. Clear all the contents of this file and paste in the following lines of code:
import 'package:flutter/material.dart'; void main(List<String> args) { runApp(const MyApp()); } class MyApp extends StatelessWidget { const MyApp({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return MaterialApp(); } }
We start by creating a new Stateless Widget. This widget will act as the starting point of our project. Based on the flow diagram shown above, the first thing to do will be to create the Login Page. Although for this tutorial our app will have only one page, we should have a proper routing system so that we can easily keep on adding newer pages as we proceed with our project.
Creating routes
The way routing works in flutter is quite similar to how it works for web apps, i.e. using the
/path format. In simple words, routes are nothing but a mapping of a path to its respective widget. An example of how routes work is:
return MaterialApp( initialRoute: "/login", routes: { "/login": (context) => const LoginPage(), "/home": (context) => const HomePage() }, );
Inside
routes we define all the routes that will be used in our project and their respective widgets. In this example we are saying that the widget
LoginPage will be rendered when the user is in the
/login router and similarly when the user is in the
/home route, the
HomePage widget will be rendered. The
initialRoute field tells the initial or starting route to be loaded. In this example, the first widget the user sees on opening the app will be the
LoginPage widget.
Since there will be multiple routes present in a project which are used across multiple files, it is not wise to directly type the route. Rather, one should have constant variable names defined for the code to be more robust. For this create a new folder called
utils inside the
lib folder and inside the
utils folder create a new file called
routes.dart. This file will store all our routes. Inside this file define the routes like this:
class MyRoutes { static String loginRoute = '/login'; }
Now let’s get back to our
main.dart file and make the following changes:
import 'package:flutter/material.dart'; import 'package:my_app/utils/routes(), }, ); } }
Here we are importing our newly created
routes.dart file and using the variable name instead of directly typing the route. Since we don’t have the
LoginPage widget yet, we will be getting an error message. So let’s create our login page.
Creating Login Page
Inside the
lib folder, let’s create a new folder called
pages that will have all our pages. This folder will have all our pages. Inside the
pages folder, create a new file called
login_page.dart. Inside this file paste in the following code:
import 'package:flutter/material.dart'; class LoginPage extends StatefulWidget { const LoginPage({Key? key}) : super(key: key); @override State<LoginPage> createState() => _LoginPageState(); } class _LoginPageState extends State<LoginPage> { @override Widget build(BuildContext context) { return Scaffold(); } }
Here we are creating a new Stateful Widget called
LoginPage. Now we can import this into our
main.dart file by adding
import 'package:my_app/pages/login_page.dart'; at the start of the file. The final main.dart file looks like this:
import 'package:flutter/material.dart'; import 'package:my_app/utils/routes.dart'; import 'package:my_app/pages/login_page(), }, ); } }
Designing the Login Page
Now it’s time to design our Login Page. For this tutorial, we will be designing a very simple Login Page. To start with, it only has an image along with
Connect with Metamask button. When Metamask is connected, it will display the account address and the chain connected with it. If the chain is not the officially supported chain (Mumbai Testnet for our case), we display a warning asking the users to connect to the appropriate chain. Finally, if the user is connected with the connected network, we show the details along with a “Slide to login” slider. These three are shown in the following diagrams respectively:
Building the default Login Page
We start by editing the
login_page.dart file. Make the following changes inside the
_LoginPageState class:
class _LoginPageState extends State<LoginPage> { : () => {}, child: const Text("Connect with Metamask")) ], ), ), ); } }
Here we are doing the following:
- We start by returning a
Scaffold.
Scaffoldin flutter is used to implement the basic material design layout. You can read more about it here.
- Then we are defining an
AppBarwith the title “Login Page”. This will be the title to be displayed on top of our app.
- We start the body of our app by defining a
SingleChildScrollView. This is helpful when our App is opened on a phone with a relatively smaller display. It enables the users to scroll through our widget. Read more about it here.
- Inside the
SingleChildScrollViewwe define a
Columnto contain the various components of our page as its
children.
- The first child we define is an image. We want to render an image stored inside our
assetsfolder. For this, we use
Image.asset()and pass in the path to where the image is stored. Remember to use a path already added as a source of assets. Previously we added the
assets/images/as a source of assets. I am using this image that I downloaded into the
imagesfolder and named
- Next, we create a button using the
ElevatedButtonclass. It takes two arguments:
onPressed: The function to be executed when the button is clicked. For now, this is blank.
child: A child widget that will determine how our button will look. For now, it is a
Textwith the string
“Connect with Metamask”.
If you run the app now, you should see something like:
Although pressing the button doesn’t do anything right now, we have our default look ready. It only gets more interesting from here 😎😎😎.
Understanding the dependencies
Next, we will be writing the logic behind the “Connect with Metamask” button. For these we will be using two important dependencies:
walletconnect_dart: This dependency will be used for connecting with Metamask. Practically it can be used with other wallets like Trust Wallet as well, but for this tutorial, we will focus only on Metamask.
To understand how this works, we must first understand how Wallet Connect works. Wallet Connect is a popularly used protocol for connecting web apps with mobile wallets (commonly by scanning a QR code). It generates a URI that is used by the mobile app for securely signing transactions over a remote connection. The way our app works is, that we directly open the URI in Metamask using our next dependency.
walletconnect_dartis a package for flutter written in
dartprogramming language. We will use this dependency to generate our URI and connect with Metamask. This package also provides us with callback functions that can be used to listen to any changes done in Metamask, like changing the network connected with.
url_launcher: This dependency is used for launching URLs in android and ios. We will be using this dependency for launching the URI generated by
walletconnect_dartin the Metamask app.
Using the dependencies in our code
We start by importing the dependencies in our
login_page.dart file
import 'package:walletconnect_dart/walletconnect_dart.dart'; import 'package:url_launcher/url_launcher_string.dart';
Next, inside our
_LoginPageState class we define a connector that will be used to connect with Metamask
var connector = WalletConnect( bridge: '', clientMeta: const PeerMeta( name: 'My App', description: 'An app for converting pictures to NFT', url: '', icons: [ '' ]));
We are using the
WalletConnect class to define our connector. It takes in the following arguments:
bridge: Link to the Wallet Connect bridge
clientMeta: This contains optional metadata about the client
name: Name of the application
description: A small description of the application
url: Url of the website
icon: The icon to be shown in the Metamask connection pop-up
We also define two variables called
_session and
_uri, which will be used to store the session and URI respectively when our widget state is updated.
We define a function called
loginUsingMetamask to handle the login process as follows:
loginUsingMetamask(BuildContext context) async { if (!connector.connected) { try { var session = await connector.createSession(onDisplayUri: (uri) async { _uri = uri; await launchUrlString(uri, mode: LaunchMode.externalApplication); }); print(session.accounts[0]); print(session.chainId); setState(() { _session = session; }); } catch (exp) { print(exp); } } }
Here we are doing the following:
- First, we check if the connection is already established by checking the value of
connector.connectedvariable. If the connection is not already established, we proceed with the code inside the
ifblock.
- We use
try-catchblock to catch any exception that may arise during establishing the connection, like the user clicking on
cancelin the Metamask pop-up.
- Inside the
tryblock, we create a new session by using the
connector.createSession()function. It takes in a function as an argument that is executed when the URI is generated. Inside this function, we use the
launchUrlString()function to open the generated URI in an external app. We pass in the generated URI as a parameter and since it will be opening an external application, we set the
modeas
LaunchMode.externalApplication. Finally, since we want our code to wait until the connection is confirmed using Metamask, we use the
awaitkeyword with
launchUrlString()function.
- We can fetch the accounts connected by using
session.accountsand the chain id by using
session.chainId. For now, we print the selected account using
session.accounts[0]and the chain Id to the console to check if our code is working properly.
- Finally, we update the state of our app using
setStateand store the created session in the
_sessionvariable.
- If any exception is generated in any of the above statements, the
catchblock will be executed. Right now we only print the generated exception, but in the latter stages of the project, we can use more robust exception handling.
Finally, we call the
loginUsingMetamask function as the
onPressed argument of our created button. The final code looks something like this:
import 'package:flutter/material.dart'; import 'package:walletconnect_dart/walletconnect_dart.dart'; import 'package:url_launcher/url_launcher_string.dart'; class LoginPage extends StatefulWidget { const LoginPage({Key? key}) : super(key: key); @override State<LoginPage> createState() => _LoginPageState(); } class _LoginPageState extends State<LoginPage> { var connector = WalletConnect( bridge: '', clientMeta: const PeerMeta( name: 'My App', description: 'An app for converting pictures to NFT', url: '', icons: [ '' ])); var _session, _uri; loginUsingMetamask(BuildContext context) async { if (!connector.connected) { try { var session = await connector.createSession(onDisplayUri: (uri) async { _uri = uri; await launchUrlString(uri, mode: LaunchMode.externalApplication); }); setState(() { _session = session; }); } catch (exp) { print(exp); } } } : () => loginUsingMetamask(context), child: const Text("Connect with Metamask")) ], ), ), ); } }
Now, we run our app 🤞🏾. If everything is done as described, we will be greeted with a familiar Login Page. But when we click on the
Connect with Metamask button, it will redirect to Metamask. Metamask will prompt you to connect your wallet. It will show the URL and icon specified in the
clientMeta field.
When we click on the blue
Connect button, we will be redirected back to our wallet. Right now we won’t see anything different, but if we check back the logs, you should see flutter printed the account address and the chain id.
Congratulations 🥳 🎉!! You have successfully connected with your Metamask wallet and it was that simple.
There is still one challenge left. Users may not connect with the blockchain your Smart Contracts are deployed to. So before we let users inside our platform, we should check if connected with the correct blockchain. Also, we should update if the user changes the connected network and also the selected account.
Subscribing to events
Using our
connector variable we can subscribe to
connect,
session_update and
disconnect event. Paste the following code inside the
build function:
Widget build(BuildContext context) { connector.on( 'connect', (session) => setState( () { _session = _session; }, )); connector.on( 'session_update', (payload) => setState(() { _session = payload; print(payload.accounts[0]); print(payload.chainId); })); connector.on( 'disconnect', (payload) => setState(() { _session = null; })); ... }
Here we are subscribing to the different events. On a
session_update we update the state of our app using
setState and assign the updated payload inside the
_session variable. We also print the new account address and the chain Id, so that we can check if our code is working properly from the terminal.
Perform a hot reload of your app and perform the same steps to connect Metamask with your app. Now you can change the network and the connected account from inside Metamask and observe the chain Id and account address change in your terminal/console.
Displaying the data on the screen
We have successfully connected Metamask with our app. Although the tutorial can end right here, I would prefer to display the details on the screen for users to verify and create a better login experience.
The first thing we want is when the user has connected with Metamask, we want to display the details instead of the button. For this we wrap our
ElevatedButton in a ternary operator as follows:
(_session != null) ? Container() : ElevatedButton()
Here, if the
_session variable is
null, i.e. Metamask is not connected, it would render the
ElevatedButton else the
Container will be rendered.
We start with the following code inside our
Container:
Container( padding: const EdgeInsets.fromLTRB(20, 0, 20, 0), child: Column( crossAxisAlignment: CrossAxisAlignment.start, children: [ Text( 'Account', style: GoogleFonts.merriweather( fontWeight: FontWeight.bold, fontSize: 16), ), Text( '${_session.accounts[0]}', style: GoogleFonts.inconsolata(fontSize: 16), ), ] ) )
- We start by adding small padding of
20pxfrom left and right.
- We want our cross-axis alignment to be from the start, so we define
crossAxisAlignmentas
CrossAxisAlignment.start.
We want the first widget in our column to be a simple saying
Accountand below it shows the account address of the connected Metamask account. We use the
Textwidget for displaying the data and use
GoogleFontsfor styling. You can import Google fonts by writing
import 'package:google_fonts/google_fonts.dart';
on top of the file. We use the
${}notation to access the
_sessionvariable inside a pair of single-quote(
’’).
The next thing we want to show is the name of the chain users are connected to. We want to display it in the following way:
Since most of the users may not be familiar with the chain Ids of the different blockchains, it’s better to show them the name of the blockchain, rather than just the chain Id. To do this, we can write a simple function that takes in the
chainId as input and returns the name of the chain. Inside the
_LoginPageState define a function called
getNetworkName as follows:
getNetworkName(chainId) { switch (chainId) { case 1: return 'Ethereum Mainnet'; case 3: return 'Ropsten Testnet'; case 4: return 'Rinkeby Testnet'; case 5: return 'Goreli Testnet'; case 42: return 'Kovan Testnet'; case 137: return 'Polygon Mainnet'; case 80001: return 'Mumbai Testnet'; default: return 'Unknown Chain'; } }
The function uses
switch-case statements to return the name of the chain based on
chainId.
Inside our
Container, after the two
Text widgets, we add a
SizedBox with
height of 20px to add some gap. Next we define a
Row with two children widgets, the text “Chain” and the name of the chain obtained by calling the
getNetworkName function. We do it like this:
Row( children: [ Text( 'Chain: ', style: GoogleFonts.merriweather( fontWeight: FontWeight.bold, fontSize: 16), ), Text( getNetworkName(_session.chainId), style: GoogleFonts.inconsolata(fontSize: 16), ) ], ),
Next, we want to check if the user is connected to the correct network. We check with
_session.chainId matches the chain id of our supported blockchain (in this case 80001 for Mumbai Testnet). If it’s not equal to the required chain id, we create a
Row to display our icon and the helper text, otherwise, we create a
Container that will be used for our
SliderButton.
(_session.chainId != 80001) ? Row( children: const [ Icon(Icons.warning, color: Colors.redAccent, size: 15), Text('Network not supported. Switch to '), Text( 'Mumbai Testnet', style: TextStyle(fontWeight: FontWeight.bold), ) ], ) : Container()
Next, we add out
SliderButton. We import our dependency with the following statement at the start of our file:
import 'package:slider_button/slider_button.dart';
Finally inside our
Container, we define our
SliderButton like this:
Container( alignment: Alignment.center, child: SliderButton( action: () async { // TODO: Navigate to main page }, label: const Text('Slide to login'), icon: const Icon(Icons.check), ), )
For now, the
SliderButton doesn’t do anything, but in further tutorials, it will navigate us to the main page of our application.
Now your app is fully ready to be run. If everything was done as described in this tutorial, your app should be now ready. You should be able to Login In to your app using Metamask. Although the app doesn’t login into any page, still you can connect with Metamask using your mobile app. How awesome is that ?!!
Wrapping Up
Wow!! That was a log tutorial. In this tutorial, we covered how to build a very basic flutter app from scratch. We learned how to interact with Metamask from our app. We explored two important dependencies,
walletconnect_dart and
url_launcher, and learned how they work and how they can be used to connect an app with a wallet like Metmask. We also learned how to update our app when the user updates the Metamask session. And finally, I hope we all had a great time learning something new and interesting.
The code for this project is uploaded to Github here.
I plan to extend this application into an app that does more cool things and dive deeper into the world of Defi, Blockchain, and beyond. If you liked this tutorial, don’t forget to show your love and share it on your socials or help me improve by posting your feedback in the Discussion. If you want to connect with me or recommend any topic, you can find me on LinkedIn, Twitter, or through my mail.
We will meet again with another new tutorial or blog, till then stay safe, spend time with your family and KEEP BUIDLING!
Discussion (7)
Something I didn't know i was looking for until i found it, as a flutter developer who started his Blockchain journey, this is what i needed
Clearly explained 🎯
Glad I could help. I am a Blockchain Developer who got interested in Flutter. I have plans to bring more tutorials in this domain. 👨🏾💻
Looking forward to them, as i will be diving deep into learning Blockchain too
🙂
🙂
How about support in the browser?
Is this somehow possible yet?
I have not tested that yet. Thank you for the idea, I will look into it and reply. In the meantime, if you find something else, please do share.
Proper tutorial. Nicely explained and all the doubts were solved from within. Kudos! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/bhaskardutta/building-with-flutter-and-metamask-8h5 | CC-MAIN-2022-33 | refinedweb | 3,956 | 55.84 |
Hint is a base class to be used to pass information between StepHandler s, which cannot be convayed through the Event record.
More...
StepHandler
#include <Hint.h>
Hint is a base class to be used to pass information between StepHandler s, which cannot be convayed through the Event record.
The base class contains a vector of of tagged particles. A StepHandler is always given a hint, and is only allowed to treat Particles from the current Step which are listed in the vector of tagged particles in the hint (if this vector is empty the StepHandler may treat all particles in the Step.
A Hint may have the stop flag set. In this case the StepHandler to which the hint is assigned is not called, and the event generation is stopped.
A Hint may be given a scale, but what a StepHandler does with this and other pieces of information possibly supplied by subclasses of Hint, is not defined.
There is a special Hint which is kept as the static member called Hint::theDefaultHint. Although any default constructed Hint object would work as a default hint, only pointers to this static object should be used where a default hint is needed.
Definition at line 48 of file Hint.h.
Function used to read in object persistently.
Referenced by Default().
Function used to write out object persistently.
Return a list of pointers to particles to be handled.
A handler is not allowed to touch other particles in the event record. If a particle which has been flagged by the hint is no longer present in the current Step, a null pointer is inserted in its place. | https://thepeg.hepforge.org/doxygen/classThePEG_1_1Hint.html | CC-MAIN-2018-39 | refinedweb | 275 | 72.16 |
.
Now that you understand the reasoning behind forbidding the ability to read a file from a kernel module, you of course can skip the rest of this article. It does not concern you, as you are off busily converting your kernel module to use sysfs.
Still here? Okay, so you still want to know how to read a file from a kernel module, and no amount of persuading can convince you otherwise. You promise never to try to do this in code that will be submitted for inclusion into the main kernel tree and that I never described how to do this, right?
Actually, reading a file is quite simple, once one minor issue is resolved. A number of the kernel system calls are exported for module use; these system calls start with sys_. So, for the read system call, the function sys_read should be used.
The common approach to reading a file is to try code that looks like the following:
fd = sys_open(filename, O_RDONLY, 0); if (fd >= 0) { /* read the file here */ sys_close(fd); }
However, when this is tried within a kernel module, the sys_open() call usually returns the error -EFAULT. This causes the author to post the question to a mailing list, which elicits the “don't read a file from the kernel” response described above.
The main thing the author forgot to take into consideration is the kernel expects the pointer passed to the sys_open() function call to be coming from user space. So, it makes a check of the pointer to verify it is in the proper address space in order to try to convert it to a kernel pointer that the rest of the kernel can use. So, when we are trying to pass a kernel pointer to the function, the error -EFAULT occurs.
To handle this address space mismatch, use the functions get_fs() and set_fs(). These functions modify the current process address limits to whatever the caller wants. In the case of sys_open(), we want to tell the kernel that pointers from within the kernel address space are safe, so we call:
set_fs(KERNEL_DS);
The only two valid options for the set_fs() function are KERNEL_DS and USER_DS, roughly standing for kernel data segment and user data segment, respectively.
To determine what the current address limits are before modifying them, call the get_fs() function. Then, when the kernel module is done abusing the kernel API, it can restore the proper address limits.
So, with this knowledge, the proper way to write the above code snippet is:
old_fs = get_fs(); set_fs(KERNEL_DS); fd = sys_open(filename, O_RDONLY, 0); if (fd >= 0) { /* read the file here */ sys_close(fd); } set_fs(old_fs);
An example of an entire module that reads the file /etc/shadow and dumps it out to the kernel system log, proving that this can be a dangerous thing to do, can be seen below:
#include <linux/kernel.h> #include <linux/init.h> #include <linux/module.h> #include <linux/syscalls.h> #include <linux/fcntl.h> #include <asm/uaccess.h> static void read_file(char *filename) { int fd; char buf[1]; mm_segment_t old_fs = get_fs(); set_fs(KERNEL_DS); fd = sys_open(filename, O_RDONLY, 0); if (fd >= 0) { printk(KERN_DEBUG); while (sys_read(fd, buf, 1) == 1) printk("%c", buf[0]); printk("\n"); sys_close(fd); } set_fs(old_fs); } static int __init init(void) { read_file("/etc/shadow"); return 0; } static void __exit exit(void) { } MODULE_LICENSE("GPL"); module_init(init); module_exit(exit);
Now, armed with this newfound knowledge of how to abuse the kernel system call API and annoy a kernel programmer at the drop of a hat, you really can push your luck and write to a file from within the kernel. Fire up your favorite editor, and pound out something like the following:
old_fs = get_fs(); set_fs(KERNEL_DS); fd = sys_open(filename, O_WRONLY|O_CREAT, 0644); if (fd >= 0) { sys_write(data, strlen(data); sys_close(fd); } set_fs(old_fs);
The code seems to build properly, with no compile time warnings, but when you try to load the module, you get this odd error:
insmod: error inserting 'evil.ko': -1 Unknown symbol in module
This means that a symbol your module is trying to use has not been exported and is not available in the kernel. By looking at the kernel log, you can determine what symbol that is:
evil: Unknown symbol sys_write
So, even though the function sys_write is present in the syscalls.h header file, it is not exported for use in a kernel module. Actually, on three different platforms this symbol is exported, but who really uses a parisc architecture anyway? To work around this, we need to take advantage of the kernel functions that are available to kernel modules. By reading the code of how the sys_write function is implemented, the lack of the exported symbol can be thwarted. The following kernel module shows how this can be done by not using the sys_write call:
#include <linux/kernel.h> #include <linux/init.h> #include <linux/module.h> #include <linux/syscalls.h> #include <linux/file.h> #include <linux/fs.h> #include <linux/fcntl.h> #include <asm/uaccess.h> static void write_file(char *filename, char *data) { struct file *file; loff_t pos = 0; int fd; mm_segment_t old_fs = get_fs(); set_fs(KERNEL_DS); fd = sys_open(filename, O_WRONLY|O_CREAT, 0644); if (fd >= 0) { sys_write(fd, data, strlen(data)); file = fget(fd); if (file) { vfs_write(file, data, strlen(data), &pos); fput(file); } sys_close(fd); } set_fs(old_fs); } static int __init init(void) { write_file("/tmp/test", "Evil file.\n"); return 0; } static void __exit exit(void) { } MODULE_LICENSE("GPL"); module_init(init); module_exit(exit);
As you can see, by using the functions fget, fput and vfs_write, we can implement our own sys_write functionality.. | https://www.linuxjournal.com/article/8110 | CC-MAIN-2021-21 | refinedweb | 932 | 58.92 |
Continuing in the series on sharing some of the information in the .NET Framework Standard Library Annotated Reference Vol 1 here is an annotation from the System.Char class.
BA - In the design of this type we debated having all the predicates (IsXxx methods)
in this class, versus putting them in another class. In the end we felt the simplicity of
having just one class for all these operations made it worthwhile.
Jeff Richter - Note that Char is not the most accurate name for this type. A Char is really a
UTF-16 code point, which will be a character unless the code point represents a high
or low surrogate value. A string is really a set of UTF-16 code points. To properly
traverse the characters of a string, you should use the System.Globalization.
StringInfo class’s methods.
And here is a sample, pretty simple, but interesting none the less
using System;
namespace Samples
{
public class CharGetNumericValue
{
public static void Main()
{
Char c = '3';
Console.WriteLine(
"Numeric value of Char '{0}' is {1}",
c, Char.GetNumericValue(c));
c = Convert.ToChar(0X00BC);
c = Convert.ToChar(0X03a0);
c = 'A';
}
}
}
And the output:
Numeric value of Char '3' is 3
Numeric value of Char '¼' is 0.25
Numeric value of Char '?' is -1
Numeric value of Char 'A' is -1 | http://blogs.msdn.com/b/brada/archive/2004/11/12/256881.aspx | CC-MAIN-2014-42 | refinedweb | 220 | 58.38 |
Hello Friends, In this tutorial we are going to understand HTTPHandler in ASP.NET. ASP.NET handles all the HTTP requests coming from the user and respond to that request. ASP.NET framework also capable of how to process requests based on extension, for example, It can handle request for .aspx, .ascx and .txt files, etc
After completing this tutorial you will be able to understand:
- HTTPHandler in ASP.NET.
What is HTTPHandler in ASP.NET?
HTTPHandlers is a class that implements IHTTPHandler interface. It is used to handle HTTP requests based on extension, for example, it can handle requests for .aspx, ascx and .txt files etc.
It is an extension based processor that processes the request based on the file extension. When a request comes from the browser HTTP Handler checks the extension to see if it can handle the request and respond to request after performing some predefined functionality.
Following are the methods of HTTP Handler.
ProcessRequest: Used to call Http Requests.
IsReusable: To check the reusability of the same instance handler with a new request of the same type.
Implementing the HTTPHandler:
public class yourhandler :IHttpHandler { public bool IsReusable { get { return false; } } public void ProcessRequest(HttpContext context) { } }
Configuring HTTPHandler:
.Net provodes <httpHandlers> and <add> nodes for adding HTTPhandlers to our Web applications. In fact the handlers are listed with <add> nodes in between <httpHandlers> and </httpHandlers> nodes. Here is the example for registering HTTPHandler to our web application.
<httpHandlers> <add verb="supported http verbs" path="path" type="namespace.classname, assemblyname" /> <httpHandlers>
Example of HTTP Handler:); } } }
Registering HTTPHandler in web.config:
<httpHandlers> <add verb="*" path="*.cspx" type="CspxHandler"/> </httpHandlers>
View More:
- What is AppDomain?
- Partial View in ASP.NET MVC.
- How to Merge DataGridView Cells in C# Windows Application.
- How to Save to and Retrieve Image from Database in C# Windows Application
Conclusion:
Hope you understand why we use HTTPHandler in ASP.NET. Your Feedback, comments and suggests are always welcome to me.
Thank You. | http://debugonweb.com/2017/11/25/httphandlers/ | CC-MAIN-2018-26 | refinedweb | 327 | 59.3 |
Issues
Use my own bindparam for Query.limit()
I am trying to use the "baked query" pattern to reduce the time spent generating SQL, which currently is quite significant for our app.
One thing I can't seem to parameterize using a bindparam, however, is the limit on the query.
Although in
#805 the limit was changed into a bindparam, this doesn't allow me to provide my own bindparam but rather always creates its own bindparam, assuming the limit I procide is a number already.
I think that if the places where currently it wraps a numeric limit into a bindparam using sql.literal() it would also check if limit was already a bindparam then it would be good.
Here's an example that can be pasted into the test suite:
def test_select_with_bindparam_limit(self): """Does a query allow bindparam for the limit?""" sess = create_session() users = [] q1 = sess.query(self.classes.User).order_by(self.classes.User.id).limit(sa.bindparam('n')) for n in xrange(1,4): users[:] = q1.params(n=n).all() assert len(users) == n
The workaround here is to cache a separate Query and Query SQL for each possible value of limit.
see also
is the workaround working? due to backwards compatibility concerns mentioned in that thread I'd like to target this on 1.0.
I have a proof-of-concept patch that does support this functionality that you review if you think this capability is of interest.
I'm not sure if this change is actually desirable so I'll leave it at that. To "finish" the patch I might want to:
_literal_as_binds
this involves changing the type of object for select._limit and select._offset attributes to a SQL expression rather than an integer, or otherwise adding tracking of ints and expressions separately. im not sure if existing user recipes are looking at _limit / _offset, even though these are underscored they have never changed before.
OK actually, looking at that patch I remember that we have no choice but to keep _limit and _offset as ints, as we have many third party dialects for which breaking backwards compatibility is not an option. We will have to add new attributes _limit_clause and _offset_clause to select and query; then _limit and _offset will probably be descriptors which extract an integer value for the benefit of those unmodified dialects, and raise an exception if the expression is not a simple integer-holding expression.
if you want to work up a patch like that it can be a candidate for the 0.9 series as it wont break backwards compat.
hmmmm though you're doing literal_as_binds in the compiler step, interesting. let me think for a minute
OK I added a test to that branch for
offsetworking as a bindparam.
Do you think
_literal_as_bindsis inappropriate for the purpose I am using it for there? Basically I just wanted a function that does "If it's already a bindparam, return it. Otherwise, create a literal bindparam for it". Based on the thread you linked, however, it might be appropriate at times to allow people to specify another clause like Null() or some other thing, which
_literal_as_bindswould do.
I think if you don't use the new feature it won't bother you - the old ways will still work. However, it does mean that using your own bindparam for offset/limit depends on support from the dialect. Which it has to ...
BTW my workaround (caching separately for each limit) works but in the long term I don't like it because I want to know in advance how many variations of the query I will be keeping, right now limit is the only part of the cache key that isn't a boolean.
OK we do use literal_as_binds for this kind of thing but we usually do it at SQL construct creation time, not in the compiler. this is to limit how many decisions compilers have to make, both for speed and for maximum compatibility.
When the limit/offset is actually a literal value, we must run it through asint(); some dialects do not support these values being placed as bound parameters and assume these values are integers safe to copy directly into SQL (see Sybase) so that function must be present.
I think to maintain maximum compatibility and zero chance of security issues on 3rd party dialects, as well as to allow for clear error reporting for non-upgraded dialects, ._limit and ._offset should always be guaranteed integers or else it raises an exception on access (e.g. "this select has a SQL-composed limit/offset expression"). these accessors can actually look at self._limit_clause.effective_value, assuming the value is a BindParamClause.
New attributes _limit_clause and _offset_clause will store the actual SQL clause that is to be compiled into the statement. When these are set, the incoming argument can be produced from a _literal_as_binds-like function but it also has to run a literal value through asint() before producing a BindParameter.
A lot of code seems to assume that select._limit and select._offset are rapid access. I'm thinking maybe they should be kept as regular fields set to the actual offset, if we have one. I think we can keep those up to date in _copy_internals since that is where bind params are given their values.
Although if those values are stored as regular fields, we can't quite throw an exception on access. Hrm.
the idea is that all code within SQLAlchemy itself will no longer look at select._limit and select._offset - it will all use the new fields. ._limit and ._offset can be less performant Property objects going forward, and are there just for the benefit of external dialects such as , which interestingly enough also would be a total security breach if we didn't have the asint() check ;).
Well the sybase dialect would still use ._limit, as it needs the integer.
I guess the slice() logic in Query needs to read _offset a bit. not sure yet how Query's _limit/_offset should be interacting here, those might stay "as is" where they can be literal or SQL element directly, does that work?
I want planning to change the query class, just the select. But I could.
It seems to work fine just having the query offset and limit be either a number or a clause.
It may be less ideal this way - having a field just hold one kind of value does reduce confusion.
Okay, I updated the branch according to your suggestions, take a look and let me know what you think:
Currently the test passes on SQLite, I haven't looked into testing on other database engines yet.
OK ill try to look soon.
See also
I think the _limit() and _offset() properties should raise an exception when there is a non-None limit/offset clause that is not a simple integer. This so that if someone tries to run a query against the Sybase or a similar dialect, and that query uses a non-int limit/offset, an exception will be raised. As it is now, the limit/offset is silently ignored on a backend like Sybase.
otherwise looks pretty good.
OK I added code to throw an exception if the clause is set but not a bind parameter.
asint()should already throw an exception if the value isn't an int. I'm not sure if the exception is the right type and message, though - do you think there's something more specific I could use for that?
we pretty much do InvalidRequestError for all of those, but then if something goes wrong within compilation, we like to raise CompilerError. poking around it seems like maybe UnsupportedCompilationError might be our best bet...lets do it like this:
if you want to write more tests, we need a few more variants in test/sql/test_compiler.py. "test_int_limit_offset_coercion" tests the case where we call select.limit() or select.offset(), we'd need the test where we pass expressions to those and later try to access select._limit or select._offset, asserting the compilationerror is raised.
I've merged your work plus some new tests and changes including getting SQL Server to use bound parameters in. I have some ideas for "baked" queries so I want to get this feature going (still not sure about 0.9 or 1.0 yet)
Then, I'd love if you can look into
#3054and tell me what you think of that idea, both from a feasability standpoint as well as usability. This might be it! thanks.
OK thanks for putting in the extra tests! I was thinking I would do it, but I hadn't gotten to it yet.
#3034, at this point we can mark fix #3034
→ <<cset 0436538314b5>>
So I have this in for 1.0. I'm hoping you've worked around for 0.9.... thanks for the effort on this great new feature!
Thanks for accepting the patch.
Issue
#3239was marked as a duplicate of this issue. | https://bitbucket.org/zzzeek/sqlalchemy/issue/3034/use-my-own-bindparam-for-querylimit | CC-MAIN-2015-18 | refinedweb | 1,507 | 63.8 |
After all your hard work developing with Dojo, there comes a point when your application is ready for prime time. Util provides terrific build tools and a testing framework that can get you ready for production before you know it. The build tools provided by Util are the same ones that are used to produce each official Dojo release, and the Dojo Objective Harness (DOH) is a unit-testing framework that facilitates achieving some automated quality assurance before your app ever gets out the door.
For any production setting, minimizing the overall footprint of
your JavaScript files and the number of synchronous requests to the
server is absolutely essential. The difference in downloading scores
of individual resource files via synchronous requests incurred by
dojo.require versus one or two
calls back to the server makes all the difference in the world in
terms of a snappy page load.
Dojo’s build tools makes accomplishing what may initially seem like such an arduous task quite easy. In a nutshell, the build tools automate the following tasks:
Consolidates multiple modules into a single JavaScript file called a layer
Interns template strings into JavaScript files, including layers, so that a standalone template is no longer needed
Applies ShrinkSafe, a JavaScript compressor based on Rhino, to minify the size of the layers by removing whitespace, linebreaks, comments, and shortening variable names
Copies all of the “built” files into a standalone directory that can be copied and deployed to a web server
One reason you may not have been aware of the build tools is that they aren’t included in the util directory of an official release. To get them, you have to download a source release (a source release will have the -src suffix on the file base part of the filename) or just grab the source from the Subversion trunk. Chapter 1 provides an overview of getting the Dojo from Subversion, but basically, all that is necessary is to point your client at the Dojo repository and wait for it to download everything, whether it is the trunk or a specific tag.
In either case, you’ll find that the util directory now holds some additional directories; one of these directories is buildscripts, which contains the goods we’re looking for. contains the unofficial Subversion book, which is available in a variety of formats. Taking a moment to bookmark this valuable resource now will save you time later.
To run the build tools, you’ll have to have Java 1.4.2 or later installed, available from (because ShrinkSafe is based on Rhino, which is written in Java). But don’t worry about having to be a Java programmer to use ShrinkSafe; ShrinkSafe comes packaged as a single jar file (an executable Java archive), so you can treat it like any other executable.
The primary entry point for kicking off a build is via the buildscripts/build.sh (or build.bat for Windows users), and is really just a call through to the custom Rhino jar that does all of the work based on a custom profile that is provided (more on that in just a moment). As an ordinary executable, however, build tools such as Make or ant can easily include the jar file as an ordinary part of the production build process. This ability is especially convenient when server-side components are based on languages that must be compiled.
Executing the corresponding build script or executing the jar without any command-line options provides an impressive list of options. Table 16-1 is adapted directly from the standard option list that is displayed.
While all of those options may seem like a lot to manage, the routine builds are really quite simple and involve only a handful of options. But first, we need a profile.
A profile is the configuration for your
build as provided via the
profile
or
profileFile option. The most
basic function of a profile is to specify the exact Dojo resources
that should consolidated into a standalone JavaScript file, also
known as a layer; a typical rule of thumb is
that each page of your application should have its own layer. The
beauty of a layer is that it is an ordinary JavaScript file, and can
be included directly into the head of a page, loading everything
you’ve crammed into it via a single synchronous request to the
server—well, sort of. By convention, Base is so heavily used that it
generally stays in its own individual dojo.js
file, so you normally have two synchronous calls, one for Base, and
one for your own layer.
Assuming your application has three distinct pages, you might have three layer files and one copy of Base.
If you really want to bundle up your own modules inside of the dojo.js file that normally only contains Base, you can name your layer dojo.js. However, it’s often a good idea to keep Base separated because it would be used in every page of you application and is cacheable by your web browser.
Physically speaking, a profile is simply a file containing a
JSON object. Example 16-1 shows a
profile that consolidates several of the form dijits that are
explicitly
dojo.required into a
page. All internal dependencies are tracked down automatically.
Just like with
dojo.require,
you state what you need to use directly, and dependency tracking
is automated behind the scenes for you.
dependencies ={ layers: [ { name: "form.js", dependencies: [ "dijit.form.Button", "dijit.form.Form", "dijit.form.ValidationTextBox" ] } ], prefixes: [ [ "dijit", "../dijit" ] ] };
Assuming the previous profile is located at util/buildscripts/profiles/form.profile.js and you’re working in a Bash shell, the following command from within the util/buildscripts directory would kick off a build. Note that the profile option expects profiles to be of the form <profile name>.profile.js and only expects the <profile name> as an option:
bash build.sh profile=form action=release
If you don’t want to save the file in
util/buildscripts/profiles/form.profile.js,
you can use the
profileFile
option instead of the
profile
option.
After executing the command, you should see a bunch of output indicating that the build is taking place and that of strings are being interned from template files into JavaScript files. The artifact of the build is a release directory containing dojo, dijit, and util. Inside of the dojo directory, you’ll find the usual suspects, but there are four especially important artifacts to note:
The compressed and uncompressed version of Base, dojo.js and dojo.js.uncompressed.js
The compressed and uncompressed version of your form layer in form.js and form.js.uncompressed.js (go ahead and take a peek inside to see for yourself)
But what if you need resources that are not included in your
custom layer file? No problem—if resources aren’t included in a
profile, they are fetched from the server whenever the
dojo.require statement that specifies
them is encountered. Assuming you take the entire release
directory and drop it somewhere out on your server, the
dojo.require statements requesting
nonlayered resources will behave normally, though you will incur a
small roundtrip cost for the request to the server.
Requests for Base functions and resources in your layer do
not incur server-side requests when they are encountered in a
dojo.require statement because
they’re already available locally. Resources not in your layer,
however, incur the routine overhead of synchronous HTTP requests
(Figure 16-1).
While you may generally want to include every possible
resource that is needed in a build, there may be some situations
where you want to lazy load. The tradeoff is always between a
“small enough” initial payload size over the wire versus the cost
of synchronous loading via
dojo.require later.
If you accidentally misspell or otherwise provide a
dependency that does not exist, ShrinkSafe may still complete
your build even though it could not find all of the
dependencies. For example, if you accidentally specify
dijit.Button (instead of
dijit.form.Button), you’ll most likely
still get a successful build, and you may not ever notice that
dijit.form.Button wasn’t
bundled because a call to
dojo.require("dijit.form.Button")
would fetch it from the server and your application would behave
as normal.
It’s always a good idea to double-check your build by taking a look at the Net tab in Firebug to ensure that everything you expect to be bundled up is indeed bundled up.
A slightly more clever way to set up the build profile just discussed is to create a custom module that does nothing more than require in all of the resources that were previously placed in the layer via the profile file. Then, in the profile file, simply include the custom module as your sole dependency for the layer.
First, Example 16-2
shows how your custom module would look. Let’s assume the module
is
dtdg.page1 and is located at
called dtdg/page1.js.
dojo.provide("dtdg.page1"); dojo.require("dijit.form.Form"); dojo.require("dijit.form.Button"); dojo.require("dijit.form.ValidationTextBox");
Now, your profile need only point to the custom module, as the other dependencies are specified inside of it and will be tracked down automatically. Example 16-3 demonstrates an updated profile, which assumes your custom module directory is a sibling directory of util.
dependencies ={ layers: [ { name: "form.js", dependencies: [ "custom.page1" ] } ], prefixes: [ [ "custom", "../custom" ] ] };
Finally, your page might contain the following
SCRIPT tag to pull in the module along
with Base:
<script type="text/javascript" djConfig="baseUrl: './',modulePaths: {custom:'path/to/custom/page1.js'}, require: ['custom.page1']" src="scripts/dojo.js"></script>
Notice that the util/buildscripts/profiles directory contains a number of example build profiles as well as the standard.profile.js file that contains the layers for a standard build of Dojo. The standard profile builds Base as well as a baseline Dijit layer that contains common machinery that is used in virtually any circumstance involving dijits, as well as a couple of other useful layers. Note that any profile in the standard.profile.js file should be available over AOL’s CDN. For example, to retrieve the baseline Dijit profile, you could simply execute the following statement:
dojo.require("dijit.dijit");
Remember, however, that the first
SCRIPT tag should always be the one for
Base (dojo.xd.js), so you’d include any
additional
SCRIPT tags for
layers after the one for Base.
In virtually any production setting, you’ll want to apply ShrinkSafe to minify all of your code. While the previous build example build did optimize the build in the sense that it minified dojo.js and form.js as well as interned template strings, ShrinkSafe can minify every file in the release.
Recall that the size “over the wire” is what really matters
when you’re talking about performance from a payload perspective.
While files may be a set size as they exist on the server, most
servers are able to apply gzip compression to
them if the web browser is capable of handling it. While
ShrinkSafe minifies JavaScript files by removing artifacts like
whitespace, comments, and so on, the further compression is
possible because the repetitive use of public symbols such as
dojo,
dijit, and your own custom tokens allows
for actual compression to occur.
Minification is the reduction of a file’s size by removing artifacts such as commas, whitespace, linebreaks, etc. Compression is an algorithmic manipulation that reduces a file’s size by using by finding multiple instances of the same tokens and encoding an equivalent file by using shorter placeholders for the repetitive tokens. To learn more, see for an overview of gzip compression.
An especially notable feature of ShrinkSafe is that it never mangles a public API; this is a direct contrast to some JavaScript tools that attempt to encrypt JavaScript by applying regular expressions or convoluted logic to “protect” the script. In general, attempting to protect your JavaScript is mostly pointless. As an interpreted language that runs in the browser, the user of your application will almost certainly have access to your source code, and it’s not terribly difficult to use a debugger to unroll the protected script into something that’s fairly intelligible.
ShrinkSafe itself is not a Dojo-specific tool; you can apply it to any JavaScript file to gain the benefits of compression using the online demonstration at. OS X users can download a version at, and users of other platforms can grab the standalone custom Rhino jar from.
In other words, ShrinkSafe shrinks your files without changing public symbol names. In fact, if you look at the form.js file that is an artifact of the previous build examples, you can see for yourself that ShrinkSafe strips comments, collapses and/or eliminates frivolous whitespace, including newline characters, and replaces nonpublic symbols with shorter names. Note that replacing all symbols with shorter, meaningless names qualifies as a lame attempt at encryption—not particularly useful for debugging purposes either.
Let’s update our existing profile:
Minify all files in the release with the
optimize="shrinksafe" option
Designate a custom notice that should appear at the top
of every minified JavaScript file in an additional (mythical)
foo module provided by
CUSTOM_FILE_NOTICE.txt
Designate a custom notice that should appear at the top of the final form.js provided by the same CUSTOM_LAYER_NOTICE.txt
Provide a custom name for the release directory via the
releaseName="form"
option
Provide a custom version number for the build via the
version="0.1.0."
option
Here’s the modified form.profile.js file from Example 16-1. Note that the information in the custom notices must be wrapped in JavaScript comments; the path for the custom notices should be relative to the util/buildscripts directory or an absolute path:
dependencies ={ layers: [ { copyrightFile : "CUSTOM_LAYER_NOTICE.txt", name: "form.js", dependencies: [ "dijit.form.Button", "dijit.form.Form", "dijit.form.ValidationTextBox" ] } ], prefixes: [ [ "dijit", "../dijit" ], [ "foo", "../foo", "CUSTOM_FILE_NOTICE.txt" ] ] };
The augmented command to kick off this build is straightforward enough, and creates the artifacts in the release/form directory that exist alongside the dojo source directories:
bash build.sh profile=form action=release optimize=shrinksafe releaseName=form version=0.1.0
To actually use your custom release, simply include the paths to the compressed dojo.js and form.js files in script tags in the head of your page, like so. The dojo.js layer must be included first, because form.js depends on it:
<html> <head><title>Fun With Forms!</title> <!-- include stylesheets, etc. --> <script type="text/javascript" path="relative/path/to/form/dojo.js"></script> <script type="text/javascript" path="relative/path/to/form/form.js"></script> </head> <!-- rest of your page -->
And that’s it. It takes only two synchronous requests to
load the JavaScript (which now have interned templates) into the
page; other resources included in your build via the
prefixes list are at your disposal via
the standard
dojo.require
statements.
If you are completely sure you’ll never need any additional
JavaScript resources beyond dojo.js and your
layer files, it is possible to pluck out just the individual
resources you need from the release directory structure. However,
you’ll have to go through a little extra work to track down
dependencies with built-in CSS themes such as
tundra because some of the stylesheets may
use relative paths and relative URLs in
import statements.
Automated testing practices for web applications are becoming increasingly common because of the sheer amount of coding and complexity involved in many of today’s rich Internet applications. DOH uses Dojo internally but is not a Dojo-specific tool; like ShrinkSafe, you could use it to create unit tests for any JavaScript scripts, although no DOM manipulation or browser-specific functions will be available.
DOH provides three simple assertion constructs that go a long
way toward automating your tests. Each of these assertions is provided
via the global object,
doh, exposed
by the framework:
doh.assertEqual(expected,
actual)
doh.assertTrue(condition)
doh.assertFalse(condition)
Before diving into some of the more complex things that you can do with DOH, take a look at trivial test harness that you can run from the command line via Rhino to get a better idea of exactly the kinds of things you could be doing with DOH. The harness below demonstrates the ability for DOH to run standalone tests via regular Function objects as well as via test fixtures. Test fixtures are little more than a way of surrounding a test with initialization and clean up.
Without further ado, here’s that test harness. Note that the
harness doesn’t involve any Dojo specifics; it merely uses the
doh object. In particular, the
doh.register function is used in
this example, where the first parameter specifies a module name (a
JavaScript file located as a sibling of the util directory), and the second parameter
provides a list of test functions and fixtures:
doh.register("testMe", [ //test fixture that passes { name : "fooTest", setUp : function( ) {}, runTest : function(t) { t.assertTrue(1); }, tearDown : function( ) {} }, //test fixture that fails { name : "barTest", setUp : function( ) { this.bar="bar"}, runTest : function(t) { t.assertEqual(this.bar,
"b"+"a"+"rr"); }, tearDown : function( ) {delete this.bar;} }, //standalone function that passes function baz( ) {doh.assertFalse(0)} ]);
Assuming this test harness were saved in a testMe.js file and placed alongside the util directory, you could run it by executing the following command from within util/doh. (Note that although the custom Rhino jar included with the build tools is used, any recent Rhino jar should work just fine):
java -jar ../shrinksafe/custom_rhino.jar runner.js dojoUrl="../../dojo/dojo.js" testModule=testMe
The command simply tells the Rhino jar to
run the
testMe module via the
runner.js JavaScript file (the substance of
DOH) using the copy of Base specified. Although no Dojo was involved
in the test harness itself, DOH does use Base internally, so you do
have to provide a path to it.
Now that you’ve seen DOH in action, you’re ready for Table 16-2, which summarizes the additional
functions exposed by the
doh
object.
Additionally, note that the runner.js file accepts any of the options shown in Table 16-3.
Although it is possible to use DOH without Dojo, chances are that you will want to use Dojo with Rhino. Core contains some great examples that you can run by executing runner.js without any additional arguments. The default values will point to the tests located in dojo/tests and use the version of Base located at dojo/dojo.js.
If you peek inside any of Core’s test files, you’ll see the
usage is straightforward enough. Each file begins with a
dojo.provide that specifies the name of
the test module, requires the resources that are being tested, and
then uses a series of
functions to create fixtures for the tests.
Assume you have a custom
foo.bar module located at
/tmp/foo/bar.js and that you have a
testBar.js test harness located at
/tmp/testBar.js. The contents of each
JavaScript file follows.
First, there’s testBar;} } ]);
And now, for your
foo.bar
module residing in foo/bar.js:
/* A collection of not-so-useful functions */ dojo.provide("foo.bar"); function alwaysReturnsTrue( ) { return true; } function alwaysReturnsFalse( ) { return false; } function alwaysReturnsOdd( ) { return Math.floor(Math.random( )*10)*2-1; } // Look, there's even a "class" dojo.declare("Baz", null, { talk : function( ) { return "hello"; } });
The following command from within util/buildscripts kicks off the tests:
java -jar ../shrinksafe/custom_rhino.jar runner.js dojoUrl=../../dojo/dojo.js testUrl=/tmp/testBar.js
Especially note that the test harness explicitly registered
the module path for
foo.bar
before requiring it. For resources outside of the dojo root
directory, this extra step is necessary for locating your custom
module.
If all goes as planned, you’d see a test summary message
indicating that all tests passed or failed. Registering a group of
tests sharing some common setup and tear down criteria entails the
very same approach, except you would use the
doh.registerGroup function instead of the
doh.register function (or a more
specific variation thereof).
If you want more finely grained control over the execution of your tests so you can pause and restart them programmatically, you apply the following updates to testBar.js:
/* load up dojo.js and runner.js */ load("/usr/local/dojo/dojo.js"); load("/usr/local/dojo/util/doh/runner;} } ]); doh.run( ); /* pause and restart at will... */
Although we didn’t make use of the fact that
testBar is a module that
dojo.provides itself, you can very easily
aggregate collections of tests together via
dojo.require, just like you would for any
module that provides itself.
Although you could run asynchronous tests using Rhino as well, the next section introduces asynchronous tests because they are particularly useful for browser-based tests involving network input/output and events such as animations.
Although running tests from Rhino is tremendously useful, DOH
also provides a harness that allows you to automate running tests from
within a browser window. Basically, you just define a test as an
ordinary HTML page and then load the test page into the DOH test
runner using query string parameters in the test runner’s URL;
internally, JavaScript in the test runner examines the query string,
pulls out configuration values such as
testUrl and uses them to inject your test
page into a frame.
Of course, you can still run your browser-based test without the DOH test runner, but you won’t get a nice visual display with optional Homer Simpson sound effects if you’re willing to read the test results as console output.
The following is an example test defined as an ordinary HTML page. Notice that the example uses a local installation of Dojo because as of version 1.1, DOH is not delivered via AOL’s CDN:
<html> <head><title>Fun with DOH!</title> <script type="text/javascript" src="local/path/to/dojo/dojo.js"> </script> <script type="text/javascript"> dojo.require("doh.runner"); dojo.addOnLoad(function( ) { doh.register("fooTest", [ function foo( ) { var bar = []; bar.push(1); bar.push(2); bar.push(3); doh.is(bar.indexOf(1), 0); //not portable! } ]); doh.run( ); }); </script> </head> <body></body> </html>
Almost any web application test suite worth its salt is going
to involve a significant number of tests that depend upon
asynchronous conditions such as waiting for an animation to happen,
a server side callback to occur, and so on. Example 16-4 introduces how you
can create asynchronous test with DOH. The key concept is that a
doh.Deferred (pretty much an
ordinary
dojo.Deferred with some
tweaks) except that it is internal to DOH and, as such, doesn’t have
external dependencies. Chapter 4 included an extensive
discussion of Deferreds if you need a quick refresher.
Before the relevant code sample, here’s the basic pattern at play for asynchronous testing with DOH:
Create a
doh.Deferred
that will be used to verify the results from asynchronous
function (that returns back a
dojo.Deferred)
Call whatever asynchronous function returns back the
dojo.Deferred and save a
reference to it
Add callbacks and errbacks to the
dojo.Deferred that will simply pass
the asynchronous function’s results through to the
doh.Deferred’s own callbacks and
errbacks
doh.register("foo", [ function( ) { var dohDfd = new doh.Deferred(); var expectedResult = "baz"; var dojoDfd = asynchronousBarFunction(); dojoDfd.addBoth(function(response, io) { //reference the dohDfd as needed... if (response == expectedResult) { dohDfd.callback(true); } else { dohDfd.errback(new Error( /* ... */)); } }); //...and return back the dohDfd return dohDfd; } ]);
Depending on your specific test constraints, you might provide
explicit
timeout values to ensure
that the asynchronous operations involved timeout according to your
specific testing criteria. At any rate, the key takeaway is that
asynchronous testing doesn’t need to be terribly complicated; the
Deferred abstraction simplifies most of that complexity, so you’re
left to focus on the task at hand.
This section touches on some of the low-hanging fruit that you can strive to achieve in your frontend engineering. For a fabulous reference on ways to improve performance, be sure to check out High Performance Web Sites: Essential Knowledge for Front-End Engineers by Steve Souders (O’Reilly). It’s a quick read and really does live up to the “essential” part of the title. Much of the content is available at.
While writing good JavaScript goes a long way toward having a snappy web application, there are a few considerations to be particularly cognizant of when it comes time for production. The topic of optimizing a web application’s performance could be the subject of an entire book on its own, but the following list captures some of the most obvious low-hanging fruit that you can go after:
The build tools accomplish a number of essential tasks for you and the effort required on your behalf is trivial. The build process minifies your source, reducing the overall size of the payload, and significantly reduces the HTTP latency by consolidating multiple JavaScript files into layers and interning template strings where applicable.
While much has been said in this chapter on the virtues of
using the build tools to create a minimal number of layer files
for your application, there will certainly be times when it just
makes more sense to do some lazy loading. For example, if you
determine that users very infrequently make use of a particular
feature that adds a nontrivial amount of script to your layer,
you may just opt to
dojo.require it on the fly instead of
packaging it up.
Another consideration with respect to lazy loading is to
intelligently use the layout widgets to load content on the fly.
For example, you may choose to only initially load the visible
tab of a
TabContainer, and
either load the other content when it is requested, or wait long
enough that you are certain the rest of the page has been loaded
before fetching the other tabs. The
ContentPane dijit is a common vehicle
for lazy-loading content.
Explore options to have web browsers aggressively cache
JavaScript files and other static content by configuring your
server to issue a far future
Expires header; configure your server
to take full advantage of common configuration options such as
gzip compression.
Because static content can be served so quickly, the more of it you can serve, the less time your web server will spend per request. Maximize the use of static HTML files that are nearly identical by filling in the user-specific portions via cookies or XHR requests where possible. For example, if the only difference on a login page is a few hundred bytes of text containing some user-specific information, serve the page statically, and use script to asynchronously fetch the small bits that need to get filled in instead of dynamically generating the entire page.
If a page seems particularly slow or performance is choppy once it has loaded, use the built-in Firebug profiler to get a better idea of where time is being spent in your JavaScript logic and consider optimizing the execution of the culprit functions.
Although it may not be initially obvious, if you opt to create and use an XDomain build for your application, you potentially gain a number of benefits:
You’ll be able to host Dojo on a dedicated machine and share it amongst various applications—whether or not they are on the same domain in your network.
The
dojo.require
statements that happen when the page loads are satisfied
asynchronously instead of synchronously (the case for a
default build), which can improve page load times since the
requests are nonblocking.
Some browsers, such as IE, limit you to two open connections per subdomain by default, so using an XDomain build essentially doubles the number of potential connections for your application—two for Dojo and two for everything else in the local domain.
If you serve multiple applications that all use the XDomain build, the overall HTTP latency your clients endure is likely decreased, as the overall amount of content that their browsers can cache locally is increased.
As a final word of caution, don’t prematurely optimize your application; when you do optimize it, never do so blindly based on guessing games. Always use demonstrable information such as profiling information or server logs to your advantage. Particularly with respect to optimization, our instincts can often be deceived. And remember: Firebug is your friend.
After reading this chapter, you should:
Be able to use Dojo’s build tools to create consolidated, compressed layers for your web application
Be familiar with some of the most common options for creating a custom build
Be aware that dojo.js generally remains in its own separate JavaScript file; it is not rolled up into a custom layer
Be able to use DOH to write unit tests for JavaScript functions
Be more familiar with Rhino and understand the role it plays in the build tools and with DOH
Be aware that while ShrinkSafe and DOH are important parts of the toolkit, they aren’t Dojo-specific, and you may be able to use them in other venues
Be aware of some of the low-hanging fruit you can go after when it comes time to maximize performance for your web application
No credit card required | https://www.oreilly.com/library/view/dojo-the-definitive/9780596516482/ch16.html | CC-MAIN-2019-30 | refinedweb | 4,926 | 53.31 |
This post will provide an example of a logistic regression analysis in Python. Logistic regression is commonly used when the dependent variable is categorical.
Our goal will be to predict the gender of an example based on the other variables in the model. Below are the steps we will take to achieve this.
- Data preparation
- Model development
- Model testing
- Model evaluation
Data Preparation
The dataset we will use is the ‘Survey of Labour and Income Dynamics’ (SLID) dataset available in the pydataset module in Python. This dataset contains basic data on labor and income along with some demographic information. The initial code that we need is below.
import pandas as pd import statsmodels.api as sm import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split from sklearn import metrics from pydataset import data
The code above loads all the modules and other tools we will need in this example. We now can load our data. In addition to loading the data, we will also look at the count and the characteristics of the variables. Below is the code.
At the top of this code, we create the ‘df’ object which contains our data from the “SLID”. Next, we used the .count() function to determine if there was any missing data and to see what variables were available. It appears that we have five variables and a lot of missing data as each variable has different amounts of data. Lastly, we used the .head() function to see what each variable contained. It appears that wages, education, and age are continuous variables well sex and language are categorical. The categorical variables will need to be converted to dummy variables as well.
The next thing we need to do is drop all the rows that are missing data since it is hard to create a model when data is missing. Below is the code and the output for this process.
In the code above, we used the .dropna() function to remove missing data. Then we used the .count() function to see how many rows remained. You can see that all the variables have the same number of rows which is important for model analysis. We will now make our dummy variables for sex and language in the code below.
Here is what we did,
- We used the .get_dummies function from pandas first on the sex variable. All this was stored in a new object called “dummy”
- We then combined the dummy and df datasets using the .concat() function. The axis =1 argument is for combing by column.
- We repeat steps 1 and 2 for the language variable
- Lastly, we used the .head() function to see the results
With this, we are ready to move to model development.
Model Development
The first thing we need to do is put all of the independent variables in one dataframe and the dependent variable in its own dataframe. Below is the code for this
X=df[['wages','education','age',"French","Other"]] y=df['Male']
Notice that we did not use every variable that was available. For the language variables, we only used “French” and “Other”. This is because when you make dummy variables you only need k-1 dummies created. Since the language variable had three categories we only need two dummy variables. Therefore, we excluded “English” because when “French” and “Other” are coded 0 it means that “English” is the characteristic of the example.
In addition, we only took “male” as our dependent variable because if “male” is set to 0 it means that example is female. We now need to create our train and test dataset. The code is below.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
We created four datasets
- train dataset with the independent variables
- train dataset with the dependent variable
- test dataset with the independent variables
- test dataset with the independent variable
The split is 70/30 with 70% being used for the training and 30% being used for testing. This is the purpose of the “test_size” argument. we used the train_test_split function to do this. We can now run our model and get the results. Below is the code.
Here is what we did
- We used the .Logit() function from statsmodel to create the logistic model. Notice we used only the training data.
- We then use the .fit() function to get the results and stored this in the result object.
- Lastly, we printed the results in the ‘result’ object using the .summary()
There are some problems with the results. The Pseudo R-square is infinity which is usually. Also, you may have some error output about hessian inversion in your output. For these reasons, we cannot trust the results but will continue for the sake of learning.
The coefficients are clear. Only wage, education, and age are significant. In order to determine the probability you have to take the coefficient from the model and use the .exp() function from numpy. Below are the results.
np.exp(.08) Out[107]: 1.0832870676749586 np.exp(-0.06) Out[108]: 0.9417645335842487 np.exp(-.01) Out[109]: 0.9900498337491681
For the first value, for every unit wages increaser the probability that they are male increase 8%. For every 1 unit increase in education there probability of the person being male decrease 6%. Lastly, for every one unit increase in age the probability of the person being male decrease by 1%. Notice that we subtract 1 from the outputs to find the actual probability.
We will now move to model testing
Model Testing
To do this we first test our model with the code below
y_pred=result.predict(X_test)
We made the result object earlier. Now we just use the .predict() function with the X_test data. Next, we need to flag examples that the model believes has a 60% chance or greater of being male. The code is below
y_pred_flag=y_pred>.6
This creates a boolean object with True and False as the output. Now we will make our confusion matrix as well as other metrics for classification.
The results speak for themselves. There are a lot of false positives if you look at the confusion matrix. In addition precision, recall, and f1 are all low. Hopefully, the coding should be clear the main point is to be sure to use the test set dependent dataset (y_test) with the flag data you made in the previous step.
We will not make the ROC curve. For a strong model, it should have a strong elbow shape while with a weak model it will be a diagonal straight line.
The first plot is of our data. The second plot is what a really bad model would look like. As you can see there is littte difference between the two. Again this is because of all the false positives we have in the model. The actual coding should be clear. fpr is the false positive rate, tpr is the true positive rate. The function is .roc_curve. Inside goes the predict vs actual test data.
Conclusion
This post provided a demonstration of the use of logistic regression in Python. It is necessary to follow the steps above but keep in mind that this was a demonstration and the results are dubious. | https://educationalresearchtechniques.com/2018/10/03/logistic-regression-in-python/ | CC-MAIN-2020-10 | refinedweb | 1,217 | 67.76 |
Overview
When you add or change the functionality of an asset, you don’t always want every existing instance of the asset to be upgraded. For example, the new version might work differently, and you don’t want finished shots to suddenly render differently. Or the new version might have a different interface that would break channel references in previous scene files.
The solution is to add a version to the asset. This lets previous versions of the asset co-exist with the current version.
When you name digital assets, there is a risk that someday Side Effects, or a subcontractor, or a third party vendor, will use the same name, causing a conflict.
You can guard against this by including a namespace in the name of the asset.
Namespaces and versioning were introduced in Houdini 12.
Versions
The version string allows you to create multiple independent versions of an asset without having to change the "main name" (that is, without having to have nodes named,
copy,
new_copy,
newer_copy, and so on). If Sasha wants to completely change the interface and/or implementation of her
copy asset, she can create an asset named
com.sade::copy::2.0. Instances of the old version will still work and use the old implementation, while users placing a new node will get the latest version.
The version can only contain numbers and periods (
.). For example,
myasset::2,
myasset::2.1,
myasset::19.1.3, but not
myasset::2a or
myasset::alpha.
When multiple definitions of an asset with the same name are available, Houdini will automatically make only the latest version available in the user interface.
Note
The operator type properties window for an asset contains a Version field. This field is left over from previous versions of Houdini where it functioned more as an annotation. It has no relation to the
::version part of the node type name.
Namespaces
The namespace identifier lets you name your assets without worrying about using the same name as a built-in Houdini node or as a third-party asset you might use someday. (Note that this only applies to the internal name of the node… you can always use any string you want for the human readable label that appears in the user interface.)
For example, Sasha’s Unbelievably Natural Discount Animation Emporium might produce a surface node for copying geometry, and name it
com.sundae::copy. This keeps it separate from the built-in
copy node, as well as Joe’s Geometry Hut’s
com.joesgeohut::copy node.
This is useful for creating assets you might want to distribute to other users. It is also useful to allow separate artists in the same facility to create their own assets without having to worry about name clashes when their assets are used together later.
A useful convention to ensure you use a unique namespace name is to reverse the DNS address of your website. For example, if Ada’s Houdini appreciation website is at
houdini.bacon.org, she would use
org.bacon.houdini as the namespace for her assets. You can use additional conventions, such as adding the name of the asset creator, for example
org.bacon.houdini.ada.
The parts of an asset name
The general form of an asset’s internal name is
[namespace::]node_name[::version].
The namespace and version are both optional. You can have a name with both a namespace and a version, or just a namespace, or just a version, or neither.
Some scripting commands require the node category and node name together (for example
Object/geo,
Sop/copy,
Dop/popsolver). To use namespaced names with these commands, use the form
[namespace::]node_category/node_name[::version]. For example,
com.sundae::Sop/copy::2.0.
How to
Referencing node types in scripts
When you refer to node types in scripts, you can use a "fully qualified" name (for example
com.sundae::vines::2.0) to refer to a specific node and version exactly, or use an "ambiguous" reference (for example,
vines) and have Houdini infer which node you want. Ambiguous references are often useful when you always want to use the latest version of an asset, or want to allow the user to prefer another asset in a different namespace without having to change all your scripts.
In cases where an ambiguous name could refer to several nodes, Houdini uses the following rules to select one.
If you don’t specify a version number, Houdini selects the node with the highest version number. A node that was created without a "version" part on the end of its name is always considered to be the lowest version.
Houdini selects nodes that are limited to the current scope before nodes that are not scoped.
Houdini selects nodes created without a namespace before nodes that have a namespace.
Houdini always prefers names or namespaces listed in the
HOUDINI_OPNAMESPACE_HIERARCHYenvironment variable. You can set this variable to a space-separated list of namespace names and/or node names to control ambiguous name resolution. Namespaces appearing first in the list are used before namespaces later in the list, or not in the list at all.
For example, if your
HOUDINI_OPNAMESPACE_HIERARCHYcontains…
com.sundae org.bacon.houdini
Then if Houdini knows about assets named
vines,
org.bacon.houdini::vines, and
com.sundae::vines, and you use the command…
opadd vines
…Houdini will use
com.sundae::vinesbecause it is earliest in the
HOUDINI_OPNAMESPACE_HIERARCHYlist.
You can add a fully-qualified name to the list to make it take precedence even over a later version. For example, adding
com.sundae::vines::1.0to the list would make Houdini use that for an ambiguous
vinesreference, even if
com.sundae::vines::2.0is available.
You can use wildcards (
*and
?) in the names in the
HOUDINI_OPNAMESPACE_HIERARCHYlist.
Tip
To unambiguously refer to a node without a namespace part in its name, use
::node_name. For example,
::copy. To unambiguously refer to a node without a version part in its name, use
node_name::. For example,
copy::.
Some scripting commands/methods have an explicit option to disable namespace resolution and
HOUDINI_OPNAMESPACE_HIERARCHY. For example, opadd has a
-e option to require an exact operator name. Similarly, hou.Node.createNode() has an
exact_type_name argument.
Subnet scoping
You can specify in an asset’s name that it is only valid within a certain type of sub-network, using the form
[Node_category/node_name::]node_spec. For example, if you have an asset
com.example::mysop, you can specify that it is only valid within a SOP Solver DOP by naming it
Dop/sopsolver::com.example::mysop. This would prevent users from adding the asset inside a Geometry object (
Object/geo) network.
This may be useful in reducing clutter in the ⇥ Tab menu by restricting the contexts in which a very specialized asset appear. | https://www.sidefx.com/docs/houdini/assets/namespaces | CC-MAIN-2021-21 | refinedweb | 1,128 | 55.95 |
Quoting Aristeu Rozanski (aris@ruivo.org):> Tejun,> On Thu, Sep 13, 2012 at 01:58:27PM -0700, Tejun Heo wrote:> >?> > if Serge is not planning to do it already, I can take a look in device_cgroup.That's fine with me, thanks.> also, heard about the desire of having a device namespace instead with> support for translation ("sda" -> "sdf"). If anyone see immediate use for> this please let me know.Before going down this road, I'd like to discuss this with at least you,me, and Eric Biederman (cc:d) as to how it relates to a device namespace.thanks,-serge | http://lkml.org/lkml/2012/9/14/648 | CC-MAIN-2017-43 | refinedweb | 102 | 75.3 |
7.2.3 The product must also be oriented in such a manner that it can be clamped without having to rearrange the load. For example, if mattresses are oriented in such a fashion that the larger flat surface is facing the rear end of the truck, a clamp truck can’t move them without rearranging such that the larger flat surface faces the side walls of the trailer. 7.2.4 Arrange each tier to be uniform and aligned relative to the other tiers on the footprint so that all four sides of the freight stack can be safely ‘squeezed’ by a clamp. Tier heights may vary. 7.2.5 Use spacers between stacks to prevent them from shifting during transit. All shipments must be properly secured using load bars/straps. It is the shipper’s and the carrier’s responsibility to ensure that shipments are loaded into a trailer in a balanced manner that prevents the load from shifting during transit or unloading. 8 Safety and Quality Requirements 8.1 Trailer/Shipment Safety and Loading Requirements 8.1.1 Due to safety concerns, the use of trailers with uneven or corrugated floors such as those in refrigerated trailers are highly discouraged. In the event that product must be shipped in a climate controlled trailer, product must be palletized. Nonpalletized (floor loaded) product that arrives to Amazon FCs on a trailer with uneven or corrugated floors will be refused. 8.1.2 A trailer, shipment or portion of a shipment is subject to refusal at the FC if FC associates are unable to safely unload product from the trailer or to verify the contents of a shipment. Common reasons for freight refusals include, but are not limited to: 8.1.2.1 Pallets shifting in transit. 8.1.2.2 Pallets/product stacked in a manner that prevents the FC from safely unloading the product. 8.1.2.3 Over-sized floor loaded product that exceeds 100 lbs (mech lift) and cannot be unloaded by a clamp. 8.1.3 If there are multiple pallets for the same PO, all pallets of the same PO must be loaded together throughout the trailer, provided all overweight axle guidelines are met. 8.1.4 Under all circumstances, shipments must be loaded in a manner that is balanced and that prevents the load from shifting. 8.2 Quality Assurance In an effort to help our vendors meet operational expectations, Amazon collects and reviews vendor operational performance data on a continual basis. We use this data to identify and address noncompliance in vendor operations. Depending on the severity of noncompliance, Amazon may initiate communication with vendors in a number of ways to help bring awareness and a resolution to the situation. Vendors may receive a one-time contact regarding an isolated incident or may have ongoing communication with an Amazon representative in order to rectify consistent problems. Amazon will often share data in order to educate vendors on operational issues. Whenever necessary, Amazon may return merchandise at vendors’ expense and/or assess charges to vendors to offset expenses incurred as a result of vendor non-compliance with operational standards. More information on vendor chargebacks is available in the Infraction Management section of HELP inside Vendor Central. To help ensure continuous levels of quality, it is necessary to communicate to your Retail Representative well in advance of any circumstances that may compromise or interrupt service, such as system changes or facility closures. © 2016, Amazon.com, Inc. or its affiliates. All rights reserved. 22
9 Returns 9.1 Returns of items that were received and met the requirements of fulfilling a purchase order (see Section 1) will be subject to the terms agreed upon by the vendor and Amazon Retail Representative. 9.2 All deliveries to Amazon that do not meet the requirements of fulfilling a purchase order (e.g. overages, damaged product, wrong delivery location) may be rejected or returned to the vendor at the FC or Retail Representative’s discretion, and at the vendor’s expense (e.g. charge-back or damage allowance). These returns are not subject to the agreed upon terms of returns, as they are considered to be caused by vendor non-compliance. 9.3 Vendors must attempt to find resolution prior to refusing any returned items by opening a ‘Contact Us’ case in Vendor Central. If the vendor believes they were incorrectly billed for a return (Shortage, Rejection, Pricing, etc.) they can submit a dispute using the same ‘Contact Us’ case feature present at the top of all pages on Vendor Central. When submitting your dispute using a ‘Contact Us’ case, please use the Accounting a Support Topic and one of the following Specific Issues: 9.3.1 Vendor Returns (VRET) - Request Proof of Delivery (POD)/Back up detail/Inquiry This specific issue must be selected when you require more details about your return that you were unable to find using the Vendor Returns detail search located under the PAYMENTS tab of Vendor Central. 9.3.2 Vendor Returns (VRET) – Dispute This selection must only be used if you wish to request repayment or a reduction in your balance owed for a Return that was billed to you. When submitting a dispute, please ensure you include a completed copy of the Returns Discrepancy Form located under the Operations section of the Resource Center. If your dispute is for rejected product, and is found valid by Amazon, you may be requested to send the rejected product back to Amazon. Please do not send rejected returns to Amazon's billing address. If you have been asked to send the rejected returns to Amazon, and you do not have the address of the proper Amazon warehouse, please request it within your dispute ‘Contact Us’ case. 9.4 The Amazon Returns Shipment ID, located on the returns packing slip, must be included with the vendor’s credit memo. Please note that you do not need to submit a credit memo to Amazon for returns unless your account is not setup to deduct from payment. In most cases a credit memo must not be sent. If you are uncertain of your account setup, please ask your Retail Representative. 9.5 Please ensure that you keep your Vendor Return Address information in Vendor Central or Advantage website (‘Return Addresses’ Section under ‘Account Settings’ heading) up to date. 9.6 Return Merchandise Authorization (RMA) 9.6.1 RMA enables the vendor to query and authorize their own returns in Vendor Central or Advantage website. For the removals, requiring authorization, an auto email is sent to the vendor (through Vendor Central or Advantage website) notifying them they need to take some action on the removal. 9.6.2 Vendors can review and authorize the removals by going to the Returns Section of the ORDERS tab in Vendor Central or Advantage website. Vendors can authorize the entire return or approve the individual items. 9.6.3 The vendor must take action on the returns within 2 weeks. In the case if those remain untouched for 1 week, an escalation email is sent to the vendor and the respective Retail Representative. In the case if those returns remain untouched for 2 weeks, then system may automatically consider those as authorized, confirm them and send to vendor. 9.6.4 When unit needs to be returned across border (US to CA or CA to US), we need broker name, email, and phone number as a mandatory information to process vendor returns. 10 International Shipments Amazon has arranged with few vendors to provide products directly from overseas, which require the engagement of international freight forwarding, international transportation, US customs brokerage, and other services not otherwise required for domestic shipments. This includes vendors whose physical address may be in the United States but has arranged a transaction with Amazon.com under INCOTERMS FOB or FCA (Foreign port of lading). Section 2 & 3 of this manual regarding packing & labeling inventory are applicable to import shipments as well. Please see the Direct import vendor workflow, located in the Help of Vendor Central, for further information regarding International shipments. For direct imports via small parcel and LTL/TL between US, CA and MX, commercial invoices must be provided for all shipments. Commercial invoices can be generated via Vendor Central during both the ASN and Routing Request submission process. To learn more, please search Vendor Central help for “commercial invoice.” If you are unable to create a commercial invoice in Vendor Central, please ensure that a commercial invoice is completed according to the requirements found in the Trade Compliance section of the Import Vendor Workflow. Documentation of © 2016, Amazon.com, Inc. or its affiliates. All rights reserved. 23 | https://www.yumpu.com/en/document/view/59996244/amazon-prep-amp-trans-manual/23 | CC-MAIN-2018-26 | refinedweb | 1,452 | 51.68 |
Compares substrings of two specified string objects, int, string, int, int,#2
Compare the path name to "file" using an ordinal comparison. The correct code to do this is as follows:
code reference: System.String.Compare#3
The following example demonstrates comparing substrings.
C# Example
using System; public class StringCompareExample { public static void Main() { string strA = "A string"; string strB = "B ring"; int first = String.Compare( strA, 4, strB, 2, 3 ); int second = String.Compare( strA, 3, strB, 3, 3 ); Console.WriteLine( "When the substring 'rin' of 'A string' is compared to the substring 'rin' of 'B ring', the return value is {0}.", first ); Console.WriteLine( "When the substring 'tri' of 'A string' is compared to the substring 'ing' of 'B ring', the return value is {0}.", second ); } }
The output isWhen the substring 'rin' of 'A string' is compared to the substring 'rin' of 'B ring', the return value is 0. | http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.String.Compare(System.String%2CSystem.Int32%2CSystem.String%2CSystem.Int32%2CSystem.Int32) | CC-MAIN-2020-05 | refinedweb | 151 | 63.8 |
Interactive Line Graph iOS chart library
ILG
I was tasked with building something akin to Robinhood's graph and while there are several thousand iOS chart frameworks I decided it would be more fun to roll my own and less tedious than modifying someone else's. I tried my best to keep it simple, while still allowing for enough customization for it to remain potentially useful for others.
Disclaimer
I built this for work and while it meets my job's requirements, many areas haven't been fleshed out. I like to think that I'll get back to it but I am very good at putting things on my get-back-to-it shelf.
Things to be aware of (or fix/add if you're feeling communal):
- Not sure where the grid is at, probably still works?
- There was a gradient beneath the line at one point but it broke and I haven't bothered to fix it.
- Line and dot animations leave something to be desired.
Things I do plan on working on:
- GraphViewInteractionDelegate could be fancier/a little more helpful.
- Naming and other general housekeeping.
- Documentation.
- Testing!
Requirements
- Swift 4.2
- iOS 10.0+
Installation
CocoaPods ☕️
You can use CocoaPods to install
ILG by adding it to your
Podfile:
pod 'ILG'
Usage
Don't forget to import
ILG:
import ILG
Create an instance of
InteractiveLineGraphView and add it to your view hierarchy however you would like:
let graphView = InteractiveLineGraphView()
Then call
graphView.update(...) and you're off to the races.
Properties
There are a number of public properties you'll find in
InteractiveLineGraphView.swift, most of them are self-explanatory but here are a few that may not be:
lineMinY and
lineMaxY will force set the lower and upper y-axis limits, if nil then the
.min() or
.max() of your data will be used.
interactionDetailCard is the floating card. It's entirely optional, simply assign it any UIView and it will do the rest. If you do use it and would like to update it be sure to keep a reference to your card so you can update it in the
GraphViewInteractionDelegate callback (maybe in the future I'll have fancier protocols).
Protocols
GraphViewInteractionDelegate will relay all interaction information back to you. And when I say "all" I mean it will just tell you when the highlighted index has changed. Spicing it up a little wouldn't be hard, and I would like to in the future but for now it is what it is. | https://iosexample.com/interactive-line-graph-ios-chart-library/ | CC-MAIN-2020-16 | refinedweb | 419 | 70.13 |
Balanced prime number is a prime number that has the same difference for its previous and next prime numbers. i.e. it is the mean of the nearest next prime and previous prime.
For a prime number to be a balanced prime, it should follow the following formula −
Pn = (P(n-1) + P(n+1)) / 2
Where n is the index of the prime number pn in the ordered set of a prime number.
The ordered set of prime numbers: 2, 3, 5, 7, 11, 13,….
First, balanced primes are 5, 53, 157, 173 , …
In this problem, we are given a number n and we have to find an nth balanced prime number.
Let’s take an example,
Input : n = 3 Output : 157
For this will generate prime numbers and store it in an array. We will find whether the prime number is a balanced prime or not. If it increases the count and if the count is equal to n, print it.
#include<bits/stdc++.h> #define MAX 501 using namespace std; int balancedprimenumber(int n){ bool prime[MAX+1]; memset(prime, true, sizeof(prime)); for (int p = 2; p*p <= MAX; p++){ if (prime[p] == true) { for (int i = p*2; i <= MAX; i += p) prime[i] = false; } } vector<int> v; for (int p = 3; p <= MAX; p += 2) if (prime[p]) v.push_back(p); int count = 0; for (int i = 1; i < v.size(); i++){ if (v[i] == (v[i+1] + v[i - 1])/2) count++; if (count == n) return v[i]; } } int main(){ int n = 3; cout<<balancedprimenumber(n)<<endl; return 0; }
157 | https://www.tutorialspoint.com/balanced-prime-in-cplusplus | CC-MAIN-2021-49 | refinedweb | 267 | 75.13 |
This is the first article in a series to implement a neural network from scratch. We will set things up in terms of software to install, knowledge we need, and some code to serve as backbone for the remainder of the series.
Follow me on Twitter, where I write about Python, APL, and maths.
In this short series I will be guiding you on how to implement a neural network from scratch, so that you really understand how they work.
By the time we are done, your network will be able to read images of handwritten digits and identify them correctly, among other things.
Your network will receive an image like
and will output a
7.
Not only that, but we will also do a bunch of other cool things with our networks.
It is incredible that nowadays you can just install tensorflow or pytorch or any other machine learning framework, and with just a couple of lines you can train a neural network on some task you pick. The downside to using those libraries is that they teach you little about how the models actually work, and one of the best ways to understand how something works is by dissecting it, studying it and assembling it yourself.
When it comes to programming, I will assume you are comfortable with the basic concepts of Python. I will not be using features that are too advanced, but every now and then I might use modules from the Python Standard Library. If you don't know those modules, that's fine. Reading the description from the documentation should be enough to bring you up to speed! I also have a series of blog articles to help you write better Python code, so take a look at that if you feel the need.
Throughout the series I will be assuming you are familiar with the general idea of how neural networks work. If you have no idea what a neural network is, it is with great pleasure that I recommend you watch 3blue1brown's video(s).
Being familiar with matrix algebra and a bit of calculus (knowing what derivatives are and understanding them) will make it easier for you to follow along, but you don't really need to be a master of these subjects in order to implement a neural net: the first time I implemented a neural network I didn't know matrix algebra nor derivatives. I will be giving intuitive explanations of what is going on at each step, and then proceed to justify them with the calculations needed. Feel free to skip the formal calculations if you are in a rush, but bear in mind that going through those is one of the things that really helps the knowledge sink in.
In this article we will be setting up for the remainder of the series. First, I will be covering what needs to be installed. Then we will be taking a look at some of the more basic things we can already implement: we will be modelling and initialising layers, the backbone of any neural network.
The code for this article, and for the future articles of the series, can be found in this GitHub repository.
In order to follow along you will need to have Python 3 installed (I will be running all my code in Python 3.9, which is my main installation at the time of writing this article) and you also need to have NumPy installed.
NumPy is a Python library for numerical computations and we will be using thoroughly
for its matrix algebra capabilities:
turns out it is much easier to implement a neural network if you do most of your
operations in bulk, instead of doing everything by hand with
for loops and whatnot.
(Which is what I did the first time, and let me tell you it was not pretty...)
Installing NumPy should be effortless with
python -m pip install numpy
but check out their instructions if you need help with the installation.
Every now and then, you might trip up on something that isn't working as intended... I know that can be frustrating, but don't forget that Google exists for this type of situation.
Whenever I am doing a project, I usually spend a bit of time thinking about the components I am going to need and how they will interact. Then I need to pick a place to start, and I tend to start some place where I will be producing code that works, so that I can actually see progress happening in terms of result, instead of just seeing the number of lines of code increase.
The first thing we could try to do is create a representation for a layer of a neural network and implement a way to initialise it, so that we have something palpable to play with right from the get-go. But what is a layer in a neural network?
A neural network can be thought of as a very basic emulation of the human brain. I am no neurologist, but from my understanding our neurons are connected to each other and they communicate with each other by sending signals, so in a way we can think of our neurons as entities that can be “on” or “off”. Not only that, but each neuron depends on some other neurons (the ones it is connected to) to know whether it should be “on” or “off”.
Our neural networks will be emulating that behaviour, but instead of having the neurons all mixed up, we will line them up neatly in what we call layers. In the figure below we show four layers, and the neurons of consecutive layers are connected by thin lines:
We will also determine that a neuron can only be connected to the neurons of the next layer and to the neurons of the previous layer, instead of allowing connections between arbitrary neurons. The only exceptions are the neurons of the leftmost layer, as those are the input neurons: they accept the data we give them, and the neurons of the rightmost layer, as those are the output neurons: they return some result.
In the example image we have ten output neurons labelled 0 through 9 and some number of input neurons (784 if you watch the video).
The “on” and “off” state of each neuron will actually be a floating point number and the neurons of the input layer will be receiving their states as the input to the network. Then, the states of each following layer will be computed from the states of the previous layer. In order to compute the state of a single neuron, what you do is take the states of all the \(m\) neurons of the preceding layer:
\[ x_1, ~ x_2, ~ x_3, ~ \cdots, ~ x_m ~~~ ,\]
and then add them up, after weighting them:
\[ w_1x_1 + w_2x_2 + w_3x_3 + \cdots + w_mx_m ~~~ .\]
Think of this as a vote, in the sense that each neuron of the previous layer is casting their vote (\(x_j\)) about whether or not this neuron should be “on”, but then you care more about what some neurons say and less about what other neurons say, so you give a weight (\(w_j\)) to each "opinion".
If we colour the connections depending on the weight they carry, we can have an image like the one below:
Then, you take into account your opinion about the vote, your own bias (\(b\)), and add that into the mix:
\[ b + w_1x_1 + w_2x_2 + w_3x_3 + \cdots + w_mx_m ~~~ .\]
The final step is to take the result of that vote and plug it in a function, that we call the activation function, that processes the result of the vote and computes the state the neuron will be in. If we are using an activation function named \(f\), then the state of this neuron would be
\[ f(b + w_1x_1 + w_2x_2 + w_3x_3 + \cdots + w_mx_m) ~~~ .\]
This is what we will be doing, but instead of computing all of these sums by hand for each neuron, we will be using matrix algebra to represent things in bulk:
In doing so, we will have a weight matrix \(W\) and a bias vector \(b\) that models the connections between two consecutive sets of neurons. If the preceding set of neurons has \(m\) neurons and the next set of neurons has \(n\) neurons, then we can have \(W\) be a \(n \times m\) matrix, and \(b\) be a column vector of length \(n\), and then \(W\) and \(b\) can be interpreted as follows:
Then, if the preceding layer produced a vector \(x\), and if \(F\) is the activation function being used, we can compute the next set of states by performing the calculation
\[ F(Wx + b) ~~~,\]
where \(Wx\) is a matrix multiplication. If you look at each element of the column vector \(Wx\), you will see the multiplications and addition I showed above for a single neuron, except now we are doing them in bulk. The \(+ b\) sums the bias, and then we apply the activation function \(F\) to everything at the same time.
This process by which we take the states of the previous neurons and compute the states of the next set of neurons is often dubbed the forward pass.
Therefore we conclude that we need to be able to create these matrices of weights and these bias vectors in order to model a layer in a neural network, so let us get that off the way.
In order to create the weight matrices and the bias vectors we just need to know their dimensions, so we can define two functions (one for the weight matrices and another for the bias vectors) and take the dimensions as arguments:
import numpy as np def create_weight_matrix(nrows, ncols): return np.zeros((nrows, ncols)) def create_bias_vector(length): return np.zeros((length, 1))
In the code above you can see that I created a column vector as a matrix with only one column. You could create the bias vector as a plain vector, but I did it like so in order to keep the implementation as close as possible to the mathematical calculations we will be doing later, and it helps to have everything consistent.
You can go ahead and create a weight matrix:
>>> create_weight_matrix(2, 4) array([[0., 0., 0., 0.], [0., 0., 0., 0.]])
This is not extremely interesting, and also not very helpful when dealing with neural networks. In general, these weight matrices and bias vectors are initialised with small, random values. When you are more comfortable around these topics, you can look for scientific papers on what are the best practices for deciding what type of randomness you want to use and how small the numbers should be, but I will be doing something that has worked well for me and is fairly sensible: we will sample the weights and the bias, so that they are drawn from a normal distribution with mean 0 and a standard deviation that decreases as the size of the matrix/vector increases.
Thankfully, in NumPy you can generate normally distributed random numbers with ease:
import numpy as np def create_weight_matrix(nrows, ncols): return np.random.default_rng().normal(loc=0, scale=1/(nrows*ncols), size=(nrows, ncols))
Now let us check how a random matrix would look like:
>>> create_weight_matrix(2, 4) array([[-0.09040462, -0.00485385, -0.08010303, 0.29271109], [ 0.12528501, -0.08750708, -0.26472923, 0.13410931]])
For the bias vectors, we can be a little bit lazy and recall that a column vector is just a matrix with a single column:
def create_bias_vector(length): return create_weight_matrix(length, 1)
Now that we have the weights and the bias that model the connections between different layers, we can actually see what it looks like to give some input to our network.
Imagine we have 4 numbers as input, the column vector \((1, 0, 0, 0)^T\), and let us assume that the next layer will have only 2 neurons. If that is the case, then our weight matrix should have 2 rows (one for each neuron we want to update the state of) and 4 columns (one for each neuron that is contributing to the update of the states):
>>> W = create_weight_matrix(2, 4) >>> b = create_bias_vector(2) >>> x = np.array((1, 0, 0, 0), ndmin=2).T >>> x array([[1], [0], [0], [0]]) >>> np.dot(W, x) + b array([[-0.67662439], [ 0.18410246]]))
The only thing missing is the activation function. We will start with a simple activation function called Leaky Rectified Linear Unit, or Leaky ReLU for short. Code speaks louder than words, so here's the implementation:
def leaky_relu(x, leaky_param=0.1): return np.maximum(x, x*leaky_param)
If a number is positive, the Leaky ReLU does nothing and if it is negative, it multiplies it by the parameter of the function:
>>> np.dot(W, x) + b array([[-0.67662439], [ 0.18410246]]) >>> leaky_relu(np.dot(W, x) + b) array([[-0.06766244], [ 0.18410246]])
We now have some of the building blocks in order to get a layer working, so another simple thing we could do is actually put them together.
The implementation becomes simple to follow and to understand if we define
a custom class for an object that we will call
Layer and that will
represent the connections and the flow of information between a column
of neurons and the next:
that is, instead of having the layer specifically represent the neurons
of each vertical column, we deal with what happens in between two columns
of neurons.
If we represent our data in this way, then we do not need to special case the input layer or the output layer in any way, and that simplifies the coding process a lot.
When we create a layer (of connections), we need to tell it three things:
With that information we can already initialise a layer properly:
class Layer: def __init__(self, ins, outs, act_function): self.ins = ins self.outs = outs self.act_function = act_function self._W = create_weight_matrix(self.outs, self.ins) self._b = create_bias_vector(self.outs)
We can also write a little helper method that computes the forward pass in our layer, a method that takes a set of neuron states and computes the next set of neuron states:
class Layer: # ... def forward_pass(self, x): return self.act_function(np.dot(self._W, x) + self._b)
This is exactly the same as the
leaky_relu(np.dot(W, x) + b) line we had above
in the REPL, but this time we are writing it in the general setting of an
arbitrary layer.
We can also include a little demo in our code:
if __name__ == "__main__": """Demo of chaining layers with compatible shapes.""" l1 = Layer(2, 4, leaky_relu) l2 = Layer(4, 4, leaky_relu) l3 = Layer(4, 1, leaky_relu) x = np.random.uniform(size=(2, 1)) output = l3.forward_pass(l2.forward_pass(l1.forward_pass(x))) print(output)
Try running your script and see what output you get. One of the times I ran it, I got
> python nn.py [[-0.07763001]]
Now that we have the backbone for the neural network in place, we can take a look at what we have coded so far and insert the docstring comments we forgot to add because we were in such a rush to get something working. My docstring-writing skills aren't very sharp, but perhaps what I wrote can serve as inspiration for you.
You can find all the code in this GitHub repository and, in particular, the code from this specific article is available under the v0.1 tag.
Currently we have a file that spans for 39 lines. It doesn't look like much, but on the other hand we have advanced quite a lot.
In the next article we will be aggregating layers in a class for a neural network and we will be looking at how neural networks learn.. | https://mathspp.com/blog/neural-networks-fundamentals-with-python-intro | CC-MAIN-2021-39 | refinedweb | 2,643 | 62.72 |
Let me put up my requirement straight away
*I want The vector to be updated rather than creating a new vector
*Will the complexity (running time) be better?
say for eg:
now all i wanted to do is to update the vector after a few steps...now all i wanted to do is to update the vector after a few steps...Code:
#include<iostream>
#include<vector>
using namespace std;
int main()
{
int i,j=0;
vector < vector<long long int> > g(100);
g[1].push_back(2);
g[2].push_back(3);
return(0);
}
i want it to look like a new emplty vector with
ofcourse i can do it by creating a new vector with the second associations alone....ofcourse i can do it by creating a new vector with the second associations alone....Code:
2 associated with 1
3 associated with 2...ie just the reverse of the previous one...i want the previous associations to be removed
but i dont need that...I want the original vector to be updated with minimal complexity | http://cboard.cprogramming.com/cplusplus-programming/115992-vector-updation-printable-thread.html | CC-MAIN-2015-35 | refinedweb | 174 | 55.84 |
Hi there,
I know there are several posts about this and I browsed and read a lot in the net but I can not find the point where I am mistaken.
I'm doing a transmission from a PC program to an AVR which is using CRC. Don't get me wrong: it is working. I got the code from some example and the other part was done by someone else. But as I'm going to need this on some more platforms and occasions and in possibly more modes, I want to fully understand this. The PC part uses a table based on the 0x1021 polynome it starts like this :,
and if I'm not mistaken these are the pre-calculated values for each value a byte can have. (Right?) It is widely used and seems correct.
Now on the AVR there is this code :
unsigned int calcrc(char *ptr, int count) { unsigned long int crc; char i; crc = 0; while (--count >= 0) { crc = crc ^ (unsigned int) *ptr++ << 8; i = 8; do { if (crc & 0x8000) crc = crc << 1 ^ 0x1021; else crc = crc << 1; } while(--i); } return (unsigned int)(crc&0x0000FFFF); }
If I now try to verify the values from 0 to 15 everything is ok, but when I try the value for 16 the table says 0x1231 while I'm calculating (manually following the code) 0x0210.
I'm just confused where I'm thinking wrong.
When I have the 16 << 8 it is 0x1000. Now three shifts as there is no 1 on MSB. Then there is the one , so one shift and XORING the 0x1021. Then another 4 shifts as we have the count from 8 to 0. There we are with 0x0210. Or not ?
Surely you can single step this in the simulator and see exactly how the calculation is done at every step?
Make the variables "volatile" if you want to "watch" them at each step.
Top
- Log in or register to post comments
There are two ways to calculate a CRC. Table driven and looped. The table you show is for table driven, the code you show is looped. Table driven is much faster, probably over ten times faster than looped code, loop code is small but table driven needs a 512 byte table. As for how a CRC works, there are a few explanations on the web.
The polynomial does not get shifted! You get 8 bits of dats, add that into the current crc value, then loop for the 8 bits Xoring with the poly ifthe msb is set then shift the next bit. I think there is at least one pictorial explanation of crc on the web. Failing that, get out your pencil and paper. I've done it this way many times.
Top
- Log in or register to post comments
thank you for your answers
excuse me, but didn't I describe just that ? I did it pencil and paper ... could you do this easy example to show me where I'm wrong ? (Believe me, I read through a lot of sites) and as the code seems to be running ok, I seem to make a mistake when thinking through it.
Top
- Log in or register to post comments
You showed the polynomial as being shifted!
Top
- Log in or register to post comments
Pencil and paper are prone to error (and misunderstanding of C promotion/truncation rules). Have you stepped the code in a simulator? What happened there?
Top
- Log in or register to post comments
@Kartman
if you're XORING the polynomial into the CRC - then of course this value will be shifted if there are zeros - try it.
@clawson
just on it - the result is ok - as expected, now looking deeper
- found it : I counted wrong, in step 7 there is a '1' on 0x8000 position, so that in the next step the polynomial gets xored in again. and then I get 0x1231
thank you and sorry for bothering, sometimes it helps to whine and try to explain it :D
Top
- Log in or register to post comments
Loop 1: Value = 0x1000, So shift left, Value = 0x2000
Loop 2: Value = 0x2000, So shift left, Value = 0x4000
Loop 3: Value = 0x4000, So Shift left, Value = 0x8000
Loop 4: Value = 0x8000, So shift and XOR, Value = 0x1021
Loop 5: Value = 0x1021, so Shift left, Value = 0x2042
Loop 6: Value = 0x2042, so shift left, Value = 0x4084
Loop 7: Value = 0x4084, so shift left, Value = 0x8108
Loop 8: Value = 0x8108, so shift and XOR, Value = 0x1231
Top
- Log in or register to post comments
cross - posted, thank you Joeks just updated the posting above
Top
- Log in or register to post comments | https://www.avrfreaks.net/comment/802562 | CC-MAIN-2020-50 | refinedweb | 780 | 75.95 |
Last updated:04/20/15 08:30 am EDT
Before You Begin
Purpose
The purpose of this guide is an introduction to creating a Grizzly/Jersey REST web service that can be stored and distributed in a single JAR file.
Time to Complete
45 minutes
Background
Typically, when a developer thinks of creating a RESTful web service using Java, they assume that using a Java EE application server is the only way to create this type of application. However, there are simpler lightweight alternative methods for creating RESTful applications using Java SE. This tutorial demonstrates one such alternative using the Grizzly Web server along with the Jersey REST framework.
This tutorial covers the basics of creating a RESTful Grizzly/Jersey application with Maven. The application includes a simple data model to demonstrate how actual REST requests are processed. You create a web service class with annotations that converts your methods into callable REST URLs. Finally, you package your Java REST application into a single portable JAR file which can be executed pretty much anywhere.
Grizzly Web Server
Project Grizzly is a pure Java web service built using the NIO API. Grizzly's main use case is the web server component for the GlassFish application server. However, Grizzly’s goal is to help developers do much more that. With Grizzly you can build scalable and robust servers using NIO as well as offering extended framework components including Web Framework (HTTP/S), WebSocket, Comet, and more!
Jersey, REST and JAX-RS
Jersey RESTful Web Services framework is an open source, production quality framework for developing RESTful Web Services in Java. Jersey provides support for JAX-RS APIs and serves as a JAX-RS Reference Implementation. Jersey helps to expose your data in a variety of representation media types and abstracts away the low-level details of the client-server communication. Jersey simplifies the development of RESTful Web services and their clients in Java in a standard and portable way.
Representational State Transfer (REST) is a software architecture style for creating scalable web services. REST is a simpler alternative to SOAP and WSDL-based Web services and has achieved a great deal of popularity. RESTful systems communicate using the Hypertext Transfer Protocol (HTTP) using the same verbs (GET, POST, PUT, DELETE, etc.) that web browsers use to retrieve web pages and send data to remote servers.
The Java API for RESTful Web Services (JAX-RS) provides portable APIs for developing, exposing and accessing Web applications designed and implemented in compliance with principles of REST architectural style. The latest release of JAX-RS is version 2.0. The Jersey framework is the reference implementation of JAX-RS.
Scenario
The RESTful web service built in this tutorial is the start of a REST application for managing customer data. For this example, customer data is stored in a list. You will create a web method to return all the customers stored in the list and a method to search for a customer by ID. To keep things simple, all data is returned in a plain text format.
What Do You Need?
To follow along with this OBE, you must have the following software installed on your system.
- Maven 3.3.1 or later --> Download from
- Java 8 u45 or later --> Download from
- NetBeans 8.0.2 --> Download from
Installation Notes
Here are some tips related to the tools used for this OBE.
- Running Maven the first time: The first time you run Maven it will download all the required software needed for the current project. So the first time you compile a project, it may take a few minutes. Also by implication, you need a live Internet connection the first time you set up a project.
- Screenshots are taken from a Windows system, but the steps outlined in this tutorial should work with minimal modifications on Linux or Mac OS X.
Creating the Maven Project
The first step in the process of creating a Grizzly/Jersey application is to create a Maven project. An archetype exists for this type of application. To create the project follow these steps:
- Navigate to the directory where you want to create a new Grizzly/Jersey project. For example,
c:\examples
- Execute the following command to create an empty project:: If you are using a proxy server, see the following page on configuring Maven to use a proxy server.
Note: You can change the various options in the command. Here is a brief breakdown of the options.
- archetypeArtifactId: Specifies the kind of project to create.
- archetypeGroupId: Specifies which group this archetype is defined in.
- groupId: Used to uniquely identify the project. Should be based on your domain name. Typically matches your package name.
- artifactId: The name of your project. This also specifies what the project archive files (e.g., jar or war) will be named.
jersey-servicedirectory with the project files included within.
The structure of the project looks like this.
Note: You can open the project in NetBeans (File -> Open project) or in Eclipse (File -> Import -> Maven-> Existing Maven Project) but Maven must be setup correctly in your IDE.
Examining
pom.xml
When you created the main project, you also created the main configuration file,
pom.xml. This file specifies dependencies and versions required to build the project. Below is the file contents in 3 parts.
The first part of the file contains much of the project metadata you specified on the command line.
pom.xml
Note the groupId and artifactId are specified here. The
nametag identifies the name of the project. The artifactId is used to create the jar or war file name along with the text in the
versiontag. For example, this project will create a jar file named
jersey-service-1.0-SNAPSHOT.jar.
The second part of the file contains the code dependencies required for the project.
pom.xml
A couple of comments on the
dependenciessection:
- The
dependencyManagementtag contains libraries that other dependencies depend upon. In this case, the Jersey REST library is specified.
- The first dependency provides containers for loading Jersey classes in the Grizzly web server.
- The second dependency specifies the JUnit library for unit testing.
Here is the third part of the file.
pom.xml
The
build section of the file contains information about plugins required for the application.
- The
maven-compiler-plugincompiles code for the project. The default Java version is 1.7. Note: Change this value to 1.8 so Lambda expressions can be used in the application. Then save the file.
- The
exec-maven-pluginspecifies how to execute the application. Notice the reference to the
Mainclass use to execute the application.
- Note the
propertiestag near the end of the file allows the specification of application wide properties.
Examining the Source files
Two source files are created for this archetype. Both files are shown below with comments.
The
Mainclass starts the application execution by setting up the web server. Any Jersey annotated classes found in the
com.example.restpackage are loaded.; public class Main { // Base URI the Grizzly HTTP server will listen on public static final String BASE_URI = "";.
- The
BASE_URIfield specifies where the REST method you define will be appended. In this case, any methods created will be listed under
myapp.
- The
ResourceConfigclass specifies which package contains annotated Jersey classes to be loaded.
- In the
mainmethod the HTTP server is started and run until the enter key is pressed.
The
MyResourceclass defines the methods for a web service.
MyResource.java
package com.example.rest; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; /* Root resource (exposed at "myresource" path) */ @Path("myresource") public class MyResource { @GET @Produces(MediaType.TEXT_PLAIN) public String getIt() { return "Got it!"; } }
The class contains only a
getItmethod. When this method is called, a text message should be returned:
Got it!
The method is called when a
GETmethod is made against the.
Running the Web Service
With the project setup, Maven can now be used to run the application and start the web server. Here are the steps to make that happen.
- Open a
Terminalor
Command Promptwindow.
- Change into the project directory:
cd C:\examples\jersey-service
- Compile the project:
mvn clean compile
- Execute the project:
mvn exec:java
The output produced from this command may vary a bit, especially the first time you run the command. However, if you look at the end of the output, you should see something similar to the following:
C:\examples\jersey-service>mvn exec:java [INFO] Scanning for projects... [INFO] [INFO] ------------------------------------------------------------------------ [INFO] Building jersey-service 1.0-SNAPSHOT [INFO] ------------------------------------------------------------------------ [INFO] [INFO] >>> exec-maven-plugin:1.2.1:java (default-cli) > validate @ jersey-servic e >>> [INFO] [INFO] [INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @ jersey-service --- Apr 30, 2015 5:26:05 PM org.glassfish.grizzly.http.server.NetworkListener start INFO: Started listener bound to [localhost:8080] Apr 30, 2015 5:26:05 PM org.glassfish.grizzly.http.server.HttpServer start INFO: [HttpServer] Started. Jersey app started with WADL available at n.wadl Hit enter to stop it...
This indicates that the Grizzly web service is running with the Jersey REST web service. Now you can test the web service using a web browser.
- Open the Firefox or Chrome web browser.
- Enter the following URL into the browser address window and open the URL:
The output in your browser should look something like this:
Notice that the "Got it!" text is returned to the browser.
Creating a Customer Web Service
To make the application a little more interesting, we can add some sample customer data to our project. Then, the source files can be modified to look up a an individual customer or the complete list of customers.
Adding Customer Data
The customer data consists of two classes. The
Customer.java class uses the builder pattern to represent customer data. The
CustomerList.java class creates a list of customers using sample data.
- Add the
Customer.javaclass to the project.
- Add the
CustomerList.javaclass to the project.
Customer.java - Fragment
public class Customer { private final long id; private final String firstName; private final String lastName; private final String email; private final String city; private final String state; private final String birthday;
This code fragment shows the fields used in the customer class. A
long is used to store the id field. The other fields are strings.
Copy the complete
Customer.java file from this link.
CustomerList.java - Fragment() ); // More code here }
Each customer is added to the list using a fluent approach. A total of 5 customers are added to the list.
Copy the complete
CustomerList.java Java file from this link.
Adding the Web Service Code
With the customer data created, now the web service code can be added to the application.
- Create the
CustomerService.javaclass.
- Add the
getAllCustomersMethod
CustomerService.java
package com.example.rest; import java.util.Optional; import java.util.concurrent.CopyOnWriteArrayList; import java.util.stream.Collectors; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.PathParam; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; @Path("/customers") public class CustomerService { private final CopyOnWriteArrayList<Customer> cList = CustomerList.getInstance(); }
To setup a simple web service, the above
CustomerService class will be used. The class includes a number of imports and starts with an annotation:
@Path("/customers"). This specifies that the methods called in this class will appear under the
/customers path. This means that for our application, method calls will appear under the path. The customer list is created and stored in
cList. Next, two methods will be added to this class.
The
getAllCustomers method will return all the customers in our list. The following is the source code for this method.
@GET @Path("/all") @Produces(MediaType.TEXT_PLAIN) public String getAllCustomers() { return "---Customer List---\n" + cList.stream() .map(c -> c.toString()) .collect(Collectors.joining("\n")); }
There are a number of annotations for this method. Here are some comments on the code.
@GET- This method is called with the HTTP GET method.
@Path("/all")- Specifies the path used to call this method. In this case calling return the list of customers in a text format.
@Produces(MediaType.TEXT_PLAIN)- Specifies output data will be returned in a text format. Other formats could be specified here including JSON or XML.
- Lambda - The lambda expression at the end of the class get the list of customers, converts those object to strings, and then uses
Collectors.joining("\n")to return a single string.
getCustomerMethod
The
getCustomer method searches for a customer in the list and returns the customer information if the item is found. If not found, an error message is returned.
@GET @Path("{id}") @Produces(MediaType.TEXT_PLAIN) public String getCustomer(@PathParam("id") long id) { Optional<Customer> match = cList.stream() .filter(c -> c.getId() == id) .findFirst(); if (match.isPresent()) { return "---Customer---\n" + match.get().toString(); } else { return "Customer not found"; } }
The following are the comments for this method.
@GET- This method is called with the HTTP GET method.
@Path("{id}")- Specifies the path used to call this method. In this case passing a
longin the URL the value 101 into an
idvariable. The
{id}makes the value specified available as a parameter.
@Produces(MediaType.TEXT_PLAIN)- Specifies output data will be returned in a text format. Other formats could be specified here including JSON or XML.
getCustomer(@PathParam("id") long id)- This is where the
idparameter that was specified in the
Pathannotation is turned into a
longvariable and passed to the method.
- Method source - The method uses a lambda expression to search for the specified ID. If found, the customer data is returned. If not found, an error message is returned.
Complete
CustomerService.java Listing
Now the components of this web service have been created. The next section shows how to test the web service.
Testing the Web Service
To test the new web service follow these steps.
Starting Grizzly/Jersey Web Service
If your Grizzly server is still running stop it by pressing
Enter. Complete the following steps to recompile the application and restart the Grizzly server.
- Open a
Terminalor
Command Promptwindow.
- Change into the project directory:
cd C:\examples\jersey-service
- Compile the project:
mvn clean compile
- Execute the project:
mvn exec:java
Testing in your Browser
To test the web service, start a browser. In this example, Google Chrome is used.
- Enter this address in the address bar:
- Enter this address in the address bar:
- Enter this address in the address bar:
Notice the list of all the customers in the list are returned as plain text.
Notice the customer with an ID of 101 is returned as plain text.
Notice an error message is returned since no match can be found.
Testing with
curl
Using the curl network utility is a great way to test REST web services.
curl is installed by default on most Unix and Linux distributions. On Windows, the best way to use
curl is to install Cygwin and make sure to select
curl during the installation. You can also install
curl standalone from this web site.
The tests from the previous example can be perform with
curl as follows.
- To get all customers type:
curl -X GET -i
- To get customer 101 type:
curl -X GET -i
- To attempt to get customer 109 type:
curl -X GET -i
HTTP/1.1 200 OK Content-Type: text/plain Date: Tue, 05 May 2015 21:40:18 GMT Content-Length: 552 ---Customer List--- ID: 100 First: George Last: Washington EMail: gwash@example.com City: Mt Vernon State: VA Birthday 1732-02-23 ID: 101 First: John Last: Adams EMail: jadams@example.com City: Braintree State: MA Birthday 1735-10-30 ID: 102 First: Thomas Last: Jefferson EMail: tjeff@example.com City: CharlottesVille State: VA Birthday 1743-04-13 ID: 103 First: James Last: Madison EMail: jmad@example.com City: Orange State: VA Birthday 1751-03-16 ID: 104 First: James Last: Monroe EMail: jmo@example.com City: New York State: NY Birthday 1758-04-28
Each customer in the list is displayed.
HTTP/1.1 200 OK Content-Type: text/plain Date: Tue, 05 May 2015 21:41:43 GMT Content-Length: 118 ---Customer--- ID: 101 First: John Last: Adams EMail: jadams@example.com City: Braintree State: MA Birthday 1735-10-30
Customer 101 is found and returned.
HTTP/1.1 200 OK Content-Type: text/plain Date: Tue, 05 May 2015 21:42:10 GMT Content-Length: 18 Customer not found
An error message is returned.
Creating an Uber JAR
The last topic to discuss is the concept of an uber JAR. If you use the default Maven settings to create a JAR, only the class files generated from source file are included in the JAR. This works fine when executing an application from Maven since any dependencies are downloaded into the local Maven cache. However, this does not produce a stand alone application file that can be run anywhere. The Grizzly and Jersey libraries must be included in your
CLASSPATH variable for your application to run. When there are a lot of libraries included in your application, the
CLASSPATH can get rather messy. Is there an easier way?
An uber JAR is a JAR file where all the dependencies required to run the application are included in a single jar. This way, your entire application can be distributed in a single file without messy
CLASSPATH variables or extra software installations. To do this, changes must be made to the
pom.xml file.
Updating the
pom.xml File
The first change to the file is the addition of the assembly plugin. Here is the XML for this plugin.
The assembly plugin is typically used to create JAR or WAR files that package Java applications into a file. Notice that the
Main class is set so that an executable JAR is created. Doing this allows the JAR file to execute using the
java -jar command line option. Also note that the
FinalName value has been updated to match the rest of the project.
So now we have created an executable JAR. How are the required libraries for the application added to the jar? The dependency plugin is required to do that.
By adding this plugin with the
copy-dependencies goal, required libraries are copied into the JAR. Thus, all the required class files to run the application are included in the JAR. No external
CLASSPATH or other settings are required to execute the application.
Executing the Application
To build and execute your application follow these steps.
- Change into the project directory.
- Clean and compile the application:
mvn clean compile
- Package the application:
mvn package
- Look in the
targetdirectory. You should see a file with the following or a similar name:
jersey-service-1.0-SNAPSHOT-jar-with-dependencies.jar
- Change into the
targetdirectory.
- Execute the jar:
java -jar jersey-service-1.0-SNAPSHOT-jar-with-dependencies.jar
Your Grizzly/Jersey should now execute and start the service just like it did before from Maven. Since this JAR is self contained, it can be copied to a different location on the machine or to another machine and execute. The application is now completely self-contained.
Adding Flexibility to the Server
Before completing this OBE there is one improvement I would like to discuss. Wouldn't the application be better if it was more flexible at startup? For example, it would be really nice to set the network port or host name at runtime. Right now, the application can only do this when compiled. Maybe something like environment variables could be used for configuration? That sounds like a good approach.
But Mike, what if the environment variables aren't set? Won't that result in null values which require all sorts of checking with
if blocks?
Well... It turns out that the new
Optional class in Java 8 can make this sort of thing fairly easy.
What follows is a new version of the
Main class which checks for the
PORT and
HOSTNAME environment variables at run time. If the values are set, they are used to launch the server, if they are not, default values are used instead.; import java.util.Optional; /** * Main class */ public class Main{ // Base URI the Grizzly HTTP server will listen on public static final String BASE_URI; public static final String protocol; public static final Optional<String> host; public static final String path; public static final Optional<String> port; static{ protocol = "http://"; host = Optional.ofNullable(System.getenv("HOSTNAME")); port = Optional.ofNullable(System.getenv("PORT")); path = "myapp"; BASE_URI = protocol + host.orElse("localhost") + ":" + port.orElse("8080") + "/" + path + "/"; } /** * Starts Grizzly HTTP server exposing JAX-RS resources defined in this application. * @return Grizzly HTTP server. */); } /** * Main method. * @param args * @throws IOException */.
- At the top of the class notice two
Optional<String>variables are declared. The
portand
hostfields will store the result of our environment variable lookup.
- In the
staticinitialization block, the
System.getenvmethod is used to get the environment variable. Notice that the
Optional.ofNullablemethod is called. This method will return either the value stored in the environment variable or an empty
Optionalif no value is returned.
- The
BASE_URIfield now uses the
Optionalvariables to create the URI. The
orElsemethod sets a default value if the optional is empty.
With these improvements, you can set the host name or port using environment variables. The additional code is clear and concise.
That concludes this tutorial.
Download Suggested Solution
If you want to see the complete set of project files, download the source files and Maven project from the following link:
Want to Learn More?
If you want to learn more, here are some related links.
Credits
- Curriculum Developer: Michael Williams
- QA: Sravanti Tatiraju | https://www.oracle.com/webfolder/technetwork/tutorials/obe/java/griz_jersey_intro/Grizzly-Jersey-Intro.html | CC-MAIN-2019-18 | refinedweb | 3,610 | 58.99 |
Hey guys,
does anybody have a glue how to set up the new DigitalOcean Private Docker Registry with CI/CD of Gitlab CE?
I currently use Gitlab CE CI/CD to deploy application(s) onto my DigitalOcean Kubernetes Cluster, but I would now also to integrate this new Docker Container Registry of DO. Currently using a private registry, which is on my Gitlab CE droplet - but I’m facing performance issues from time to time.
Please let me know, if someone is having a solution.
DO rocks - incredible the range of products which they provide by now …a couple of years ago it was only droplets and dns :)
Thanks, Olli
These answers are provided by our Community. If you find them useful, show some love by clicking the heart. If you run into issues leave a comment, or add your own answer to help others.
Hi Olli,
Thanks for your questions!
In order to setup any CI/CD job to push or pull images from a private DO container registry, you first need to have the docker credentials for authentication. Here’s a simple way of authenticating the docker client to interact with the DigitalOcean Container registry:
docker login -u <API_TOKEN> -p <API_TOKEN> registry.digitalocean.com. The API token can be passed as a secret or an environment variable.
For a Kubernetes cluster to pull images from the private registry, you’d need to create a docker registry Secret in the cluster with the docker config. Here’s how you can set up the secret in the namespace of your choice:
Once you create the above secret, you can specify the
imagePullSecretsconfiguration in the pod spec like below:
If you would like all pods in the namespace to pull from DOCR, then, you can specify the
imagePullSecretsconfiguration on the default service account in that namespace:
If you’re familiar with
doctland have it setup as part of your CI environment, this post walks you through authenticating with DOCR using
doctland setting up your Kubernetes cluster to work with DOCR. We are currently working on providing a simpler way to pull images from DOCR onto your Kubernetes clusters and this will be made available soon.
The Gitlab documentation has a
Requirementssection which specifies the resource limits for the droplet and how much memory and CPU to allocate for the droplet. If you have already done this and are still experiencing problems with the droplet, I suggest reaching out to support@digitalocean.com and specifying the problems you’re facing in detail.
Thank you for using DigitalOcean. Hope this helps! | https://www.digitalocean.com/community/questions/how-to-set-up-digitalocean-private-container-registry-gitlab-ce-ci-cd | CC-MAIN-2022-21 | refinedweb | 428 | 57.81 |
Okay, rather than respond to the fragments first, which just seems to be confusing people, as they aren't getting the previous discussion. Here's a quick sketch of what I'm creating: class Meta( type ): tableName = common.StringProperty( "tableName", """The table in which this class stores it's primary data""", defaultValue = "", ) implementationBase = common.ClassByNameProperty( "implementation", """Base class for implementation""", defaultValue = "wxpytable.dbresultset.DBResultSet", setDefaultOnGet = 0, ) ... Now, those objects from "common" are part of a fairly involved hierarchy of descriptor classes which provide type coercion, default-value retrieval, automated lookup of factory functions, managed list-of-type support, and a dozen other features. Here's the framework of what a "basicproperty" does: * hooks the __set__ event (and __get__ and __delete__) o is thereby a descriptor o can be introspected from the object with the properties + has documentation + has meta-data for coercion, type-checking, default values, operations for choosing common values, etceteras, etceteras + can be readily sub-classed to produce new effects, such as using weak references or storing values in a database based on schema objects, or calling methods on a client... * checks the data type of the value o if necessary, coerces the value (normally defering to the "baseType" for the property) * finally, stores the value o tries to do what would have been done if there were no descriptor (with the new, coerced value) o does *not* create new names in the object's namespace (all names are documented w/ descriptors, there's not a lot of '_' prefixed names cluttering the namespace) o does *not* require a new dictionary/storage-object attribute for the object (the descriptor works like any other descriptor, a *stand-alone* object that replaces a regular attribute) It's that last part that I'd like to have a function/method to accomplish. That is, store (and obviously retrieve as well) attribute values for objects, instances, types, classes, __slot__'d instances, etceteras under a given name without triggering any of the __setattr__ machinery which defers to the __set__ method of the associated descriptor. I can work around the problem by violating either of those two bullet points under "finally, stores the value", but I'm looking for something that *doesn't* violate them. See below for responses to particular points... Bengt Richter wrote: ... > client.__dict__['v'] = value >the above line should work for this example, so it must be different from >what you were doing before? Perhaps the client object before was a class? >I guess I'm not getting the big picture yet of your design intent. > That's what the base properties do, but they just can't work when doing a meta-class, as it's __dict__ is a dict-proxy, which doesn't allow assignment. I'm looking for a method to tell the Python system "Okay, what would you have done if there were no descriptors? Here's the key and value to set, go set them." That sample was just to demonstrate why calling object.__setattr__ wouldn't work (it recursively calls the descriptor). >Are you trying to make a base class that has properties that can >set same-named properties in the subclass via attributes >of objects instantiated from the subclass? Or as attributes of the >subclass? Do you need the property ability to munge its arguments dynamically? >Otherwise a plain old attribute assignment to the subclass per se should install >the property for its instances, no? > >). ... Enjoy, Mike _______________________________________ Mike C. Fletcher Designer, VR Plumber, Coder | https://mail.python.org/pipermail/python-list/2003-July/218279.html | CC-MAIN-2014-15 | refinedweb | 582 | 50.36 |
Deploying on Kubernetes #3: Dependencies
This is the second:
Next, we need to add some dependencies!
Dependency Management
I know few complex applications who are deployed in isolation, and who do not require dependencies on other networked services. It could be a relational database, key/value database or blob storage layer — or something more bespoke such as a microservice that exposes an RPC API. At any rate, nearly all applications are dependent on other applications.
When thinking in terms of operations it’s easy to think in terms of hosts, and to think about planning the deployment of our required services on host. We allocate hosts as the “host-1” “host-2”, and deploy the ansible “mysql” or “redis” playbooks depending on that systems requirements.
Kubernetes obviates the need for hosts, and we can think just instead about the services that we need. Instead of needing to negotiate capacity planning for mysql or redis, we can simply mark that it requires a certain amount of resource and deploy it across the cluster.
Packages within packages
As hinted at in the last post we’ll be using helm to manage our Kubernetes deployment. Helm is to Kubernetes as aptitude (
apt-get) is to Debian/Ubuntu or
yum is to RHEL/Centos/Fedora. It’s a package manage — it allows easy distribution of software.
Importantly, it also allows introducing dependencies on other packages:
$ apt-cache depends docker
docker
Depends: libc6
Depends: libglib2.0-0
Depends: libx11-6
Helm allows us to introduce dependencies in the same way. Further, helm’s packaging of application mean that there are large number of networked services that we can simply use as part of our application compilation.
Implementing our dependencies
Earlier, we created a super simple chart which installed a non-functional version of
kolide/fleet. This application requires some dependencies:
- MySQL (a relational database)
- Redis (an in-memory key/value store)
luckily, there is a helm package for this software! The canonical version of helm packages is the Kubernetes charts repo:
There are two publication streams:
- Stable
- Incubator
TLDR, only use stable. For more information, consult the dodcs.
Our Chart
Our chart is now in a state where we can add the required dependencies, and test whether they work. The key is the requirements.yml file:
---
## Here, helm keep track of what dependencies that are required for the application. Dependencies should be listed here,
## then the dependencies updated with:
##
## helm dep update
##
## and the requirements.lock file commited.
##
# dependencies:
# - name: "apache"
# version: "1.2.3-1"
# repository: ""
The documentation above is part of the starter chart that we used to create this resource. The format is as it describes. So! Let’s add MySQL and Redis as dependencies.
$ cat <<EOF > requirements.yaml
> ---
> dependencies:
> - name: "mysql"
> # This version is the version in the file:
> #
> #
> #
> # Tue 27 Mar 17:13:08 CEST 2018
> version: "0.3.6"
> repository: ""
> - name: "redis"
> # See above re. version constraints
> version: "1.1.21"
> repository: ""
> EOF
This creates a file of the syntax expected above. Once the file is created, simply run:
$ helm dependency update
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository ():
Get: dial tcp 127.0.0.1:8879: getsockopt: connection refused
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 2 charts
Downloading mysql from repo
Downloading redis from repo
Deleting outdated charts
A new file appears called
requirements.lock:
dependencies:
- name: mysql
repository:
version: 0.3.6
- name: redis
repository:
version: 1.1.21
digest: sha256:fa0c7bce5404153174d0fdd132227d71f950478594b2b2f6e7351a70bb01dfe7
generated: 2018-03-27T17:17:00.430874158+02:00
This is the lock file for our dependencies, and ensures that we won’t accidentally install the wrong dependency somehow.
Next step, installation!
$ helm upgrade --install kolide-fleet .
Release "kolide-fleet" does not exist. Installing it now.
NAME: kolide-fleet
LAST DEPLOYED: Tue Mar 27 18:25:07 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME TYPE DATA AGE
kolide-fleet-mysql Opaque 2 1s
kolide-fleet-redis Opaque 1 1s
==> v1/ConfigMap
NAME DATA AGE
kolide-fleet-fleet 0 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
kolide-fleet-mysql Bound pvc-69d44d21-31db-11e8-81e0-080027c1d0f5 8Gi RWO standard 1s
kolide-fleet-redis Pending standard 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kolide-fleet-mysql ClusterIP 10.104.173.216 <none> 3306/TCP 1s
kolide-fleet-redis ClusterIP 10.106.51.61 <none> 6379/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kolide-fleet-mysql 1 1 1 0 0s
kolide-fleet-redis 1 1 1 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
kolide-fleet-mysql-6c859797b4-gf6lk 0/1 Init:0/1 0 0s
kolide-fleet-redis-6d95f98b98-qswkz 0/1 ContainerCreating 0 0s
NOTES:
fleet
## Accessing fleet
----------------------
1. Get the fleet URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the loadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace default -w kolide-fleet-fleet'
export SERVICE_IP=$(kubectl get svc kolide-fleet-fleet --namespace default --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo
For more information, check the readme!
There’s a lot more going on now than there was! In the output above we can see:
- 2 pods (containers) created
- 2 services (kind of like DNS) created
- 2 persist volume claims (storage) created
- 2 secrets created
Also, the names give away that we’ve just installed MySQL and Redis on our cluster:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kolide-fleet-mysql-6c859797b4-gf6lk 1/1 Running 0 1m
kolide-fleet-redis-6d95f98b98-qswkz 1/1 Running 0 1m
For now, they’re not being used. Still! We have our dependencies up and running.
All that’s left now is to craft the actual application. ;) You can see the commit associated with this work at the following URL:
The fleet application uses MySQL and Redis as it's persistent storage layers. This commit adds the latest version…github.com
(I’ll try my best to keep it up to date)
In Summary
Helm allows us to make use of pre-existing software. This software is usually production ready and configured with sane defaults. It considerably shortcuts infrastructure management in the same way that ansible roles shortcut the deployment of VMs.
Necessary Caveats
I used a minikube installation in this test, which apparently supports storage provisioners. This is an advanced topic that we will not be covering — TLDR, use minikube for your testing, or another version of hosted Kubernetes.
Also, I had heaps of trouble with a rogue kubeadm installation on my desktop. Because the office runs on DHCP the IP changed, and kubeadm did not like that.
Thanks
- The kubernetes-cert group at work. I so relish time to work on this stuff, and you guys are an awesome reason. Everyone is getting started with the work, and I appreciate it’s hard — keep it up!
- My work crew for giving me work time to write this stuff up.
See the next in this series here: | https://medium.com/@andrewhowdencom/deploying-on-kubernetes-3-dependencies-e024741a15df | CC-MAIN-2018-22 | refinedweb | 1,203 | 55.84 |
DC is of Remote control in which I have used a keypad. The second simulation contains our two DC Motors and I am controlling the direction of those DC Motors with my Remote Control. XBee Module is used for sending wireless data. The code will also work on hardware as I have tested it myself. So, let’s get started with DC Motor Control using XBee & Arduino in Proteus ISIS:
DC Motor Control using XBee & Arduino in Proteus
- I have designed two Proteus Simulations for this project.
- The First Simulation is named as Remote Control while the second one is named as DC Motor Control.
- I am controlling the directions of these DC Motors from my Remote.
- So, let’s first have a look at Remote section and then we will discuss the DC Motor Control.
Remote Control
- Here’s the overall circuit for Remote Control designed in Proteus ISIS:
- As you can see in the above figure that we have Arduino UNO which is used as a microcontroller and then we have XBee module which is used for RF communication and finally we have Keypad for sending commands.
- You have to download this XBee Library for Proteus in order to use this XBee module in Proteus.
- You will also need to download Arduino Library for Proteus because Proteus doesn’t have Arduino in it.
- The Serial Monitor is used to have a look at all the commands.
- Now next thing we need to do is, we need to write code for our Arduino UNO.
- So, copy the below code and Get your Hex File from Arduino Software.
#include <Keypad.h>
const byte ROWS = 4; //four rows
const byte COLS = 4; //three columns
char keys[ROWS][COLS] = {
{‘7′,’8′,’9’, ‘/’},
{‘4′,’5′,’6′,’x’},
{‘1′,’2′,’3′,’-‘},
{‘*’,’0′,’#’,’+’}
};
byte rowPins[ROWS] = {13, 12, 11, 10}; //connect to the row pinouts of the keypad
byte colPins[COLS] = {9, 8, 7, 6}; //connect to the column pinouts of the keypad
Keypad keypad = Keypad( makeKeymap(keys), rowPins, colPins, ROWS, COLS );
int KeyCheck = 0;
void setup()
{
Serial.begin(9600);
}
void loop()
{
char key = keypad.getKey();
if (key)
{
if(key == ‘1’){KeyCheck = 1; Serial.print(“1”);}
if(key == ‘2’){KeyCheck = 1; Serial.print(“2”);}
if(key == ‘3’){KeyCheck = 1; Serial.print(“3”);}
if(key == ‘4’){KeyCheck = 1; Serial.print(“4”);}
if(key == ‘5’){KeyCheck = 1; Serial.print(“5”);}
if(key == ‘6’){KeyCheck = 1; Serial.print(“6”);}
if(KeyCheck == 0){Serial.print(key);}
KeyCheck = 0;
}
}
- The code is quite simple and doesn’t need much explanation.
- First of all, I have initiated my Keypad and then I have started my Serial Port which is connected with XBee Module.
- In the Loop section, I am checking the key press and when any key is pressed our microcontroller sends a signal via XBee.
- Now let’s have a look at the DC Motor Control Section.
DC Motor Control
- Here’s the image of Proteus Simulation for DC Motor Control Section:
- We have already installed the XBee & Arduino Library for Proteus in the previous section.
- Here you need to install L298 Motor Driver Library for Proteus, which is not available in it.
- So here we have used two DC Motors, which are controlled with L298 Motor Driver.
- XBee is used to receive commands coming from Remote Control.
- Now use below code and get your hex file from Arduino Software:
int Motor1 = 7;
int Motor2 = 6;
int Motor3 = 5;
int Motor4 = 4;
int DataCheck = 0;
void setup()
{
Serial.begin(9600);
pinMode(Motor1, OUTPUT);
pinMode(Motor2, OUTPUT);
pinMode(Motor3, OUTPUT);
pinMode(Motor4, OUTPUT);
digitalWrite(Motor1, HIGH);
digitalWrite(Motor2, HIGH);
digitalWrite(Motor3, HIGH);
digitalWrite(Motor4, HIGH);
Serial.print(“This Arduino Code & Proteus simulation is designed by:”);
Serial.println();
Serial.println(””);
Serial.println();
Serial.println();
Serial.println();
}
void loop()
{
if(Serial.available())
{
char data = Serial.read();
Serial.print(data);
Serial.print(” ======== > “);
if(data == ‘1’){DataCheck = 1; digitalWrite(Motor2, LOW);digitalWrite(Motor1, HIGH); Serial.println(“First Motor is moving in Clockwise Direction.”);}
if(data == ‘2’){DataCheck = 1; digitalWrite(Motor1, LOW);digitalWrite(Motor2, HIGH); Serial.println(“First Motor is moving in Anti-Clockwise Direction.”);}
if(data == ‘3’){DataCheck = 1; digitalWrite(Motor1, LOW);digitalWrite(Motor2, LOW); Serial.println(“First Motor is Stopped”);}
if(data == ‘4’){DataCheck = 1; digitalWrite(Motor3, LOW);digitalWrite(Motor4, HIGH); Serial.println(“Second Motor is moving in Clockwise Direction.”);}
if(data == ‘5’){DataCheck = 1; digitalWrite(Motor4, LOW);digitalWrite(Motor3, HIGH); Serial.println(“Second Motor is moving in Anti-Clockwise Direction.”);}
if(data == ‘6’){DataCheck = 1; digitalWrite(Motor3, LOW);digitalWrite(Motor4, LOW); Serial.println(“Second Motor is Stopped.”);}
if(DataCheck == 0){Serial.println(“Invalid Command. Please Try Again !!! “);}
Serial.println();
DataCheck = 0;
}
}
- In this code, I am receiving commands from my remote and then changing the direction of my DC Motors.
- When it will get ‘1’, it will move the first motor in Clockwise Direction.
- When it will get ‘2’, it will move the first motor in Anti-Clockwise Direction.
- When it will get ‘3’, it will stop the first motor.
- When it will get ‘4’, it will move the second motor in Anti-Clockwise Direction.
- When it will get ‘5’, it will move the second motor in Clockwise Direction.
- When it will get ‘6’, it will stop the second motor.
- It will say Invalid Commands on all other commands.
- Now let’s have a look at its working & results.
Working & Results
- Now run both of your Simulations and if everything goes fine, then you will have something as shown in below figure:
- Now when you will press buttons from keypad then DC Motors will move accordingly.
- Here’s an image where I have shown all the commands.
You can download both of these Proteus Simulations and Arduino codes by clicking below button:
So, that’s all for today. I hope you have enjoyed today’s project in which we have designed DC Motor Control using XBee & Arduino in Proteus ISIS. Thanks for reading !!!
JLCPCB – Prototype 10 PCBs for $2 + 2 days Lead Time
China’s Largest PCB Prototype Enterprise, 300,000+ Customers & 10,000+ Online Orders Per Day
Inside a huge PCB factory:
This Post / Project can also be found using search terms:
- dc motor for proteus using arduino
- telecharger pdf arduino et proteus motor dc | https://duino4projects.com/dc-motor-control-using-xbee-arduino-in-proteus/ | CC-MAIN-2019-13 | refinedweb | 1,033 | 56.96 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.