Spring Data REST makes exporting domain objects to RESTful clients using
HATEOAS
principles very easy. It exposes the CRUD methods of the Spring Data
CrudRepository
interface to HTTP. Spring Data REST also reads the body of HTTP requests and interprets them as domain objects. It
recognizes relationships between entities and represents that relationship in the form of a
Link.
Spring Data REST translates HTTP calls to method calls by mapping the HTTP verbs to CRUD methods. The
following table illustrates the way in which an HTTP verb is mapped to a
CrudRepository
method.
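The mapping table referenced above did not survive extraction. As a sketch only (the verb-to-method pairing below is the conventional Spring Data REST behavior, not a quotation from this document), the correspondence can be written as comments on a repository interface:

```java
// Hypothetical repository; entity and ID types are placeholders.
public interface PersonRepository extends CrudRepository<Person, Long> {
    // GET    /people        -> findAll()
    // GET    /people/{id}   -> findOne(id)
    // POST   /people        -> save(entity)  (create)
    // PUT    /people/{id}   -> save(entity)  (update)
    // DELETE /people/{id}   -> delete(id)
}
```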
By default, all of these methods are exported to clients. By placing an annotation on your
CrudRepository
subinterface, however, you can turn access to a method off. This is discussed in more detail in the section on
configuration.
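As a hedged sketch of what turning a method off looks like (the @RestResource annotation name matches Spring Data REST releases of this era, but verify it against the version you are using):

```java
public interface PersonRepository extends CrudRepository<Person, Long> {

    // Re-declare the inherited method and mark it as not exported,
    // so DELETE is no longer available over HTTP.
    @Override
    @RestResource(exported = false)
    void delete(Long id);
}
```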
A core principle of HATEOAS is that resources should be discoverable through the publication of links that
point to the available resources. There are a number of ways to accomplish this but no real standard way. Spring
Data REST uses a link method that is consistent across Spring Data REST: it provides links in a property called
links. That property is an array of objects that take the form of something resembling an
atom:link
element in the
Atom XML namespace.

< Content-Type: application/json
<
{ "links" : [ { "rel" : "customer", "href" : "" }, { "rel" : "profile", "href" : "" }, { "rel" : "order", "href" : "" }, { "rel" : "people", "href" : "" }, { "rel" : "product", "href" : "" } ], "content" : [ ] }
The links property of the result document contains an array of objects that have rel and href properties on them.
When issuing a request to a resource, the default behavior is to provide as much information as possible in
the body of the response. In the case of accessing a
Repository
resource, Spring Data REST will inline entities into the body of the response. This could lead to poor network
performance in the case of a very large number of entities. To reduce the amount of data sent back in the
response, a user agent can request the special content type
application/x-spring-data-compact+json
by placing this in the request
Accept
header. Rather than inlining the entities, this content-type provides a link to each entity in the
links
property.
curl -v -H "Accept: application/x-spring-data-compact+json"
< HTTP/1.1 200 OK
< Content-Type: application/x-spring-data-compact+json
<
{ "links" : [ { "rel" : "customer.search", "href" : "" } ], "content" : [ ] }
To add Spring Data REST to a Maven-based project, add the
spring-data-rest-webmvc
artifact to your compile-time dependencies:
<dependency>
  <groupId>org.springframework.data</groupId>
  <artifactId>spring-data-rest-webmvc</artifactId>
  <version>1.1.0.M1</version>
</dependency>

To customize the exported resources, subclass RepositoryRestMvcConfiguration and override the appropriate configuration methods:

public class MyWebConfiguration extends RepositoryRestMvcConfiguration {

  @Override
  protected void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
    config.addResourceMappingForDomainType(Person.class)
          .addResourceMappingFor("lastName")
          .setPath("surname"); // Change 'lastName' to 'surname' in the JSON

    config.addResourceMappingForDomainType(Person.class)
          .addResourceMappingFor("siblings")
          .setRel("siblings")
          .setPath("siblings"); // Pointless in this example,
                                // but shows how to change 'rel' and 'path' values.
  }
}
There are numerous methods on the
RepositoryRestConfiguration
object to allow you to configure various aspects of Spring Data REST. Please read the javadoc for that class to
get detailed descriptions of the various settings you can control.
It may be necessary to add a custom converter to Spring Data REST. You might need to turn a query parameter
into a complex object, for instance. In that case, you'll want to override the
configureConversionService
method and add your own converters. To convert a query parameter to a complex object, for instance, you would
want to register a converter for
String[]
to
MyPojo.
@Bean
public MyPojoConverter myPojoConverter() {
  return new MyPojoConverter();
}

@Override
protected void configureConversionService(ConfigurableConversionService conversionService) {
  conversionService.addConverter(String[].class, myPojoConverter());
}
Since Spring Data REST is simply a Spring MVC application, you only need to include the REST configuration
into the configuration for the
DispatcherServlet. If using a Servlet 3.0
WebApplicationInitializer
(the preferred configuration for Spring Data REST applications), you would add your subclassed configuration
from above into the configuration for the
DispatcherServlet. The following configuration class is from the example project and
includes datasource configuration for three different datastores and domain models, which will all be exported
by Spring Data REST.
public class RestExporterWebInitializer implements WebApplicationInitializer {

  @Override
  public void onStartup(ServletContext servletContext) throws ServletException {
    AnnotationConfigWebApplicationContext rootCtx = new AnnotationConfigWebApplicationContext();
    rootCtx.register(
        JpaRepositoryConfig.class,     // Include JPA entities, Repositories
        MongoDbRepositoryConfig.class, // Include MongoDB document entities, Repositories
        GemfireRepositoryConfig.class  // Include Gemfire entities, Repositories
    );
    servletContext.addListener(new ContextLoaderListener(rootCtx));
    // ... (the remainder of this method, which registers the DispatcherServlet,
    //      was lost in extraction)
  }
}

You can use curl or, more easily, the
rest-shell:
$ rest-shell

[rest-shell ASCII-art banner] 1.2.1.RELEASE

Welcome to the REST shell. For assistance hit TAB or type "help".

:> list
rel        href
==========================================
people
profile
customer
order
product
Links are an essential part of RESTful resources and allow for easy discoverability of related resources. In
Spring Data REST, a link is represented in JSON as an object with a
rel
and
href
property. These objects will appear in an array under an object's
links
property. These objects are meant to provide a user agent with the URLs necessary to retrieve resources related to
the current resource being accessed.
When accessing the root of a Spring Data REST application, for example, links are provided to each repository
that is exported. The user agent can then pick the link it is interested in and follow that
href.
Issue a
get
in the
rest-shell
to see an example of links.

:> get
> GET
< 200 OK
< Content-Type: application/json
<
{ "links" : [ { "rel" : "people", "href" : "" }, { "rel" : "profile", "href" : "" }, { "rel" : "customer", "href" : "" }, { "rel" : "order", "href" : "" }, { "rel" : "product", "href" : "" } ], "content" : [ ] }
If two entities are related to one another through a database-defined relationship, then that relationship
will appear in the JSON as a link. In JPA, one would place a
@ManyToOne,
@OneToOne, or other relationship annotation. If using Spring Data MongoDB, one would place a
@DBRef
annotation on a property to denote its special status as a reference to other entities. In the example project,
the
Person
class has a related set of
Person
entities in the
siblings
property. If you
get
the resource of a
Person
you will see, in the
siblings
property, the link to follow to get the related
Persons.

:> get people/1
> GET
< 200 OK
< Content-Type: application/json
<
{ "firstName" : "Billy Bob", "surname" : "Thornton", "links" : [ { "rel" : "self", "href" : "" }, { "rel" : "people.person.father", "href" : "" }, { "rel" : "people.person.siblings", "href" : "" } ] }
Rather than return everything from a large result set, Spring Data REST recognizes some URL parameters that
will
influence the page size and starting page number. To add paging support to your Repositories, you need to extend
the
PagingAndSortingRepository<T,ID>
interface rather than the basic
CrudRepository<T,ID>
interface. This adds methods that accept a
Pageable
to control the number and page of results returned.
public Page<T> findAll(Pageable pageable);
If you extend
PagingAndSortingRepository<T,ID>
and access the list of all entities, you'll get links to the first 20 entities. To set the page size to any other
number, add a limit parameter:
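The example request itself was lost in extraction; a hypothetical one (host, port, and repository path are placeholders) would look like this:

```shell
# Hypothetical host; 'limit' is the default page-size parameter.
curl -v "http://localhost:8080/people?limit=50"
```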
To get paging in your query methods, you must change the signature of your query methods to accept a
Pageable
as a parameter and return a
Page<T>
rather than a
List<T>. Otherwise, you won't get any paging information in the JSON and
specifying the query parameters that control paging will have no effect.
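As a sketch (the repository, entity, and property names are invented for illustration), a paging-aware query method has this shape:

```java
public interface PersonRepository
        extends PagingAndSortingRepository<Person, Long> {

    // Returning Page<T> (not List<T>) keeps the total-count metadata,
    // so the exporter can honor the paging query parameters.
    Page<Person> findByLastName(String lastName, Pageable pageable);
}
```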
By default, the URL query parameters recognized are
page, to specify page number
limit, to specify how many results to return on a page, and
sort
to specify the query method parameter on which to sort. To change the names of the query parameters, simply call
the appropriate method on
RepositoryRestConfiguration
and give it the text you would like to use for the query parameter. The following, for example, would set the
paging parameter to
p, the limit parameter to
l, and the sort parameter to
q:
@Override
protected void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
  config.setPageParamName("p")
        .setLimitParamName("l")
        .setSortParamName("q");
}
The URL to use these parameters would then be changed to:
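The concrete URL was lost in extraction; with the renamed parameters from the example above, a hypothetical request (placeholder host) would be:

```shell
# p = page number, l = page size, q = sort property after the renaming above.
curl -v "http://localhost:8080/people?p=2&l=50&q=lastName"
```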
Each paged response will return links to the previous and next pages of results based on the current page. If you are currently at the first page of results, however, no "previous" link will be rendered. The same is true for the last page of results: no "next" link will be rendered if you are on the last page of results. The "rel" value of the link will end with ".next" for next links and ".prev" for previous links.
{ "rel" : "people.next", "href" : "" }
Spring Data REST also recognizes sorting parameters that will use the Repository sorting support.
To have your results sorted on a particular property, add a sort URL parameter with the name of the property
you want to sort the results on. You can control the direction of the sort by specifying a URL parameter composed
of the property name plus
.dir
and setting that value to asc or desc.
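A hypothetical sorted request following that scheme (host and property name are placeholders, and asc/desc as the direction values is an assumption from typical usage):

```shell
# Sort on lastName, descending.
curl -v "http://localhost:8080/people?sort=lastName&lastName.dir=desc"
```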
The rest-shell is a command-line shell that aims to make writing REST-based applications easier. It is based on spring-shell and integrated with Spring HATEOAS in such a way that REST resources that output JSON compliant with Spring HATEOAS can be discovered by the shell and interactions with the REST resources become much easier than by manipulating the URLs in bash using a tool like curl.
The rest-shell provides a number of useful commands for discovering and interacting with REST resources. For example, discover will discover what resources are available and print out an easily-readable table of rels and URIs that relate to those resources. Once these resources have been discovered, the rel of those URIs can be used in place of the URI itself in most operations, thus cutting down on the amount of typing needed to issue HTTP requests to your REST resources.
If you're using Mac OS X and Homebrew, then installation is super easy:
brew install rest-shell
Other platforms are simple as well: just download the archive from the GitHub page and unzip it to a location on your local hard drive.
The rest-shell is aimed at making it easier to interact with REST resources by managing the session baseUri much like a directory in a filesystem. Whenever resources are discovered, you can then follow to a new baseUri, which means you can then use relative URIs. Here's an example of discovering resources, then following a link by referencing its rel value, and then using a relative URI to access resources under that new baseUri:

:> discover
rel        href
========================================================
address
family
people
profile

:> follow people
:> list
rel        href
===================================================
people.Person
people.Person
people.search

:> get 1
> GET
< 200 OK
< ETag: "2"
< Content-Type: application/json
<
{ "links" : [ { "rel" : "self", "href" : "" }, { "rel" : "peeps.Person.profiles", "href" : "" }, { "rel" : "peeps.Person.addresses", "href" : "" } ], "name" : "John Doe" }
NOTE: If you want tab completion of discovered rels, just use the --rel flag.
The rest-shell can do basic parsing of JSON data within the shell (though there are some limitations due to the nature of the command line parsing being sensitive to whitespace). This makes it easy to create new resources by including JSON data directly in the shell:

:> post --data "{name: 'John Doe'}"
> POST
< 201 CREATED
< Location:
< Content-Length: 0
<
:> get 8
> GET
< 200 OK
< ETag: "0"
< Content-Type: application/json
<
{ "links" : [ { "rel" : "self", "href" : "" }, { "rel" : "people.Person.addresses", "href" : "" }, { "rel" : "people.Person.profiles", "href" : "" } ], "name" : "John Doe" }
If your needs of representing JSON get more complicated than what the spring-shell interface can handle, you can create a directory somewhere with .json files in it, one file per entity, and use the --from option to the post command. This will walk the directory and make a POST request for each .json file found.

:> post --from work/people_to_load
128 items uploaded to the server using POST.
:>
You can also reference a specific file rather than an entire directory.

:> post --from work/people_to_load/someone.json
1 items uploaded to the server using POST.
:>
If you're calling URLs that require query parameters, you'll need to pass those as a JSON-like fragment in the --params parameter to the get and list commands. Here's an example of calling a URL that expects parameter input:

:> get search/byName --params "{name: 'John Doe'}"
It's not always desirable to output the results of an HTTP request to the screen. It's handy for debugging but sometimes you want to save the results of a request because they're not easily reproducible or any number of other equally valid reasons. All the HTTP commands take an --output parameter that writes the results of an HTTP operation to the given file. For example, to output the above search to a file:

:> get search/byName --params "{name: 'John Doe'}" --output by_name.txt
>> by_name.txt
:>
Because the rest-shell uses the spring-shell underneath, there are limitations on the format of the JSON data you can enter directly into the command line. If your JSON is too complex for the simplistic limitations of the shell --data parameter, you can simply load the JSON from a file or from all the files in a directory.
When doing a post or put, you can optionally pass the --from parameter. The value of this parameter should either be a file or a directory. If the value is a directory, the shell will read each file that ends with .json and make a POST or PUT with the contents of that file. If the parameter is a file, then the rest-shell will simpy load that file and POST/PUT that data in that individual file.
One of the nice things about spring-shell is that you can directly shell out commands to the underlying terminal shell. This is useful for doing things like loading a JSON file in an editor. For instance, assume I have the Sublime Text 2 command subl in my path. I can then load a JSON file for editing from the rest-shell like this:

:> ! subl test.json
:>
I then edit the file as I wish. When I'm ready to POST that data to the server, I can do so using the --from parameter:

:> post --from test.json
1 items uploaded to the server using POST.
:>
Starting with rest-shell version 1.1, you can also work with context variables during your shell session. This is useful for saving settings you might reference often. The rest-shell now integrates Spring Expression Language support, so these context variables are usable in expressions within the shell.

:> var set --name specialUri --value
:> var get --name specialUri
:> var list
{
  "responseHeaders" : { ... HTTP headers from last request },
  "responseBody" : { ... Body from the last request },
  "specialUri" : "",
  "requestUrl" : ... URL from the last request,
  "env" : { ... System properties and environment variables }
}
The variables are accessible from SpEL expressions which are valid in a number of different contexts, most importantly in the path argument to the HTTP and discover commands, and in the data argument to the put and post commands.
Since the rest-shell is aware of environment variables and system properties, you can incorporate external parameters into your interaction with the shell. For example, to externally define a baseUri, you could set a system property before invoking the shell. The shell will incorporate anything defined in the JAVA_OPTS environment variable, so you could parameterize your interaction with a REST service.
JAVA_OPTS="-DbaseUri=" rest-shell

:> discover #{env.baseUri}
rel        href
=================================================================
... resources for this URL
:>
The rest-shell supports a "dotrc" type of initialization by reading in all files found in the $HOME/.rest-shell/ directory and assuming they have shell commands in them. The rest-shell will execute these commands on startup. This makes it easy to set variables for commonly-used URIs or possibly set a baseUri.
echo "var set --name svcuri --value" > ~/.rest-shell/00-vars
echo "discover #{svcuri}" > ~/.rest-shell/01-baseUri

> rest-shell
INFO: No resources found...
INFO: Base URI set to ''

[rest-shell ASCII-art banner] 1.2.1.RELEASE

Welcome to the REST shell. For assistance hit TAB or type "help".

:>
If you generate a self-signed certificate for your server, by default the rest-shell will complain and refuse to connect. This is the default behavior of RestTemplate. To turn off certificate and hostname checking, use the ssl validate --enabled false command.
There is also a convenience command for setting an HTTP Basic authentication header. Use auth basic --username user --password passwd to set a username and password to base64 encode and place into the Authorization header that will be part of the current session's headers.
You can clear the authentication by using the auth clear command or by removing the Authorization header using the headers clear command.
Storage
Before finishing up, there are a few other issues that you need to consider that are just part and parcel of cloud applications, regardless of the provider.
First of all, you need to rethink your relationship with your data. Data may or may not be replicated by the service, and it may or may not be stored on the same physical machine (or on the machine that's running the service that accesses the data). Securing the data can be a significant problem. You can encrypt sensitive information like credit-card numbers, but encrypting everything is not a practical solution because you can't issue queries on encrypted data. More to the point, if you're doing the encryption, then there's always a point at which the data is unencrypted and your encryption key is in plain sight. If your application is running on someone else's server, you're potentially vulnerable. Of course, a dedicated VM is less vulnerable than a shared server at an ISP that allows shell access, but you'll never be as secure as you would be in your own data center.
Also, bear in mind that your data is certainly replicated on many servers. This organization can be an important performance enhancer. Consider a video-streaming application that stores its data in the cloud. The actual application may be in only one place, but once we start streaming, we hope that we're connected to the version of the data with the fastest transmission time. However, more servers in more data centers means more vulnerability. And every cloud-service-provider employee who has administrative access to a physical cloud server also has access to your data, so your vulnerability is replicated along with the data.
There's also the issue of backup. A cloud provider may or may not actually back up your data. Google, for example, does tape backups of Gmail but, as far as I know, does not back up its general storage system. Feel free to correct me if I'm wrong, but practically speaking, cloud applications have to assume that there's no underlying backup mechanism. The data is replicated on many servers, and the servers themselves use RAID drives (if they have disks in them), so it would take a global catastrophe to lose your data altogether, but it's not possible to go back in time as you could do using a backup tape. You can write a web application that transfers data from Google to your own local repository, but that's actually a surprisingly difficult (and painfully slow) operation to perform using JDO/JPA, which is the only access method that Google provides.
Security
The biggest problem with cloud applications is actually application security. Security is, of course, a huge problem with most software. Programmers are simply unaware of how to write secure applications, and management is typically unwilling to spend the paltry sum required for the training that would eliminate 90% of the problem. It's symptomatic that most of the really big breaches I've read about in the past few years have been done using SQL Injection, which is not only a venerable, well-understood exploit, but is literally a trivial matter to defeat. This is a case where 10 minutes of training could eliminate millions of dollars of vulnerability, but the training still doesn't happen.
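To make the "trivial matter to defeat" claim concrete (this sketch is mine, not the article's): the standard fix is a parameterized query, which in JDBC means a PreparedStatement instead of string concatenation:

```java
// Vulnerable: attacker-controlled 'email' becomes part of the SQL text.
//   stmt.executeQuery("SELECT * FROM users WHERE email = '" + email + "'");

// Safe: the value travels separately from the SQL text, so it can
// never change the query's structure. ('conn' is an existing
// java.sql.Connection; the table and column are hypothetical.)
PreparedStatement ps = conn.prepareStatement(
        "SELECT * FROM users WHERE email = ?");
ps.setString(1, email);
ResultSet rs = ps.executeQuery();
```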
One of the problems is perception. Application security has nothing at all to do with things like firewalls and SSL. The IT department is simply not involved. Most exploits attack an application through a bug of some sort, usually a minor one. And most hackers get at that bug simply by using the program in predictable ways, just like all your other users do. There are no secret back doors, and the vulnerabilities are typically in plain sight. For example, the classic example of a SQL-injection exploit that you find in the books shows you how to get a dump of someone's entire database by exploiting a very-simple bug in a website's password-recovery page (access to which doesn't typically require a password, I would hope). It doesn't matter whether you access that page using HTTPS; if you can access the page at all, you can do the damage.
So, the biggest security problem with a cloud-based Web 2.0 application is the size of the attack surface — the number of places where a hacker can potentially use a bug to break into the system. Most Web 2.0 applications use remote procedure calls, or an equivalent mechanism like REST, heavily; and every one of those calls — in fact, every argument to every one of those calls — represents a potential vulnerability. It's easy to deal with these problems if you know about them (e.g., check that all arguments are valid and reasonable on both the client and server side; if you're using Google's Web Toolkit, you can literally use the same Java code on both sides to do the checking).
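A minimal sketch of the kind of server-side argument check described above; the allow-list pattern and the particular name rules are illustrative assumptions, not something the article prescribes:

```java
public class InputCheck {
    // Allow only names built from letters, spaces, periods, apostrophes
    // and hyphens, up to 64 characters; reject everything else.
    static boolean isSafeName(String s) {
        return s != null && s.matches("[A-Za-z][A-Za-z .'-]{0,63}");
    }

    public static void main(String[] args) {
        System.out.println(isSafeName("John Doe"));    // ordinary input
        System.out.println(isSafeName("' OR '1'='1")); // injection attempt
    }
}
```

Run as shown, this prints true for the ordinary input and false for the injection attempt; the same method could be reused verbatim on the client side, as the article suggests for GWT.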
The real solution to this problem is simple: training.
The Entire Ecosystem
There are a few loose ends to cover. First, Google provides a reasonably rich set of support services for your web application. Get complete details at, but here's a list of functionalities to be aware of. I'll demonstrate a few of these in future articles.
- Blobstore: Lets you store very large objects that can't be handled by the standard JDO mechanism, and serve them directly to your users if you like. It's handy for things like big images.
- Capabilities: A management API that lets you dynamically detect whether other Google services are operational. You can use it to disable features of your own app when a Google service on which it depends goes down for maintenance.
- Channel: A mechanism for pushing information down to a browser-based client, so that client can update itself without polling.
- Images: A library for doing simple image manipulation. Supports transformations like resizing, rotation, darkening and lightening the image, etc.
- Mail: Lets you both send and receive email from your application. You send using standard JavaMail APIs. You receive by writing a servlet that waits for email to arrive. (Google receives the email, then posts it to your servlet.) This facility is useful, but Google limits the number of emails that you can send in a day to 500, so you can't use this service for mailing lists or bulk mail.
- Memcache: A mechanism for caching chunks of data in "memory." This is a wrapper around Java's JCache APIs. Caching through this API is better than rolling your own cache because Memcache can scale properly if the application is running on multiple machines.
- Multitenancy: Effectively adds namespaces to the storage system so that you can partition your data easily.
- OAuth: Provides a mechanism to grant third-party access to Google services. For example, a customer of yours could use this mechanism to allow your application to access his Google Docs files for persistent storage.
- Task Queues: Allows your application to execute background tasks that are not necessarily triggered by a user action.
- URL Fetch: A wrapper around java.net.URL and related classes that lets you access other web content using URLs. Handy for doing things like sending bulk email from a non-Google server.
- Users: Allows you to use Google's login mechanism for your application. That is, one of your users can log in to your application using Google's login page. I have mixed feelings about this service because it's one of the few services that doesn't just implement a standard Java library. If you use it as your sole log-in mechanism, you're effectively giving your user list to Google, and I'd rather know who my users are, thank you.
- XMPP: Support for XMPP-compatible IM services (like Google Talk).
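Since the URL Fetch entry above describes a wrapper around java.net.URL and related classes, here is a plain-JDK illustration of the class being wrapped; no App Engine is involved, and the URL itself is a placeholder:

```java
import java.net.URL;

public class UrlDemo {
    public static void main(String[] args) throws Exception {
        // Decompose a URL into the components a fetch service works with.
        URL u = new URL("https://example.com:8080/path?q=1");
        System.out.println(u.getHost()); // example.com
        System.out.println(u.getPort()); // 8080
        System.out.println(u.getPath()); // /path
    }
}
```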
APIs
The final thing to think about are the services that can coexist with your web application. Google, under the moniker "GData," provides API access to literally all of its web applications — from Calendar to YouTube, making it easy to do things like integrate a Google Docs page into your application or update an appointment on a Google Calendar.
You can find the complete list of APIs at. Most of these are simple REST-based APIs. You typically encode a request in the URL and HTTP GET or POST, and receive a result in JSON. However, Google provides both Java and Python libraries that wrap the REST APIs, and it also provides an Eclipse plug-in that makes it easier to write to the APIs. I'll talk about how to use these APIs in future articles.
Related Articles
Getting Started With the Cloud: Logging On With Google OAuth
Getting Started with Google Apps and OAuth
—. | http://www.drdobbs.com/database/getting-started-with-the-cloud-the-ecosy/229301121?pgno=3 | CC-MAIN-2014-52 | refinedweb | 1,487 | 61.56 |
On Thu, 2004-08-05 at 03:06 -0400, James Y Knight wrote:
> So, I was a bit irritated recently because there is no way to test for
> private name usage in a module's namespace. '__all__' is a good start,
> but it is somewhat poor for a few reasons:

WOW. Thank you for doing this, James. This is more useful than PyChecker
output, I think. It's also a little depressing. We really need a
better-documented public API!

I have created 2 new issues for this, one for adding __all__ to all
modules and one for testing that it's obeyed properly. #658 and #659,
respectively. They are tentatively assigned to you, but I expect you can
carve up Twisted and motivate others to actually do most of the work :).
Re: Winmngt Service
From: Wesley Vogel (123WVogel955_at_comcast.net)
Date: 03/01/04
Date: Mon, 01 Mar 2004 22:35:20 GMT
Sandy88;
Sounds to me like the Repository folder is damaged.
---------------
C:\WINDOWS\System32\Wbem\Repository\FS
del C:\WINDOWS\system32\wbem\Repository\FS
======================================
I had a problem with WMI; I also had a problem deleting
%SystemRoot%\System32\Wbem\Repository. I had to run Error Checking
(Chkdsk).
=====================
What are the missing Registry files??
Can you copy and paste and post them??
--
Hope this helps.  Let us know.
Wes

In news:0A0CA5E5-C9A5-4E70-B868-255D1162204F@microsoft.com,
Sandy88 <anonymous@discussions.microsoft.com> hunted and pecked:
> Thanks Wesley
>
> Just had a look - yeah it's set to Automatic.
>
> I clicked along and when I clicked on "Dependencies" tab an error box
> popped up:
>
> "WMI: Invalid Namespace
>
> The 'Root\cimv2' namespace is not defined in the WMI repository"
>
> and an OK button
>
> So I clicked on ok but I wonder if this has anything to do with it?
>
> Also just ran PC Doctor OnCall - demo (downloaded from
> Kellys-Korner-xp.com - Excellent site!)
> It found loads of missing registry files - a lot of them ActiveX
> connected.
>
> Some of these will surely be causing me problems ..........but not
> sure if connected to the original problem of a game not running
> and/or the winmngt service problem?
>
> The above program being a demo will not fix these for me unless I
> purchase it - Can anyone recommend an alternative method of
> reinstating missing registry files?
>
> Also - How can I be sure that reinstalling registry files will not
> bring back viruses etc which I had problems with a few months ago?
>
> Sorry if this is a bit long winded!
in reply to
minimum, maximum and average of a list of numbers at the same time
This would seem a "perlish" solution to me:
use List::Util qw/ min max sum /;
sub min_max_avg {
return min(@_), max(@_), sum(@_) / @_;
}
_________broquaint
And with List::Util::reduce()...
use List::Util qw/ reduce /;
sub min_max_avg {
my $res = reduce {
my $r= ref $a ? $a : [($a) x 3];
if ($b < $r->[0]) {
$r->[0]= $b;
} elsif ($b > $r->[1]) {
$r->[1]= $b;
}
$r->[2]+=$b;
$r
} @_;
return @$res[0,1], $res->[2] / @_;
}
sub Gen_Stats {
my $stat = {};
my ($cnt, $max, $min, $tot);
$stat->{ADD} = sub {
$cnt += @_;
for ( @_ ) {
$tot += $_;
$max = $_ if ! defined $max || $_ > $max;
$min = $_ if ! defined $min || $_ < $min;
}
};
$stat->{MAX} = sub { $max };
$stat->{MIN} = sub { $min };
$stat->{AVE} = sub { $cnt ? $tot / $cnt : undef };
$stat->{TOT} = sub { $tot };
$stat->{ADD}->( @_ );
return $stat;
}
my $stat_info = Gen_Stats();
while ( <DATA> ) {
chomp;
$stat_info->{ADD}($_);
}
print join "\t", map { $_->() } @{$stat_info}{qw/MAX MIN AVE TOT/};
Cheers - L~R
use Benchmark qw(:all :hireswallclock);
my @nums = map {rand} 1 .. 10000;
my $subs = {
mma => sub { my @r = min_max_avg(\@nums) },
reduce_mma => sub { my @r = reduce_mma(\@nums) },
mymma => sub { my @r = my_mma(\@nums) }
};
cmpthese(-1, $subs);
$count = 1000;
for my $sub (keys %$subs){
$t = timeit($count, $subs->{$sub});
print "$count loops of $sub:",timestr($t),"\n";
}
for my $sub (keys %$subs){
print "$sub results:\n";
print (join ' ', &{$subs->{$sub}} , "\n");
}
##### Subs to test ######
__END__
$ perl benchmark.pl
Rate mma mymma reduce_mma
mma 86.3/s -- -49% -100%
mymma 171/s 98% -- -100%
reduce_mma 89229/s 103265% 52110% --
1000 loops of reduce_mma:0.0114148 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU)
1000 loops of mma:12.0897 wallclock secs (12.05 usr + 0.01 sys = 12.06 CPU) @ 82.91/s (n=1000)
1000 loops of mymma:5.98583 wallclock secs ( 5.97 usr + 0.00 sys = 5.97 CPU) @ 167.56/s (n=1000)
reduce_mma results:
0.956444677504809 0.209810070551129 0.765697545177442
mma results:
0.000119859684559742 0.999992750475673 0.505291691541093
mymma results:
0.000119859684559742 0.999992750475673 0.505291691541093
I can't replicate your results, and you don't show the code you are using. I do recall that when I first posted my solution I had a mistake in the code, which I updated shortly afterwards. (Soon enough that I didn't bother noting the update -- my bad, I guess.) So if you downloaded that version, then yes, it isn't giving the correct results. But if you use the code that is in the thread now, it should.
Does your code process the input list three times?
But you're not required to understand it. But then of course without even having seen it I'm sure it does process it three times. If that is really relevant, then indeed you may want or have to process it once yourself. And if it is very relevant, perhaps you may want to do it as a binary extension.
Well, the author may consider adding to List::Util a function that calculates more than one relevant quantity at a time, e.g.
my ($min, $max, $sum) = calculate [qw/min max sum/], @list;
or perhaps with an OO interface:
my $stats=List::Util->calculate([qw/min max sum/])->of(@list);
print $stats->min, "\n";
possibly even with a mixed interface:
calculate [qw/min max sum/], @list;
print List::Util::lastrun->max, "\n";
It does look temptingly simple doesn't it?
But you'd be iterating across the list three times - which isn't very efficient. You only really need to iterate across it once. Something like this (off the top of my head and untested).
# use 'mean' instead of 'avg' as it's unambiguous
sub min_max_mean {
my ($min, $max, $tot);
my $count = @_;
$min = $max = $tot = shift;
foreach (@_) {
$min = $_ if $_ < $min;
$max = $_ if $_ > $max;
$tot += $_;
}
return ($min, $max, $tot / $count);
}
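As a quick sanity check (not part of the original post), here is the single-pass sub with a small usage example; the input values are made up for illustration:

```perl
use strict;
use warnings;

# min_max_mean() as given in the post above
sub min_max_mean {
    my ($min, $max, $tot);
    my $count = @_;
    $min = $max = $tot = shift;
    foreach (@_) {
        $min = $_ if $_ < $min;
        $max = $_ if $_ > $max;
        $tot += $_;
    }
    return ($min, $max, $tot / $count);
}

my ($min, $max, $mean) = min_max_mean(3, 1, 4, 1, 5);
print "$min $max $mean\n";   # prints "1 5 2.8"
```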
"The first rule of Perl club is you do not talk about
Perl club." -- Chip Salzenberg
If you can reduce the order of a solution, your solution will scale better. We commonly look for O(n log n) solutions to replace O(n²) solutions, so that working with large amounts of data doesn't make our app bog down. If you merely change by a constant factor (as we're talking about in this case), you may see a constant-factor improvement at any size, but you won't alleviate any scaling problems. You're in the realm of micro-optimization.
Walking through the list is not going to be a significant portion of the computation, compared to the comparisons and math being done on the variables of interest. And it's really not worth trying to take the elements two at a time, because that ends up being a very inefficient way to walk through the list, compared to Perl's built-in for.
If our OP were to translate the algorithm into Inline::C, the reduced number of comparisons might compete well with three List::Util calls, with the difference being some constant factor on any size list.
Authorization with Zend_Acl in Zend Framework
We now have a way to check that users are who they say they are. Next we need to stop certain users from accessing certain parts of the application. To do this, we are going to use the Zend_Acl component.
Zend_Acl introduction
ACL (Access Control List) lets us create a list that contains all the rules for accessing our system. In Zend_Acl, this list works like a tree, enabling us to inherit from rule to rule and build up a fine-grained access control system.
There are two main concepts at work in Zend_Acl—Resources and Roles. A Resource is something that needs to be accessed and a Role is the thing that is trying to access the Resource. To have access to a resource, you need to have the correct Role.
To start us off, let's first look at a basic example. In this example, we are going to use the scenario of a data centre. In the data centre, we need to control access to the server room. Only people with the correct permissions will be able to access the server room.
To start, we need to create some Roles and Resources.
$visitor = new Zend_Acl_Role('Visitor');
$admin = new Zend_Acl_Role('Admin');
$serverRoom = new Zend_Acl_Resource('ServerRoom');
Here we have created two Roles—Visitor and Admin and one Resource—ServerRoom. Next, we need to create the Access Control List.
$acl = new Zend_Acl();
$acl->addRole($visitor);
$acl->addRole($admin, $visitor);
$acl->add($serverRoom);
Here we instantiate a new Zend_Acl instance and add the two Roles and one new access rule. When we add the Roles, we make the Admin Role inherit from the Visitor Role. This means that Admin inherits all the access rules of the Visitor. We also add one new Rule containing the ServerRoom resource.
At this point, access to the server room is denied for both Visitors and Admins. We can change this by adding allow or deny rules:
- Allow all to all resources: $acl->allow();
- Deny all to all resources: $acl->deny();
- Allow Admin and Deny Visitor to all resources: $acl->allow($admin);
- Allow Admin and Deny Visitor to ServerRoom resource: $acl->allow($admin, $serverRoom);
When adding rules, we can also set permissions. These can be used to deny/allow access to parts of a Resource. For example, we may allow visitors to view the server room but not access the cabinets. To do this, we can add extra permission options to our rules.
Allow Visitor and Admin to view the ServerRoom, Deny Visitor cabinet access:
$acl->allow($visitor, $serverRoom, array('view'));
$acl->deny($visitor, $serverRoom, array('cabinet'));
Here we simply add the new permissions as an array containing the strings of the permissions we want to add to the ServerRoom resource.
Next we need to query the ACL. This is done through the isAllowed() method.
$acl->isAllowed($admin, $serverRoom, 'view');
// returns true
$acl->isAllowed($visitor, $serverRoom, 'view');
// returns true
$acl->isAllowed($visitor, $serverRoom, 'cabinet');
// returns false
As we can see, Zend_Acl provides us with an easy, lightweight way of controlling access to our system's resources. Next we will look at the ways in which we can use the ACL component in our MVC application.
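Pulling the fragments above together, a complete version of the data-centre example might look like the following. This is a sketch rather than code from the book, and it assumes Zend Framework 1 is on the include path:

```php
<?php
// Sketch only: requires Zend Framework 1 on the include path.
require_once 'Zend/Acl.php';
require_once 'Zend/Acl/Role.php';
require_once 'Zend/Acl/Resource.php';

$visitor    = new Zend_Acl_Role('Visitor');
$admin      = new Zend_Acl_Role('Admin');
$serverRoom = new Zend_Acl_Resource('ServerRoom');

$acl = new Zend_Acl();
$acl->addRole($visitor);
$acl->addRole($admin, $visitor);   // Admin inherits the Visitor rules
$acl->add($serverRoom);

$acl->allow($visitor, $serverRoom, array('view'));
$acl->deny($visitor, $serverRoom, array('cabinet'));
$acl->allow($admin, $serverRoom, array('cabinet'));

var_dump($acl->isAllowed($admin, $serverRoom, 'view'));      // true (inherited)
var_dump($acl->isAllowed($visitor, $serverRoom, 'view'));    // true
var_dump($acl->isAllowed($visitor, $serverRoom, 'cabinet')); // false
```

Note that the Admin role's own allow rule for 'cabinet' takes precedence over the deny rule it inherits from Visitor.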
ACL in MVC
When looking to implement ACL in MVC, we need to first think about how and where we implement the ACL in the MVC layers. The ACL by nature is centralized, meaning that all rules, permissions, and so on are kept in a central place from which we query them. However, do we really want this? What about when we introduce more than one module, do all modules use the same ACL? Also we need to think about where access control happens—is it in the Controller layer or the Model/Domain layer?
Using a centralized global ACL
A common way to implement the ACL is to use a centralized ACL with access controlled at the application level or outside the domain layer. To do this, we first create a centralized ACL. Typically, this would be done during the bootstrap process and the full ACL would be created including all rules, resources, and roles. This can then be placed within the Registry or passed as an invoke argument to the Front Controller. We would then intercept each request using a Front Controller plugin (preDispatch). This would check whether the request was authorized or not using the ACL. If the request was not valid, we would then redirect the request to an access denied controller/action.
This approach would base its rules on the controller/action being requested, so a rule using this may look something like:
$acl->allow('Customer', 'user', 'edit');
Here we would allow access for a Customer Role to the User Resource and the Edit permission. This would map to the user Controller, and the edit action or user/edit
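The plugin itself is not listed at this point in the text; a minimal sketch of such a Front Controller plugin might look like the following. The class name, registry key, and error route are illustrative, not from the Storefront code:

```php
<?php
class My_Plugin_Acl extends Zend_Controller_Plugin_Abstract
{
    public function preDispatch(Zend_Controller_Request_Abstract $request)
    {
        $acl  = Zend_Registry::get('acl'); // the centralized ACL built during bootstrap
        $auth = Zend_Auth::getInstance();
        $role = $auth->hasIdentity() ? $auth->getIdentity()->role : 'Guest';

        $resource = $request->getControllerName();
        $action   = $request->getActionName();

        // Unknown resources and denied requests both go to an access denied action
        if (!$acl->has($resource) || !$acl->isAllowed($role, $resource, $action)) {
            $request->setControllerName('error')
                    ->setActionName('denied');
        }
    }
}
```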
The advantages of using centralized global ACL are as follows:
- Centralized place to access and manage ACL rules, resources, and roles
- Maps nicely to the MVC controller/action architecture
The disadvantages are as follows:
- Centralized ACL could become large and hard to manage
- No means to handle modules
- We would need to re-implement access controls in order to use our Domain in a web service, as they are based on action/controller
Using module-specific ACLs
The next logical step is to split the ACL so that we have one ACL per module. To do this, we would still create our ACL during bootstrap, but this time we would create a separate ACL for each module; we would then use an Action Helper instead of a Front Controller plugin to intercept the request (preDispatch).
Advantages:
- Fixes our module handling problem with the previous approach
- Keeps things modular and smaller
Disadvantages:
- We still have the problem of having to re-implement access control if we use our Domain away from the controller/action context.
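As with the global approach, the helper is not listed here; a rough sketch of the Action Helper variant might be the following, with all names illustrative:

```php
<?php
class My_Helper_Acl extends Zend_Controller_Action_Helper_Abstract
{
    public function preDispatch()
    {
        $request = $this->getRequest();
        $module  = $request->getModuleName();

        // One ACL instance per module, e.g. registered as 'acl_storefront'
        $acl  = Zend_Registry::get('acl_' . $module);
        $auth = Zend_Auth::getInstance();
        $role = $auth->hasIdentity() ? $auth->getIdentity()->role : 'Guest';

        if (!$acl->isAllowed($role, $request->getControllerName(),
                             $request->getActionName())) {
            $request->setControllerName('error')
                    ->setActionName('denied');
        }
    }
}
```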
ACL in the Domain layer
To address our last concern (using the Domain in another context outside the controller/action architecture), we have the option to move all the Access Control into the Domain itself. To do this, we would have one ACL per module but would push the management of this into the Model. The Models would then be responsible for handling their own access rules. This in effect gives us a de-centralized ACL, as the Models will add all rules to the ACL.
Advantages:
- We can use the Model in different contexts without the need to re-implement the access control rules.
- We can unit test the access control
- The rules will be based on Model methods and not depend on the application layer
Disadvantages:
- Adds complexity to the Domain/Models
- Being de-centralized, it could be harder to manage
For the Storefront, we have opted to use the Model based ACL approach. While it adds more complexity and the implementation can be a little confusing, being able to unit test the access rules and use the Models outside the application layer is a big advantage. It also gives us a chance to demonstrate some of the more advanced features of the ACL component.
Model based ACL
The first thing to look at is some of the main components that we need to implement a Model based ACL. The main elements here are as follows:
- The ACL: This stores the roles we want in the system and any global rules.
- Resource(s): This will be our Model. The Model will become the Resource we wish to access.
- Roles: These are the actual roles we wish to use.
As we can see there is nothing new here, we are just going to be implementing them in a different way than in our earlier examples. To do this, we are going to need to create some new classes and refactor some old ones. We will start though by looking at the main ACL class and Roles.
The Storefront ACL
We will need a Zend_Acl instance for the storefront module. We will use this to store all our roles and rules.
application/modules/storefront/models/Acl/Storefront.php
class Storefront_Model_Acl_Storefront extends Zend_Acl
{
    public function __construct()
    {
        $this->addRole(new Storefront_Model_Acl_Role_Guest())
             ->addRole(new Storefront_Model_Acl_Role_Customer(), 'Guest')
             ->addRole(new Storefront_Model_Acl_Role_Admin(), 'Customer');
        $this->deny();
    }
}
The Storefront_Model_Acl_Storefront class will store all the rules that the storefront module's Models add to it. This class subclasses the Zend_Acl class and using the constructor adds the available roles to the ACL tree. We add three roles to the ACL—Guest, Customer, and Admin. The Customer role inherits from the Guest and the Admin role inherits from the Customer. We then deny access to all resources using the deny() method with no parameters. This effectively creates a white-list, meaning that everything is denied unless we explicitly allow it.
We can see that the Storefront_Model_Acl_Storefront class is very simple and is only responsible for setting up the roles. The Models will define the resources and permissions later.
The Storefront roles
Previously, we added the three storefront user roles to the ACL. We will need to create the Role classes for each of them. To create a Role class, we need to implement the Zend_Acl_Role_Interface interface. This interface defines one method (getRoleId()), which should return the role identity string.
application/modules/storefront/models/Acl/Role/Admin.php
class Storefront_Model_Acl_Role_Admin implements Zend_Acl_Role_Interface
{
public function getRoleId()
{
return 'Admin';
}
}
application/modules/storefront/models/Acl/Role/Customer.php
class Storefront_Model_Acl_Role_Customer implements Zend_Acl_Role_Interface
{
public function getRoleId()
{
return 'Customer';
}
}
application/modules/storefront/models/Acl/Role/Guest.php
class Storefront_Model_Acl_Role_Guest implements Zend_Acl_Role_Interface
{
public function getRoleId()
{
return 'Guest';
}
}
Here we have created the three roles available to the storefront (Admin, Customer, and Guest). Each one simply implements the Zend_Acl_Role_Interface and returns a string that uniquely identifies it. However, there is one more class that needs to implement the Zend_Acl_Role_Interface, which is the Storefront_Resource_User_Item class.
application/modules/storefront/models/resources/User/Item.php
class Storefront_Resource_User_Item
extends SF_Model_Resource_Db_Table_Row_Abstract implements
Storefront_Resource_User_Item_Interface, Zend_Acl_Role_Interface
{
/*... */
public function getRoleId()
{
if (null === $this->getRow()->role) {
return 'Guest';
}
return $this->getRow()->role;
}
}
Here we have updated our user model resource item to implement the Zend_Acl_Role_Interface and added the getRoleId() method. This method either returns the user's current role column or Guest if the role is not set. We do this because we are storing the user item in the Zend_Auth identity and will be passing this to the Models for them to use for authorization checks. We can then be sure that the Models are using a valid identity.
The Storefront resources
As we said before, in this implementation the Model will be the Resource. The way we achieve this is very simple. All we need to do is implement the Zend_Acl_Resource_Interface, and this will then turn our Models (or any class) into valid ACL Resources. Here is a basic example:
class MyClass implements Zend_Acl_Resource_Interface
{
public function getResourceId()
{
return 'MyResource';
}
}
Once our class has implemented the resource interface, we can use it with the ACL:
$resource = new MyClass();
$acl = new Zend_Acl();
$acl->add($resource);
We are halfway to having the ACL integrated into our Models. Next, we will create some abstract classes and interfaces that our Models can use to fully implement all the required functionality.
The new base model
To fully use the ACL, our Models need access to the ACL, the resource, and the identity. In our implementation these are Storefront_Model_Acl_Storefront, the Model ($this), and Storefront_Resource_User_Item (stored in Zend_Auth). To make these available to the Model, we are going to need some extra functionality added to each of our Models. To encapsulate this, we are going to create a new abstract model class and some new interfaces.
library/SF/Model/Acl/Interface.php
interface SF_Model_Acl_Interface
{
public function setIdentity($identity);
public function getIdentity();
public function setAcl(SF_Acl_Interface $acl);
public function getAcl();
public function checkAcl($action);
}
Here we have defined a new interface for our Models. This also introduces the SF_Model_Acl namespace. As not all of our Models will use the ACL, we will make it optional whether a Model uses the SF_Model or SF_Model_Acl classes. The interface defines five methods. These will be used to set and get the identity and ACL, and also to query the ACL.
library/SF/Model/Acl/Abstract.php
abstract class SF_Model_Acl_Abstract extends SF_Model_Abstract
implements SF_Model_Acl_Interface, Zend_Acl_Resource_Interface
{
protected $_acl;
protected $_identity;
public function setIdentity($identity)
{
if (is_array($identity)) {
if (!isset($identity['role'])) {
$identity['role'] = 'Guest';
}
$identity = new Zend_Acl_Role($identity['role']);
} elseif (is_scalar($identity) && !is_bool($identity)) {
$identity = new Zend_Acl_Role($identity);
} elseif (null === $identity) {
$identity = new Zend_Acl_Role('Guest');
} elseif (!$identity instanceof Zend_Acl_Role_Interface) {
throw new SF_Model_Exception('Invalid identity provided');
}
$this->_identity = $identity;
return $this;
}
public function getIdentity()
{
if (null === $this->_identity) {
$auth = Zend_Auth::getInstance();
if (!$auth->hasIdentity()) {
return 'Guest';
}
$this->setIdentity($auth->getIdentity());
}
return $this->_identity;
}
public function checkAcl($action)
{
return $this->getAcl()->isAllowed(
$this->getIdentity(),
$this,
$action
);
}
}
The SF_Model_Acl_Abstract class subclasses the SF_Model_Abstract and implements the SF_Model_Acl_Interface and Zend_Acl_Resource_Interface interfaces. All Models that need ACL support can now subclass the SF_Model_Acl_Abstract.
The setIdentity() method will accept null, a string, an array, or a Zend_Acl_Role_Interface instance. The identity should contain the role to be used when checking the ACL. If no role is set, then we default to the Guest role.
The getIdentity() method is designed to lazy load the identity from Zend_Auth. Therefore, we first check if the $_identity property is null. If it is, then we retrieve the identity from Zend_Auth and set it on the Model using setIdentity(). The identity returned by Zend_Auth will be an instance of Storefront_Resource_User_Item. This implements the Zend_Acl_Role_Interface, so it is fine to set it on the Model. During normal use we would always rely on the lazy loading here. The only time we would not is during testing, when we need to set the identity ourselves rather than take it from the session.
The checkAcl() method is used to query the ACL. This method simply returns the result of the isAllowed() method of the Zend_Acl class. We can see that we pass the identity, the resource ($this, the Model), and the action to isAllowed(). The action will be defined by us when we configure the ACL inside the concrete Models, and simply represents the permission or action that is being attempted.
You will notice that we still have not implemented some of the methods defined in the SF_Model_Acl_Interface and Zend_Acl_Resource_Interface interfaces. These will need to be implemented inside the concrete Models, as they contain Model-specific settings.
Securing the User Model
Now that we have the base class created, we can start securing our application. To do this, we will edit the User Model. The first step is to have the Model subclass the SF_Model_Acl_Abstract class.
application/modules/storefront/models/User.php
class Storefront_Model_User extends SF_Model_Acl_Abstract
{
Once we have the User Model subclassing the SF_Model_Acl_Abstract, we then must implement the getAcl(), setAcl(), and getResourceId() methods.
application/modules/storefront/models/User.php
public function getResourceId()
{
return 'User';
}
public function setAcl(SF_Acl_Interface $acl)
{
if (!$acl->has($this->getResourceId())) {
$acl->add($this)
->allow('Guest', $this, array('register'))
->allow('Customer', $this, array('saveUser'))
->allow('Admin', $this);
}
$this->_acl = $acl;
return $this;
}
public function getAcl()
{
if (null === $this->_acl) {
$this->setAcl(new Storefront_Model_Acl_Storefront());
}
return $this->_acl;
}
First we implement the getResourceId() method, which is defined by the Zend_Acl_Resource_Interface interface and simply returns the string identifying the resource (the Model) as User.
Next, we implement the setAcl() method, which is defined by the SF_Model_Acl_Interface interface. This method is responsible for configuring our ACL by adding the Resources and Rules. We first check to see if the ACL has the Resource registered to it. If not, we then add the Resource ($this) and configure the rules for the User Model. The rules here are as follows:
- Guest can access register
- Customer can access register and saveUser
- Admin can access everything (we pass null as the action)
Once the ACL is configured, we set the ACL on the $_acl property and return $this to allow method chaining. Note that the permission names we use do not have to match method names.
Our final method is getAcl(). This again is defined by the SF_Model_Acl_Interface interface. This method checks if the $_acl property has been set and then if not sets a new Storefront_Model_Acl_Storefront instance as the ACL to be used. We do this to help with testing later on, as it allows us to not use the default ACL and inject a mock one instead.
Now that we have implemented all of our required methods, we can start querying the ACL to deny or allow access to parts of the Model. Edit the User Model and add the following to the methods as shown.
application/modules/storefront/models/User.php
public function saveUser($post, $validator = null)
{
if (!$this->checkAcl('saveUser')) {
throw new SF_Acl_Exception("Insufficient rights");
}
/*...*/
public function registerUser($post)
{
if (!$this->checkAcl('register')) {
throw new SF_Acl_Exception("Insufficient rights");
}
/*...*/
To query the ACL, we simply need to call the checkAcl() method. This will then query the ACL for us and tell us if the current user has the correct access permissions. If the User does not have the correct permissions, then we throw an SF_Acl_Exception. You will need to copy this exception class from the example files.
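From calling code, a denied Model call now surfaces as an exception, so a controller (or any other client of the Model) can react to it. For example (illustrative, not from the book):

```php
<?php
$model = new Storefront_Model_User();

try {
    $userId = $model->saveUser($post);
} catch (SF_Acl_Exception $e) {
    // The current identity's role lacks the 'saveUser' permission;
    // redirect to a login or access denied page here.
}
```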
We now have a fully working ACL that is integrated into the Domain layer of our MVC application and we are not depending on the Controller layer for this functionality, meaning we can use the Models outside the MVC context. It is important to note that we could also implement the ACL in this way for other entities within the application, such as Services. All we need to do is create the base classes for that namespace or create a more generic set of ACL base classes.
Non-Model ACL
With this implementation, we also have the ability to use the ACL in a more common way. We can still add other Resources to the ACL that are not Models, meaning we can control access to non-Model Resources.
Next, consider the administrator functionality that we will be adding to the Storefront in the future. There will not be an admin Model, but we still want to deny access to this area to anyone without the Admin Role. To deal with this requirement, we are going to create a new Resource and add it to the ACL. To start, create the following ACL resource class:
application/modules/storefront/models/Acl/Resource/Admin.php
class Storefront_Model_Acl_Resource_Admin implements
Zend_Acl_Resource_Interface
{
public function getResourceId()
{
return 'Admin';
}
}
Here we have simply created a new ACL Resource identified as Admin, which will represent the administration area. Next, we need to add the following code to the ACL's constructor:
application/modules/storefront/models/Acl/Storefront.php
class Storefront_Model_Acl_Storefront extends Zend_Acl
{
    public function __construct()
    {
        /* ... existing role setup ... */
        $this->add(new Storefront_Model_Acl_Resource_Admin())
             ->allow('Admin');
    }
}
Here we add the new Resource to the main ACL and allow admin access for all permissions. We now have the ACL configured and can query it to see if a user is allowed to access this Resource.
It is important to note that when we use the ACL like this it obviously creates a dependency on the application layer, so it is important that we only use it where necessary and make sure we push whatever we can into the Models. The Admin Resource only really exists within the Application layer and has no Model. This will become clearer when we implement the administration area.
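Querying this non-Model Resource then works like any other ACL check. For example (illustrative):

```php
<?php
$acl  = new Storefront_Model_Acl_Storefront();
$auth = Zend_Auth::getInstance();

// The stored identity is a Storefront_Resource_User_Item, which is a valid role
$role = $auth->hasIdentity() ? $auth->getIdentity() : 'Guest';

// No specific permission is passed, so this asks whether the role
// may access the Admin resource at all.
if (!$acl->isAllowed($role, 'Admin')) {
    // not an administrator: block access to the admin area
}
```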
Unit testing with ACL
One of the advantages of integrating the ACL into our Models is that we can now test security in our unit tests. There is a whole suite of unit tests included with the example files. Let's have a look at the User Model tests as well as one of the tests that makes use of the new ACL integration:
tests/unit/Model/UserTest.php
public function test_User_Can_Be_Edited_By_Customer_And_Admin()
{
$post = array(
'userId' => 10,
'title' => 'Mr',
'firstname' => 'keith',
'lastname' => 'pope',
'email' => 'me@me.com'
);
// Guest
try {
$edit = $this->_model->saveUser($post);
$this->fail('Guest should not be able to edit user');
} catch (SF_Acl_Exception $e) {}
// Customer
try {
$this->_model->setIdentity(array('role' => 'Customer'));
$edit = $this->_model->saveUser($post);
} catch (SF_Acl_Exception $e) {
$this->fail('Customer should be able to edit user');
}
// Admin
try {
$this->_model->setIdentity(array('role' => 'Admin'));
$edit = $this->_model->saveUser($post);
} catch (SF_Acl_Exception $e) {
$this->fail('Admin should be able to edit user');
}
$this->assertEquals(10, $edit);
}
This test is used to validate the security of the User Model; without going into too much detail on how PHPUnit works, there are three main assertions in this test:
- Guest should not be able to saveUser
- Customer should be able to saveUser
- Admin should be able to saveUser
When we run the Guest assertion, we use the default identity created by our Models, which is Guest. This means that the $_model->saveUser() call should throw an SF_Acl_Exception. If it does not, then we fail the test.
When running the Customer and Admin assertions, we inject the role we wish to use by calling the setIdentity() method. Remember we have no Zend_Auth session, so we manually set the identity. We then fail the test if an SF_Acl_Exception is thrown as Customer and Admin should be allowed to saveUser.
As we can see, the Model-ACL implementation provides us with a flexible platform for testing and using the Models outside the MVC setting.
Summary
We now have a fully working authentication and authorization system integrated into the Storefront.
In this article, we looked at authorization (can they do this?) and different ways of using Zend_Acl. We opted to integrate the authorization layer into our Models so that we could use the Models outside the typical MVC context. To implement this, we created a new base Model class for Models that need ACL support and then secured the User Model using this new functionality.
On Friday 07 July 2006 09:02, Thomas Singer wrote:
> > That said, it is possible to write file names containing bytes that can't
> > decode as UTF-8.
>
> I can't believe that. Could you please give an reproducible example?
C++ code:
#include <fstream>
int main() {
// the value 0xff and 0xfe must not occur in UTF-8 text
char const filename[] = { 'i','n','v','a','l','i','d',0xff,0xfe,'\0' };
std::ofstream out(filename);
out << "aha!\n";
}
The thing is that, as Wilfredo said and whose attribution you snipped,
filenames are UTF-8 _by_ _convention_ and nothing enforces this.
This is an archived mail posted to the Subversion Users
mailing list. | http://svn.haxx.se/users/archive-2006-07/0249.shtml | CC-MAIN-2014-52 | refinedweb | 117 | 71.95 |
5 Things to Look for in JRuby 1.4
The team has finally wrapped up the long summer of JRuby-related travel, and it's time to set our sights on a new version of JRuby for the community to lovingly embrace.
The release itself is tentatively planned to enter a round of release candidates at the end of September. Please prepare yourselves, try the release candidates when they are available, and most importantly, report bugs!
So what can you expect in the next release, other than the usual array of bug, performance, and compatibility fixes? Here are the five areas where we've been focusing efforts.
1. 1.8.7 is the New Baseline
When Ruby 1.8.7 was first released, we had a false start where we eagerly started to add some of the new additions, only to find that Rails (version 2.0 at the time) was rendered useless by the changes. Now, finally, three minor releases of Rails 2.x later, Rails on Ruby 1.8.7 has finally stabilized, and is now the recommended version of Ruby to use with Rails. Additionally, 1.8.7 is showing up as the default version of Ruby in many operating systems, and we began to see more bug reports about missing 1.8.7 features.
So, in JRuby 1.4, we'll finally report compatibility with Ruby 1.8.7p174:
$ jruby -v
jruby 1.4.0dev (ruby 1.8.7p174) (2009-09-14 b09f382) (Java HotSpot(TM) Client VM 1.5.0_19) [i386-java]
The patch-level (p174) is based on the version of Ruby where we pulled the updated standard library. We always maintain that patch-level is a “best effort” number, and as such, we cannot promise full feature-for-feature (or bug-for-bug!) compatibility. However, as the RubySpec project has matured, it's gotten much easier for us to feel confident about this claim. As always, if you spot an area where we differ from Ruby 1.8.7, please do file a bug on the discrepancy, or even better, make sure that RubySpec covers the issue by submitting a patch to RubySpec.
2. Improving 1.9 Compatibility
Even though Ruby 1.9 is still a moving target, the RubySpec project has recently begun to cover many of the new 1.9 features, which gives JRuby an easier, more robust mechanism for tracking how far along the path to 1.9 we are.
We've also been following our tried-and-true approach of just trying to get stuff working. In particular, IRB, RubyGems, Rake, and even some simple Rails applications are working with JRuby in 1.9 mode.
Other than a couple of major new features remaining (Multilingualization/encoding, IO transcoding, Fibers), JRuby 1.9 mode should just work. To try out JRuby in 1.9 mode, just pass --1.9 as a leading argument on the command-line:
$ jruby --1.9 -v
jruby 1.4.0dev (ruby 1.9.1p0) (2009-09-14 b09f382) (Java HotSpot(TM) Client VM 1.5.0_19) [i386-java]
If you've been writing 1.9 code, we'd love to hear if you can get it working on JRuby: do comment!
3. New YAML Parser
Ok, so a new YAML parser is something you'd probably have to give more than a cursory glance to notice. But our good friend Ola Bini has gone and done it again with Yecht, a new YAML engine. Ola re-implemented YAML support in JRuby to be much closer to the idiosyncracies of Syck, the C YAML parser that MRI uses. With this change, a number of long-standing YAML compatibility issues are fixed. Hopefully this is the last YAML engine that JRuby will ever need!
4. Java Method Selection and Coercion API
For a long time, JRuby's Java integration features have been something of a black box, especially the way Java methods are selected and invoked and arguments are coerced. This black box has hindered understanding of how JRuby calls Java and, in the case of overloaded methods, has made some Java methods unreachable. Consider this Java class:
public class Overloaded {
    public static void use(int i) { /* ... */ }
    public static void use(long l) { /* ... */ }
}
And this JRuby code:
Java::Overloaded.use 10
Which Overloaded method is called? It turns out it's use(long): JRuby's behavior is to coerce Ruby Fixnums into Java longs. Thus, with current versions of JRuby, it's impossible to call the version that takes an int.
We'd like to make this part of the Java integration layer more transparent and accessible. Here are a few APIs we're considering:
a.
Object#java_send(name, signature, *args)
Like
Object#send except it would allow you to identify and invoke a specific Java method by its name and signature. A signature for these purposes is an array of classes, where the first element in the array specifies the return type.
b.
Object#java_method(name, signature)
Like
Object#method, except it would take an array of Java classes representing a method signature as an additional argument and return a Method object that can be
#call‘ed.
c.
#coerce_to?(type)
Method convention to allow an object to decide whether it can be coerced to a Java type. JRuby will call this method if it's available to get hints to aid in choosing a method in the presence of several overloaded methods.
d.
#to_java(type)
Method convention to allow arbitrary Ruby objects to control how they are coerced to Java. If a Ruby object responds to
#to_java, JRuby will use it to convert the argument before passing to a Java method invocation.
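A sketch of that convention in plain Ruby: an object opts into coercion by responding to #to_java. The Point class, the type argument, and the coordinate-array stand-in are all invented for illustration; a real implementation would construct the requested Java object:

```ruby
class Point
  def initialize(x, y)
    @x, @y = x, y
  end

  # Under the proposal, JRuby would call this before passing a Point
  # to a Java method; here we just return something Java-friendly.
  def to_java(type)
    [@x, @y]
  end
end

Point.new(2, 3).to_java(:point)  # => [2, 3]
```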
e.
#to_ruby(type)
Like
#to_java except to be applied to return types and Java objects coming back to Ruby.
All of these APIs are yet to be implemented, and we could use some good real-world examples of where JRuby falls down, to help ensure that we solve those issues.
5. Generate Real Java Classes from Ruby
Since we started writing Ruby2java as an add-on library, we've been looking for a way to expose the functionality of creating a real Java class from Ruby in the JRuby core. JRuby 1.4 will have some new experimental APIs for doing this. The example below explains it in code:
require 'java'
require 'jruby/core_ext'

class SimpleTest
  def equals
    raise "not equal" unless 1.0 == 1
  end
end

SimpleTest.add_method_signature("equals", [java.lang.Void::TYPE])
SimpleTest.add_method_annotation("equals", org.junit.Test => {})
SimpleTest.become_java!
The effect of this code is the same as writing the following Java code:
package ruby;

import org.junit.Test;

public class SimpleTest {
    @Test
    public void equals() {
        // dispatch back to Ruby code here
    }
}
We are also providing #add_class_annotation and #add_parameter_annotation methods, rounding out the ability to shape a real Java class.
These APIs are admittedly verbose and low-level, but they do allow you to easily build higher level constructs using Ruby metaprogramming techniques.
Here's a more realistic example of implementing a JAX-RS/Jersey resource using Ruby (full source available):
class Hello < RubyJersey::Resource
  GET()
  Produces("text/plain")
  Returns(java.lang.String)
  def hello
    "Hello Ruby Jersey!"
  end
end
Feedback
The release is still a couple of weeks away, so none of this is by any means final. Please send us feedback by commenting here, or use the JRuby mailing list for longer discussions, and let us know how the 1.4 release candidates work for you!
Looks to me like stage is a method or a closure that takes 2 args. The first is a string, the second is a closure.
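In plain Groovy, that pattern can be sketched like this — the stage method here is a hypothetical stand-in for the Jenkins pipeline step, written only to show the call syntax:

```groovy
// A method whose last parameter is a closure can be called with the
// closure outside the parentheses:
def stage(String name, Closure body) {
    println "entering stage: $name"
    body()
}

stage("Checkout") {
    println "running the checkout steps"
}
```

So `stage("Checkout") { ... }` is an ordinary method call: the string is the first argument and the brace block is a closure passed as the second.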
Erick Nelson
Senior Developer – IT
HD Supply Facilities Maintenance
(858) 740-6523
From: Chris Fouts <chrisfouts@ziftsolutions.com>
Reply-To: "users@groovy.apache.org" <users@groovy.apache.org>
Date: Friday, May 25, 2018 at 6:35 AM
To: "users@groovy.apache.org" <users@groovy.apache.org>
Subject: Help with Groovy syntax
I'm four days new to Groovy. I bought a book, but for today I just want to learn what this syntax means.
We use Groovy to run a Jenkins file in our Jenkins build. One stage has these statements in it.
def mainScmDetails
stage("Checkout") {
mainScmDetails = checkout scm
dir("some-dir") {
git url: 'git@domain.com/path/project.git', credentialsId: 'some_creds', branch: 'develop'
}
}
Does this define a code block named mainScmDetails? Does the statement...
mainScmDetails = checkout scm
...call two functions, namely, checkout and scm?
Thanks,
Chris | https://mail-archives.eu.apache.org/mod_mbox/groovy-users/201805.mbox/raw/%3C89FF7CC8-F66F-4301-A57F-5923F09ACE90@hdsupply.com%3E/2 | CC-MAIN-2021-25 | refinedweb | 164 | 69.89 |
In this situation strace, also known as the system-call tracer, comes to the rescue. It is a debugging tool that monitors the system calls used by a program and all the signals it receives.
A system call is the most common way programs communicate with the kernel. System calls include reading and writing data, opening and closing files and all kinds of network communication. Under Linux, a system call is done by calling a special interrupt with the number of the system call and its parameters stored in the CPU's registers.
Using strace is quite simple. There are two ways to let strace monitor a program.
Method 1:
To start strace along with a program, just run the executable with strace as shown below.
strace program-name

For example, let us trace the ls command.
$ strace ls
execve("/bin/ls", ["ls"], [/* 39 vars */]) = 0
brk(0)                                  = 0x82d4000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7787000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=76503, ...}) = 0
mmap2(NULL, 76503, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7774000
close(3)                                = 0
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
open("/lib/libselinux.so.1", O_RDONLY)  = 3
read(3, "\177ELF\1\1\1"..., 512)        = 512
fstat64(3, {st_mode=S_IFREG|0644, st_size=104148, ...}) = 0
mmap2(NULL, 109432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x41d000
mmap2(0x436000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x18) = 0x436000
close(3)                                = 0
.
.
fstat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7613000
write(1, "01.c  a.out\n", 12)           = 12
close(1)                                = 0
munmap(0xb7613000, 4096)                = 0
close(2)                                = 0
exit_group(0)                           = ?

In the above example we are not displaying the complete output of the strace command. Even though the output from strace looks very complicated, this is only due to the many system calls made when loading shared libraries. However, once we have found which system calls are the important ones (mainly open, read, write and the like), the results will look fairly intuitive to us.
Method 2:
If we want to monitor a process which is currently running, we can attach to it using the -p option. Thus we can even debug a daemon process.
strace -p <pid-of-the-application>

For example:
#include <stdio.h>
#include <unistd.h>

int main()
{
    sleep(20);
    return 0;
}

We will compile the above code and run it as a background process. Then we try to monitor the program using its process ID as shown below.
$ gcc main.c
$ ./a.out &
[1] 1885
$ strace -p 1885
Process 1885 attached - interrupt to quit
restart_syscall(<... resuming interrupted call ...>) = 0
exit_group(0)                           = ?
Process 1885 detached
[1]+  Done                    ./a.out

In contrast to a debugger, strace does not need a program's source code to produce human-readable output.
Some handy options
The example below is used in the discussion of other important options supported by strace.
#include <stdio.h>

int main(void)
{
    FILE *fd = NULL;
    if ((fd = fopen("test", "r+"))) {
        printf("TEST file opened\n");
        fclose(fd);
    } else {
        printf("Failed to open the file\n");
    }
    return 0;
}
Providing the time taken by multiple system calls in a program
Using the -c option, strace provides summary information on executing a program.
It provides information such as the number of times each system call was used, the time spent executing each system call, and the number of errors returned, as shown below.
$ strace -c ./a.out
Failed to open the file
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 91.47    0.004000        4000         1           execve
  8.53    0.000373         124         3         3 access
  0.00    0.000000           0         1           read
  0.00    0.000000           0         1           write
  0.00    0.000000           0         3         1 open
  0.00    0.000000           0         2           close
  0.00    0.000000           0         3           brk
  0.00    0.000000           0         1           munmap
  0.00    0.000000           0         3           mprotect
  0.00    0.000000           0         7           mmap2
  0.00    0.000000           0         3           fstat64
  0.00    0.000000           0         1           set_thread_area
------ ----------- ----------- --------- --------- ----------------
100.00    0.004373                    29         4 total
Redirecting the output to a file
Using the -o option we can redirect the complex output of strace into a file.
$ strace -o <output-file-name> <program-name>
Time spent per system call
Using the -T option we can get the time spent per system call. In the example below, the time spent in each system call is printed at the end of the line.
$ strace -T ./a.out
(output elided; with -T, each line of the trace ends with the time spent in that system call, enclosed in angle brackets)
Prefixing the time of day to every line in the trace
It is sometimes useful to track at what time a particular system call is triggered. By using the -t option, strace will prefix each line of the trace with the time of day, which is really helpful for finding out on which call the process is blocked at a particular time.
$ strace -t ./a.out
(output elided; with -t, each line of the trace is prefixed with the wall-clock time of day)
Tracing only specific system calls
Using the -e option we can specify which system calls are to be traced. To trace only the open() and close() system calls, use the following command:
$ strace -e trace='open,close' <program-name>

Similarly, we can use negation to exclude specific system calls. To trace everything except the open() and close() calls from the previous example, we can give the command below:
$ strace -e trace='!open,close' ./a.out

Check the man page of strace for other options.
In a typical application or site built with webpack, there are three main types of code:

1. The source code you, and maybe your team, have written.
2. Any third-party library or "vendor" code your source is dependent on.
3. A webpack runtime and manifest that conducts the interaction of all modules.
This article will focus on the last of these three parts, the runtime and in particular the manifest.
The runtime, along with the manifest data, is basically all the code webpack needs to connect your modularized application while it's running in the browser. It contains the loading and resolving logic needed to connect your modules as they interact. This includes connecting modules that have already been loaded into the browser as well as logic to lazy-load the ones that haven't.
Once your application hits the browser in the form of an index.html file, some bundles and a variety of other assets required by your application must be loaded and linked somehow. That
/src directory you meticulously laid out is now bundled, minified and maybe even split into smaller chunks for lazy-loading by webpack's
optimization. So how does webpack manage the interaction between all of your required modules? This is where the manifest data comes in...
As the compiler enters, resolves, and maps out your application, it keeps detailed notes on all your modules. This collection of data is called the "Manifest" and it's what the runtime will use to resolve and load modules once they've been bundled and shipped to the browser. No matter which module syntax you have chosen, those
import or
require statements have now become
__webpack_require__ methods that point to module identifiers. Using the data in the manifest, the runtime will be able to find out where to retrieve the modules behind the identifiers.
So now you have a little bit of insight about how webpack works behind the scenes. "But, how does this affect me?", you might ask. The simple answer is that most of the time it doesn't. The runtime will do its thing, utilizing the manifest, and everything will appear to just magically work once your application hits the browser. However, if you decide to improve your project's performance by utilizing browser caching with content hashes in your bundle file names, you'll quickly notice that certain hashes change even when their contents apparently do not. This is caused by the injection of the runtime and manifest, which changes every build.
See the manifest section of our Output management guide to learn how to extract the manifest, and read the guides below to learn more about the intricacies of long term caching. | https://webpack.js.org/concepts/manifest/ | CC-MAIN-2018-43 | refinedweb | 384 | 61.77 |
October 2011
Volume 26 Number 10
Asynchronous Programming - Pause and Play with Await
By Mads Torgersen | October 2011
Asynchronous methods in the upcoming versions of Visual Basic and C# are a great way to get the callbacks out of your asynchronous programming. In this article, I’ll take a closer look at what the new await keyword actually does, starting at the conceptual level and working my way down to the iron.
Sequential Composition
Visual Basic and C# are imperative programming languages—and proud of it! This means they excel in letting you express your programming logic as a sequence of discrete steps, to be undertaken one after the other. Most statement-level language constructs are control structures that give you a variety of ways to specify the order in which the discrete steps of a given body of code are to be executed:
- Conditional statements such as if and switch let you choose different subsequent actions based on the current state of the world.
- Loop statements such as for, foreach and while let you repeat the execution of a certain set of steps multiple times.
- Statements such as continue, throw and goto let you transfer control non-locally to other parts of the program.
Building up your logic using control structures results in sequential composition, and this is the lifeblood of imperative programming. It is indeed why there are so many control structures to choose from: You want sequential composition to be really convenient and well-structured.
Continuous Execution
In most imperative languages, including current versions of Visual Basic and C#, the execution of methods (or functions, or procedures or whatever we choose to call them) is continuous. What I mean by that is that once a thread of control has begun executing a given method, it will be continuously occupied doing so until the method execution ends. Yes, sometimes the thread will be executing statements in methods called by your body of code, but that’s just part of executing the method. The thread will never switch to do anything your method didn’t ask it to.
This continuity is sometimes problematic. Occasionally there’s nothing a method can do to make progress—all it can do is wait for something to happen: a download, a file access, a computation happening on a different thread, a certain point in time to arrive. In such situations the thread is fully occupied doing nothing. The common term for that is that the thread is blocked; the method causing it to do so is said to be blocking.
Here’s an example of a method that is seriously blocking:
static byte[] TryFetch(string url)
{
  var client = new WebClient();
  try
  {
    return client.DownloadData(url);
  }
  catch (WebException) { }
  return null;
}
A thread executing this method will stand still during most of the call to client.DownloadData, doing no actual work but just waiting.
This is bad when threads are precious—and they often are. On a typical middle tier, servicing each request in turn requires talking to a back end or other service. If each request is handled by its own thread and those threads are mostly blocked waiting for intermediate results, the sheer number of threads on the middle tier can easily become a performance bottleneck.
Probably the most precious kind of thread is a UI thread: there’s only one of them. Practically all UI frameworks are single-threaded, and they require everything UI-related—events, updates, the user’s UI manipulation logic—to happen on the same dedicated thread. If one of these activities (for example, an event handler choosing to download from a URL) starts to wait, the whole UI is unable to make progress because its thread is so busy doing absolutely nothing.
What we need is a way for multiple sequential activities to be able to share threads. To do that, they need to sometimes “take a break”—that is, leave holes in their execution where others can get something done on the same thread. In other words, they sometimes need to be discontinuous. It’s particularly convenient if those sequential activities take that break while they’re doing nothing anyway. To the rescue: asynchronous programming!
Asynchronous Programming
Today, because methods are always continuous, you have to split discontinuous activities (such as the before and after of a download) into multiple methods. To poke a hole in the middle of a method’s execution, you have to tear it apart into its continuous bits. APIs can help by offering asynchronous (non-blocking) versions of long-running methods that initiate the operation (start the download, for example), store a passed-in callback for execution upon completion and then immediately return to the caller. But in order for the caller to provide the callback, the “after” activities need to be factored out into a separate method.
Here’s how this works for the preceding TryFetch method:
static void TryFetchAsync(string url, Action<byte[], Exception> callback)
{
  var client = new WebClient();
  client.DownloadDataCompleted += (_, args) =>
  {
    if (args.Error == null) callback(args.Result, null);
    else if (args.Error is WebException) callback(null, null);
    else callback(null, args.Error);
  };
  client.DownloadDataAsync(new Uri(url));
}
Here you see a couple of different ways of passing callbacks: The DownloadDataAsync method expects an event handler to have been signed up to the DownloadDataCompleted event, so that’s how you pass the “after” part of the method. TryFetchAsync itself also needs to deal with its callers’ callbacks. Instead of setting up that whole event business yourself, you use the simpler approach of just taking a callback as a parameter. It’s a good thing we can use a lambda expression for the event handler so it can just capture and use the “callback” parameter directly; if you tried to use a named method, you’d have to think of some way to get the callback delegate to the event handler. Just pause for a second and think how you’d write this code without lambdas.
But the main thing to notice here is how much the control flow changed. Instead of using the language’s control structures to express the flow, you emulate them:
- The return statement is emulated by calling the callback.
- Implicit propagation of exceptions is emulated by calling the callback.
- Exception handling is emulated with a type check.
Of course, this is a very simple example. As the desired control structure gets more complex, emulating it gets even more so.
To summarize, we gained discontinuity, and thereby the ability of the executing thread to do something else while “waiting” for the download. But we lost the ease of using control structures to express the flow. We gave up our heritage as a structured imperative language.
Asynchronous Methods
When you look at the problem this way, it becomes clear how asynchronous methods in the next versions of Visual Basic and C# help: They let you express discontinuous sequential code.
Let’s look at the asynchronous version of TryFetch with this new syntax:
static async Task<byte[]> TryFetchAsync(string url)
{
  var client = new WebClient();
  try
  {
    return await client.DownloadDataTaskAsync(url);
  }
  catch (WebException) { }
  return null;
}
Asynchronous methods let you take the break inline, in the middle of your code: Not only can you use your favorite control structures to express sequential composition, you can also poke holes in the execution with await expressions—holes where the executing thread is free to do other things.
A good way to think about this is to imagine that asynchronous methods have “pause” and “play” buttons. When the executing thread reaches an await expression, it hits the “pause” button and the method execution is suspended. When the task being awaited completes, it hits the “play” button, and the method execution is resumed.
Compiler Rewriting
When something complex looks simple, it usually means there’s something interesting going on under the hood, and that’s certainly the case with asynchronous methods. The simplicity gives you a nice abstraction that makes it so much easier to both write and read asynchronous code. Understanding what’s happening underneath is not a requirement. But if you do understand, it will surely help you become a better asynchronous programmer, and be able to more fully utilize the feature. And, if you’re reading this, chances are good you’re also just plain curious. So let’s dive in: What do async methods—and the await expressions in them—actually do?
When the Visual Basic or C# compiler gets hold of an asynchronous method, it mangles it quite a bit during compilation: the discontinuity of the method is not directly supported by the underlying runtime and must be emulated by the compiler. So instead of you having to pull the method apart into bits, the compiler does it for you. However, it does this quite differently than you’d probably do it manually.
The compiler turns your asynchronous method into a state machine. The state machine keeps track of where you are in the execution and what your local state is. It can either be running or suspended. When it's running, it may reach an await, which hits the "pause" button and suspends execution. When it's suspended, something may hit the "play" button to get it back and running.
The await expression is responsible for setting things up so that the “play” button gets pushed when the awaited task completes. Before we get into that, however, let’s look at the state machine itself, and what those pause and play buttons really are.
Task Builders
Asynchronous methods produce Tasks. More specifically, an asynchronous method returns an instance of one of the types Task or Task<T> from System.Threading.Tasks, and that instance is automatically generated. It doesn’t have to be (and can’t be) supplied by the user code. (This is a small lie: Asynchronous methods can return void, but we’ll ignore that for the time being.)
From the compiler’s point of view, producing Tasks is the easy part. It relies on a framework-supplied notion of a Task builder, found in System.Runtime.CompilerServices (because it’s not normally meant for direct human consumption). For instance, there’s a type like this:
public class AsyncTaskMethodBuilder<TResult>
{
  public Task<TResult> Task { get; }
  public void SetResult(TResult result);
  public void SetException(Exception exception);
}
The builder lets the compiler obtain a Task, and then lets it complete the Task with a result or an Exception. Figure 1 is a sketch of what this machinery looks like for TryFetchAsync.
Figure 1 Building a Task
static Task<byte[]> TryFetchAsync(string url)
{
  var __builder = new AsyncTaskMethodBuilder<byte[]>();
  ...
  Action __moveNext = delegate
  {
    try
    {
      ...
      return;
      ...
      __builder.SetResult(...);
      ...
    }
    catch (Exception exception)
    {
      __builder.SetException(exception);
    }
  };
  __moveNext();
  return __builder.Task;
}
Watch carefully:
- First a builder is created.
- Then a __moveNext delegate is created. This delegate is the “play” button. We call it the resumption delegate, and it contains:
- The original code from your async method (though we have elided it so far).
- Return statements, which represent pushing the “pause” button.
- Calls that complete the builder with a successful result, which correspond to the return statements of the original code.
- A wrapping try/catch that completes the builder with any escaped exceptions.
- Now the “play” button is pushed; the resumption delegate is called. It runs until the “pause” button is hit.
- The Task is returned to the caller.
Task builders are special helper types meant only for compiler consumption. However, their behavior isn’t much different from what happens when you use the TaskCompletionSource types of the Task Parallel Library (TPL) directly.
So far I’ve created a Task to return and a “play” button—the resumption delegate—for someone to call when it’s time to resume execution. I still need to see how execution is resumed and how the await expression sets up for something to do this. Before I put it all together, though, let’s take a look at how tasks are consumed.
Awaitables and Awaiters
As you’ve seen, Tasks can be awaited. However, Visual Basic and C# are perfectly happy to await other things as well, as long as they’re awaitable; that is, as long as they have a certain shape that the await expression can be compiled against. In order to be awaitable, something has to have a GetAwaiter method, which in turn returns an awaiter. As an example, Task<TResult> has a GetAwaiter method that returns this type:
public struct TaskAwaiter<TResult>
{
  public bool IsCompleted { get; }
  public void OnCompleted(Action continuation);
  public TResult GetResult();
}
The members on the awaiter let the compiler check if the awaitable is already complete, sign up a callback to it if it isn’t yet, and obtain the result (or Exception) when it is.
We can now start to see what an await should do to pause and resume around the awaitable. For instance, the await inside our TryFetchAsync example would turn into something like this:
__awaiter1 = client.DownloadDataTaskAsync(url).GetAwaiter();
if (!__awaiter1.IsCompleted)
{
  ... // Prepare for resumption at Resume1
  __awaiter1.OnCompleted(__moveNext);
  return; // Hit the "pause" button
}
Resume1:
... __awaiter1.GetResult() ...
Again, watch what happens:
- An awaiter is obtained for the task returned from DownloadDataTaskAsync.
- If the awaiter is not complete, the “play” button—the resumption delegate—is passed to the awaiter as a callback.
- When the awaiter resumes execution (at Resume1) the result is obtained and used in the code that follows it.
Clearly the common case is that the awaitable is a Task or Task<T>. Indeed, those types—which are already present in the Microsoft .NET Framework 4—have been keenly optimized for this role. However, there are good reasons for allowing other awaitable types as well:
- Bridging to other technologies: F#, for instance, has a type Async<T> that roughly corresponds to Func<Task<T>>. Being able to await Async<T> directly from Visual Basic and C# helps bridge between asynchronous code written in the two languages. F# is similarly exposing bridging functionality to go the other way—consuming Tasks directly in asynchronous F# code.
- Implementing special semantics: The TPL itself is adding a few simple examples of this. The static Task.Yield utility method, for instance, returns an awaitable that will claim (via IsCompleted) to not be complete, but will immediately schedule the callback passed to its OnCompleted method, as if it had in fact completed. This lets you force scheduling and bypass the compiler’s optimization of skipping it if the result is already available. This can be used to poke holes in “live” code, and improve responsiveness of code that isn’t sitting idle. Tasks themselves can’t represent things that are complete but claim not to be, so a special awaitable type is used for that.
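As a sketch of what a user-defined awaitable might look like, here is a minimal type following the awaiter pattern above. The type, its names, and the timer-based scheduling are invented for illustration; a production implementation would need more care around exceptions and context:

```csharp
using System;
using System.Threading.Tasks;

// An awaitable that always suspends and resumes after a delay.
public struct DelayAwaitable
{
  private readonly int _ms;
  public DelayAwaitable(int ms) { _ms = ms; }
  public DelayAwaiter GetAwaiter() { return new DelayAwaiter(_ms); }
}

public struct DelayAwaiter
{
  private readonly int _ms;
  public DelayAwaiter(int ms) { _ms = ms; }

  // Claiming not to be complete forces the compiler to schedule the
  // resumption delegate instead of continuing inline.
  public bool IsCompleted { get { return false; } }

  public void OnCompleted(Action continuation)
  {
    // Resume the rest of the async method after the delay.
    Task.Delay(_ms).ContinueWith(_ => continuation());
  }

  public void GetResult() { } // a "void" await: nothing to return
}

// Usage inside an async method:
//   await new DelayAwaitable(100);
```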
Before I take a further look at the awaitable implementation of Task, let’s finish looking at the compiler’s rewriting of the asynchronous method, and flesh out the bookkeeping that tracks the state of the method’s execution.
The State Machine
In order to stitch it all together, I need to build up a state machine around the production and consumption of the Tasks. Essentially, all the user logic from the original method is put into the resumption delegate, but the declarations of locals are lifted out so they can survive multiple invocations. Furthermore, a state variable is introduced to track how far things have gotten, and the user logic in the resumption delegate is wrapped in a big switch that looks at the state and jumps to a corresponding label. So whenever resumption is called, it will jump right back to where it left off the last time. Figure 2 puts the whole thing together.
Figure 2 Creating a State Machine
static Task<byte[]> TryFetchAsync(string url)
{
  var __builder = new AsyncTaskMethodBuilder<byte[]>();
  int __state = 0;
  Action __moveNext = null;
  TaskAwaiter<byte[]> __awaiter1;
  WebClient client = null;
  __moveNext = delegate
  {
    try
    {
      if (__state == 1) goto Resume1;
      client = new WebClient();
      try
      {
        __awaiter1 = client.DownloadDataTaskAsync(url).GetAwaiter();
        if (!__awaiter1.IsCompleted)
        {
          __state = 1;
          __awaiter1.OnCompleted(__moveNext);
          return;
        }
Resume1:
        __builder.SetResult(__awaiter1.GetResult());
      }
      catch (WebException) { }
      __builder.SetResult(null);
    }
    catch (Exception exception)
    {
      __builder.SetException(exception);
    }
  };
  __moveNext();
  return __builder.Task;
}
Quite the mouthful! I’m sure you’re asking yourself why this code is so much more verbose than the manually “asynchronized” version shown earlier. There are a couple of good reasons, including efficiency (fewer allocations in the general case) and generality (it applies to user-defined awaitables, not just Tasks). However, the main reason is this: You don’t have to pull the user logic apart after all; you just augment it with some jumps and returns and such.
While the example is too simple to really justify it, rewriting a method’s logic into a semantically equivalent set of discrete methods for each of its continuous bits of logic between the awaits is very tricky business. The more control structures the awaits are nested in, the worse it gets. When not just loops with continue and break statements but try-finally blocks and even goto statements surround the awaits, it’s exceedingly difficult, if indeed possible, to produce a rewrite with high fidelity.
Instead of attempting that, it seems a neat trick is to just overlay the user’s original code with another layer of control structure, airlifting you in (with conditional jumps) and out (with returns) as the situation requires. Play and pause. At Microsoft, we’ve been systematically testing the equivalence of asynchronous methods to their synchronous counterparts, and we’ve confirmed that this is a very robust approach. There’s no better way to preserve synchronous semantics into the asynchronous realm than by retaining the code that describes those semantics in the first place.
The Fine Print
The description I’ve provided is slightly idealized—there are a few more tricks to the rewrite, as you may have suspected. Here are a few of the other gotchas the compiler has to deal with:
Goto Statements The rewrite in Figure 2 doesn't actually compile, because goto statements (in C# at least) can't jump to labels buried in nested structures. That's no problem in itself, as the compiler generates intermediate language (IL), not source code, and isn't bothered by nesting. But even IL doesn't allow jumping into the middle of a try block, as is done in my example. Instead, what really happens is that you jump to the beginning of a try block, enter it normally and then switch and jump again.
Finally Blocks When returning out of the resumption delegate because of an await, you don’t want the finally bodies to be executed yet. They should be saved for when the original return statements from the user code are executed. You control that by generating a Boolean flag signaling whether the finally bodies should be executed, and augmenting them to check it.
Evaluation Order An await expression is not necessarily the first argument to a method or operator; it can occur in the middle. To preserve the order of evaluation, all the preceding arguments must be evaluated before the await, and the act of storing them and retrieving them again after the await is surprisingly involved.
On top of all this, there are a few limitations you can’t get around. For instance, awaits aren’t allowed inside of a catch or finally block, because we don’t know of a good way to reestablish the right exception context after the await.
The Task Awaiter
The awaiter used by the compiler-generated code to implement the await expression has considerable freedom as to how it schedules the resumption delegate—that is, the rest of the asynchronous method. However, the scenario would have to be really advanced before you’d need to implement your own awaiter. Tasks themselves have quite a lot of flexibility in how they schedule because they respect a notion of scheduling context that itself is pluggable.
The scheduling context is one of those notions that would probably look a little nicer if we had designed for it from the start. As it is, it’s an amalgam of a few existing concepts that we’ve decided not to mess up further by trying to introduce a unifying concept on top. Let’s look at the idea at the conceptual level, and then I’ll dive into the realization.
The philosophy underpinning the scheduling of asynchronous callbacks for awaited tasks is that you want to continue executing “where you were before,” for some value of “where.” It’s this “where” that I call the scheduling context. Scheduling context is a thread-affine concept; every thread has (at most) one. When you’re running on a thread, you can ask for the scheduling context it’s running in, and when you have a scheduling context, you can schedule things to run in it.
So this is what an asynchronous method should do when it awaits a task:
- On suspension: Ask the thread it’s running on for its scheduling context.
- On resumption: Schedule the resumption delegate back on that scheduling context.
Why is this important? Consider the UI thread. It has its own scheduling context, which schedules new work by sending it through the message queue back on the UI thread. This means that if you’re running on the UI thread and await a task, when the result of the task is ready, the rest of the asynchronous method will run back on the UI thread. Thus, all the things you can do only on the UI thread (manipulating the UI) you can still do after the await; you won’t experience a weird “thread hop” in the middle of your code.
Other scheduling contexts are multithreaded; specifically, the standard thread pool is represented by a single scheduling context. When new work is scheduled to it, it may go on any of the pool’s threads. Thus, an asynchronous method that starts out running on the thread pool will continue to do so, though it may “hop around” among different specific threads.
In practice, there’s no single concept corresponding to the scheduling context. Roughly speaking, a thread’s SynchronizationContext acts as its scheduling context. So if a thread has one of those (an existing concept that can be user-implemented), it will be used. If it doesn’t, then the thread’s TaskScheduler (a similar concept introduced by the TPL) is used. If it doesn’t have one of those either, the default TaskScheduler is used; that one schedules resumptions to the standard thread pool.
Of course, all this scheduling business has a performance cost. Usually, in user scenarios, it’s negligible and well worth it: Having your UI code chopped up into manageable bits of actual live work and pumped in through the message pump as waited-for results become available is normally just what the doctor ordered.
Sometimes, though—especially in library code—things can get too fine-grained. Consider:
async Task<int> GetAreaAsync()
{
    return await GetXAsync() * await GetYAsync();
}
This schedules back to the scheduling context twice—after each await—just to perform a multiplication on the “right” thread. But who cares what thread you’re multiplying on? That’s probably wasteful (if used often), and there are tricks to avoid it: You can essentially wrap the awaited Task in a non-Task awaitable that knows how to turn off the schedule-back behavior and just run the resumption on whichever thread completes the task, avoiding the context switch and the scheduling delay:
async Task<int> GetAreaAsync()
{
    return await GetXAsync().ConfigureAwait(continueOnCapturedContext: false)
         * await GetYAsync().ConfigureAwait(continueOnCapturedContext: false);
}
Less pretty, to be sure, but a neat trick to use in library code that ends up being a bottleneck for scheduling.
Go Forth and Async’ify
Now you should have a working understanding of the underpinnings of asynchronous methods. Probably the most useful points to take away are:
- The compiler preserves the meaning of your control structures by actually preserving your control structures.
- Asynchronous methods don’t schedule new threads—they let you multiplex on existing ones.
- When tasks get awaited, they put you back “where you were” for a reasonable definition of what that means.
If you’re like me, you’ve already been alternating between reading this article and typing in some code. You’ve multiplexed multiple flows of control—reading and coding—on the same thread: you. That’s just what asynchronous methods let you do.
Mads Torgersen is a principal program manager on the C# and Visual Basic Language team at Microsoft.
Thanks to the following technical expert for reviewing this article: Stephen Toub | https://docs.microsoft.com/en-us/archive/msdn-magazine/2011/october/asynchronous-programming-pause-and-play-with-await | CC-MAIN-2022-27 | refinedweb | 4,098 | 52.19 |
I have a program I have setup a Unit file for so that it will run as a service on startup.
The program runs fine from the command line, however it fails to run on startup.
When I check the status of the service, I see that it has failed to load. I see the failure is the result of:
import paho.mqtt.client as mqtt
ImportError: No module named 'paho'
And yet the program does run from the command line, and also in IDLE, just fine and throws no import error.
Can someone explain what's going on?
Many thanks | https://www.raspberrypi.org/forums/viewtopic.php?p=1470011 | CC-MAIN-2020-05 | refinedweb | 102 | 80.11 |
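For what it's worth, a common cause of this symptom is that the systemd service runs as a different user (often root) or a different Python interpreter than the shell session where paho imports fine. The unit-file sketch below is an assumption-laden example, not a diagnosis of this exact setup: the user name, script path, Python version and site-packages path are all placeholders to adapt.

```ini
[Service]
# Run as the user that installed paho (e.g. via "pip3 install --user paho-mqtt")
User=pi
# Use the exact interpreter that works from the command line
ExecStart=/usr/bin/python3 /home/pi/mqtt_client.py
# Only needed if paho lives somewhere the service's Python does not search
Environment=PYTHONPATH=/home/pi/.local/lib/python3.5/site-packages
```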
If you want to know more about this feature, please review the product documentation at the previous link or watch the Channel 9 video on this topic.
Customer Scenario
In a recent case, we were working with a customer who was trying to use Table Valued Parameters (TVPs) to do a ‘batch import’ of data into the database. The TVP was a parameter into a stored procedure, and the stored procedure was in turn joining the values from the TVP ‘table’ with some other tables and then performing the final insert into a table which had some columns encrypted with Always Encrypted.
Now, most of the ‘magic’ behind Always Encrypted is actually embedded in the client library which is used. Unfortunately, none of the client libraries (.NET, JDBC or ODBC) support encrypted columns passed within TVPs. So, we needed a viable workaround in this case to unblock the customer. In this blog post, we explain this workaround by using a simple example.
Walkthrough: Working with Bulk data in Always Encrypted
We first proceed to create Column Master Key (CMK) and a Column Encryption Key (CEK). For the CMK, we used a certificate from the Current User store for simplicity. For more information on key management in Always Encrypted, please refer to this link.
Create the CMK
Here’s how we created the CMK. You can either use the GUI:
Or you can use T-SQL syntax:
USE [TVPAE]
GO
CREATE COLUMN MASTER KEY [TestCMK]
WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/C17D4826FA1B6B68808951BF81734283388937EF'
)
Create the CEK
Here’s how we created the CEK using the GUI:
Alternatively you can use T-SQL to do this:
CREATE COLUMN ENCRYPTION KEY [TestCEK]
WITH VALUES (
    COLUMN_MASTER_KEY = [TestCMK],
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075007200720065006E00740075007300650072002F006D0079002F0063003100370064003...5E48531480
)
Create the table
Then we create the final table, with the encrypted column defined. Note that in the real application, this table already exists, with data in it. We’ve obviously simplified the scenario here!
CREATE TABLE FinalTable (
    idCol INT,
    somePII VARCHAR(500) ENCRYPTED WITH (
        COLUMN_ENCRYPTION_KEY = TestCEK,
        ENCRYPTION_TYPE = RANDOMIZED,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
    )
)
Reworking the application to use SqlBulkCopy instead of TVPs
With this setup on the database side of things, we proceed to develop our client application to work around the TVP limitation. The key to doing this is to use the SqlBulkCopy class in .NET Framework 4.6 or above. This class ‘understands’ Always Encrypted and should need minimal rework on the developer front. The reason for the minimal rework is that this class accepts a DataTable as a parameter, which is the same structure previously used to populate the TVP. This is an important point, because it helps minimize the changes to the application.
Let’s get this working! The high-level steps are outlined below; there is a full code listing at the end of this blog post as well.
Populate the DataTable as before with the bulk data
As mentioned before the creation and population of the DataTable does not change. In the sample below, this is done in the MakeTable() method.
Using client side ad-hoc SQL, create a staging table on the server side.
This could also be done using T-SQL inside a stored procedure, but we had to uniquely name the staging table per-session so we chose to create the table from ad-hoc T-SQL in the application. We did this using a SELECT … INTO with a dummy WHERE clause (in the code listing, please refer to the condition ‘1=2’ which allows us to efficiently clone the table definition without having to hard-code the same), so that the column encryption setting is retained on the staging table as well. In the sample below, this step is done in the first part of the DoBulkInsert method.
Use the SqlBulkCopy API to ‘bulk insert’ into staging table
This is the core of the process. The important things to note here are the connection string (in the top of the class in the code listing) has the Column Encryption Setting attribute set to Enabled. When this attribute is set to Enabled, the SqlBulkCopy class interrogates the destination table and determines that a set of columns (in our sample case, it is just one column) needs to be encrypted before passing to server. This step is in the second part of the DoBulkInsert method.
Move data from staging table into final table
In the sample application, this is done by using an ad-hoc T-SQL statement to simply append the new data from the staging table into the final table. In the real application, this would typically be done through some T-SQL logic within a stored procedure or such.
There is an important consideration here: encrypted column data cannot be transformed on the server side. This means that no expressions (columns being concatenated, calculated or transformed in any other way) are permitted on the encrypted columns on server side. This limitation is true regardless of whether you use TVPs or not, but might become even more important in the case where TVPs are used.
In our sample application we just inserted the data from the staging table into the final table, and then drop the staging table. This code is in the InsertStagingDataIntoMainTable method in the listing below.
Conclusion
While Always Encrypted offers a compelling use case to protect sensitive data on the database side, there are some restrictions it poses to the application. In this blog post we show you how you can work around the restriction with TVPs and bulk data. We hope this helps you move forward with adopting Always Encrypted! Please leave your comments and questions below, we are eager to hear from you!
Appendix: Client Application Code
Here is the client application code used.
namespace TVPAE
{
    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;

    class Program
    {
        static private string TVPAEConnectionString =
            "Server=.;Initial Catalog=TVPAE;Integrated Security=true;Column Encryption Setting=enabled;";

        static void Main(string[] args)
        {
            var stgTableName = DoBulkInsert(MakeTable());
            InsertStagingDataIntoMainTable(stgTableName);
        }

        private static DataTable MakeTable()
        {
            DataTable newData = new DataTable();

            // create columns in the DataTable
            var idCol = new DataColumn()
            {
                DataType = System.Type.GetType("System.Int32"),
                ColumnName = "idCol",
                AutoIncrement = true
            };
            newData.Columns.Add(idCol);

            var somePII = new DataColumn()
            {
                DataType = System.Type.GetType("System.String"),
                ColumnName = "somePII"
            };
            newData.Columns.Add(somePII);

            // create and add some test data
            var rand = new Random();
            for (var loopCount = 0; loopCount < 10000; loopCount++)
            {
                var datarowSample = newData.NewRow();
                datarowSample["somePII"] = DateTime.Now.ToLongDateString();
                newData.Rows.Add(datarowSample);
            }
            newData.AcceptChanges();
            return newData;
        }

        private static void InsertStagingDataIntoMainTable(string stgTableName)
        {
            using (var conn = new SqlConnection(TVPAEConnectionString))
            {
                conn.Open();
                using (var cmd = new SqlCommand(
                    "BEGIN TRAN; INSERT FinalTable SELECT * FROM [" + stgTableName +
                    "]; DROP TABLE [" + stgTableName + "]; COMMIT", conn))
                {
                    Console.WriteLine("Inserted rowcount: " + cmd.ExecuteNonQuery().ToString());
                }
            }
        }

        private static string DoBulkInsert(DataTable stagingData)
        {
            string stagingTableName = "StagingTable_" + Guid.NewGuid().ToString();
            using (var conn = new SqlConnection(TVPAEConnectionString))
            {
                conn.Open();

                // create the staging table - note the use of the dummy WHERE 1 = 2 predicate
                using (var cmd = new SqlCommand(
                    "SELECT * INTO [" + stagingTableName + "] FROM FinalTable WHERE 1 = 2;", conn))
                {
                    cmd.ExecuteNonQuery();
                }

                using (var bulkCopy = new SqlBulkCopy(conn))
                {
                    bulkCopy.DestinationTableName = "[" + stagingTableName + "]";
                    bulkCopy.WriteToServer(stagingData);
                }
            }
            return stagingTableName;
        }
    }
}
Is there a plan in the future to support user defined table types. We have a significant number of stored procedures that take user defined tables types as a parameter, so many of those procedures would need to be refactored.
Hi Alan – we do not have this lined up anytime soon. Can you please contact me offline at arvindsh AT microsoft DOT com and share more use case details, and your contact information if you don’t mind? We are very interested to talk to more customers with this issue so your details would be very useful.
– Arvind.
Is it possible to send an encrypted parameter in a stored procedure to the database to query a table with an encrypted column? More specifically, if I have a table with an encrypted column and I wanted to query it based on a user input, will I be able to write a stored procedure and pass the user input to the database and query the encrypted column inside the stored procedure with my user input in the where clause (deterministic encryption)
The same case with us. We have a significant number of stored procedures that take user defined tables types as a parameter.
Is there any update if this will be back supported or not? | https://blogs.msdn.microsoft.com/sqlcat/2016/08/09/using-table-valued-parameters-with-always-encrypted-in-sql-server-2016-and-azure-sql-database/ | CC-MAIN-2017-30 | refinedweb | 1,408 | 54.02 |
Introduction on React Components
React components are like functions in JavaScript; the main difference between a React component and a JavaScript function is that a React component returns HTML via its render function. Whatever HTML view is visible in the UI is generated by components. Previously, to create an HTML design (a page for the admin user, a page for a superuser, or a page for any normal user) we had to write HTML for each page, but with components we can create one design and use it for all of them. Because components work like functions, anyone can reuse the same HTML design without rewriting the same code. So, in general, we can say components provide greater reusability in UI.
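As a plain-JavaScript sketch of this "components are reusable functions" idea (this is not real React code; the Button name and labels are made up for illustration):

```javascript
// A "component" is just a function that returns markup, so one
// definition serves every page instead of duplicated HTML.
function Button(label) {
  return `<button>${label}</button>`;
}

// The same component rendered with different data, no HTML rewritten:
const adminPage = Button("Manage users");
const userPage = Button("View profile");
console.log(adminPage); // <button>Manage users</button>
console.log(userPage);  // <button>View profile</button>
```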
Types of React Components
There are two types of main components in react js they are
- Stateful components
- Stateless components
1. Stateful Components
A component that maintains internal state is called a stateful component; this state can be changed anywhere throughout the component's life cycle. The name itself describes the behavior. Stateful components are class-based components. In class-based components we create a class that extends the core React library. If you know a little about the class concept, you will easily be able to understand it. Once we create a component with a particular class name, we can use it by writing <ComponentClassName />, which looks very much like any other HTML tag. Assume that you are creating a button and you are going to use that button in all of your projects. Previously, when React was not available, we used to write the same piece of code everywhere to create a button. With the help of React, we can write the code once as a class and use it as many times as we want.
Example:
In the below example we are developing a react class-based component.
Code:
import React from 'react';
import ReactDOM from 'react-dom';
class ClassComponents extends React.Component {
constructor() {
super()
this.state = {
msg: "Use me anywhere"
}
}
render() {
return (
<div>
<button>{this.state.msg}</button>
</div>
);
}
}
ReactDOM.render(
<ClassComponents />, document.getElementById('root')
);
<div id="root"></div>
Output:
Explanation of code:
Here we have created a class named ClassComponents that extends React.Component from the core React library we imported. Next, we have written a render function; this function plays the key role of rendering HTML to the end user. The HTML and the button design are visible to us only because of the render function. We have used the constructor to initialize the message value, which is then accessible as this.state.msg.
When should we use stateful components?
Suppose you are developing a component and you know that its attributes are going to change many times throughout the life cycle of the component, for example if the component contains a user's habits or the fruits available in a store. In all these cases the values will keep changing, so in such situations we should create stateful components.
Advantage of stateful components
The main advantage of stateful components is that we can manage all the state and perform operations accordingly; for example, any change in a user's activity is reflected by updating the state, and the updated state is replicated to the component the end user sees.
2. Stateless components
A component is called a stateless component when we know that all of its attributes are fixed and will not change throughout the component's life cycle. The name itself describes the behavior. A stateless component can still be written as a class, but everything in it is static.
Example:
In the below example we are developing a react component having no state, it contains rules to apply a coupon.
Code:
import React from 'react';
import ReactDOM from 'react-dom';
class ClassComponents extends React.Component {
render() {
return (
<div>
<p>Condition to apply coupon:</p>
<ol>
<li>If you are ordering for the first time.</li>
<li>Minimum order amount should be greater than 1000.</li>
</ol>
</div>
);
}
}
ReactDOM.render(
<ClassComponents />, document.getElementById('root')
);
<div id="root"></div>
Output:
Explanation of code:
As we said, in a stateless component the state will not change anywhere in the code. Take the above example: you have probably shopped online and applied a coupon at some point, and if so you have seen the conditions attached to coupons. Some conditions for a coupon to be applicable are fixed in the coupon implementation, for example that the coupon applies only if you are making your first order, or that the minimum order amount must be greater than 1000, or other fixed conditions that do not change throughout the life cycle.
When should we use stateless components?
Suppose you are developing a component and you know that its attributes are not going to change at all throughout the life cycle of the component, for example if the component contains a coupon implementation where the conditions are static and fixed. In that situation, we should use stateless components.
Advantage of stateless components
The main advantage of stateless components is that they are simple and easy to understand. They are also faster, as they do not have to maintain state; stateless components contain only static elements, which are very easy to manage.
Conclusion
We learned in this tutorial that there are two types of components, stateful and stateless: stateful components are used for dynamic situations and stateless components for static situations.
Python Language
This document describes installation and basic use of NEURON’s Python interface. For information on the modules in the neuron namespace, see the separate module documentation.
Installation
- Syntax:
./configure --with-nrnpython ...
make
make install
- Description:
Builds NEURON with Python embedded as an alternative interpreter to HOC. The python version used is that found from
which python.
NEURON can be used as an extension to Python if, after building as above, one goes to the src/nrnpython directory containing the Makefile and types something analogous to
python setup.py install --home=$HOME
Which on my machine installs in
/home/hines/lib64/python and can be imported into NEURON with
ipython
import sys
sys.path.append("/home/hines/lib64/python")
import neuron
It is probably better to avoid the incessant
import sys… and instead add to your shell environment something analogous to
export PYTHONPATH=$PYTHONPATH:/home/hines/lib64/python
since when launching NEURON and embedding Python, the path is automatically defined so that
import neuron does not require any prerequisites. If there is a <host-cpu>/.libs/libnrnmech.so file in your working directory, those nmodl mechanisms will be loaded as well. After this, you will probably want to:
h = neuron.h # neuron imports hoc and does a h = hoc.HocObject()
In the past we also recommended an “import nrn” but this is no longer necessary as everything in that module is also directly available from the “h” object. You can use the hoc function
nrn_load_dll() to load mechanism files as well, e.g. if neurondemo was used earlier so the shared object exists,
h = hoc.HocObject()
h('nrn_load_dll("$(NEURONHOME)/demo/release/x86_64/.libs/libnrnmech.so")')
Python Accessing HOC
- Syntax:
nrniv -python [file.hoc file.py -c "python_statement"]
nrngui -python ...
neurondemo -python ...
- Description:
- Launches NEURON with Python as the command line interpreter. File arguments with a .hoc suffix are interpreted using the Hoc interpreter. File arguments with the .py suffix are interpreted using the Python interpreter. The -c statement causes python to execute the statement. The import statements allow use of the following:

neuron.hoc.execute()
- Syntax:
import neuron
neuron.hoc.execute('any hoc statement')
- Description:
Execute any statement or expression using the Hoc interpreter. This is obsolete since the same thing can be accomplished with HocObject with less typing. Note that triple quotes can be used for multiple line statements. A newline inside a string should be escaped as '\\n'.
hoc.execute('load_file("nrngui.hoc")')
See also
class neuron.hoc.HocObject
- Syntax:
import neuron
h = neuron.hoc.HocObject()
- Description:
Allow access to anything in the Hoc interpreter. Note that
h = neuron.h is the typical statement used since the neuron module creates an h field. When created via hoc.HocObject(), its print string is “TopLevelHocInterpreter”.
h("any hoc statement")
is the same as hoc.execute(…)
Any hoc variable or string in the Hoc world can be accessed in the Python world:
h('strdef s')
h('{x = 3 s = "hello"}')
print h.x # prints 3.0
print h.s # prints hello
And if it is assigned a value in the python world it will be that value in the Hoc world. (Note that any numeric python type becomes a double in Hoc.)
h.x = 25
h.s = 'goodbye'
h('print x, s') # prints 25 goodbye
Any hoc object can be handled in Python.
h('objref vec')
h('vec = new Vector(5)')
print h.vec # prints Vector[0]
print h.vec.size() # prints 5.0
Note that any hoc object method or field may be called, or evaluated/assigned using the normal dot notation which is consistent between hoc and python. However, hoc object methods MUST have the parentheses or else the Python object is not the return value of the method but a method object, i.e.
x = h.vec.size # not 5 but a python callable object
print x # prints: Vector[0].size()
print x() # prints 5.0
This is also true for indices
h.vec.indgen().add(10) # fills elements with 10, 11, ..., 14
print h.vec.x[2] # prints 12.0
x = h.vec.x # a python indexable object
print x # prints Vector[0].x[?]
print x[2] # prints 12.0
The hoc object can be created directly in Python. E.g.
v = h.Vector(10).indgen.add(10)
Iteration over hoc Vector, List, and arrays is supported. e.g.
v = h.Vector(4).indgen().add(10)
for x in v:
    print x
l = h.List(); l.append(v); l.append(v); l.append(v)
for x in l:
    print x
h('objref o[2][3]')
for x in h.o:
    for y in x:
        print x, y
Any hoc Section can be handled in Python. E.g.
h('create soma, axon')
ax = h.axon
makes ax a Python Section which references the hoc axon section. Many hoc functions require a currently accessed section and for these a typical idiom is
ax.push(); print h.secname(); h.pop_section()
More compact is to use the “sec” keyword parameter after the last positional parameter which makes the Section value the currently accessed section during the scope of the function call. e.g
print h.secname(sec=ax)
Point processes are handled by direct object creation as in
stim = h.IClamp(1.0, sec=ax)  # or stim = h.IClamp(ax(1.0))
The latter is a somewhat simpler idiom that uses the Segment object which knows both the section and the location in the section and can also be used with the stim.loc function.
Many hoc functions use call by reference and return information by changing the value of an argument. These are called from the python world by passing a HocObject.ref() object. Here is an example that changes a string.
h('proc chgstr() { $s1 = "goodbye" }')
s = h.ref('hello')
print s[0] # notice the index to dereference. prints hello
h.chgstr(s)
print s[0] # prints goodbye
h.sprint(s, 'value is %d', 2+2)
print s[0] # prints value is 4
and here is an example that changes a pointer to a double
h('proc chgval() { $&1 = $2 }')
x = h.ref(5)
print x[0] # prints 5.0
h.chgval(x, 1+1)
print x[0] # prints 2.0
Finally, here is an example that changes a objref arg.
h('proc chgobj() { $o1 = new List() }')
v = h.ref([1,2,3]) # references a Python object
print v[0] # prints [1, 2, 3]
h.chgobj(v)
print v[0] # prints List[0]
Unfortunately, the HocObject.ref() is not often useful since it is not really a pointer to a variable. For example consider
h('x = 1')
y = h.ref(h.x)
print y # prints hoc ref value 1
print h.x, y[0] # prints 1.0 1.0
h.x = 2
print h.x, y[0] # prints 2.0 1.0
and thus is not what is needed in the most common case of a hoc function holding a pointer to a variable, such as Vector.record() or Vector.play(). For this one needs the _ref_varname idiom, which works for any hoc variable and acts exactly like a c pointer. e.g.:
h('x = 1')
y = h._ref_x
print y # prints pointer to hoc value 1
print h.x, y[0] # prints 1.0 1.0
h.x = 2
print h.x, y[0] # prints 2.0 2.0
y[0] = 3
print h.x, y[0] # prints 3.0 3.0
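The distinction between h.ref() and _ref_ can be sketched with a pure-Python analogy that needs no NEURON installation (the store dictionary and LivePointer class are invented for illustration and are not part of the NEURON API; Python 3 syntax):

```python
# h.ref(h.x) captures a copy of the current value, while h._ref_x behaves
# like a live pointer into the interpreter's storage for x.
store = {"x": 1.0}            # stands in for hoc's variable storage

ref_copy = [store["x"]]       # like h.ref(h.x): a snapshot of the value

class LivePointer:            # like h._ref_x: reads/writes the variable itself
    def __getitem__(self, i):
        return store["x"]
    def __setitem__(self, i, value):
        store["x"] = value

ptr = LivePointer()
store["x"] = 2.0              # like h.x = 2 in hoc
print(ref_copy[0])            # 1.0 -- the snapshot is stale
print(ptr[0])                 # 2.0 -- the pointer tracks the variable
ptr[0] = 3.0                  # writing through the pointer changes the variable
print(store["x"])             # 3.0
```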
Of course, this works only for hoc variables, not python variables. For arrays, use all the index arguments and prefix the name with _ref_. The pointer will be to the location indexed and one may access any element beyond the location by giving one more non-negative index. No checking is done with regard to array bounds errors. e.g
v = h.Vector(4).indgen().add(10)
y = v._ref_x[1] # holds pointer to second element of v
print v.x[2], y[1] # prints 12.0 12.0
y[1] = 50
v.printf() # prints 10 11 50 13
The idiom is used to record from (or play into) voltage and mechanism variables. eg
v = h.Vector()
v.record(h.soma(.5)._ref_v, sec=h.soma)
pi = h.Vector()
pi.record(h.soma(.5).pas._ref_i, sec=h.soma)
ip = h.Vector()
ip.record(h.soma(.5)._ref_i_pas, sec=h.soma)
The factory idiom is one way to create Hoc objects and use them in Python.
h('obfunc newvec() { return new Vector($1) }')
v = h.newvec(10).indgen().add(10)
v.printf() # prints 10 11 ... 19 (not 10.0 ... since printf is a hoc function)
but that idiom is more or less obsolete as the same thing can be accomplished directly as shown a few fragments back. Also consider the minimalist
vt = h.Vector
v = vt(4).indgen().add(10)
Any Python object can be stored in a Hoc List. It is more efficient when navigating the List to use a python callable that avoids repeated lookup of a Hoc method symbol. Note that in the Hoc world a python object is of type PythonObject but python strings and scalars are translated back and forth as strdef and scalar doubles respectively.
h('obfunc newlist() { return new List() }')
list = h.newlist()
apnd = list.append
apnd([1,2,3]) # Python list in hoc List
apnd(('a', 'b', 'c')) # Python tuple in hoc List
apnd({'a':1, 'b':2, 'c':3}) # Python dictionary in hoc List
item = list.object
for i in range(0, int(list.count())): # notice the irksome cast to int.
    print item(i)
h('for i=0, List[0].count-1 print List[0].object(i)')
To see all the methods available for a hoc object, use, for example,
dir(h.Vector)
h.anyclass can be subclassed with
class MyVector(neuron.hclass(neuron.h.Vector)):
    pass
v = MyVector(10)
v.zzz = 'hello' # a new attribute
print v.size() # call any base method
If you override a base method such as ‘size’ use
v.baseattr('size')()
to access the base method. Multiple inheritance involving hoc classes probably does not make sense. If you override the __init__ procedure when subclassing a Section, be sure to explicitly initialize the Section part of the instance with
nrn.Section.__init__()
Since nrn.Section is a standard Python class one can subclass it normally with
class MySection(neuron.nrn.Section): pass
The hoc setpointer statement is effected in Python as a function call with a syntax for POINT_PROCESS and SUFFIX (density)mechanisms respectively of
h.setpointer(_ref_hocvar, 'POINTER_name', point_process_object)
h.setpointer(_ref_hocvar, 'POINTER_name', nrn.Mechanism_object)
See
nrn/share/examples/nrniv/nmodl/ (tstpnt1.py and tstpnt2.py) for examples of usage. For a density mechanism, the 'POINTER_name' cannot have the SUFFIX appended. For example if a mechanism with suffix foo has a POINTER bar and you want it to point to t use
h.setpointer(_ref_t, 'bar', sec(x).foo)
See also
Vector.to_python(),
Vector.from_python()
neuron.hoc.hoc_ac()
- Syntax:
import hoc
double_value = hoc.hoc_ac()
hoc.hoc_ac(double_value)
- Description:
Get and set the hoc global scalar variable hoc_ac_. This is obsolete since HocObject is far more general.
import hoc
hoc.hoc_ac(25)
hoc.execute('print hoc_ac_') # prints 25
hoc.execute('hoc_ac_ = 17')
print hoc.hoc_ac() # prints 17
neuron.h.cas()
- Syntax:
sec = h.cas()
or
import nrn
sec = nrn.cas()
- Description:
Returns the currently accessed section as a Python Section object.
import neuron
h = neuron.h
h('''
create soma, dend[3], axon
access dend[1]
''')
sec = h.cas()
print sec, sec.name()
class neuron.h.Section
- Syntax:
sec = h.Section()
sec = h.Section([name='string'[, cell=self]])
or
import nrn
sec = nrn.Section()
- Description:
The Python Section object allows modification and evaluation of the information associated with a NEURON section (see the Conceptual Overview of Sections). The typical way to get a reference to a Section in Python is with neuron.h.cas() or by using the hoc section name, as in asec = h.dend[4]. sec = h.Section() will create an anonymous Section with a hoc name constructed from “Section” and the Python reference address. Access to Section variables is through standard dot notation. The “anonymous” python section can be given a name with the named name parameter and/or associated with a cell object using the named cell parameter. Note that a cell association is required if one anticipates using the gid2cell() method of ParallelContext.
import neuron
h = neuron.h
sec = h.Section()
print sec # prints <nrn.Section object at 0x2a96982108>
print sec.name() # prints PySec_2a96982108
sec.nseg = 3 # section has 3 segments (compartments)
sec.insert("hh") # all compartments have the hh mechanism
sec.L = 20 # Length of the entire section is 20 um.
for seg in sec: # iterates over the section compartments
    for mech in seg: # iterates over the segment mechanisms
        print sec.name(), seg.x, mech.name()
A Python Section can be made the currently accessed section by using its push method. Be sure to use pop_section() when done with it to restore the previous currently accessed section. I.e., given the above fragment,
from neuron import h h(''' objref p p = new PythonObject() {p.sec.push() psection() pop_section()} ''') #or sec.push() h.secname() h.psection() h.pop_section()
When calling a hoc function it is generally preferred to named sec arg style to automatically push and pop the section stack during the scope of the hoc function. ie
h.psection(sec=sec)
With a
SectionRefone can, for example,
h.dend[2].push() ; sr = h.SectionRef() ; h.pop_section() sr.root.push() ; print h.secname() ; h.pop_section()
or, more compactly,
sr = h.SectionRef(sec=h.dend[2]) print sr.root.name(), h.secname(sec=sr.root)
Iteration over sections is accomplished with
for s in h.allsec() : print h.secname() sl = h.SectionList() ; sl.wholetree() for s in sl : print h.secname()
Connecting a child section to a parent section uses the connect method using either
childsec.connect(parentsec, parentx, childx) childsec.connect(parentsegment, childx)
In the first form parentx and childx are optional with default values of 1 and 0 respectively. Parentx must be 0 or 1. In the second form, childx is optional and by default is 0. The parentsegment must be either parentsec(0) or parentsec(1).
sec.cell() returns the cell object that ‘owns’ the section. The return value is None if no object owns the section (a top level section), the instance of the hoc template that created the section, or the python object specified by the named cell parameter when the python section was created.
Segment¶ ↑
- Syntax:
-
seg = section(x)
- Description:
- A Segment object is obtained from a Section with the function notation where the argument is 0 <= x <= 1 an the segment is the compartment that contains the location x. The x value of the segment is seg.x and the section is seg.sec . From a Segment one can obtain a Mechanism.
HOC accessing Python¶ ↑
- Syntax:
-
nrniv [file.hoc...]
- Description:
- The absence of a -python argument causes NEURON to launch with Hoc as the command line interpreter. At present, no
file.pyarguments are allowed as all named files are treated as hoc files. Nevertheless, from the hoc world any python statement can be executed and anything in the python world can be assigned or evaluated.
nrnpython()¶ ↑
- Syntax:
nrnpython("any python statement")
- Description:
Executes any python statement. Returns 1 on success; 0 if an exception was raised or if python support is not available.
In particular,
python_available = nrnpython("")is 1 (true) if python support is available and 0 (false) if python support is not available.
Example:
nrnpython("import sys") nrnpython("print sys.path") nrnpython("a = [1,2,3]") nrnpython("print a") nrnpython("import hoc") nrnpython("hoc.execute('print PI')")
- class
PythonObject¶ ↑
- Syntax:
p = new PythonObject()
- Description:
Accesses any python object. Almost equivalent to
HocObjectin the python world but because of some hoc syntax limitations, ie. hoc does not allow an object to be a callable function, and top level indices have different semantics, we sometimes need to use a special idiom, ie. the ‘_’ method. Strings and double numbers move back and forth between Python and Hoc (but Python integers, etc. become double values in Hoc, and when they get back to the Python world, they are doubles).
objref p p = new PythonObject() nrnpython("ev = lambda arg : eval(arg)") // interprets the string arg as an //expression and returns the value objref tup print p.ev("3 + 4") // prints 7 print p.ev("'hello' + 'world'") // prints helloworld tup = p.ev("('xyz',2,3)") // tup is a PythonObject wrapping a Python tuple print tup // prints PythonObject[1] print tup._[2] // the 2th tuple element is 3 print tup._[0] // the 0th tuple element is xyz nrnpython("import hoc") // back in the Python world nrnpython("h = hoc.HocObject()") // tup is a Python Tuple object nrnpython("print h.tup") // prints ('xyz', 2, 3)
Note that one needs the ‘_’ method, equivalent to ‘this’, because trying to get at an element through the built-in python method name via
tup.__getitem__(0)
gives the error “TypeError: tuple indices must be integers” since the Hoc 0 argument is a double 0.0 when it gets into Python. It is difficult to pass an integer to a Python function from the hoc world. The only time Hoc doubles appear as integers in Python, is when they are the value of an index. If the index is not an integer, e.g. a string, use the __getitem__ idiom.
objref p p = new PythonObject() nrnpython("ev = lambda arg : eval(arg)") objref d d = p.ev("{'one':1, 'two':2, 'three':3}") print d.__getitem__("two") // prints 2 objref dg dg = d.__getitem__ print dg._("two") // prints 2
To assign a value to a python variable that exists in a module use
nrnpython("a = 10") p = new PythonObject() p.a = 25 p.a = "hello" p.a = new Vector(4) nrnpython("b = []") p.a = p.b | https://www.neuron.yale.edu/neuron/static/new_doc/programming/python.html | CC-MAIN-2019-22 | refinedweb | 2,949 | 61.33 |
How can I return values to a recursive function without stopping the recursion?
I have a structure with x number of lists in lists and each list x number of tuples. I don't know in advance how many nested lists there are or how many tuples there are in each list.
I want to make dictionaries from all tuples and because I do not know the depth of the lists that I want to use for recursion. What I did was
def tupleToDict(listOfList, dictList): itemDict = getItems(list) # a function that makes a dictionary out of all the tuples in list dictList.append(itemDict) for nestedList in listOfList: getAllNestedItems(nestedList, dictList) return dictList
this works, but in the end I end up with a huge list. I would rather return itemDict in every round of recursion. However, I don't know how (if possible) to return the value without stopping the recursion.
source to share
You are looking for
yield
:
def tupleToDict(listOfList): yield getItems(listofList) for nestedList in listOfList: for el in getAllNestedItems(nestedList): yield el
In Python 3.3+, you can replace the last two lines
yield from
.
You can rewrite your function as iterative:
def tupleToDict(listOfList): q = [listOfList] while q: l = q.pop() yield getItems(l) for nestedList in listOfList: q += getAllNestedItems(nestedList)
source to share
You have two possible solutions:
Generator approach: a function with a yield statement, which can be a problem to implement in a recursive function. (See the phihags example sentence for an example)
Callback approach: you call a helper function / method from within the recursion and can track progress through a second outer function.
Here's an example of non-recursive recursion :; -)
def callback(data):
print "from the depths of recursion: {0}".format(data)
def recursion(arg, callbackfunc): arg += 1 callbackfunc(arg) if arg <10: recursion(arg, callbackfunc) return arg print recursion(1, callback)
code>
source to share | https://daily-blog.netlify.app/questions/1892530/index.html | CC-MAIN-2021-43 | refinedweb | 318 | 60.04 |
data. Turned to my favourite hammer, Python.
And with xlrd, this was as easy as cutting a cake. Here’s three functions that did the job.
GetFilenames collects all the filenames including path given a top level folder. Getdata collects the required data from the given row, col and writeOut dumps all the collected data into a csv file as a 2D table.
from os import walk from os.path import join def GetFilenames(mypath): return (join(dirpath,f) for (dirpath, dirnames, filenames) in walk(mypath) for f in filenames if f.endswith(('.xls','.xlsx')))
import xlrd def getdata(fname,listx,listy): """ Reads data from excel sheet fname excel file name listx=[2,4,6,8,10,12,14,17,19,21,27,29,2,6,8,10,4] # Rows index listy=[1,1,1,1,1,1,1,1,1,1,1,1,4,4,4,4,4] # Column index from which data has to be extracted returns dataList containing the extracted info from excel. """ dataList=[] # initial an empty list book=xlrd.open_workbook(fname) # open workbook sheet=book.sheet_by_index(0) # Get first sheet dataList.append(fname) for x,y in zip(listx,listy): # walk to all the row,col combination and collect data if sheet.cell_type(x,y) <> 3: # if its not a data data dataList.append(sheet.cell_value(x,y)) else: # if its a date data datetup =xlrd.xldate_as_tuple(sheet.cell_value(x,y),book.datemode) dataList.append(datetup[0]) dataList.append(datetup[1]) dataList.append(datetup[2]) return dataList
import csv def writeOut(fname,files): f = open(fname,"wb") writer=csv.writer(f) for f in files: writer.writerow(getdata(f)) print fname,"written !!" return None
Advertisements | https://sukhbinder.wordpress.com/2014/05/15/simple-functions-to-collect-data-from-excel-files-using-xlrd-and-python/ | CC-MAIN-2017-22 | refinedweb | 279 | 52.56 |
Deep Sleep Summary
Hi all,
Edited 2017-08-06 with new Pysense data
Edited 2017-08-22 with new Pysense wake on pin info
Edited 2017-09-06 with Deep Sleep Shield current consumption feedback, 3V3 input
Edited 2017-09-12 to clarify that deep sleep current is from battery, not USB
Here is my current understanding the situation regarding the various deep sleep modes.
Current estimates are based on the setup being powered from a battery, not over USB (this will result in noticeably higher currents).
Battery life estimates are based on a 2500 mAh battery, but do not take into account power drawn during the "wake" periods, which depends on what you do, and how often you do it, nor power drawn by any sensor.
WiPy 2.0, LoPy 1.0 and SiPy 1.0, using expansion board, without Deep Sleep Shield
- use
machine.deepsleep()
- wake on timer: Yes
- wake on pin: Yes
- pins: P2, P3, P4, P6, P8 to P10 and P13 to P23.
- ESP32 powered during deep sleep (including ULP and RTC): Yes (in deep sleep mode)
- deep sleep current: > 12 mA (bad) [±1 week?] due to DC-DC switching regulator (10 mA) and flash (2 mA)
- -> Option 1: provide regulated 3.3V via 3V3 input -> lowers current to ±2 mA [±1 month] there is some controversy on this point, it could actually damage the board
- -> Option 2: use Deep Sleep Shield, Pytrack or Pysense
Using Deep Sleep Shield
- use Deep Sleep library
- wake on timer: Yes
- wake on pin: Yes
- pins: P10, P17, P18
- ESP32 powered during deep sleep (including ULP and RTC): No
deep sleep current: ±7-10 µA (good) [years?]
- deep sleep current: ±500-620 µA [months?]. Waiting for Pycom for reason/fix.
Using Pysense
- use Pysense library
- wake on timer: Yes
- wake on pin: Yes, but requires patched Pysense firmware and librairies and cutting one pin, see
- pins: EXT_IO1 (pin 6 on the External I/O header) only
- ESP32 powered during deep sleep (including ULP and RTC): No
- deep sleep current: ± 12 µA (good) [years?] with updated Pysense library
Using Pytrack
- use Pytrack library
- wake on timer: Yes
- wake on pin: Same as Pysense?
- pins: Same as Pysense?
- ESP32 powered during deep sleep (including ULP and RTC): No
- deep sleep current: ± 20 µA (good) [years?]
- philwilkinson last edited by
@tuftec, yes you are correct, the machine.deepsleep() machine.remaining_sleep_time() and pin_deepsleep_wakeup() functions all work fine.
I have isolated my problem was with the LoRa publishing code. When I commented out the lora_publish in my code below everything worked fine. The bug must be something to do with nvram functions i am using in my lora_publish function. Perhaps this bug is best published in a new thread. Thanks for the comment.
Code published to allow others a quick start!
import machine import utime from machine import Pin import pycom import config #user defined #import LoRa_publish #user defined rst=machine.reset_cause() if rst != 3: # if not woken from deepsleep utime.sleep(10) #to allow ctrl+C pycom.nvs_set('counter', 0) #button = Pin('P23', mode = Pin.IN, pull = Pin.PULL_UP) machine.pin_deepsleep_wakeup(pins = ['P23'], mode = machine.WAKEUP_ALL_LOW, enable_pull = True) machine.deepsleep(60000) else: if (machine.wake_reason()[0])==1: #pin wakeup total_count = pycom.nvs_get('counter') +1 pycom.nvs_set('counter', total_count) print('remaining deepsleep time is {}'.format(machine.remaining_sleep_time())) machine.pin_deepsleep_wakeup(pins = ['P23'], mode = machine.WAKEUP_ALL_LOW, enable_pull = True) machine.deepsleep(machine.remaining_sleep_time()) elif (machine.wake_reason()[0])==2: #RTC timer complete print('timer completed') total_count = pycom.nvs_get('counter') print('counted {} button presses'.format(total_count)) #LoRa_publish.publish(total_count) #INT published to TTN pycom.nvs_set('counter', 0) machine.pin_deepsleep_wakeup(pins = ['P23'], mode = machine.WAKEUP_ALL_LOW, enable_pull = True) machine.deepsleep(60000) #this does not restart a new interval!!
@sslupsky if you are measuring wake to wake then you have the LoRa send time included, not just boot time.
If you use LoRaWAN, then don’t forget that there’s at least 2 seconds after send for the RX1 and RX2 windows, possibly a lot more if the network sends a higher RX1 delay.
Also, a LoRa packet can be pretty slow to send in the slower data rates. It can be even longer for LoRaWAN with the frame overhead, but even in raw LoRa you need to count the preamble. The smallest frame sent over LoRaWAN at SF12 takes nearly 1.5 seconds to send.
@philwilkinson Thanks for the heads up. I have a Pysense and so far the machine.deepsleep() appears to work. I've had it running continuously for about a day now: Wake, send LoRa data packet, Sleep.
I also posted a question regarding the amount of time it takes the LoPY4 to wake. In my simple Wake, send LoRa data packet, Sleep application, it takes about 4.5 seconds to wake (I have a 10 second sleep period and the total time Wake to Wake is about 14.5 seconds). Are you aware of any way to optimize that? Or perhaps reduce the power consumption during the wake / boot?
Thanks again for the feedback.
@philwilkinson I am not sure I agree with your findings.
I have been able to use the machine.deepsleep(60000) to put the device to sleep for 10 minutes, wake up and then reboot my code. My code continues to run as expected, waking up every 10 mins.
I am using a LoPy4.
I also use the wake identification capability to differentiate between a reset/power up and a wake from sleep.
Peter.
- philwilkinson last edited by
@sslupsky I just wanted to add a little to this discussion for new boards e.g. LoPy4 on an expansion board (not Pysense/Pytrack).
-The machine.pin_deepsleep_wakeup() function works fine.
-Using machine.deepsleep(time_in_ms) works fine once only. When the time_in_ms is complete, the unit wakes up OK and runs main.py. machine.wake_reason() correctly identifies a RTC WAKEUP.
However, it is not possible to then run machine.deepsleep(time_in_ms) again. The unit thinks the time has already elapsed and immediately exits deepsleep.
@sslupsky When you use deep sleep controlled by a PySense, PyTrack or the Deep Sleep Shield, deep sleep just completely powers off the xxPy module (ESP32, LoRa/Sigfox and/or LTE modem). No RTC, no ULP, no pin wake-up (on the ESP32), nothing...
If you use modules that support the native
machine.deepsleepwithout the higher power consumption, it's probably always better to use that. The PySense/PyTrack/Deep Sleep Shield deep sleep was just a workaround.
@jcaron Thank you for the suggestion. I did not realize the Pysense API killed the RTC. That is good to know since I want to keep track of time. I have a code module working with the Pysense API now. I will try the native machine API.
@sslupsky even if you are using a Pysense, it probably makes sense to use the native
machine.deepsleepif you are using a LoPy 4. This keeps the RTC running, for instance.
Hi @jcaron I have another question regarding the Pysense. Is it possible to "sleep" the SD Card? Typically that would require that power to the card is turned off since most SD card sleep current specs are 100's of uA.
@administrators May I make a suggestion regarding a future update to the Pysense? Instead of an SD card, could you embed an eMMC chip instead? Or have both? eMMC chips typically have lower sleep current ratings and wider temperature ratings than SD cards.
Hi @dmayorquin ,
Your post was extremely helpful, thank you for pointing that out. It is encouraging that you achieved < 20uA sleep current.
Thank you @jcaron for the clarifications. I should have mentioned that I am using a Pysense and LoPY4.
I think I understand now that when you use a Pysense, use the Pysense API to put the system to sleep (ie: py.go_to_sleep()). If you have just a LoPY4 and are not using the Pysense, then use the machine.deepsleep() to put it to sleep.
Hi @sslupsky, I solved the issue and the complete code is here:
As @jcaron says, machine.deepsleep() works well with the new hardware, but I'm not sure it works too when your setup has a shield like Pysense. In that case I've read that you should use the deep sleep method from Pysense library (or Pytrack or whatever shield you're using).
@sslupsky if you have one of the older modules (LoPy 1, SiPy, WiPy 2), then there is an issue with the “native” deep sleep (
machine.deepsleep) which results in a higher than expected power draw during deep sleep (>10 mA instead of 10-20 uA).
For those modules, the workaround to achieve low current deep sleep is to use an external board: Deep Sleep Shield, PySense or PyTrack, and the relevant library. This will cut all power to the module during deep sleep.
If you use one of the more recent modules, you should in most cases use the native deep sleep.
@dmayorquin Did you resolve this issue? Looking at your code, one question comes to mind that I have been looking to find an answer to.
You use py.go_to_sleep() to enter sleep mode which is part of the Pysense class.
Is that that call functionally equivalent to machine.deepsleep() ?
Hi @jcaron , thanks for your fast answer. Yes, I'm connecting the battery directly to the JST connector on the Pysense and I'm not using any sensor or SD card. In boot.py I disable WiFi:
from network import WLAN wlan = WLAN() wlan.deinit()
And main.py code is exactly as It's shown in my previous reply. I'm going to try with other releases of SiPy firmware. If you have any suggestion please let me know.
@dmayorquin never tried with a SiPy, only a LoPy, but the end result should be the same as the PySense will completely switch off the module.
The usual culprit is powering through USB, but if I understand correctly your setup, this is not the case. You are connecting the battery directly to the JST-PH connector on the Pysense, right?
Do you have anything else connected? Do you have an SD card in the slot? You haven’t setup any of the sensors to remain awake (like accelerometer wake up)?
Hi, I have a SiPy + Pysense + 3.7 V LiPo battery. I'm testing Deep sleep mode from Pysense library but the multimeter is not showing 12 µA current consumption.
SiPy firmware: 1.18.0
Pysense firmware: 0.0.8
My program disabled WiFi in boot.py. In main.py I blink an LED for 5 seconds and then go to sleep. These are my results:
Blinking: ~50 mA
Deep sleep: ~2.2 mA
from pysense import Pysense import pycom import time py = Pysense() #Turn-of heartbeat if pycom.heartbeat() == True: pycom.heartbeat(False) for i in range(5): pycom.rgbled(0x1F) time.sleep(0.05) pycom.rgbled(0x00) time.sleep(1) #Sleep 20 seconds py.setup_sleep(20) py.go_to_sleep()
Is there any piece of code I'm missing? What are you doing to get 12 µA with the Pysense board?
@iotmaker @rloro338 Updated the summary with the current draw reported with the Deep Sleep Shield.
Also added that powering via 3V3 may not be a solution, and that wake on pin with the Pysense is not trivial.
@rloro338 said in Deep Sleep Summary:
@iotmaker I am measuring the same power comsumption ... 620 uA with expansion board and 520 uA without it.
I am glad i am not the only one.. I just took another sipy updated firmware and deepsleep library and same issue 520uA really far from 7 or 10 ua | https://forum.pycom.io/topic/1589/deep-sleep-summary/17 | CC-MAIN-2019-30 | refinedweb | 1,938 | 76.22 |
RailsConf Europe on index cards, part II
Sun Sep 17 16:30:00 CEST 2006
A note to the interested reader: These are the raw notes I jotted down at RailsConf Europe 2006. They are probably misleading, out-of-context and not particularly useful if you didn’t attend the sessions. Nevertheless, enjoy.
Jim Weirich: Playing it safe
- Are open classes a poor fit for large projects?
- The Chainsaw Infanticide Logger Maneuver
- Leeroy Jenkins
- “Guard against Murphy, not Machiavelli” —Damian Conway
- use namespaces
- 10 Node classes on his disk
- Choose project name carefully.
- Avoid toplevel functions and constants.
- lessons from Rake
- Avoid modifying existing classes.
- Prefer adding over modifying.
- When adding, use project-scoped namespace.
- When adding public methods, ask first.
- Chain into the next hook.
- Only handle your special cases.
- Limit the scope of your hook.
- Preserve the original behavior.
- Understand and respect contacts.
- Require no more—promise no less.
- Replaced behavior must be duck-type compatible.
- All in all:
- Be polite with global resources.
- Preserve essential behavior.
Why The Lucky Stiff
- Remove your moustaches
- Ilias, anywhere?
- sandbox demo
(unreproducible on index cards)
Hussein Morsy: Database Optimization Techniques
- 1.5 years Rails experience, PHP before
- works on a German Rails book
- overview of his workplace
- optimizing:
- Rails code
- Indices
- DBMS
- OS
- optimize:
- number of connections
- transmitted data
- DB itself
- read the AR source!
- AR strangeness (proxying)
- selecting only what you need
- preloading children
- preloading and selecting
- counting
- …
Dan Webb: Unobtrusive AJAX with Rails
- History
- Dark age of DHTML
- Fixed resolution
- Web standards arrived
- Benefits of Web standards
- Behaviour: View layer among CSS and XHTML
- enhancing a working app
- graceful degradation
- not the Rails way
- Don’t use
<a>with
onclick=only
-
- Links should not have side-effects
- use
button_to
Form.serialize
- Demo, sneakr.com
Hamton Catlin: HAML
- “My other computer is open source.”
- HTML Abstraction Markup Language
- templating sucks in Rails, compared to the other stuff.
- Principles
- Code should be beautiful
- XHTML is prone to errors when done manually
- XHTML structure matters
- Diet soda can help you lose weight.
- Common markup is good.
- DIVs are building blocks.
- Demo
- indentation matters (ugh?!)
- no control structures (use partials, arrays)
- partials are properly indented
-
James Duncan Davidson: The web is a pipe, and more
- Unexpected things tell us lessons.
- Cigarette smoke is useful for finding cracks in airplanes.
- didn’t expect that Ant will turn into a language
- In the early days, we used FastCGI to deploy.
- FastCGI turned out to be weak sauce.
- HTTP is the real thing, stick to it.
- You learn how stuff works best when you look how it breaks.
- the simpler, the better
- Flatten storage and memory
- Amazon
- S3, they use it themselves
- EC2
- Google: BigTable
- MobileFS
- CPU cycles *do* matter
- more power
- more energy
- help save the environment
Dave Thomas
- This conf had highest four-letter words concentration.
- thinking about terrorism
- not about killing, but leveraging fear, making people afraid
- overreaction
- fear is wasteful
- lessons
- assess risk
- FUD
- lately(?), FUD has started about Rails
- Java is dynamically typed too (e.g. Collections, casts before 1.5)
- “You can’t possibly use something before it’s being used.”
- “Every publisher has Ruby books coming out—they’re all copycats.”
- The opposite of risk is not safety—it’s stagnation.
NP: The Byrds—Glory, Glory | http://chneukirchen.org/blog/archive/2006/09/railsconf-europe-on-index-cards-part-ii.html | crawl-001 | refinedweb | 547 | 59.19 |
On 05/11/2010 02:03 AM, Andrew Morton wrote:> On Sun, 09 May 2010 13:16:38 +0300> Boaz Harrosh <bharrosh@panasas.com> wrote:> >> On 05/07/2010 12:05 PM, Dan Carpenter wrote:>>> For kmap_atomic() we call kunmap_atomic() on the returned pointer.>>> That's different from kmap() and kunmap() and so it's easy to get them>>> backwards.>>>>>> Signed-off-by: Dan Carpenter <error27@gmail.com>>>>>>>> Thank you Dan, I'll push it ASAP. > >> Looks like a bad bug. So this is actually a leak, right? kunmap_atomic>> would detect the bad pointer and do nothing?> > void kunmap_atomic(void *kvaddr, enum km_type type)> {> unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;> enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();> > /*> * Force other mappings to Oops if they'll try to access this pte> * without first remap it. Keeping stale mappings around is a bad idea> * also, in case the page changes cacheability attributes or becomes> * a protected page in a hypervisor.> */> if (vaddr == __fix_to_virt(FIX_KMAP_BEGIN+idx))> kpte_clear_flush(kmap_pte-idx, vaddr);> else {> #ifdef CONFIG_DEBUG_HIGHMEM> BUG_ON(vaddr < PAGE_OFFSET);> BUG_ON(vaddr >= (unsigned long)high_memory);> #endif> }> > pagefault_enable();> }> > if CONFIG_DEBUG_HIGHMEM=y, kunmap_atomic() will go BUG.> > if CONFIG_DEBUG_HIGHMEM=n, kunmap_atomic() will do nothing, leaving the> pte pointing at the old page. 
Next time someone tries to use that> kmap_atomic() slot,> > void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot)> {> enum fixed_addresses idx;> unsigned long vaddr;> > /* even !CONFIG_PREEMPT needs this, for in_atomic in do_page_fault */> pagefault_disable();> > if (!PageHighMem(page))> return page_address(page);> > debug_kmap_atomic(type);> > idx = type + KM_TYPE_NR*smp_processor_id();> vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);> BUG_ON(!pte_none(*(kmap_pte-idx)));> set_pte(kmap_pte-idx, mk_pte(page, prot));> > return (void *)vaddr;> }> > kmap_atomic_prot() will go BUG because the pte wasn't cleared.> > > I can only assume that this code has never been run on i386. I'd suggest> adding a "Cc: <stable@kernel.org>" to the changelog if you have> expectations that anyone will try to run it on i386.> Right! Everyone I know runs 64bit. I will add the Cc: <stable@kernel.org>to the patch. Thanks.Boaz | http://lkml.org/lkml/2010/5/11/391 | CC-MAIN-2014-10 | refinedweb | 332 | 58.38 |
I'm getting unexpected token (44.0) & (133.0) when parsing errors when I preview pages on my site. Can anybody explain what these errors are? I have not been able to figure it out.
My code is:
import wixCrm from 'wix-crm'; import wixWindow from 'wix-window'; $w.onReady(function () { }); $w("#newsletterRecipients").onAfterSave( () => { let firstName = $w("#newsletterFirstname").value; let lastName = $w("#newsletterLastname").value; let company = $w("#newsletterCompany").value; let email = $w("#newsletterEmail").value; let formused = $w("#formName1").text wixCrm.createContact({ "newsletterFirstname": firstName, "newsletterLastname": lastName, "newsletterCompany": company, "newsletteremail": email, "formName1": formused }) .then((contactId) => { wixCrm.emailContact("newsletSignup", contactId, { "variables": { "subscriberFirstname": firstName, "subscriberLastname": lastName, "company": company } }) .then((result) => { // popup Success message popUpMessage("Success"); }) .catch((error) => { // popup Error popUpMessage(error); }); }); function popUpMessage(message) { wixWindow.openLightBox('Show Message', {'message':message}); } } import {sendEmail} from 'backend/email'; $w.onReady(function () { $w("#newsletterRecipient").onAfterSave(sendFormData); }); function sendFormData() { const subject = `New Submission from ${$w("#newsletterCompany").value}`; const body = `Name: ${$w("#newsletterFirstname").value} \rLast Name: ${$w("#newsletterLastname").value}`; sendEmail(subject, body) .then(response => console.log(response)); }
Any advice would be appreciated.
If you use triggered email, then you can make up the triggered email yourself.
If you use SendGrid, you will have to put up with how SendGrid sends out the email, unless you make up your own sending template.
You can see previous forum posts about it here.
Have a look at these pages from Vorbly and Nayeli Code Queen) too.
Thank you GOS
It looks like you are mixing up two sets of code here.
The first one is for using a triggered email to a new contact after they have filled in a form, like I have on a site of mine with this code.
Along with the second part being a section from the sending an email on form submission tutorial here.
This is an old yet still working example and there are newer versions available like this one which uses SendGrid REST API.
Or newer which uses SendGrid NPM through Wix Package Manager.
You can't mix and match, you need to choose one or the other method here.
Also, all your imports should be together at the top of your page and you only need the one page onReady function.
On form-submission, I still want the contact to be created, and an email (using a template I prepared) to be sent out. Assuming it can be done using a newer method, then I'm a bit lost as to how to implement it. | https://www.wix.com/corvid/forum/community-discussion/parsing-errors-unexpected-token-public-pages-c1dmp-js | CC-MAIN-2020-24 | refinedweb | 411 | 51.95 |
Eric writes:> So. I've written a cross-reference analyzer for the configuration symbol> namespace. It's included with CML 1.2.0, which I just released. The> main reason I wrote it was to detect broken symbols.> > A symbol is non-broken when:> * It is used in either code or a Makefile> * It is set in a (CML1) configuration file> * It is either derived from other non-broken symbols > or described in Configure.help> If it fails any one of these conditions, it's cruft that makes the kernel> code harder to maintain and understand. The least bad way to be broken is> to be useful but not documented. The most bad way is to lurk in code, doing> nothing but making the code harder to understand and maintain.Could you make a list that splits the symbols up by each of the abovefailure conditions? It would make the task of deciding how to fix the"problem" more apparent.Also, it appears that some of the symbols you are matching are only indocumentation (which isn't necessarily a bad thing). I would start with:*.[chS] Config.in Makefile Configure.helpHowever, I'm not sure that your reasoning for removing these is correct.For example, one symbol that I saw was CONFIG_EXT2_CHECK, which is codethat used to be enabled in the kernel, but is currently #ifdef'd out withthe above symbol. When Ted changed this, he wasn't sure whether we wouldneed the code again in the future. I enable it sometimes when I'm doingext2 development, but it may not be worthy of a separate config optionthat 99.9% of people will just be confused | http://lkml.org/lkml/2001/4/19/43 | CC-MAIN-2015-14 | refinedweb | 276 | 66.33 |
Code Effects solution comes with a powerful feature that allows you to reuse any evaluation type rule in any other rule as if it were a simple field of System.Boolean type.
Imagine if your organization used a simple condition of Age is greater or equal to 21 in hundreds of its business rules. Obviously, if the value of 21 changes tomorrow, your organization would need to edit, test, and re-deploy all those rules with the new value. Having that condition as a reusable rule allows for much greater flexibility, because any change to that rule is instantly adapted by all other rules that may use it.
For example, consider the following source object:
using System;
using CodeEffects.Rule.Attributes;

namespace TestProject
{
    public class Patient
    {
        public string Name { get; set; }
        public Gender Gender { get; set; }

        [Field(DisplayName = "Date of Birth", DateTimeFormat = "MMMM dd, yyyy")]
        public DateTime? DOB { get; set; }

        [Method("Age", "Returns age of the patient")]
        public int GetAge()
        {
            if (this.DOB == null) throw new Exception("DOB is not set");
            else return DateTime.Now.Year - ((DateTime)this.DOB).Year;
        }

        // C-tor
        public Patient()
        {
            this.Gender = Gender.Unknown;
        }
    }

    public enum Gender
    {
        Male,
        Female,

        [ExcludeFromEvaluation]
        Unknown
    }
}
If we were to register Rule Editor on a web page with this Patient class as a source object, we could create an evaluation type rule that checks if the patient is 21 or older, name that rule "Legal Age", and save it:
Later, we could reuse that rule in other rules as if it were a field of bool type called "Legal Age" by adding it to the collection of items in the Rules menu and fields context menu the next time we load the page (more about adding items to the Rules menu can be found in the Toolbar topic). So, the selection of this rule from the Rules menu for editing would look like this:
It would also look like an ordinary rule field in the context menu when you create other rules:
Notice that the Help String displays the description of our rule every time the mouse hovers over its name in either menu. It does so because we added that description in the very first step above.
So, now we can reuse our Legal Age rule in any other rule that uses the same source object:
This powerful feature allows you to easily encapsulate common logic into separate rules and reuse them across your entire organization.
IMPORTANT! If used carelessly, reusable rules can lead to a nasty thing called "circular references". Imagine the following three business rules:
Rule # 1
Rule # 2
Rule # 3
It's easy to see the circular references in these rules. What makes it worse is that the rule author cannot see that, by using Rule # 1 in Rule # 2, he or she actually creates a recursion.
Code Effects solution comes equipped with features that help developers avoid recursions. For instance, it does not allow rule authors to reuse a rule inside itself while that rule is being edited - the Rule Editor simply won't display the rule's name in the context menu even if the developer adds it to the menu's item collection. Rule Editor can also check for circular references during rule validation, but it does this only if it's able to get the Rule XML of the referenced rule and check it for inner references to the rules it has examined so far. You can force Rule Editor to perform such a check by supplying it the GetRuleDelegate - a method that takes a rule ID and returns the rule's XML.
CodeEffects.Rule.Web.RuleEditor editor = new RuleEditor("divRuleEditor")
{
SourceType = typeof(Patient),
GetRuleDelegate = YourRuleStorageService.LoadRuleXml // assign the method itself, not the result of calling it
};
If the GetRuleDelegate is set, Rule Editor performs a recursion check during rule validation because it knows how to get the referenced reusable rule if it finds one in the current rule. Rule Editor checks not only the initial rule but all referenced rules, rules referenced in those referenced rules, and so on. Don't supply the delegate if you are absolutely sure that the current rule does not reference any other rules.
If recursion is detected, the entire initial rule is marked invalid and returned to the client.
Created on 2007-08-13 08:15 by ldeller, last changed 2010-08-01 22:53 by georg.brandl. This issue is now closed.
The xmlrpclib module in the standard library will use a 3rd party C extension called "sgmlop" if it is present.
The last version of PyXML (0.8.4) includes this module, but it causes crashes with Python 2.5 due to the use of mismatched memory allocation/deallocation functions (PyObject_NEW and PyMem_DEL).
It is unlikely that sgmlop will be fixed, as PyXML is no longer maintained. Therefore sgmlop support should be removed from xmlrpclib.
(In case you're wondering why anyone would install PyXML with Python 2.5 anyway: there are still some 3rd party libraries which depend upon PyXML, such as ZSI and twisted).
I'm assuming that stuff won't be removed from 2.5 because it's in maintenance, so should this be removed or changed to raise a deprecation warning in 2.6?
As an aside, how about removing references to _xmlrpclib (which appears to have been removed long ago) as well?
Choice of XML parser is an implementation detail of xmlrpclib not visible to users of the module. This change would not affect the behaviour of xmlrpclib (other than to fix a crash introduced in Python 2.5). Does this mean that a DeprecationWarning would not be necessary? Does it also mean that the fix might qualify for the maintenance branch?
Adding a DeprecationWarning in 2.6 without removing use of sgmlop is pointless, because the DeprecationWarning would be followed by a process crash anyway.
I.
Yes the standalone sgmlop-1.1.1 looks fine: in its sgmlop.c I can see that matching allocator and deallocator functions are used.
I installed PyXML-0.8.4 from source ("python setup.py install" on Win32 which picked up the C compiler from MSVS7.1). The cause of the problem is quite visible in the PyXML source code (see that PyObject_NEW and PyMem_DEL are used together):
Interestingly PyXML-0.8.4 was released more recently than sgmlop-1.1.1. I guess they weren't keeping in sync with each other.
Fredrik, can you please comment? If not, unassign.
I don't really have an opinion here; the best solution would of course be to find someone that cares enough about PyXML to cut a bugfix release, it's probably easiest to just remove it (or disable, with a note that it can be re-enabled if you have a stable version of sgmlop). I'm tempted to suggest removing SlowParser as well, but there might be some hackers on very small devices that rely on that one.
(Ideally, someone should sit down and rewrite the Unmarshaller to use xml.etree.cElementTree's iterparse function instead. Contact me if you're interested).
Minimalistic test crash (python 2.5 cvs, sgmlop cvs(pyxml)) - compiled
with msvc 2005, where after 10 loops ms-debugger is invoked:
data='''\
<?xml version="1.0"?>
<methodCall>
<methodName>mws.ScannerLogout</methodName>
<params>
<param>
<value>
<i4>7</i4>
</value>
</param>
</params>
</methodCall>
'''
import xmlrpclib
def main():
i = 1
while 1:
print i
params, method = xmlrpclib.loads(data)
i+=1
main()
Both PyXML and sgmlop are deprecated now, and support has been removed in xmlrpclib as of Python 2.7. I think this can be closed. | https://bugs.python.org/issue1772916 | CC-MAIN-2021-25 | refinedweb | 554 | 67.35 |
We were out of cat food, and my husband (Doug) was heading to the store anyway, so he volunteered to get some. After I gave him a manufacturer’s coupon, a store coupon, and a store discount card, he was sorry he’d volunteered. Doug just wants the price to be the price.
He is accepting reality, and the other night he turned to our son and said he wants him to write an app that will tell him the lowest price for something he wants to buy – where the app takes into account all possible discounts, coupons, taxes, figures out shipping versus the cost of driving to the store, etc. I couldn’t help laughing, as my new article, written with Sandhya Kapoor, Create a coupon-finding app by combining Yelp, Google Maps, Twitter, and Klout services was being edited for publication.
Our article includes a sample Java application that pulls together information using APIs from Yelp, Google, Klout, and Twitter. The application collects data, merges it, and then optimizes results using an aggregation sort. We store data in MongoDB, and use a JSP to display the data. I was happy when my favorite local Italian restaurant came out first:
We also include the directions for using Eclipse to develop and deploy the application to Bluemix. This was my first experience using Eclipse with Bluemix. I still prefer the no install approach of DevOps Services, but the Eclipse setup was easy enough and deployment to Bluemix is nicely integrated.
The sample first started in a weekend hackathon at the University of Missouri, Kansas City. Our thanks to Chetan Jaiswal, Parikshit Juluri, and Xuan Liu for getting it started. We hope this article proves helpful in showing how quickly you can build apps that integrate across many different services. It’s not exactly the application Doug is asking for, but it’s sure a great sample if you want to build one. My son is currently more interested in designing an automatic retrieval system for his Nerf Gun bullets, so you have a head start. You can get started with a free 30 day trial of IBM Bluemix.
There is an IBM Bluemix t-shirt with the slogan “Commit & Deploy & Scale & Repeat”. Whenever I see this shirt, I think to myself “huh?”. I really want to deploy my changes to see if they work before I commit anything. Maybe others are perfect coders who don’t need to test first, but more likely, the dichotomy stems from different usage patterns.
This distinction came home to me earlier this week. I’ve written my Node.js articles, and the new Kids Code! Activity Kit using the “Manual Deploy” in DevOps Services. The manual deploy picks up all files I've edited in the Web IDE and deploys my application. This way, I can see if my changes work before I commit them. Also, I’m lazy and it’s far faster to just hit Deploy than to go to the Git Repository, Commit my changes, and Push them to the repository.
Now, there are others firmly in the commit and push your changes first camp, who just use the automated “Build & Deploy” capabilities. If you look at the screen capture below, you see a DEPLOY button in the left center, and a BUILD & DEPLOY button on the top right. The DEPLOY is used for "Manual Deploy", and it picks up all the changes you have made in your Web IDE project. The BUILD & DEPLOY on the top right only sees changes that are committed and pushed. You can see why, in a lab setting, these two deploys could be a little confusing. Especially when the directions are for the manual DEPLOY path, and the instructors are from the automated camp that uses the BUILD & DEPLOY on the top right.
I suspect most of the automated camp typically uses Eclipse or other local IDEs first, so when they push to DevOps Service and Bluemix their code is ready to go. Whereas, I am using DevOps Services exclusively, so the Web IDE is in essence my local laptop. Also, the Manual Deploy only does a Deploy, whereas Build and Deploy includes the Build steps which are important for some other runtimes. I don’t think either path is necessarily the right or wrong path, I think there is good merit for both, and it’s just important to understand the distinct behaviors. I love the suggestion here, to configure the web IDE deploy (DEPLOY) and the Auto-Deploy (BUILD & DEPLOY) to use different app names so that you can use the web IDE deploy tool as a personal test environment and the Auto-Deploy as a team integration environment.
Personally, I’m going to continue my lazy approach to deploy my changes before I commit anything. So, if you are using one of my labs, please be aware of the distinctions. If you are using my labs, and want to also learn how to use the Shared Git repository, here are step-by-step directions and a concept card.
WordPress is one of the world's most popular blogging platforms.
You can create a blog on wordpress.com or install it yourself at a hosting provider from wordpress.org/
But what if you want to run a blog for an event for a week or even a few days? And what if you want to run it in Europe or Asia?
Now Bluemix provides WordPress as a service in the cloud with a PHP buildpack, ClearDB, Object Storage and the SendGrid service.
It is really easy to install and to get blogging on Wordpress in Bluemix.
It is also easy to add a blog to your Bluemix app. Maybe it should now be called Bluepress?
The SendGrid component provides superb user statistics right out of the box for your WordPress blog on Bluemix.
If you want to see WordPress on Bluemix in action you can take a look at this blog here in Foster City in Silicon Valley.
It's really easy to get started taking advantage of it. Here are some best practices and code samples to get you started down the path to (near) linear scalability nirvana. Enjoy.
Design, manage, and test multi-instance Bluemix applications ().
In response to: Build a Flexible 3-tier Mobile App on Bluemix
Mobile development is one of the hottest topics on Bluemix. To better serve this requirement, we developed a very flexible mobile development framework called "A Flexible 3-tier Mobile App Prototype on Bluemix". Inspired by the classic Java EE 3-tier framework, the 3-tier framework is divided into mobile app tier, app server tier, and the data tier. Please refer to the figure below for details.
There is no restriction on the implementation of the 3-tier framework. You may choose any language you like. To make it easier to understand, we built a prototype as a reference.
The details and sample code are in the DeveloperWorks article "A Flexible 3-tier Mobile App Prototype on Bluemix" (). Though the article is written in Chinese, DO NOT be scared. You may go to for the step by step tutorial in English.
Next time you have an idea for a mobile app, why not implement it on Bluemix? I hope the 3-tier framework sample helps.
I was in London in January for a series of Software Defined Environment (SDE) briefings with some of our large enterprise customers.
The briefings were primarily focused on transforming their data centers into SDEs, but we also discussed public and hybrid cloud topics, including IBM SoftLayer, and the yet to be announced, IBM Bluemix. The SoftLayer discussions were interesting, as we shared with them our plans to open a SoftLayer data center in London (now open), alleviating concerns over data proximity. SoftLayer’s differentiated capabilities, including bare metal servers and very rich APIs always gets the attention of infrastructure teams.
However, the Bluemix discussions were rather short, with comments like “you want me to put my code in the public cloud, I’m a ____”. For the blank, you could substitute many of our clients’ industries – as so many of our customers handle sensitive data, and are subject to rules and regulations of their country and industry.
Just like the opening of the SoftLayer data center in London opened up the discussion to using SoftLayer infrastructure as a service as an extension to UK enterprise data centers, last week’s announcement of Dedicated Bluemix opens up new opportunities on how to take advantage of Bluemix. Our clients will have new hybrid scenario options, with the security and privacy of a dedicated instance of Bluemix running in any SoftLayer data center, while IBM handles the installation and management. Applications can mix and match to accomplish their task, including using integration services to reach back into their data center for data as well as leveraging public Bluemix services.
This is another step in providing our customers higher value services, in an easy to consume delivery model. Read more about it on Jeff Brent’s blog, and Listen to Bala Rajaraman and Steve Robinson talk about the announcement. Get ready for dedicated Bluemix by starting your free Bluemix trial today.
I first learned about graceful degradation many moons ago as a best practice for application design in a Computer Science class in college and it's one of the few things that has stuck with me. The concept is simple - your applications should be able to deal gracefully with situations where crucial runtime prerequisites are missing (instead of just crashing unceremoniously). For example, if your application requires a database server and that server is unavailable when the application starts, you should notify the user that the database can't be reached instead of just assuming it's always there and being totally unprepared if it isn't.
With us all constantly writing samples/POCs/applications for Bluemix using the plethora of services available, it's very often the case that a particular app requires an instance of one or more services and/or some credentials (e.g. Twitter API keys) to work properly. Typically many of these projects are shared via IBM DevOps Services or some other Git based repository and will have some setup instructions in the readme.md file. The truth is that most of us won't bother to read the instructions when we fork these apps and they will often crash and burn when we push them to Bluemix the first time.
I've had more than my fair share of these types of crashes after trying to run the code provided by others and not bothering to read all the instructions, and it occurred to me that others must have had similar experiences with some of the samples I have provided.
I thought about it and realized that since I was providing some documentation (albeit rudimentary) via the readme.md file anyway, I could make my samples more resilient and user friendly with just a tiny bit of extra work.
The procedure I came up with is as follows:
That's it! When someone forks your project and pushes it to Bluemix without having the necessary prereqs in place, they'll get a web page telling them what they need to do instead of the Bluemix equivalent of the blue screen of death. Once they've followed the instructions they can simply reload the tab in their browser that has your app's URL and voila, your app is up and running without incident!
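The core of the technique can be sketched in a few lines of Node.js. This is only an illustration, not the actual sample from the article: the prerequisite name (FICTIONAL_API_SECRET) and the messages are made up, and a real app would wire renderHome into its HTTP request handler.

```javascript
// Return a fix-it instruction for each missing prerequisite.
// The required setting's name is illustrative, not from the real sample.
function missingPrereqs(env) {
  var required = {
    FICTIONAL_API_SECRET:
      'Set the FICTIONAL_API_SECRET environment variable, then restage the app.'
  };
  var missing = [];
  Object.keys(required).forEach(function (name) {
    if (!env[name]) { missing.push(required[name]); }
  });
  return missing;
}

// Serve setup instructions if anything is missing; otherwise the real page.
function renderHome(env) {
  var missing = missingPrereqs(env);
  if (missing.length === 0) {
    return 'App is configured and running!';
  }
  return '<h1>Setup required</h1><ul><li>' +
         missing.join('</li><li>') + '</li></ul>';
}
```

Once the user follows the instructions and restages, the next request finds a complete environment and the app simply works.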
I wrote a small Node.js example demonstrating this technique. It requires an ENV variable containing a fictional API's secret key to be defined. Here's what it looks like on first launch:
Here's what it looks like when configured per the instructions and reloaded:
Give it a spin by forking the project and deploying to Bluemix. The code is in IBM DevOps Services here.
As always, feel free to leave comments below, and if you'd like to give Bluemix a spin around the block, you can get started here. Cheers.
New article on using Bluemix Cloudant, MQ and Twilio services to build your mobile office platform
With the advent of mobile and social applications, more and more people begin to use smartphones for work. We wanted an easy way to reserve our company's internal and external resources and services, such as meeting room reservation, from a mobile device. Our article "Build an Android app to reserve the meeting facilities" at shows you how to build a simple Android app on Bluemix that you can use to reserve facilities in a workplace environment via the smartphone.
Would you like to try it yourself? You just need about one hour to complete the development and deployment of such an application. You'll have a chance to understand how to use and combine multiple cloud services (Cloudant DB, MQ Light and Twilio) on the Bluemix and Android development platforms, and you can extend the application for your own purposes. It is so easy!
What will girls think about technical topics such as cloud or IBM Bluemix? On Sept 29, as the organizer of IBM Bluemix Girls Day at Shanghai Jiaotong University, and, embarrassingly, a man, I was a little nervous waiting in the meeting room before the beginning of the event. Our speaker, Grace, manager of the Innovation Center China, was sitting in her seat peacefully waiting for the girls. As an experienced female IT professional, I thought she would have much to share with the girls coming that day.

Soon the girls came in small groups, quietly and politely. Grace said hello and asked some easy questions to break the ice: "What grade and major are you?" "Have you ever heard of IBM?" From their answers, we learned most of them were junior or graduate students in the software school and showed much interest in IBM. During the short chat before the speech, the atmosphere became relaxed and friendly. A nice start, I thought, seeing the smiles on their faces.

After all 23 girls arrived, Grace started the formal speech about IBM and Bluemix. The girls listened with concentration and took short notes from time to time. They also raised some questions about Bluemix, such as "How can Bluemix help people with little coding skill?" and "Is it free to use in China?" In the registration section, all the girls proactively used their mobile phones to scan the QR code and visit the Bluemix registration page. It looked like the girls liked our smart cloud platform and wanted a trial.

After the speech, we entered the last but not least round-up discussion. The girls were encouraged to ask any questions, not only about IBM and Bluemix but also their concerns about learning and careers. The first girl's question gave us a surprise, and we found it really hard to answer. The question was: "I think girls are not good at coding. Any suggestion to make up for this shortage, as we've already chosen to major in software?" First, Grace admired her frank attitude and explained that coding is not all there is to the software industry, though it is a basic skill to learn in school. "As an SJTU student, you should have a wide vision and find there are so many subjects to learn during college life. Don't let coding be a burden in your major; it's just a basic skill. Nowadays everyone is learning coding. Last time I talked to a friend who was studying piano abroad, he told me that he was teaching himself coding. I was shocked and asked why, and he told me otherwise he couldn't find a job! You see, coding is not unique to software students at all." This answer made everyone laugh, and the free discussion continued in an easy mood. Many questions about IT and careers were raised and then answered.

Time rolled on; soon we came to the end of the event. Before saying goodbye, we took a photo for memory. The camera caught every girl's smile, which we thought was most important for Bluemix Girls Day.
This past week I participated in a couple of great events centered around innovation and creative solutions coming from university students and the start-up community. There were each coming from a very different focus, one being a hackathon and the other technical enablement for start-ups and entrepreneurs but they both showed how cloud and Bluemix can be used to provide a platform for apps implementing these developers' innovative ideas.
I had the opportunity to be a mentor for the HackRPI hackathon at Rensselaer Polytechnic Institute in NY state this past weekend. HackRPI is a student organized hackathon, the first completely hacker-driven hackathon where hackers choose the prizes and set rules for the competition. The experience of working with students, mentoring and guiding them in using Bluemix with their apps was rewarding. I also conducted a techtalk providing an overview of IBM Bluemix with demos of Java and Node applications using a variety of SaaS services including Cloudant, SQLDB, Internet of Things, Twilio, and Analytics with dashDB.
The competition ran a full 24 hours and was supported by local RPI students and faculty, partner organizations including the RPI Center for Open Source Software, Major League Hacking, and the RPI Embedded Hardware Club, and 20 sponsors including IBM and other corporations. 500 students were in attendance from across 60 schools, and 91 team submissions were entered into the HackRPI competition. 50 participants began using Bluemix, with several innovative projects and apps, including ChordCounter, KwikThinker, EndToOne, tldrcode.com, and Connecting Interests.
A notable quote from one of the student participants, “IBM Bluemix is so easy to use it is mindblowing”
Also this week Florida Atlantic University conducted education sessions and forum-type events on business and technical topics focused on the entrepreneur and start-up community as part of its unique Global Entrepreneur Week, supported by the FAU Tech Runway, the Adams Center for Entrepreneurship, and IBM. I was invited to conduct two technical education sessions on Bluemix, including RapidApps and Internet of Things. Students attending the sessions stayed a bit extra to discuss their ideas for innovative cloud apps and to look at the architecture of Bluemix with Cloud Foundry. One of the students remarked, "This is the coolest thing I've seen in school in four years!" You could really tell he's planning to work with Bluemix for some of his apps.
Never in a million years did I expect IBM to have capabilities easy enough for my son to learn to code, but there I was recording a video with him, and listening as he explained how to build a website using DevOps Service and Bluemix.
We only did one pass of the video, and I watched as he went to my starter project, did an edit, fork, and edit to start working with his new project. I was thinking to myself, he did it, but perhaps that part wasn’t all that intuitive. Clearly, the development team agrees, and now there is a simpler way – with a FORK PROJECT button on the first screen. A relatively minor change, and if it required an upgrade, perhaps not one you’d care about, but when voila, there it is, how can you resist?
I love these minor, but really simplifying changes, the DevOps Services team continues to roll-out. My other favorite change is the ability to provide a host name for your Bluemix URL during the Manual Deploy dialog. This is far better than having to edit the manifest file or having to use a random URL.
I used both the new FORK button and the URL naming in my intro video tutorial that goes along with a new “Build a Website with Node.js” Activity Kit. The kit is designed for introductory parent & child coding events. It’s a spin-off from part 2 of my developerWorks article series, Build your first Node.js website. You’ll find even more Step by Step tutorials on Bluemix on the CoderDojo kata. If you are interested in hosting or helping out with introductory events or materials, please contact me.
Earlier this year, we were working on automation to seamlessly move a Java application between different IBM private (IBM PureApplication System and SmartCloud Orchestrator) clouds and across to the PureApplication Service on Softlayer. Since the automation worked at the application layer (exploiting the Web Application Pattern), I asked the developer, Marco De Santis, to also take the same application artifacts and deploy them to IBM Bluemix.
Now, I’ve been around for a while, so I wasn’t naïve enough to think it would just work, and therefore wasn’t surprised when the questions started rolling in. We were soon chasing down folks in java and database land for help. There actually weren’t too many gotchas, but there were enough that Marco and I just published an article, “Build a portable Java EE app across Bluemix and private cloud patterns”. This new article covers how we reworked the application in areas like database access and initialization to keep it portable.
The article includes descriptions of the problems hit and how to address them in a portable fashion, the portable application code, along with application deployment directions for using both the Web Application Pattern and Bluemix (using the Liberty and SQLDB services). We’ve also included directions to use the SmartCloud Orchestration Content Pack for Web Applications to automate application deployments across Bluemix, SmartCloud Orchestrator (now called IBM Cloud Orchestrator), and PureApplication system, and database initialization and automation code.
The database initialization for Bluemix applications is useful even if you aren’t focused on keeping your application code portable. I’ll address this topic more in another blog, but you can get started now with the sample Bluemix database initialization code in Step 8 of the article. Thanks to Claudio Marinelli, Gianluca Bernardini, Paolo Ottaviano for their work on database initialization.
A route is a key part of the URL(s) that point to your Bluemix applications. A Bluemix application will have one or more URLs in this format: [route].[domain]. For example, for the application URL twins.mybluemix.net, the route is twins and the domain is mybluemix.net.
Most deployed Bluemix applications only have a single route - in this post I'll give you a brief primer on using multiple routes for each application and give examples of when that capability comes in handy.
The Basics
When you push your application out to Bluemix, a route is created for you using the following rules:
One useful tip to remember about the third option is that a random route is added for EACH push operation, so if you edit the application and push it multiple times you'll end up with a lot of routes. If you don't want to deal with this, edit the manifest file after the first push and put the randomly generated host in the manifest to prevent "route inflation".
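For example, after the first push you could pin the generated host in manifest.yml so later pushes reuse it. The host value and memory setting here are illustrative:

```yaml
applications:
- name: twins
  memory: 256M
  host: twins-subliminal-trickery
  domain: mybluemix.net
```

With the host pinned, every subsequent cf push maps the same single route instead of minting a new random one.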
If you only use a single route for each application, then your routes will be a lot like electricity: you'll only notice them when they don't work. However, you can do a lot more with routes by mapping multiple routes to the same app. Let's explore this some more.
Route commands
The following Cloud Foundry command line tools command lists all the routes for all your apps:
cf routes
Here's an example from my account (note the instances circled in red represent multiple routes pointing to the same app)
The command cf map-route is what you need to add a route.
It takes an app name, a domain (e.g. mybluemix.net ) and a new route to your app as parameters.
In this example, the URL twins-subliminal-trickery.mybluemix.net will now point to the app twins
cf map-route twins mybluemix.net -n twins-subliminal-trickery
Note that cf map-route will create the route if it doesn't exist and simply remap an existing route if it does.
You can delete specific routes with the cf delete-route command. It takes a domain (e.g. mybluemix.net ) and a specific route as parameters
cf delete-route mybluemix.net -n twins-subliminal-trickery
cf create-route performs a subset of what the command cf map-route does. It creates the route but doesn't map it to any applications. You can use cf map-route later to map it to an app. It takes a space name, a domain (e.g. mybluemix.net) and a specific route as parameters
cf create-route dev mybluemix.net -n twins-subliminal-trickery
cf unmap-route performs a subset of what the command cf delete-route does. It unmaps the route but doesn't delete it. It takes an app name, a domain (e.g. mybluemix.net) and a specific route as parameters
cf unmap-route twins mybluemix.net -n twins-subliminal-trickery
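Putting these commands together enables patterns like a zero-downtime swap between two versions of an app. This is only a sketch using the same commands described above, with made-up app names:

```
# push the new version of the app under its own route (names are hypothetical)
cf push twins-v2

# point the production route at the new version as well
cf map-route twins-v2 mybluemix.net -n twins

# once twins-v2 looks healthy, take the old version out of rotation
cf unmap-route twins mybluemix.net -n twins
```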
How multiple routes per app can be useful
Let's look at a few ways to take advantage of multiple routes per application
For items 1) and 2) you would need code to get the URL used by the client - here's a snippet for Node.js with Express
app.get('/', function(req, res) {
// This is the route + domain, e.g. myroute.mybluemix.net
var routeAndDomain = req.host;
...
});
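Once you have the host, one hypothetical way to use it is to vary the app's behavior per route, for example serving a beta or partner variant from dedicated routes. The prefixes below are invented for illustration:

```javascript
// Decide which variant of the app to serve based on the incoming host.
// The route prefixes here are hypothetical, not from the original post.
function variantForHost(host) {
  if (host.indexOf('beta-') === 0) { return 'beta'; }       // e.g. beta-twins.mybluemix.net
  if (host.indexOf('partner-') === 0) { return 'partner'; } // e.g. partner-twins.mybluemix.net
  return 'production';                                      // the default route
}
```

Inside the route handler above, calling variantForHost(req.host) could then select a template or feature set for that route.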
And here's a snippet from a Java Servlet
...
@WebServlet(name="hello",
            urlPatterns={"/hello"})
public class HelloWorld extends HttpServlet {
...
public void doGet(HttpServletRequest request,
HttpServletResponse response)
throws ServletException, IOException
{
// Request URL e.g. http://myapp.mybluemix.net/myservlet
String requestURL = request.getRequestURL().toString();
}
...
}
There you have it - a quick and dirty primer on using routes. If you have any questions about this please let me know via the comments below. If you've never taken Bluemix out for a spin around the block you can get a free test drive here.
The cool Finnish company Vaadin has just published the first third-party Boilerplate in Bluemix.
For you who have never heard of them before, they publish open source libraries for very cool user interfaces. These are both productive for developers and for the users that work with them.
In the Nordics we have been working together with them for some time, presenting Bluemix and Vaadin UI together. Our technologies fit together like two sides of a coin. Bluemix covers back-end productivity, building extendable server-side functions with Bluemix services. Vaadin covers the front end with a fantastic UI library. It is a perfect match. Bluemix and Vaadin UI also attract the same group of developers, who want to develop with agility and efficiency.
So this is very good news indeed both for Vaadin and Bluemix.
If you do not have a Bluemix account register at
Then read Vaadins posting and learn how to try it out at
Cheers
W.
The Cloud Computing Expo is being held at the Santa Clara Convention Center here in Silicon Valley right now.
The Expo is focusing not only on Cloud Computing but also on Big Data, Internet of Things, Software-defined data centers, Web Requisition Tracking Systems and DevOps.
Tomorrow, Wednesday, IBM will be holding a Bluemix Developer Playground at the Expo from 10:30 am to 5:30 pm.
This is a great chance to experience Bluemix first-hand.
And if you bring your laptop you can sit down and go through a series of easy labs that will show you how to get started writing your own Bluemix apps. It is easier than you think.
The labs will take you from writing your first Hello World in Bluemix and DevOps, and then move on from there to more advanced labs. We'll show you how Mobile and Bluemix are tightly integrated and how Bluemix is a portal for everything from Node.js and other open source languages, to NoSQL databases and the Internet of Things all the way to Watson and cognitive computing.
We will have a number of Bluemix experts available, so come and mingle and expand your social network. And meet experts from the IBM Innovation Center in Silicon Valley.
To save time you can sign up for a free Bluemix account on this link right now to be ready for tomorrow.
We'll see you at the show.
I've lately been working on application performance monitoring and scaling with the CloudTrader Java EE application we often use in Bluemix workshops. I first started by load testing the application to understand how it would respond under heavy load and what might be needed for scaling.
I had previously used Apache JMeter for load testing Java EE applications and was eager to apply this experience to Cloud and Bluemix applications. Instead of testing with JMeter right away, I looked at using one of the Cloud application load testing services available in Bluemix; there’s one from LoadImpact and one from BlazeMeter. I selected BlazeMeter, as it is essentially a “JMeter Load Testing Cloud”, and added it to my application just like any other service in Bluemix. I got started right away generating various load tests and scenarios, beginning with the simple URL test. Sure enough, for that simple load test the application response time view showed significantly slower response times as user load increased:
Now that the load testing results showed a need to apply scaling to handle larger numbers of concurrent users, I reviewed the options for scaling applications in Bluemix. One option is to simply increase the number of application instances and allocate more memory from my available quota. Instead, I applied the Autoscaling add-on to dynamically allocate these resources according to scaling policies I set:
I then continued load testing in BlazeMeter, which showed very poor response times at ~17 users that then recovered on their own, as you can see below. However, at ~22 users, performance started declining again - and my Autoscaling policy took effect at this point, correcting overall application responsiveness; ultimately, response times became 2 times faster even with 30 users driving application load:
Autoscaling policy history shows that an additional instance of the application was deployed at the time when that repeated increase in response time started to occur (and Autoscaling also deployed another, third instance of the application just a bit later during the load test):
If you’re interested in load testing and application scaling for your Bluemix apps, the Autoscaling add-on and load testing services can get you started on the fastpath very quickly. In addition to out of the box simple load tests you can use your own JMeter test scripts in Blazemeter, demonstrating the “use what you already have” theme with Bluemix and DevOps for Cloud applications. This article from our team is quite insightful for a more in-depth look at Autoscaling applications in Bluemix and the effect of horizontal vs vertical scaling.
IBM Watson represents a new era of cognitive computing and it is live on Bluemix NOW! We built a simple Android app employing the Watson Question and Answer (QA) Service in a few lines of source code to demonstrate how easy it is to embed IBM Watson services in apps. We call this app "TravelBuddy". Users can input travel-related questions and get answers from the Watson service immediately. You can get our code and step-by-step directions in our DevOps Services Project at.
The TravelBuddy app is divided into 3 tiers: the mobile app tier, the app server tier, and the Watson service tier.
The implementation of the 3-tier architecture is very flexible. You may choose any language you like. In our case, we use the Java Liberty runtime. The app server tier and Watson service land on Bluemix. The mobile app tier only communicates with the app server tier, never Watson service directly. This design makes the architecture flexible and independent.
Accessing Watson service from the Java Liberty runtime is simple and direct. The first step is to fetch the VCAP_SERVICES variables, such as URL, user and password, and the second step is to make the connection to Watson service using the previous variable and return the relevant answers. We customize the answer and encode it into JSON format to return it to the mobile client. (You may take a look at com.ibm.mds.MobileServlet.java class from the source code.)
The mobile client implementation is very simple as well. The major decision lies in how to communicate with the app server tier over HTTP protocol. We suggest using some open source library, like Asynchronous HTTP Library for Android at, which eases handling the JSON data from the app server. (You may take a look at com.ibm.sample.travelbuddy.MainActivity.java class from the source code.)
Want to have a try? Please refer to Code URL for details:.
Still a little worried about how to build it? Don't worry, please refer to the HowTo.doc under the Code URL for a step by step tutorial.
I enjoyed working with women coming from various walks of life who devoted their weekend to participate in the AT&T Hackathons, the first one held in Seattle, Oct 3rd through 4th, and the second one held in Washington D.C., Oct 10th through 11th. There were developers, business leaders and entrepreneurs. I could appreciate their creative ideas - for example, a mobile app for real-time collaboration on tasks for running a homeless shelter, and an application that enables women entrepreneurs to collect funds for starting their businesses by building a network of connections with funding agencies.
In addition, the "go to market" proposals presented for each application at the end of the Hackathon were strong and meaningful.
I was able to help the teams implement their ideas using Bluemix. The Mobile Backend Services in Bluemix helped several teams get applications running on Android and iOS devices quickly. The "Close Your Eyes" application won the first prize. Along with Mobile Cloud Storage, the two-member team that developed "Close Your Eyes" was excited to use Mobile Quality Assurance. Their app would cause a mobile device to start shaking when graphic scenes/language are shown in any movie/TV serial, indicating that the person watching should close his/her eyes. This team used IMDB APIs to collect information about the movies/TV shows. Second prize went to "Look Thru", an application that used Smart Glasses to enable a person in dire situations, like a fire, to find the nearest exit door, along with social interactions like finding friends to hang out with on a Friday evening.
The AT&T Hackathon in Washington D.C hosted at the Business School in Howard University had a unique flavor in terms of teams developing applications that focused on "Caring for our Community" and "Social Collaboration". There were a dozen different applications written using Bluemix.
It was a busy night for us, enabling teams to leverage our WorkFlow, Business Rules, RapidApps and DevOps services. Java and Node.js runtimes were very popular. A team that borrowed our Arduino board, built fashionable eye glasses that women can wear to protect themselves from assault. They programmed the board with a proximity sensor and used the camera on the smart glasses to take pictures of a person coming too close along with the warning that evidence of assault has been uploaded to the Authorities.
First prize from IBM went to "Text To Do", which allows real-time team collaboration using text and voice - e.g., exchanging text/voice messages to update a shared list as completed tasks are removed and new tasks are added for running a homeless shelter. "6 Degrees" won the second prize, using RapidApps for mobile app simulation and development. With this application, we can find friends through friends using phone lists, instead of sharing our profile with strangers via professional sites.
I am continuing to work with several of the Hackathon participants to include Bluemix in the curriculum at high schools in Washington D.C.
I’ve never been a city girl, and after 25 years in NC, I can’t say I was looking forward to going up to NYC for a Bluemix Girls Night out event, especially during a Nor’easter. Turns out I had a great time. I met a set of exceptionally talented (and nice!) women, and thoroughly enjoyed sharing my IBM Bluemix experience with them.
I did a short presentation followed by a live demo. My demo builds out an application to ask survey questions. You can see the application I built running at gno1.mybluemix.net. Under the covers, the application uses Node.js, Cloudant, and Twilio. I do the whole demo using DevOps Services and Bluemix, so no installation is required. Takes me about 10 minutes to get it all running live.
There are just a few steps:
The application is one we are using in our new Bluemix Hands-on Virtual Workshop, so you can sign-up for the workshop and we’ll lead you through it step by step. Or, it’s easy enough you should be able to follow the directions and do it yourself.
I’m still amazed when I think about what it would take to build out an application like this if I had to install Node.js, a noSQL database, a development environment, etc. I know one thing, it would take me longer just to install the pre-reqs than it will take you to get this application running. Hope you all have a chance to join a workshop soon or try the application on your own.
In one of my recent conversations with an ISV considering using IBM Bluemix as a user acceptance test (UAT) environment, they asked for a way to bring down the application automatically after working hours and bring it back up the next morning for user testing. This approach would save on their subscription cost, as the testing environment only needs to be up during working hours.
Starting and stopping Bluemix deployments is easily automated using the Cloud Foundry Command Line Interface (CF CLI), shell scripting and cron jobs. This approach is also handy for many other common monitoring and management scenarios, including checking the status of production deployments and automating the scale-out of an application for known peaks.
Here are some sample automations to get you started. In these examples, I am using CF CLI installed on a Linux server.
Example 1: Automating time-based start and stop
A shell script, app_stop.sh, can be written to stop an application.
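Since the script body itself isn't shown here, the following is a minimal sketch of what app_stop.sh could look like. The API endpoint, credentials, org/space and the app name MyApp are all placeholder assumptions:

```shell
#!/bin/sh
# app_stop.sh - stop a Bluemix application via the CF CLI
# (sketch: endpoint, credentials, org/space and app name are placeholders)
cf api https://api.ng.bluemix.net
cf login -u user@example.com -p "$CF_PASSWORD" -o my-org -s dev
cf stop MyApp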
The shell script is then scheduled to run every evening at 7 PM. This can be done by creating a cron entry in the crontab as below.
To edit the list of cron entries:
# crontab -e
The following example schedules the script to run Monday through Friday at 7 PM.
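A crontab entry along these lines would do it (the script and log paths are assumptions):

```shell
# min hour day-of-month month day-of-week  command
0 19 * * 1-5 /opt/scripts/app_stop.sh >> /var/log/app_stop.log 2>&1
```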
Similarly, another shell script, app_start.sh, can be written and scheduled to run every morning at 8 AM to bring the application up automatically.
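Assuming the CLI session logs in the same way as the stop script (the app name is again a placeholder), app_start.sh can be as small as:

```shell
#!/bin/sh
# app_start.sh - start the Bluemix application again (app name is a placeholder)
cf start MyApp
```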
The following cron entry schedules the script to run Monday through Friday at 8 AM.
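For example (script and log paths assumed):

```shell
0 8 * * 1-5 /opt/scripts/app_start.sh >> /var/log/app_start.log 2>&1
```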
Example 2: Automating application status checks
It is always useful to know your application status, especially in a production environment. The application owner needs to be informed of the status of the deployed application on Bluemix before users are impacted by it becoming unavailable.
A shell script can be written to retrieve the health status of the deployed application on Bluemix. A cron entry can then be configured to run at a regular interval, for instance every 10 minutes, to check the status of the application on Bluemix.
Following is a shell script example that checks the status of an application on Bluemix.
The script checks for any running instance of your application. An email is sent to your designated email address if there is no running instance.
In the following example, I used the mailx command in Linux to send the email notification through Gmail’s SMTP server.
app_status.sh
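Since the script body itself isn't reproduced here, this is a sketch of how it could be written with heirloom mailx, which is what the nss-config-dir note below refers to. The app name, email addresses and password handling are assumptions:

```shell
#!/bin/sh
# app_status.sh - email an alert when no instance of the app is "running"
APP=MyApp                          # placeholder app name
RUNNING=$(cf app "$APP" | grep -c "running")
if [ "$RUNNING" -eq 0 ]; then
  echo "No running instance of $APP on Bluemix" | mailx -v \
    -s "ALERT: $APP is down" \
    -S smtp-use-starttls \
    -S smtp="smtp://smtp.gmail.com:587" \
    -S smtp-auth=login \
    -S smtp-auth-user="me@gmail.com" \
    -S smtp-auth-password="$SMTP_PASSWORD" \
    -S nss-config-dir="$HOME/.mozilla/firefox/<$random-string-for-different-user>.default" \
    admin@example.com
fi
```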
Notes: <$random-string-for-different-user> is Firefox’s profile directory. It is a random string that differs per user, in the following format:
~/.mozilla/firefox/<$random-string-for-different-user>.default
You can easily locate it by looking into the directory ~/.mozilla/firefox.
The following cron job schedules the app_status.sh script to run every 10 minutes, every day.
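For example (script and log paths assumed):

```shell
*/10 * * * * /opt/scripts/app_status.sh >> /var/log/app_status.log 2>&1
```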
Email notification received if there is no running instance of my application.
Example 3: Automating date-based scale-out
The Auto-Scaling for Bluemix Add-on allows users to define their desired scaling policy, i.e. when they want instances to be scaled in and out automatically. Depending on the runtime used, users can define scaling rules for CPU, Memory or even JVM Heap parameters. However, some applications may require planned scale-out at known times, rather than scaling based on usage fluctuation, in order to minimize the impact on user experience. For example, there may be a known need to scale out additional instance(s) towards the end of the month, say on the 25th.
The same approach of using the CF CLI, shell scripting and crontab can achieve this. A shell script can be written to scale to 3 instances, running with a 256M disk limit and a 512M memory limit, early in the morning of the 25th to minimize the impact of any service interruption.
app_scale_out.sh
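A sketch of such a script, using cf scale with instance count, disk and memory limits (the app name is a placeholder; -f skips the restart confirmation prompt):

```shell
#!/bin/sh
# app_scale_out.sh - scale to 3 instances, 256M disk limit, 512M memory limit
cf scale MyApp -i 3 -k 256M -m 512M -f
```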
Appending the cron entry below to the crontab will schedule the task to run at 3 AM on the 25th of every month.
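For example (script and log paths assumed):

```shell
0 3 25 * * /opt/scripts/app_scale_out.sh >> /var/log/app_scale_out.log 2>&1
```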
In a similar way, the same application can be scaled down later based on business needs.
For more information on Bluemix scaling, see the devWorks article: Scaling Applications in Bluemix.
I hope these examples help you get started automating your Bluemix deployments.
IBM Cognos is a world-class data analysis and reporting platform. When I promote Bluemix to partners in China, many local partners ask whether it is possible to leverage IBM Cognos on Bluemix. To better answer partners' questions, we wrote the article "Leverage IBM Cognos on IBM Bluemix using the Embeddable Reporting service" at.
The Embeddable Reporting service brings IBM Cognos capabilities to your Bluemix applications. The article provides step-by-step instructions on creating, developing, and deploying an app employing IBM Cognos capability (the Embeddable Reporting service) on Bluemix. With a few configuration steps and lines of source code, powerful Cognos reporting features are easily added to apps.
At IBM Big Data and Analytics Developer Day in Chattanooga, I had time for a couple of IBM Bluemix demos, so I let the audience choose from six. I wasn’t surprised when all hands went up for Watson, followed by high interest in Hadoop.
I happily started with Watson because it’s actually a great Bluemix Service example – you use it just like all the other Bluemix Services. You start the service from the catalog, and it’s almost immediately ready for you to develop an application using it. If you are like me, next you go to the DevOps Services EXPLORE option to find example application starting points.
I used the IBM Bluemix User Modeling Service. I started an instance of the User Modeling Service from the Bluemix catalog, and then deployed the Watson User Modeling node.js application from DevOps Services. I changed the manifest to my copy of the service and the sample app was running.
The application lets you drop in text for analysis. The first time I tried the service, I started typing in text for analysis. When I realized it wanted at least 100 words, I switched over to find something already written. I first found a trip report my husband wrote. His analysis came in with a 3% for Love – glad I wasn’t on the trip.
Then, I found a paper of my son’s. He scored 100% in both Imagination and Authority Challenging. I’m not sure I needed Watson to tell me that.
During the demo in Chattanooga, I decided to drop in some political speeches. OK, I admit this might not be the exact intent of the service, but it was worth a laugh. I won’t share those scores here – but enjoy trying them yourselves!
This is clearly just the beginning. I plan to try out all the Watson beta services, and then spend some time dreaming about how to apply these new capabilities available to me through Bluemix.
My 9-year old son created a website on Bluemix and uploaded the Pokemon cards he’s made so his friends can see them. I’m pretty sure when we started the Platform-as-a-Service (PaaS) journey, this isn’t the usage any of us quite envisioned. But the idea of using Bluemix to help kids on their programming adventures has captured my interest.
As I discuss the challenges for kids programming, two frequent inhibitors mentioned are:
1. pre-reqs (access to hardware for development, software installation, etc)
2. no easy way to share work with friends
The combination of Bluemix and DevOps Services solves both these problems.
First, pre-reqs are minimal as only a web browser is required. Application development is done in the DevOps Services Web IDE with deployment to Bluemix. Many run-times and services are available. Secondly, applications deployed on Bluemix are live on the internet for friends and family to see, and the code is easily shared using DevOps Services projects.
Brendan Murray and Zoryana Tischenko unveiled the first set of CoderDojo ready IBM Bluemix resources to 200 CoderDojo champions at the DojoCon 2014. There is more information and Step by Step Sushi cards on the CoderDojo Kata for getting started with Node.js, Java, or Python all supported by Bluemix.
We have an active community of Bluemix volunteers ready to mentor (remotely and in person) as well as grow and improve these resources for CoderDojo. Use this form on the CoderDojo site to request mentoring and extended free accounts for dojo use.
Be cool.
IBM recently released the 2014 Business Tech Trends Study. The study demonstrates the shift from 2012 to 2014 of cloud, mobile, social, and big data & analytics moving to the mainstream, and discusses how these technologies are now often combined. I have to be honest and say that the study wasn’t all that interesting to me – mainly because there wasn’t really anything that surprised me. The data supports what I anecdotally observe every day in Bluemix as cloud applications are built by composing these services together.
One trend that wasn’t included in the study is the Internet of Things (IoT). I would be curious to see the comparative responses from the same set of folks. I feel IoT creeping in all around me. At the CED Tech Ventures conference in Raleigh last week, Maciej Kranz from CISCO delivered the closing keynote and shared the prediction of the Internet of Everything growing to $19 trillion over the next decade.
I wanted to start some hands-on exploring of IoT using the Bluemix Node-RED run-time. IBM Fellow, John Cohn, agreed to give me a demo of what he is doing with Node-RED, and I soon found myself watching lots of flashing lights representing his house. It was a cool demo, but beyond where I could easily start. My quandary, did I really want to admit to an IBM Fellow that I didn’t know arduino from an armadillo, and I didn’t actually have any “things” that I could use yet.
Fortunately, John was very accommodating, and showed me how to get started with Node-RED by just monitoring twitter and sending SMS to my phone. I was able to easily recreate this myself, and am providing you step-by-step directions and video to do the same. All you need is Bluemix, an instantiation of the Node-RED service, along with twitter and twilio accounts. For my scenario, I didn’t need to write any code, I just wired nodes together like this and Deployed to Bluemix:
So, in about 10 minutes with Bluemix, I have a little application flow running on the Bluemix cloud that combines social, mobile, and is ready to hook up with IoT. Now I know how to do the twitter and SMS pieces in Node-RED, all I need is a device to hook up. I’m going to get started with a simulated device as described in Node-RED Quickstart Application, and hook that up with SMS to text my manager for some money to buy “things”.
If you want to do more with IoT, check out Kyle Brown’s new four part article, Build a cloud-ready temperature sensor with the Arduino Uno and IBM IoT Foundation, or see Jerry Cuomo’s demo where he uses a Raspberry Pi for temperature sensing and connects it with Twitter. Perhaps one day, my refrigerator will just tweet me it has a problem rather than leaking water across the kitchen floor…
// Node.js example
var vcapApp = JSON.parse(process.env.VCAP_APPLICATION);
var logPrefix = vcapApp.application_name + vcapApp.instance_index;
// Node.js example
console.log(logPrefix + ' is up and running !');
# Example cf push to create 2 instances of your app
cf ... push MyApp -i 2
# In first command prompt
cf logs MyApp | grep MyApp0
# In second command prompt
cf logs MyApp | grep MyApp1
And here's the output from the second instance
If you'd like to tinker with this app yourself, it's available as a publicly accessible Git repository in IBM Dev Ops Services. Here's the link
If you have any questions about this please let me know via the comments below. If you've never taken Bluemix out for a spin around the block you can get a free test drive here.
It was a good experience to witness Bluemix's grand launch in China at the IBM Tech Summit with 2000+ developers in Beijing on Aug. 21-22, 2014. The theme of this Tech Summit was "Mix with Tech", covering what IBM is doing this year with the latest popular technologies such as Cloud, Big Data, Mobile, and Social. During the 2-day summit, I could hear and touch Bluemix anytime and anywhere, from morning to evening, from the Main Tent and Demo Booth to the Bluemix Track.
Bluemix Live Show: A CTO's Tour on Bluemix
I am impressed by "A CTO's Tour on Bluemix" in the Main Tent, delivered by Mr. Chen Zeng Wei, CTO of iSoftStone. Mr. Chen shares his experiences engaging with Bluemix, plus a live solution demo on how Bluemix helped them build an application to analyze the data from social media like Weibo. According to Chen's speech, iSoftStone's application is built on Java Liberty, SQLDB Service and Embeddable Reporting Service. The social media data is fetched from Weibo via Java API and then dumped into a SQLDB instance and finally analyzed and demonstrated from Embeddable Reporting Service. It is a vivid example to see how developers can combine several services together on Bluemix to help them fulfill the business requirement. Below is the snapshot about the reporting feature of the application. Very beautiful, I think.
Bluemix Demo Booth: Close to Bluemix
Frankly speaking, many developers in China seem to be a little shy at first. When they are close to the Bluemix Demo Booth I can see the interest in their eyes, but many of them just stop by and do not say anything. The problem is easily resolved by the Demo Booth owner actively starting the conversation and not waiting for their questions. After a one-minute warm up talk, many questions are raised. When will Bluemix land in China? Does it support Chinese? How can I try it? What about the pricing model? How about the bandwidth from China? Though we feel very tired after a whole day of discussion, we are glad to meet so many people.
Bluemix Track: From Zero to Hero
As the presenter and lab instructor, I am very excited to see the Bluemix Track full of people with curious eyes. The seats are limited and some of the developers have to stand up to listen to my speech. It is so encouraging that I feel full of energy in delivering the presentation. During the lab time, many of them have the chance to get deep with Bluemix and develop their first simple app on Bluemix with Java Liberty Runtime and SQLDB Service. Some of them finished the Mobile Android app using the MobileData service.
One developer asked, "It is interesting to develop my first app on Bluemix, will Bluemix trial be extended to a longer time?"
"Thanks for your comments and we will forward it to Bluemix team," I answered. "Whether it could be extended or not, let us fully utilize every day of the trial period and try more on Bluemix. I believe it is more interesting if you do more."
If you've logged onto IBM Bluemix this week you may have noticed that there have been a few changes. You can find out about the package of changes that recently went live here:
There has also been a recent upgrade to v6.5.1 of the command line interface:
Please use this model for continuous delivery: Continuous Delivery Model for Bluemix Applications.
(This article was first published on R snippets, and kindly contributed to R-bloggers)

In GNU R the simplest way to measure the execution time of a piece of code is to use system.time. However, sometimes I want to find out how many times some function can be executed in one second. This is especially useful when we want to compare functions that have significantly different execution speeds.
Fortunately, a times-per-second benchmark of execution time can be computed using the following snippet:
tps <- function(f, time) {
  gc()
  i <- 0
  start <- proc.time()[3]
  repeat {
    i <- i + 1
    f(i)
    stop <- proc.time()[3]
    if (stop - start > time) {
      return(i / (stop - start))
    }
  }
}
This function takes two parameters: a function to be benchmarked (f) and how much time is to be used for the evaluation (time). It returns an estimate of how many times per second function f can be executed.
As a simple application of the tps function, consider calculating the relative speed of standard, lattice and ggplot2 graphics. The following function compares them by plotting histograms:
library(ggplot2)
library(lattice)
test <- function(n, time) {
  x <- runif(n)
  b <- c(tps(function(i) {
           hist(x, 10, main = i)
         }, time),
         tps(function(i) {
           print(histogram(x, nbin = 10, main = format(i)))
         }, time),
         tps(function(i) {
           print(qplot(x, binwidth = 0.1, main = i))
         }, time))
  names(b) <- c("hist", "histogram", "qplot")
  return(b)
}
The function takes two arguments. The first is the number of points to sample for the histogram and the second is the time passed to the tps function. On my computer the test gave the following result for a sample size of 10000 and 5 seconds for each function:
> test(10000, 5)
hist histogram qplot
192.614770 14.285714 5.544933
We can see that the standard hist is over 10 times faster than histogram from lattice and about 35 times faster than qplot from ggplot2.
Write an application that prompts the user to make a choice for a coffee cup size: S for Small, T for Tall, G for Grande and V for Venti. The prices of the cup sizes will be stored in a parallel double array as $2.00, $2.50, $3.25, and $4.50 respectively. Allow the user to choose if they would like to add a flavor to the coffee, adding an additional $1.00 to the price. Display the total price of the cup of coffee after the user enters a selection.
import java.util.*;

class Application {
    public static void main(String[] args) {
        double array[] = {2.00, 2.50, 3.25, 4.50};
        Scanner input = new Scanner(System.in);
        System.out.println("Choose Coffee Cup Size:");
        System.out.println("S for Small");
        System.out.println("T for Tall");
        System.out.println("G for Grande");
        System.out.println("V for Venti");
        double price = 0;
        double additional = 1;
        double totalPrice = 0;
        System.out.print("Enter your choice: ");
        char ch = input.next().charAt(0);
        switch (ch) {
            case 'S':
                price = array[0];
                break;
            case 'T':
                price = array[1];
                break;
            case 'G':
                price = array[2];
                break;
            case 'V':
                price = array[3];
                break;
        }
        System.out.print("Do you want to add flavor to coffee? Yes/No: ");
        String check = input.next();
        if (check.equals("Yes")) {
            totalPrice = price + additional;
        } else {
            totalPrice = price;
        }
        System.out.println("Total Bill is: $" + totalPrice);
    }
}
Cannot run Vault docker sample in Server Mode: invalid reference format error
I am playing with the vault image. In the description page, there is an example of how to run Vault in Server mode.
docker run --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}' vault server
But running the above from a Windows console (cmd.exe) shows:
docker: invalid reference format.
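A likely explanation - an assumption on my part, not something stated in the error itself - is that cmd.exe does not treat single quotes as quoting characters, so the JSON after -e is split into several arguments and docker ends up parsing one of them as the image reference. Using double quotes and escaping the inner quotes usually avoids this on Windows:

```shell
docker run --cap-add=IPC_LOCK -e "VAULT_LOCAL_CONFIG={\"backend\": {\"file\": {\"path\": \"/vault/file\"}}, \"default_lease_ttl\": \"168h\", \"max_lease_ttl\": \"720h\"}" vault server
```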
See also questions close to this topic
- Spring Boot HashiCorp vault register on Eureka
While information exists on the internet about how you can enable discoveryClient to find the vault service within a Spring Boot application,
e.g.
spring.cloud.vault.discovery.enabled: true
No information exists about how you actually register HashiCorp Vault with Eureka.
How can I register HashiCorp Vault with Eureka?
- Hashicorp Vault - Can a deleted key be retrieved back?
I am trying to retrieve a deleted key from the Hashicorp Vault tool that is used to store secrets.
I tried to delete a single value, but it seems to have deleted a namespace.
- Vault client systemd integration
Has anyone used Vault with systemd? I want to use the Vault golang client to retrieve a password in order to mount automatically with fstab.
Below is a simple Angular 2 app with real-time search against the cdnjs public API. Type in and see the real-time magic:
It uses the power of observables and the Reactive Extensions for JavaScript (RxJS). We’re using a few operators, namely debounceTime, distinctUntilChanged and switchMap to make sure we don’t send too many API calls and that we get only the results from the latest request. Let’s break it down in three parts:
1. Component
Here’s the full code for our component:
import { Component } from '@angular/core'; import { SearchService } from './search.service'; import { Subject } from 'rxjs/Subject'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'], providers: [SearchService] }) export class AppComponent { results: Object; searchTerm$ = new Subject<string>(); constructor(private searchService: SearchService) { this.searchService.search(this.searchTerm$) .subscribe(results => { this.results = results.results; }); } }
Key takeaways
- We inject a search service that we’ll define next. The search service is responsible for the bulk of the work.
- We make use of an RxJS Subject, which acts as both an Observable and an Observer. Our subject has a next() method that we’ll use in the template to pass our search term to the subject as we type.
- We subscribe to the searchService.search Observable in our constructor and assign the results to a property in our component called results.
2. Search Service
Here’s the code for our search service:
import { Injectable } from '@angular/core'; import { Http } from '@angular/http'; import { Observable } from 'rxjs/Observable'; import 'rxjs/add/operator/map'; import 'rxjs/add/operator/debounceTime'; import 'rxjs/add/operator/distinctUntilChanged'; import 'rxjs/add/operator/switchMap'; @Injectable() export class SearchService { baseUrl: string = ''; queryUrl: string = '?search='; constructor(private http: Http) { } search(terms: Observable<string>) { return terms.debounceTime(400) .distinctUntilChanged() .switchMap(term => this.searchEntries(term)); } searchEntries(term) { return this.http .get(this.baseUrl + this.queryUrl + term) .map(res => res.json()); } }
Key takeaways
- First we import the necessary RxJS operators and the Http client service, then we also inject Http in the constructor.
- We define string properties in our service class for the base and query URLs of our API, the cdnjs API in this case.
- Our search method takes in an observable of strings, runs it through a few operators to limit the number of requests that go through, and then calls a searchEntries method. debounceTime waits until no new data has arrived for the provided amount of time (400ms in this case) before letting the latest value through. distinctUntilChanged ensures that only distinct data passes through: if the user types something, quickly erases a character and then types the same character back, distinctUntilChanged will only send the data once. Finally, switchMap flattens the observables returned by searchEntries into one, switching to the newest one whenever a new term arrives, which ensures that we use the results from the latest request only.
- Our searchEntries method makes a GET request to our API endpoint with the search term, which gives us another observable. We then use the map operator to map the response to a JSON object.
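The operator chain can be hard to picture, so here is a small synchronous simulation of the debounceTime and distinctUntilChanged semantics described above (a sketch only; SearchEvent and simulateSearchStream are hypothetical names, and this is not how RxJS implements the operators internally):

```typescript
// A synchronous sketch of the operator semantics described above.
// It replays a list of timestamped keystrokes through debounce +
// distinct-until-changed logic and returns the terms that would
// actually reach the API.
interface SearchEvent {
  t: number; // timestamp in milliseconds (hypothetical input)
  v: string; // the search term typed so far
}

function simulateSearchStream(events: SearchEvent[], debounceMs: number): string[] {
  // debounceTime: a value only passes if no newer value arrives
  // within debounceMs after it (the final value always passes).
  const debounced: string[] = [];
  for (let i = 0; i < events.length; i++) {
    const next = events[i + 1];
    if (!next || next.t - events[i].t > debounceMs) {
      debounced.push(events[i].v);
    }
  }
  // distinctUntilChanged: drop values equal to the previous emission.
  const emitted: string[] = [];
  for (const v of debounced) {
    if (emitted.length === 0 || emitted[emitted.length - 1] !== v) {
      emitted.push(v);
    }
  }
  return emitted;
}

// Typing "a", "an" quickly, pausing, typing "ang", then retyping "ang":
simulateSearchStream(
  [{ t: 0, v: 'a' }, { t: 100, v: 'an' }, { t: 600, v: 'ang' }, { t: 1100, v: 'ang' }],
  400
);
// → ['an', 'ang']  (only two API calls instead of four)
```

switchMap is not simulated here since it concerns switching away from in-flight requests, but the two passes above mirror how each operator filters the stream of typed terms.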
3. Template
And finally here’s our template code:
<input (keyup)="searchTerm$.next($event.target.value)">

<ul *ngIf="results">
  <li *ngFor="let result of results | slice:0:10">
    <a href="{{ result.latest }}" target="_blank">
      {{ result.name }}
    </a>
  </li>
</ul>
Key takeaways
- We call the next() method on keyup events of our input and send in the input string value.
- We iterate over our results with ngFor and use the slice pipe to return only the first 10 results. slice is available by default as part of Angular’s Common module.
- We then simply create list items with our results. In the case of this API, the returned JSON object has name and latest properties, and latest contains the URL to the latest version of the file.
Tango is currently packaged in two formats: either with or without a compiler. Installing with a bundled compiler is easier since there's less configuration required. API documentation is available as a zip or online.
Download this package and unzip it into a directory. Set the system path to point at the extracted bin directory, and you are good to go.
You will also need some additional static libraries if you are using Windows and wish to use the zlib or bzip2 compression classes. These should be extracted to the lib directory.
Download the package for Linux, OSX or FreeBSD and untar it. Add the bin directory from the extracted package to PATH, and you can fire up the compiler.
Download this package and extract the contents of the bundle into an existing dmd\windows directory. This will overwrite the existing windows\bin\sc.ini, add bob.exe and jake.exe to windows\bin, and add tango.lib to windows\lib.
You will also need some additional static libraries if you are using Windows and wish to use the zlib or bzip2 compression classes. These should be extracted to the windows\lib directory.
Download this package for Linux, OSX or FreeBSD and extract it. Copy the contents of tango-bundle into an existing dmd/[linux/freebsd/osx] directory. This will overwrite the existing [linux/freebsd/osx]/bin/dmd.conf, add bob to [linux/freebsd/osx]/bin, and add libtango-dmd.a to [linux/freebsd/osx]/lib.
import tango.io.Stdout;
void main()
{
Stdout ("hello, sweetheart \u263a").newline;
}
See details here.
Meh. This isn't as big a change as I was thinking it would be. That said, "KDE Software Compilation" makes for really awkward phrasing (at least in English).
Oh, yes, we recognize that. However, it isn't really meant to be used a lot; it's mostly for release announcements. In most other cases you are talking about a specific app, the workspace or the platform, and if you need to mention the release you can say 'our latest release', call it 4.4, etc.
We didn't want to re-introduce a new brand here because frankly, it's just a bunch of apps which happen to release together. Apps which aren't part of the release schedule are just as important, and by calling the release 'software compilation' we make clear what it is (and what not).
Currently I often see KDE X.X used as an easy way to identify software versions.
DudeA: What version of KSnapshot do you have?
DudeB: Iunno. The one from KDE4.3.
What do you recommend be used instead? KSC4.4? KDESC4.4? Or is KDE4.4 still acceptable in informal contexts like these?
That's what I use. I would not use KDE [version] because it would re-instate the ambiguity.
As Luca said, KDE SC 4.4 would be fine, similarly SC 4.4 or even just 4.4 if that's enough in the context. But you can really help us by trying to avoid "KDE 4.4" because that just reinforces the "KDE is the software (and in particular just a desktop)" thing.
I like it, and it is accurate to say "KDE Software Compilation 4.4.x", but "KDE Software Compilation 4.4.1" will eventually become KDE SC 4.4.1, and then again KDE 4.4.1... (Daniell, Dan, D).
The KDE 4.x.y series will stay with us for a while (I hope I don't have to survive another rewrite, Qt5?).
Maybe it would be cool to do something more aggressive for marketing and less confusing, like the "Photoshop CS series (9.0)":
KDE 4.4 -> KDE SC1 (4.4)
KDE 4.5.1 -> KDE SC2.1 (4.5.1)
instead of "KDE Software Compilation 4.4.2"
"The KDE 4.x.y series will stay with us for a while (I hope I don't have to survive another rewrite, Qt5?)."
Don't worry, any Qt5/KDE SC 5 will be a lot less painful, more like the KDE2 -> KDE3 move was. Qt won't be changing as much, and we won't have to re-write the desktop again, it will be more like clean-up work.
I can't speak for the trolls, but last I heard they have no timeframe yet for Qt5, but I suspect that once they have all the Symbian and Maemo support work completed they will want to have a major clean-up to align everything and break a few things in the process.
Just the be clear, the above comment is purely conjecture. There are no Qt 5 plans at this time that we know of, nor has anything real been discussed about this within KDE (not counting beer-induced planning of world domination).
Cheers
We should not even try to follow the path Adobe took with Creative Suite. They will end up in a situation where they need to invent new things again, because now they are on CS4. A few years forward they will be on CS5 or CS6, and soon after that CS10, CS11, and so on.
Right now we have a clear history of releases, so it is wise to follow it; it has always been kept the same way and is very logical. Now we have the fourth-generation desktop, with the fourth release cycle coming. It is easy to understand that the x.x.+1 updates are bug fixes and so on.
KDE SC 4.4 sounds good to me. KDE SC 4.4.1 and so on do the same.
It's just that the word 'Compilation' sounds like it has something to do with compiling code, as opposed to assembling a collection of software applications and components for release. Or 'Compendium'?
good point. I like the sound of "Collection" as well.
Yes this sounds good to me.
I also like KDE Software Set,
but I don't like its abbreviation, KDE SS.
Anyway, I guess the discussion is already closed.
Long live KDE SC!
That was also proposed during the sprint. I don't remember why, but in the end there was more consensus towards "Compilation"
Those were the top three from our many suggestions. In the end we felt suite was too tight ("this is everything, other stuff is on the outside") and collection was a bit too loose ("just a load of stuff we threw together"). Compilation (for example a music compilation) indicates the idea that we selected some stuff that works well together.
Marketing speak over ;-)
Not 'content'
If I had a vote it'd be for 'KDE Software Release'... as in the Release of Software by KDE. Then we can abbreviate KDE Release 4.4... KDE r4.4 and it still makes some amount of sense! It's funny, in the writeup and comments there is emphasis on wanting something to convey that it's not a "joined" group of applications (suite too 'tight') but just software that happens to be *released* together.
Did "Release" really not come up as an option? Kind of funny if not.
It'll be interesting to see if this sticks both inside/outside the community. I'll do my best to train myself appropriately with electro-shock treatment, and sweeties.
Did I mention we made a long list? And checked it twice...
KDE Software release - well, it's not the only software we release (also a lot outside the SC) so that's one problem
You also suggest the handy shortening to KDE Release 4.4 or even KDE r4.4. So... KDE Release 4.4, that's the 4.4 release of KDE, right? Why not just call it KDE 4.4 then?
We also get into fun sentences such as "Today KDE releases software release 4.4", which is a bit heavy on the word "release".
then you call it KDESK :o) which sort of fits the description of what it should be....
You shouldn't need to use "KDE Software Compilation" much. It's really just there at all because we happen to release a whole load of stuff together (formerly KDE 4.3.3) and we need a name for that for release announcements and such. If someone sees your computer screen and wants to know what you're running then the answer is probably (KDE) Plasma Desktop/Netbook or possibly the name of one of the apps if that's what they're looking at.
As for the size of the change - well, we didn't want to ditch "KDE", but rather define it properly and strengthen it. We also didn't want to make more work for us and everyone else than was necessary. Even these changes are going to take a lot of time and effort to implement (KDE websites, About dialog in the apps, getting the press and our own community to understand).
Wow, this is quite nice, and less scary then I imaged it could be.
I like the idea of positioning KDE as community, so changing the programs and modules into "KDE something" seams to make sense.
I do have to get used to the term "KDE Software compilation 4.4", but that is something to get used to. Just like 'Smiths -> Lays' and 'Raider -> Twix' once seamed weird in the Dutch market. Nowadays the new names sound a lot cooler. :)
It took me ages to get over the Marathon -> Snickers transition. Lays are Walkers over here in the UK (at least I think so - it's the same logo). At least KDE hasn't had different brands in every country...
I just realized how much easier this makes telling co-workers and other people about KDE.
Before I told about the Linux desktop thing, and how yes, Linux does GUI's too, and no it's not difficult, and no it even looks nice. Conversations dry up pretty quick that way.
Now I can tell about how KDE is a cool bunch of people, they make quality software, and empower Linux to be great as well. :-) ..and if those people start looking they find the same message at the Internet too. Bonus points for us. :-)
Well said. It has been difficult before to explain what KDE really is. Typically you end up speaking about how there are volunteer people and paid people developing software that is "KDE". And then it has been difficult to explain how some software belongs to KDE (Konqueror, KMail, etc.), how some does not belong but uses KDE technologies (Amarok, digiKam, Kdenlive), and how some does not belong at all but uses the same toolkit KDE uses (pure Qt apps).
Now it is all easy.
So here I am, reading comments on a KDE article. I read this comment of a Dutch guy who spells "seems" as "seams", and I think - wait, I know this guy who always does that! And it was him... Hi Diederik! o/
Oh man, I love how in this world you keep seeing the same great guys over and over again ;-)
lol :-) Hi sjors!
I think the name KDE Software Compilation X.Y.Z is kind of misleading, because *a* software compilation can contain multiple different versions (e.g. think of kdelibs 4.4, Plasma 4.3, Okular from 4.2, KOffice 2.1); moreover, the compilation can be platform dependent. This is kind of weird and does not really make clear what we want to deliver (we have a software platform and applications of different versions).
"software compilation can contain multiple different versions"
KDE SC already does contain apps with different version numbers in it; but the SC itself has a consistent numbering as a whole. this is not new.
"the compilation can be platform dependent"
KDE SC, which is what the epochal x.y.z releases are, is not; or at least, the parts that are in it are not built for a given platform.
KDE SC is not 'a' compilation, it's *the* KDE SC. :)
"This is kind of weird and does not really makes clear what we want to deliver"
as noted above already, the SC isn't something we'll be pushing as a brand. it's just a way for us to avoid saying "KDE 4.4". we have a "4.4" release, and that's certainly going to happen (as well as 4.4, 4.5, etc.). but we want to emphasize the components more with the SC fading into the background a bit as a release engineering detail (from the POV of the audiences we will be talking to) and we definitely want to distance "KDE" from "that huge amount of stuff, including a desktop!, that they release".
KDE SC gets us further down that road. at some point, we may not need "KDE SC" at all, but for now it's a needed disambiguator. and again, it's not really a brand itself.
i don't agree that companies should have exclusive right to a word that describes exactly what we are producing.
carbon dioxide is a product of my respiratory system. it's not a company either. ;)
talking about "products" is accurate. as a bonus it is verbiage that people who are used to proprietary software products can relate to. the point is to communicate in words that are descriptive, that we know, that can easily be related to by others and that can be found in literature (yes, including "how to market..." type literature) without constantly translating.
we are a group very different from a monolithic company, something the word 'product' is not going to change in the least. by contrast, i do think that if we talked about 'management structures' with traditional monolithic company terms we would be heading in a poor direction.
it's also interesting to see how this was arrived at. it was very consensus based and "KDE" in how it was done. it didn't happen fast (this takes time no matter what kind of organization structure you have when it is this size, really) but it did happen in line with how we, KDE, have done things and will continue to do things. at least, imho.
We really don't want you to talk of KDE as a product - KDE is the community, right?
Re using the 'products' as a term for the things we produce. Well, you could just as well say KDE produces applications, workspaces and a platform if you want to avoid 'product'. But the end result of production is a product. It can sound corporate but it shouldn't be really.
Good work. Even "KDE Software Compilation", which initially seems clunky, makes sense given the context. "KDE Plasma Desktop" is an attractive and sensible name.
But (you knew there was going to be a 'but'):
I really don't like the "put either KDE or a K in the application name" policy, especially because it implicitly encourages people to do the latter to avoid having to do the former. This leads to names which are either dry, technical, and unfriendly (KImageEditor), needlessly obscure (Okular), or just plain goofy (Rekonq, Kamoso). I'm not debating application authors' right to name their applications however the hell they want, but we should have policies which encourage them to choose good ones. The K-in-the-name can be done artfully sometimes (Krita, Kate), and sometimes just sticking a K in front works well enough (KTorrent), but these seem like the exception rather than the rule. In cases where the application's KDE-ness isn't already part of the name, I think it should just be left out. That information can be conveyed through other means. Neither "KDE Dolphin" nor "KFileManager" sound very attractive, and it also exacerbates the "so you can only use it in KDE?" problem you guys are trying to solve. The foremost priority should be attractive and descriptive names; a 'k' in the name should only be seen as icing on the cake, and should only be done when it doesn't come at the cost of attractiveness and descriptiveness. (For the record, I also support Matthias Ettrich's idea of adding the application's function as part of the name where it's not immediately obvious, so "Okular Document Viewer", "Dolphin File Manager" and "Amarok Music Player", say, while Konsole could probably just stay "Konsole".)
Obviously you guys are the ones in charge, these are just my thoughts with the hope that you will find them convincing.
'"put either KDE or a K in the application name" policy'
there is no such policy.
there was a very clear trend to do so in the past, mostly as a way to keep the namespace clear (so one of our binaries didn't conflict with one from somewhere else) but also as a way to identify. this was very pre-marketing-ourselves-very-clearly, but wasn't a horrible thing.
people who got hung up on it were .. well .. i never did understand getting distracted by something as insignificant. :)
still, in recent times names like 'plasma', 'dolphin', 'solid', 'phonon' and 'gwenview' are more common and even apps that did things like capitalize a 'k' in an odd place ('amaroK') have since normalized their names nicely.
there will continue to be 'k' names, in part because of namespacing but also in part because of culture and habit. no harm, no foul, really. it will remain up to the author(s) to name their work as they want to. sometimes a 'k' name might even make sense (KDevelop being a good example there in my mind)
as for putting the full 'KDE' as a prefix, that's no different than calling something a 'Toyota Prius' or a 'Microsoft Zune'. most of the time they are referred to as a Prius or a Zune or whatever, but there are times when the umbrella brand is added for clarity or marketing purposes (or pedantry in conversation :). (in the above examples the umbrella brand is also the company's name, but that's not always the case)
I believe you.
But the article seems to imply the opposite:
"Especially for applications that are not well known as KDE applications and are not easily identified as such by a "K" prefix in their name, it is recommended to use "KDE" in the product name."
Since we agree, all I suggest is to make this somewhat clearer then. :-)
So what is the actual policy here? Don't actually make e.g. "KDE Dolphin" be the name of the application, but use that form when talking about it if the name doesn't have a 'k' in it? Which form is going to show up in application launchers? Is it going to be different from the one used in press releases and news & reviews?
And yeah, KDevelop was another example I thought of, but forgot to mention, of names where the K prefix actually works. I seem to notice that it tends to be the K-prefix names which consist of a single word which work well, and it's the ones with multiple words which are clunky, but I'm not sure if this works as a general rule.
"So what is the actual policy here? Don't actually make e.g. "KDE Dolphin" be the name of the application, but use that form when talking about it if the name doesn't have a 'k' in it?"
You got it right. If you talk about a KDE application you CAN refer to it as e.g. KDE Dolphin, but also just Dolphin, or Dolphin built on top of the KDE Platform.
Yeah, perhaps the text is a little ambiguous there.
I see it as:
KDE + App name in launchers - generally, no (not in the KDE workspaces at least or most apps will be KDE something)
KDE + App name on the Dot - again, probably no (we're talking about KDE stuff)
KDE + App name on some other news websites (if not in the context of talking about KDE stuff in general) it might be helpful to link the app with us
KDE + Okular when your Windows using buddy asks you what that cool viewer app you're using is - yes, that would be helpful because then they might not only check out Okular but also remember it's produced by KDE and see what else we have to offer
Apps that have the K prefix tend to be associated with KDE anyway, in some circles, so it's probably less likely to use KDE with those.
Why do you call Okular's name "needlessly obscure"? Isn't it a coincidence that Eye of Gnome uses a similar metaphor?
On a nearly unrelated side-note, the name "Okular" (in the meaning of "eyepiece") looks like an implicit vote for bug 148527.
Great, I think the general direction is good. I always experienced KDE as a community and it is good to emphasize this. What does not fit for me is the naming in some cases. KDE Software Compilation does not really sound like good marketing. I do not have a better idea at the moment, but I would really suggest looking for something else. Maybe start a competition for it.
The other thing is the KDE Plasma Desktop, Netbook... It seems too long. I would suggest shortening it to KDE Desktop, KDE Netbook. I know Plasma is very important, but rather as a technology for programmers. Users do not need to know. Calling it KDE Plasma Desktop is a bit confusing too.
"I always experienced KDE as a community"
same here
"KDE Software Compilation does not sound really like good marketing"
it's not a name that will be actively marketed as a strong brand. this is quite intentional. see the above threads on this.
"The other thing is the KDE Plasma Desktop, Netbook ... It seems to long. I would suggest to shorten it to KDE Desktop, KDE Netbook."
unfortunately we already have the "KDE == Desktop" thing going on, in no small part because the 'D' in 'KDE' was 'Desktop'. we are trying to create perceived separation between our workspace offerings (desktop, netbook, etc) and the app framework and individual applications KDE creates.
the reason is that far too often, even today, people assume things like "Krita probably works only on KDE" (we get this on the irc channels all the time, a place you'd expect people who might actually know these things to go!). of course, the sentence is broken in a few ways: Krita works great in all kinds of places, and "KDE" isn't just a desktop environment.
to create the needed separation so that people will feel more comfortable using the KDE dev platform (to create software that runs everywhere, not only in KDE workspaces!) and KDE applications outside of a KDE workspace, we're giving our workspaces names.
we can't refer to it as 'Desktop' in public (ambiguous) so it would become "KDE Desktop" and too often just shortened to "KDE" again.
given the historical as well as the going forward ambiguities, a name was needed. one was found. :)
"I know, Plasma is very important, but rather as a technology for programmers. Users do not need to know."
and users need to know about KOffice, KDE or any of the other similar names? :) Plasma is an identifier, and though you may have come across it as a technology framework, it's used as an accurate disambiguator from both "KDE" and "those other desktop/netbook/mobile UIs out there".
"Calling it will KDE Plasma Desktop is bit confusing too."
how so?
About "KDE Plasma Desktop, Netbook":
Actually it is "KDE Workspaces", which contain "Plasma Desktop" or "Plasma Netbook". KDE Desktop is exactly what was intended to be replaced ;)
> The other thing is the KDE Plasma Desktop, Netbook ... It seems to long. I would suggest to shorten it to KDE Desktop, KDE Netbook. I know, Plasma is very important, but rather as a technology for programmers. Users do not need to know.
Plasma happens to be the name of the technology, but I think this is quite usable for marketing too.
"KDE Plasma allows you to create fluent interfaces".
the word 'plasma' already implies this sort of, and I guess that wasn't a coincidence. What we can end up with is, getting users to demand a "KDE Plasma" interface for their phone/tv/mediabox and laptops. :-)
Congratulations to the KDE marketing team on producing such a well thought out and coherent rebranding plan for KDE that neatly balances logic and emotion. Perhaps the emphasis in the article is on logical consistency but ultimately brands are emotional concepts in the general “mindspace” that serve to short-circuit the effort of too much logical decision making in a world saturated with choice.
On that basis, it is particularly good news is that Plasma is prominent as the workspace brand. What better name for a vibrant and animated user interface? I'm sure the passion of the developers can be projected into such a brand. So no need to be concerned that the term Plasma was going to be publicly deprecated as technological jargon. And the idea of KDE Applications is good, though inevitably the short-hand will be “KDE Apps” (so why not a “KDE App” logo to provide a visual short-hand to identify these in any “App store” to avoid using clumsy phrases as KDE Amarok?).
Where I think logic has got the better of emotion is the term KDE Software Composition 4.4 to formally avoid terms like KDE 4.4. The rationale for this fails to recognise that many great brands are overloaded terms covering both the product and organisation (Coca-Cola, Google, Volkswagen to mention just three) and people automatically deal with the ambiguity without a thought, its always clear from the context. But what people can't get their tongue or head round is something like the BMW Saloon Car 323. It just doesn't work. Whilst the BMW 323 Saloon Car or the BMW 3-series Saloon Car are just fine, though the short-hand will always be the BMW 323 or BMW 3-series (and I'm appearing stupidly pedantic reminding the reader that the context here is the car not the company). So, as we all know that “out there” it's going to remain KDE 4.4, why not just tweak the branding to the KDE 4.4 Software Compilation so the long-hand brand is consistent with the short-hand brand, avoiding the need to “correct” anyone (which would seem very petty if ever done publicly).
The problem with the overloaded brand in the case of KDE is that people do not actually automatically understand the difference between a KDE app and the whole Software Compilation (Compilation, not Composition, by the way).
Your car analogy is off in that regard, it's not about BMW being a manufacturer and a car brand, it would actually be BMW being the standard for roadways, for tyres, for cars, basically the whole "environment" of the car. So you get people to avoid buying a BMW because they think they can't drive it on their Honda Streets anymore. I know, analogies can suck hard. ;) KDE has been the infrastructure (development platform) and the chrome (apps, desktop), and the relationship between them needs to be communicated clearly so it doesn't hurt adoption, especially when thinking of multi-platform use of KDE apps and dev platform.
Another aspect where this lack of distinction has hurt is the reception of KDE 4.0. While many applications have been quite good from the beginning (okular, dolphin, to name just two) people started the KDE 4.0 workspace, were disappointed by its lack of maturity and didn't make this distinction between desktop workspace and applications -- it's all "KDE" after all.
The re-branding of the KDE Software Compilation is there to make clear that it's really about the whole package of individual components (such as Plasma, Okular, Kontact). It also makes it easier to market those applications separately while taking advantage of the well-established and strong KDE umbrella brand.
Thanks for correcting me on Software Compilation - I'll get used to it soon.
I can't argue with your logic and the intention behind it, particularly the need to create a perceptual separation between the Plasma Workspace and the KDE Apps. If the KDE Software Compilation is more an internal community release concept than a publicly marketed brand, as Aaron Seigo suggests in his very recent blog, then that task will be easier.
I also agree with your point on the importance of promoting KDE Apps as multi-platform. So perhaps it would make more sense to refer to the KDE Platform as the KDE Framework (shades of Qt Developer Frameworks here) so the phrase "multi-platform KDE App" unambiguously refers to the underlying operating systems. It would also avoid the phrase "multi-platform KDE Platform" which is the sort of verbal clumsiness the marketing team are trying to eliminate.
Personally I always thought why the Argentinian people did not exploit more the K Desktop Environment for joking. You know: the Néstor Kirchner and Cristina Fernández government has been known as the "K Government". Argentina spoke a lot about the "K style", "K deputies", "K senators", and so. What about giving those jokers a complete and truly amazingly named "K Desktop Environment"...? WOW! This joke really went sour when the inner circle of Kirchner and his wife began to be called "Entorno K" (the K Environment). So, you only have to add "Desktop"!
Now, this is coming to an end, but it's just in time for argentinian people. Watch this if you can read Spanish:
... all this marketing hogwash in the reasoning of the original article is horrible but, at least, they're involuntarily funny because I can now run around, screaming loudly "KDE is people!"... [1] ;->
[1]
I mean I can understand calling the KDE desktop and its applications together a "Software Compilation"... but when talking about KDE to friends I doubt I am EVER going to go to the trouble of putting the SC on the end, particularly when referring to just the environment as opposed to including the apps as well. Do you see my point?
Surely KDE Platform + KDE Plasma Desktop = "KDE"?
And KDE + KDE Applications = KDE Software Compilation ?
hmm o.O
"when talking about KDE to friends I doubt I am EVER going to go to the trouble of putting the SC on the end, particularly when referring to just the environment as opposed to including the apps as well"
no, in fact you should be talking about the individual pieces. e.g. okular, or the plasma desktop, or the kde dev platform.
if you want to refer to the "whole chunk of stuff i got at once that contains all sorts of stuff" then you can refer to the KDE software compilation.
we really want people to be talking about and more aware of KDE as a modular set of software suites.
interesting how terms change how we talk about things, no? :)
"Surely KDE Platform + KDE Plasma Desktop = "KDE"?"
nope. there's also okular and several dozen other apps that come in the SC, and many more KDE apps that don't come in the SC. this is why we're changing the name, because it's so confusing.
you evidently think "KDE" is a desktop environment. it's not. the desktop environment is one thing we make, but only one thing and not even the absolute central thing. the KDE team hasn't really helped people to understand that due to the communication in the past.
that's why we're changing things, and using 'KDE' exclusively for the community as an umbrella brand for everything we do.
People will call the software KDE forever but never mind :)
Btw I think it hasn't been a problem for any open source project that the community was called the same as the software.
Many campaigns have tried to change how people call things and as we see in stadium renaming schemes, you can pay millions to have the stadium called one thing and people will still call it what they always did.
I just don't see myself saying the whole three-word name when three letters have been enough for over a decade.
Will you say "I use Mandriva 2010-KDE4.3" or
"I use Mandriva 2010 - KDE Plasma Desktop" ?
I don't think I ever said or wrote K Desktop Environment before, although I always say GNU/Linux when talking with tech people to differentiate the kernel from the generic desktop name.
With the known Linux distinctions, as well as the difference between Free and Gratis and open source/free software, the free software community has proven that they are clueless when it comes to these things and can't think further than their noses.
For that reason, I trust you folks know what you are doing. The 4.x demanded a leap of faith so that the future is secure for some time to come. It wasn't an easy choice but it was the right one.
I understand why you want to do it and it makes sense to some degree, but asking people to change habits is hard, and asking them to go from three letters to three words seems like an even harder battle.
I don't really like the concept of branding. It sounds very stupid to say, you are KDE but DE stands for nothing (oh, DE=Germany of course).
As a brand KDE developed from a community project of (potentially) coding software enthusiasts into a project clouded by artificial announcement gibberish which often clashed with reality. Branding became less language neutral. It is quite a bit self-ironic that the branding now says "KDE is no longer software created by people, but people who create software."
"The expanded term "K Desktop Environment" has become ambiguous and obsolete, probably even misleading. Settling on "KDE" as a self-contained term makes it clear that we... providing...applications and platforms... on the desktop, mobile devices, and more." - "It is not a limited effort to solve the problem of having a desktop GUI for Linux anymore."
So in other words, you give up on the desktop and become a technology collection. Now, maybe some persons may need this to better justify their KDE involvement in a business environment. In less diplomatic terms it means: We give up on the KDE Linux desktop, mission failed.
Concerning the new naming convention you probably notice that it is unsystematic. So the next step is to rename Kword as "KDE Word" or "KDE Lettera or Lettera".
"KDE applications can run independently of the KDE workspace and can freely be mixed with applications based on other platforms and toolkits."
gets it wrong..
Where is the user in all this? Originally the implicit idea was to develop for a user scenario vision, and communication was characterized by interaction with users and their expectations. As developers rule (and any user is a potential future developer or contributor to other projects which form part of the desktop experience), of course non-contributors got less rank. Now the user is completely out of focus and it is "people who create software". You wonder if they ever eat their own dogfood.
Maybe that was the cardinal problem with the KDE4 release cycle. You develop great toolkits and platforms to be used for (later) potential purposes. But no one has a user scenario in mind to which the technology development is instrumental, the solution. Here one early mockup got it right. It is really about a solution to a problem "Browse the web", "mail Mary", not technology and applications per se.
You've covered a lot of ground (good to see you've thought about it) so I'll try and take your points one at a time:
"It sounds very stupid to say, you are KDE but DE stands for nothing"
- The K has officially stood for nothing for a long time
- There are plenty of organisations that have names that used to stand for something, but which have moved away from those expansions because they no longer represent what the organisations do:
--3M (a lot more than mining and minerals nowadays)
--AT&T (beyond telephone and telegraph)
--BP (allege that they are beyond petroleum ;-)
--SGI (claim to be more than graphics...)
- It is more stupid imho to pretend that KDE only produces a desktop environment
"We give up on the KDE Linux desktop, mission failed"
- I don't :-)
- Plasma Desktop is one of our greatest achievements and I personally prefer to have my KDE applications running in my KDE workspace
- The KDE Platform is also pretty darn cool, this helps us recognise that
- There are plenty of people using KDE applications because they are just the best out there (Amarok, K3B for a couple) but who don't want to use our workspaces. Separating the two helps us get this message across
"the next step is to rename Kword as "KDE Word" or "KDE Lettera or Lettera""
- That's entirely up to the application teams
- You have to balance the gain from changing a name with the loss from losing a recognised name. For things like KWord I personally don't think a name change is worthwhile
""KDE applications can run independently of the KDE workspace..." gets it wrong.."
- I don't really understand your point here
- I agree that the problem is that KDE Applications are not thought of as being independent of the workspace - this is a big driver behind the changes we have made
"Where is the user in all this?" (I won't quote your whole paragraph)
- Hopefully they are in a position of having greater clarity about who we are and what we can offer
"Now the user is completely out of focus and it is "people who create software""
- The "people who create software" thing is a catchy, one line summary. It is not complete. We debated who is in the KDE community and decided that really it is something that people almost define for themselves. You could be part of KDE if:
-- You contribute code to anything in the KDE SC
-- You contribute code to any free software app using the KDE Platform
-- You contribute documentation, art, how-tos, feedback, promotion efforts, bug reports, comments on dot stories :-) There's probably a lot more
Re your last paragraph:
- You have to develop the underlying technology before you can use it. If you read the blogs of the people doing this I think you'll see that they have visions of how this translates into real, usable tools
- One of the things in being an open community is that we talk about things as we're doing them, partly to get more people interested in contributing so things can happen faster. We don't develop stuff in secret and then announce it in a blaze of glory.
Stuart already shared many of the thoughts that sprung into my mind as well, but I'd like to add a bit to this:
"Where is the user in all this?"
Where they've always been .. the people who use the software we write, the people many of us keep in mind while writing that software.
"Originally the implicit idea was develop for a user scenario vision, and communication was characterized by interaction with users and their expectations."
That hasn't really changed, and not at all with this announcement. Go look at the Plasma Netbook effort and consider how it was created from the start.
I do think you are confusing "how we are going to be using our brands" with "here is our marketing strategy". The marketing and communications direction is a much bigger discussion than the definition of the top level brand names. This is more like part of the glossary than the textbook on our marketing.
To be even more clear: we aren't going to be heading around in interviews and what not trumpeting "KDE is us!" as if that's the important message we need to get across; we'll be communicating about the technology we create just as we always have been, but hopefully more effectively and clearly. This set of changes and definition just helps us understand where the term "KDE" fits into that communication. It is not a change in focus.
"Now the user is completely out of focus and it is "people who create software"."
We've been people who create software for 13 years now. That's not new. The term "KDE" is now reserved for the community and organization and not the products and that's the only real change here. In fact, this is getting the naming into line with the reality for the last many, many years (far before 4.0 even). The idea that this reflects a change in what we've been doing "down in the trenches" is completely off-base.
"You wonder if they ever eat their own dogfood."
We do, many of us use a near-continuous (daily, weekly) build of KDE from svn, in fact. So, that question is now answered :)
"But no one has a user scenario in mind to which the technology development is instrumental, the solution. "
Thankfully that isn't true, and as we continue to measure actual results over time this will become increasingly apparent.
(Not sure what any of this had to do with the actual branding thing, though. :) | https://dot.kde.org/comment/108669 | CC-MAIN-2016-40 | refinedweb | 6,686 | 70.84 |
RL-ARM User's Guide (MDK v4)
#include <rtl.h>
int closesocket (
int sock); /* Socket descriptor */
The closesocket function closes an existing socket and
releases the socket descriptor. Further references to sock
fail with the SCK_EINVALID error code.
The argument sock specifies a socket descriptor returned
from a previous call to socket.
In blocking mode, which is enabled if the system detects an RTX
environment, this function waits until the socket is closed. In
non-blocking mode, you must call the closesocket function again if
the error code SCK_EWOULDBLOCK is returned.
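The non-blocking retry pattern described above can be sketched as follows. This is an illustrative Python model, not Keil's C API: the constant values and the fake socket factory are stand-ins invented for the sketch.

```python
# Placeholder constants -- NOT the real RL-TCPnet values.
SCK_SUCCESS = 0
SCK_EWOULDBLOCK = -1

def make_fake_closesocket(pending_polls):
    """Return a closesocket() stub that reports EWOULDBLOCK for
    `pending_polls` calls before finally succeeding, simulating a
    socket that is still busy shutting down."""
    state = {"polls": pending_polls}
    def closesocket(sock):
        if state["polls"] > 0:
            state["polls"] -= 1
            return SCK_EWOULDBLOCK
        return SCK_SUCCESS
    return closesocket

def close_until_done(closesocket, sock):
    """Non-blocking usage pattern: keep calling closesocket()
    until it stops reporting EWOULDBLOCK."""
    while True:
        result = closesocket(sock)
        if result != SCK_EWOULDBLOCK:
            return result

fake_close = make_fake_closesocket(pending_polls=3)
print(close_until_done(fake_close, sock=1))  # -> 0
```

In real firmware you would typically poll like this from the main loop rather than spinning in a tight while loop.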
The closesocket function is in the RL-TCPnet library. The
prototype is defined in rtl.h.
Note:
The closesocket function returns the following result:
See also: bind
XBMC: build a remote control
Take control of your home media player with a custom remote control running on your Android phone.
XBMC is a great piece of software, and can turn almost any computer into a media centre. It can play music and videos, display pictures, and even fetch a weather forecast. To make it easy to use in a home theatre setup, you can control it via mobile phone apps that access a server running on the XBMC machine over Wi-Fi. There are loads of these available for almost all smartphone systems.
We’ve recently set up an XBMC system for playing music, and none of the XBMC remotes we found really excel at this task, especially when the TV attached to the media centre is turned off. They were all a bit too complex, as they packed too much functionality into small screens. We wanted a system designed from the ground up to just access a music library and a radio addon, so we decided to build one ourselves. It didn’t need to be able to access the full capabilities of XBMC, because for tasks other than music, we’d simply switch back to a general-purpose XBMC remote control. Our test system was a Raspberry Pi running the RaspBMC distribution, but nothing here is specific to either the Pi or that distro, and it should work on any Linux-based XBMC system provided the appropriate packages are available.
The first thing a remote control needs is a user interface. Many XBMC remote controls are written as standalone apps. However, this is just for our music, and we want it to be accessible to guests without them having to install anything. The obvious solution is to make a web interface. XBMC does have a built-in web server, but to give us more control, we decided to use a separate web framework. There's no problem running more than one web server on a computer at a time, but they can't run on the same port.
There are quite a few web frameworks available. We’ve used Bottle because it’s a simple, fast framework, and we don’t need any complex functions. Bottle is a Python module, so that’s the language in which we’ll write the server.
You’ll probably find Bottle in your package manager. In Debian-based systems (including Raspbmc), you can grab it with:
sudo apt-get install python-bottle
A remote control is really just a layer that connects the user to a system. Bottle provides what we need to interact with the user, and we’ll interact with XBMC using its JSON API. This enables us to control the media player by sending JSON-encoded information.
We’re going to use a simple wrapper around the XBMC JSON API called xbmcjson. It’s just enough to allow you send requests without having to worry about the actual JSON formatting or any of the banalities of communicating with a server. It’s not included in the PIP package manager, so you need to install it straight from GitHub:
git clone
cd python-xbmc
sudo python setup.py install
This is everything you need, so let’s get coding.
The basic structure of our program is:
from xbmcjson import XBMC
from bottle import route, run, template, redirect, static_file, request
import os

# The first argument is the XBMC JSON-RPC endpoint URL (elided here);
# the other two are the username and password of XBMC's web server.
xbmc = XBMC("", "xbmc", "xbmc")

@route('/hello/<name>')
def index(name):
    return template('<h1>Hello {{name}}!</h1>', name=name)

run(host="0.0.0.0", port=8000)
This connects to XBMC (though doesn’t actually use it); then Bottle starts serving up the website. In this case, it listens on host 0.0.0.0 (which is every hostname), and port 8000. It only has one site, which is /hello/XXXX where XXXX can be anything. Whatever XXXX is gets passed to index() as the parameter name. This then passes it to the template, which substitutes it into the HTML.
You can try this out by entering the above into a file (we’ve called it remote.py), and starting it with:
python remote.py
You can then point your browser to localhost:8000/hello/world to see the template in action.
@route() sets up a path in the web server, and the function index() returns the data for that path. Usually, this means returning HTML that’s generated via a template, but it doesn’t have to be (as we’ll see later).
As we go on, we’ll add more routes to the application to make it a fully-featured XBMC remote control, but it will still be structured in the same way.
The XBMC JSON API can be accessed by any computer on the same network as the XBMC machine. This means that you can develop it on your desktop, then deploy it to your media centre rather than fiddle round uploading every change to your home theatre PC.
Templates – like the simple one in the previous example – are a way of combining Python and HTML to control the output. In principle, they can do quite a bit of processing, but they can get messy. We'll use them just to format the data correctly. Before we can do that, though, we have to have some data.
Getting data from XBMC
The XBMC JSON API is split up into 14 namespaces: JSONRPC, Player, Playlist, Files, AudioLibrary, VideoLibrary, Input, Application, System, Favourites, Profiles, Settings, Textures and XBMC. Each of these is available from an XBMC object in Python (apart from Favourites, in an apparent oversight). In each of these namespaces there are methods that you can use to control the application. For example, Playlist.GetItems() can be used to get the items on a particular playlist. The server returns data to us in JSON, but the xbmcjson module converts it to a Python dictionary for us.
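Under the hood, each of these namespaced calls is serialized into a JSON-RPC 2.0 request posted to XBMC's server. A rough sketch of the envelope a wrapper like xbmcjson might build for Playlist.GetItems (the exact field values here are illustrative, and the helper name is my own):

```python
import json

def jsonrpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 envelope like the one posted to XBMC."""
    return json.dumps({
        "jsonrpc": "2.0",   # protocol version is always "2.0"
        "method": method,   # e.g. "Playlist.GetItems"
        "params": params,   # method-specific arguments
        "id": request_id,   # lets the client match request and response
    })

payload = jsonrpc_request("Playlist.GetItems",
                          {"playlistid": 0, "properties": ["title", "artist"]})
print(payload)
```

The server replies with a matching JSON object whose `result` member is what the wrapper hands back to us as a dictionary.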
There are two items in XBMC that we need to use to control playback: players and playlists. Players hold a playlist and move through it item by item as each song finishes. In order to see what’s currently playing, we need to get the ID of the active player, and through that find out the ID of the current playlist. We’ve done this with the following function:
def get_playlistid():
    player = xbmc.Player.GetActivePlayers()
    if len(player['result']) > 0:
        # Assumes the first player (id 0) is the one we care about
        playlist_data = xbmc.Player.GetProperties({"playerid": 0, "properties": ["playlistid"]})
        if len(playlist_data['result']) > 0 and "playlistid" in playlist_data['result'].keys():
            return playlist_data['result']['playlistid']
    return -1
If there isn’t a currently active player (that is, if the length of the results section in the returned data is 0), or if the current player has no playlist, this will return -1. Otherwise, it will return the numeric ID of the current playlist.
Once we’ve got the ID of the current playlist, we can get the details of it. For our purposes, two things are important: the list of items in the playlist, and the position we are in the playlist (items aren’t removed from the playlist after they’ve been played; the current position just marches on).
def get_playlist():
    playlistid = get_playlistid()
    if playlistid >= 0:
        data = xbmc.Playlist.GetItems({"playlistid": playlistid,
            "properties": ["title", "album", "artist", "file"]})
        position_data = xbmc.Player.GetProperties({"playerid": 0, "properties": ["position"]})
        position = int(position_data['result']['position'])
        return data['result']['items'][position:], position
    return [], -1
This returns the current playlist starting with the item that’s currently playing (since we don’t care about stuff that’s finished), and it also includes the position as this is needed for removing items from the playlist.
The API is documented at. It lists all the available functions, but it is a little short on details of how to use them.
Bringing them together
The code to link the previous functions to a HTML page is simply:
@route('/juke')
def index():
    current_playlist, position = get_playlist()
    return template('list', playlist=current_playlist, offset=position)
This only has to grab the playlist (using the function we defined above), and pass it to a template that handles the display.
The main part of the template that handles the display of this data is:
<h2>Currently Playing:</h2>
% if playlist is not None:
% position = offset
% for song in playlist:
<strong> {{song['title']}} </strong>
% if song['type'] == 'unknown':
Radio
% else:
{{song['artist'][0]}}
% end
% if position != offset:
<a href="/remove/{{position}}">remove</a>
% else:
<a href="/skip/{{position}}">skip</a>
% end
<br>
% position += 1
% end
As you can see, templates are mostly written in HTML, but with a few extra bits to control output. Variables enclosed in double curly braces are output in place (as we saw in the first 'hello world' example). You can also include Python code on lines starting with a percentage sign. Since indents aren't used to delimit blocks, you need a % end to close any code block (such as a loop or if statement).
This template first checks that the playlist isn’t empty, then loops through every item on the playlist. Each item is displayed as the song title in bold, then the name of the artist, then a link to either skip it (if it’s the currently playing song), or remove it from the playlist. All songs have a type of ‘song’, so if the type is ‘unknown’, then it isn’t a song, but a radio station.
The /remove/ and /skip/ routes are simple wrappers around XBMC controls that reload /juke after the change has taken effect:
@route('/skip/<position>')
def index(position):
    print xbmc.Player.GoTo({'playerid': 0, 'to': 'next'})
    redirect("/juke")

@route('/remove/<position>')
def index(position):
    playlistid = get_playlistid()
    if playlistid >= 0:
        xbmc.Playlist.Remove({'playlistid': int(playlistid), 'position': int(position)})
    redirect("/juke")
Of course, it’s no good being able to manage your playlist if you can’t add music to it.
This is complicated slightly by the fact that once a playlist finishes, it disappears, so you need to create a new one. Rather confusingly, playlists are created by calling the Playlist.Clear() method. This can also be used to kill a playlist that is currently playing a radio station (where the type is unknown). The other complication is that radio streams sit in the playlist and never leave, so if there’s currently a radio station playing, we need to clear the playlist as well.
These pages include a link to play the songs, which points to /play/<songid>. This page is handled by:
@route('/play/<id>')
def index(id):
    playlistid = get_playlistid()
    playlist, not_needed = get_playlist()
    if playlistid < 0 or playlist[0]['type'] == 'unknown':
        xbmc.Playlist.Clear({"playlistid": 0})
        xbmc.Playlist.Add({"playlistid": 0, "item": {"songid": int(id)}})
        xbmc.Player.open({"item": {"playlistid": 0}})
        playlistid = 0
    else:
        xbmc.Playlist.Add({"playlistid": playlistid, "item": {"songid": int(id)}})
    remove_duplicates(playlistid)
    redirect("/juke")
The final thing here is a call to remove_duplicates. This isn’t essential – and some people may not like it – but it makes sure that no song appears in the playlist more than once.
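The article never shows remove_duplicates() itself. Here is one possible sketch, assuming the module-level xbmc client used throughout; the duplicate-detection logic is split into a pure helper so it can be read on its own, and keying duplicates on the file property is my own choice, not something the article specifies.

```python
def duplicate_positions(items, key="file"):
    """Return the playlist positions whose `key` value has already
    appeared earlier in the list (i.e. the repeats, not the originals)."""
    seen, dupes = set(), []
    for position, item in enumerate(items):
        value = item.get(key)
        if value in seen:
            dupes.append(position)
        else:
            seen.add(value)
    return dupes

def remove_duplicates(playlistid):
    """Drop repeated playlist entries. We remove from the end backwards
    so earlier positions stay valid as items are deleted."""
    data = xbmc.Playlist.GetItems({"playlistid": playlistid, "properties": ["file"]})
    items = data["result"].get("items", [])
    for position in reversed(duplicate_positions(items)):
        xbmc.Playlist.Remove({"playlistid": playlistid, "position": position})
```

Note this also removes a repeat of the currently playing song; if you wanted to allow that, you could skip position 0 when collecting duplicates.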
We also have pages that list all the artists in the collection, and list the songs and albums by particular artists. These are quite straightforward, and work in the same basic way as /juke.
The UI still needs a bit of attention, but it’s working.
Adding functionality
The above code all works with songs in the XBMC library, but we also wanted to be able to play radio stations. Addons each have their own plugin URL that can be used to pull information out of them using the usual XBMC JSON commands. For example, to get the selected stations from the radio plugin, we use:
@route('/radio/')
def index():
    my_stations = xbmc.Files.GetDirectory({"directory": "plugin://plugin.audio.radio_de/stations/my/",
        "properties": ["title", "thumbnail", "playcount", "artist",
                       "album", "episode", "season", "showtitle"]})
    if 'result' in my_stations.keys():
        return template('radio', stations=my_stations['result']['files'])
    else:
        return template('error', error='radio')
This includes a file that can be added to a playlist just as any song can be. However, these files never finish playing, so (as we saw before) you need to recreate the playlist before adding any songs to it.
Sharing songs
As well as serving up templates, Bottle can serve static files. These are useful whenever you need things that don’t change based on the user input. That could be a CSS file, an image or an MP3. In our simple controller there’s not (yet) any CSS or images to make things look pretty, but we have added a way to download the songs. This lets the media centre act as a sort of NAS box for songs. If you’re transferring large amounts of data, it’s probably best to use something like Samba, but serving static files is a good way of grabbing a couple of tunes on your phone.
The Bottle code to download a song by its ID is:
@route('/download/<id>')
def index(id):
    data = xbmc.AudioLibrary.GetSongDetails({"songid": int(id), "properties": ["file"]})
    full_filename = data['result']['songdetails']['file']
    path, filename = os.path.split(full_filename)
    return static_file(filename, root=path, download=True)
To use this, we just put a link to the appropriate ID in the /songsby/ page.
We’ve gone through all the mechanics of the code, but there are a few more bits that just tie it all together. You can see for yourself at the GitHub page:. | https://www.linuxvoice.com/xbmc-build-a-remote-control/ | CC-MAIN-2022-33 | refinedweb | 2,219 | 61.46 |
I updated my index.template.html by adding the scripts you mentioned above. It worked correctly in IE11.
DEMO:
@dobbel just follow the ApexCharts documentation and enable compatibility with IE in the quasar.conf.js file
new examples added!
New examples added!
Hi Patrick, I am learning from your examples how to implement ApexCharts, but with the new Quasar 2.0.0 framework I couldn't install it, so I looked at what had changed and in the end I found the new way. I want to pass the information along to help other programmers: in the boot file "apex.js", "Vue" is not recognized any more, so the new way is this:
file name apex.js
<------------------------implement this code------------------------------------->
import VueApexCharts from 'vue3-apexcharts'

export default ({ app }) => {
  app.component('apexchart', VueApexCharts)
}
<----------------------------------end----------------------------------------------->
And you need to have "vue3-apexcharts" installed; if you use the normal package it marks an error too, so that is very important. I also forgot to add the boot file in quasar.config.js and wasted too much time before remembering it. I hope you can add my information to your page to help other programmers. See you, and thanks for your excellent page.
I have had some experiences trying to build a chat application—most of the times I failed, or I had some other important stuff to attend to. A few weeks back I wanted to check out some services on Microsoft Azure, and Azure Redis Cache got my attention. As crazy as the name may sound, the service itself is simple, and runs on the Redis protocol; read more about the protocol here. So Azure Redis Cache is a service offering by Microsoft Azure, providing you with:
Most of you might have heard the name Redis in the context of in-memory caching services. However, I am more interested in the publish-subscribe model of Redis, much more than in the in-memory caching services and utilities. The pub-sub model allows us to:
As subscribers, we can then preview what the message was all about. That is exactly where we can perform the most complex operations to slice and dice the message and present it in a neat, user-friendly manner.
I would expect that you know the basics of Redis and some basics of C# programming. I will be using a WPF application, because I faced a few problems using a simple console application; I will explain what went wrong and why I chose WPF, otherwise this would have been my .NET Core article. Anyhow, we can now dig into some basics of the technology we are going to cover and then proceed with the development of the application.
Note: The images in the article look smaller than they are, right-click and open them in new tab for preview them properly.
Before I ask you to get a free Azure account, note that you can use your own locally deployed Redis instances too. So if you want to install the Redis instance locally, consider downloading and setting up the server for the rest of article to work. Visit the downloads page, and continue from there. Most of the steps required to setup the server locally are already explained pretty neatly on the internet and you can check a few blogs to find out how to set it up. Also, the connection string to be passed would then depend on your local instance and not the one I will be using, which is from Microsoft Azure of course.
Anyways, in my instance, I will be going ahead to setup an instance of Azure Redis Cache online in the Azure Portal to access the service and then proceed with the development part. On Azure, search for Redis Cache and you will be provided with the Redis Cache instance to be created on the platform.
Figure 1: Redis Cache service on Azure.
There would be other services there too, but you only need to select this one—unless of course trying out other services. Once on the form, fill in with the details that you find suitable and then create the instance of the service on Azure.
Figure 2: Redis Cache creation form on Azure.
Azure will take a while, and will let you know once the service has been deployed so you can start using it. A few things to consider: Azure does support clustering of Redis servers. I will not use that here, because it would only increase the complexity. Secondly, since we won't be using the caching, we don't necessarily need the clustering and sharding services. However, storage persistence and other features might be useful; please read the Redis vs Apache Kafka section for more on this topic.
You would need a few settings from the Azure portal, that you will use to configure the Redis clients.
We are going to use StackExchange.Redis library, which is also recommended by Microsoft to use with Azure. I will cover that part in the next sections when we start working on the project itself.
What we are focusing on is a simple chat application where everyone has a username, and they communicate using the usernames to forward the messages to a specific node on the communication grid. There are a lot of other solutions that we could use, such as Socket.IO in a Node.js app, and much more. I found that Redis can be helpful, as it can take care of a lot of critical problems itself—problems like mapping sockets to channels and channels to sockets for faster iteration and processing.
For a sample chat application with Socket.IO and Node.js, consider visiting their own web page, where they explain the overall process of doing so.
In our current workflow, we want to achieve the following and support chat on the platform for multiple users. Please remember, this is just a demonstration: a lot of features are missing and not implemented, because I did not have time and they were out of scope. But cherish what you got!
Figure 3: Preview of the applications running and service 3 users in the chat.
Now, without further ado, let's dive deeper into the development of this project and why I thought this could be done using Redis, and especially through WPF.
Let's now start developing the WPF application. I chose WPF when the console application did not work out quite well. WPF is a good platform for developing graphical applications, instead of using the Windows Forms application development platform, for several reasons that I do not want to talk about. Here, I am using WPF as the client for my domain, where the server is hosted on Azure.
If I divide the entire client app in sections, we will capture the following aspects that are to be developed.
All of this can easily be done using a simple console application. I did start with a .NET Core application acting as a client for the Redis service hosted on Azure.
But the problem with a console application is that it doesn't distinguish between when you are entering a message and when you are reading a string from the console. The entire program was working quite fine, until I brought in 4 users to talk randomly. Upon receipt of a message, the consoles began to show me output that I no longer wish to remember.
The code for that was like this, and yes, it did work quite fine for 2 users. You can give it a try for sure, no problem at all. :-)
class Program
{
    private static ConnectionMultiplexer redis;
    private static ISubscriber subscriber;

    static void Main(string[] args)
    {
        connect();
        if (redis != null && subscriber != null)
        {
            Console.WriteLine("Connected to Azure Redis Cache.");
            Console.WriteLine("Enter the message to send; (enter 'quit' to end program.)");
            string message = "";
            while (message.ToLower() != "quit")
            {
                message = Console.ReadLine().Trim();
                if (!string.IsNullOrEmpty(message) && message.Trim().ToLower() != "quit")
                {
                    sendMessage(message);
                }
            }
        }
        else
        {
            Console.WriteLine("Something went wrong.");
        }
        Console.WriteLine("Terminating program...");
        Console.Read();
    }

    static void connect()
    {
        redis = ConnectionMultiplexer.Connect("<your-connection-string>");
        subscriber = redis.GetSubscriber();
        subscriber.Subscribe("messages", (channel, message) =>
        {
            Console.WriteLine($"{channel}: {(string)message}.");
        });
    }

    static void sendMessage(string value)
    {
        subscriber.Publish("messages", value);
    }
}
Sorry for the lack of commentary on the code; that was just the code I was trying out. The errors in the code made me think of an alternate platform to write the code on. Apart from UWP, WPF was the one that came to my mind, and I started porting the code from .NET Core to the .NET Framework's WPF.
Also, to those who figured it out: Yes, the code was different and used a global channel to send messages to everyone. This WPF app uses a different scheme and forwards messages to only specific users.
The difference between the console app and the WPF app enabled me to support a different approach to the chat messaging. In this one, I was able to send the messages to specific users as needed. The previous console app relied on the fact that we can use a single channel and broadcast the messages to everyone. The change was made to make things a bit clearer. As you have seen in the image above, our app is capable of sending the messages only to the users that are the intended recipients. Redis supports multiple subscribers per channel too, and we can definitely use that for "group chatting".
The idea behind the app was for every member to have a separate channel to listen to. Think of it as their own inbox, into which everyone can dump a message. This way, we can separate the members by who listens to which channel. A rough sketch of the architecture is something like this,
Figure 4: App workflow overview.
Now you can visualize how each user can send a message while listening only to the stream they subscribe to, which is identified by the username they specify on app start.
Lastly, as a secondary feature, I wanted to make sure that we are able to send messages to the right recipient. One way to do that would be to use a list of active clients and then forward messages to the ones we are interested in. That is too complicated in Redis. Why? Because Redis doesn't expose a list of active users, their channels, or similar information that could be used to look up an active user and send a message to them, unless of course you are an admin of the service, which defeats the purpose of a chat app by giving users access to such sensitive information. We could do this if we used a middleware that takes care of user profiles, authentication, or contact lists.
In that case, we would use the user's ID as the channel name, such as "user_1234", and then enable the messaging. That can help us send a message to anyone who is known on the platform. Once again, this does not guarantee that the message was delivered unless we take care of which channels we write to, for example by selecting the user from a list provided by our server that maintains our contacts. Because we are merely publishing the message on a channel, there is no TCP-like acknowledgement of the message. There is, however, a response in the Redis protocol that you can capture and use: in the StackExchange.Redis library, it is the return value of the Publish (or PublishAsync) function. It tells you how many subscribers received the message; in our case it should be one or two, as required by your business logic. If the return value is zero, we can conclude that the recipient is offline or does not exist. I will explain this in a minute.
In other words, we would have to manage a database that tracks whom we contacted and the usernames of the contacts we wish to reach. Redis won't be of any use in that part.
With the explanation out of the way, it is time to start coding. Start off by creating a new WPF project; you can do that in Visual Studio itself. Once you have the WPF application up and running, modify the project to mimic what we need.
The front end of the application was kept simple and neat. For responsiveness, and since we are using WPF, I used grid templates and made the application responsive through the Grid control instead of StackPanel. The front end of the app looks like this:
Figure 5: The app's UI, shown for demonstration.
I used the following XAML code to build the front end; you can download the archives or clone the project from GitHub to try it out.
<Window x:Class="RedisDemo.Chat.Wpf.MainWindow"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:local="clr-namespace:RedisDemo.Chat.Wpf"
mc:Ignorable="d">
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="45" />
<RowDefinition />
<RowDefinition Height="80" />
</Grid.RowDefinitions>
<StackPanel Orientation="Horizontal" Margin="5">
<TextBlock Text="Username: " Margin="0, 8, 5, 0" />
<TextBox Padding="2" IsEnabled="False" Height="25" Margin="0, 0, 10, 0" Width="250" Name="username" TextChanged="username_TextChanged" />
<Button IsEnabled="False" Content="Connect" Height="30" HorizontalAlignment="Right" Name="setUsernameBtn" Click="setUsernameBtn_Click" Padding="5" />
</StackPanel>
<Grid Grid.Row="1">
<ListView Name="messagesList">
<ListView.ItemTemplate>
<DataTemplate>
<StackPanel Margin="10, 0, 0, 10">
<TextBlock Text="{Binding Sender}" FontWeight="SemiBold" FontSize="15" />
<TextBlock Text="{Binding Content}" />
<TextBlock Text="{Binding ReceivedAt}" Foreground="Gray" />
</StackPanel>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>
</Grid>
<Grid Grid.Row="2">
<Grid.RowDefinitions>
<RowDefinition />
<RowDefinition Height="25" />
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition />
<ColumnDefinition Width="70" />
</Grid.ColumnDefinitions>
<TextBox IsEnabled="False" TextChanged="message_TextChanged" VerticalAlignment="Top" Height="25" Margin="10" Name="message" />
<Button IsEnabled="False" VerticalAlignment="Top" Margin="8" Content="Send" Height="30" Name="sendMessageBtn" Click="sendMessageBtn_Click" Padding="5" Grid.Column="1" />
<TextBlock Grid.Row="1" />
</Grid>
</Grid>
</Window>
I used Grid controls and created columns and rows to support responsive scaling of the app's UI. The benefit is a cleaner look when the application is resized. Aside from that, I did use a few hardcoded Margin values and some other tweaks, but those were needed to maintain some consistency. The full-screen view on my monitor looks like this:
Figure 6: Full screen view of the application's UI.
The major part was to manage the code and write the backend. My primary interest was to make the code cleaner and better in terms of performance and efficiency. I used the async/await keywords wherever possible, and utilized global fields instead of short-lived local variables that would need to be created again on every function call.
Tidbits of the code follow. The primary settings involved in establishing the connection, and the final function that closes the connections, are as follows:
// Variables
private ObservableCollection<Message> collection;
private ConnectionMultiplexer redis;
private ISubscriber subscriber;
public MainWindow()
{
InitializeComponent();
Closing += MainWindow_Closing;
// Set the variables etc.
setThingsUp();
}
// Set the variables and the focus of user.
private void setThingsUp()
{
collection = new ObservableCollection<Message>();
username.IsEnabled = true;
setUsernameBtn.IsEnabled = false;
username.Focus();
// Set the list data source;
messagesList.ItemsSource = collection;
}
Now that we have the basic workflow defined, our users will see the username text box, which has the initial focus. They will enter their username and establish a connection to the Azure Redis Cache service using the connection string we have. Once the user inputs the username, they can press the "Connect" button to establish the connection; the code is as follows:
private async void setUsernameBtn_Click(object sender, RoutedEventArgs e)
{
// Prevent resubmission of the request.
setUsernameBtn.IsEnabled = false;
// Establish connection, asynchronously.
redis = await ConnectionMultiplexer.ConnectAsync("<connection-string>");
if (redis != null)
{
if (redis.IsConnected)
{
subscriber = redis.GetSubscriber();
await subscriber.SubscribeAsync(username.Text.Trim().ToLower(), (channel, value) =>
{
string buffer = value;
var message = JsonConvert.DeserializeObject<Message>(buffer);
message.ReceivedAt = DateTime.Now;
// This function runs on a background thread, thus dispatcher is needed.
Dispatcher.Invoke(() =>
{
collection.Add(message);
});
});
// Enable the messaging buttons and box.
message.IsEnabled = true;
message.Focus();
Title += " : " + username.Text.Trim();
username.IsEnabled = false;
}
else
{
MessageBox.Show("We could not connect to Azure Redis Cache service. Try again later.");
setUsernameBtn.IsEnabled = true;
}
}
else
{
setUsernameBtn.IsEnabled = true;
}
}
The basic checks make sure that we were able to connect to the service. If so, we proceed and subscribe to our username. One thing to note is how the username is turned into a channel name.
That is done using this code:
await subscriber.SubscribeAsync(username.Text.Trim().ToLower(), ...
The code is pretty neat, and the StackExchange.Redis authors did a great job of implicitly converting RedisChannel to string and vice versa; that is why I use a string value instead of a RedisChannel object. Inside the same function, we provide a lambda that gets called whenever a message is published on the channel we subscribe to.
string buffer = value;
var message = JsonConvert.DeserializeObject<Message>(buffer);
message.ReceivedAt = DateTime.Now;
// This function runs on a background thread, thus dispatcher is needed.
Dispatcher.Invoke(() =>
{
collection.Add(message);
});
I used a special class to contain the message. This class holds the properties (the sender, the content, and the time of receipt) of the message that the user has just received. The Json.NET library is used to serialize and deserialize the message on both ends. The Message type is defined as follows:
public class Message
{
public int Id { get; set; }
public string Sender { get; set; }
public string Content { get; set; }
public DateTime ReceivedAt { get; set; }
}
Pretty neat and simple, right? The last point is that we use the Dispatcher.Invoke function because our subscriber listens for messages with a lambda that runs on a background thread; UI updates therefore need to be executed through the Dispatcher. And that is pretty much it.
Now, to send a message, I used the following code to publish it to the channel of the user I am interested in, addressed using the @username notation.
// Send the message
private async void sendMessageBtn_Click(object sender, RoutedEventArgs e)
{
var content = message.Text.Trim();
// Get the recipient name, e.g. @someone hi there!
var recipient = content.Split(' ')[0].Replace("@", "").Trim().ToLower();
// Create the message payload.
var blob = new Message();
blob.Sender = username.Text.Trim();
blob.Content = content;
// Send the message.
var received = await subscriber.PublishAsync(recipient, JsonConvert.SerializeObject(blob));
// If no recipient got the message, show the error.
if (received == 0)
{
MessageBox.Show($"Sorry, '{recipient}' is not active at the moment.");
}
message.Text = "";
}
Again, this is the same code discussed before. We capture the username of the person we are interested in and then write the message on their channel. One thing to note here is that Redis could carry a plain string message on the channel too. We are not doing that; instead, we are writing a serialized object of our Message type, which gets deserialized on the other end so the message shows up on screen.
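The recipient-parsing and payload logic above can be sketched in a language-neutral way. This is a Python sketch of the same idea; the function name is illustrative, while the field names mirror the C# Message class:

```python
import json

def parse_outgoing(text, sender):
    """Split '@recipient message...' into a channel name and a JSON payload."""
    text = text.strip()
    # The first token names the recipient, e.g. "@someone hi there!"
    recipient = text.split(' ')[0].replace('@', '').strip().lower()
    # The whole text (including the @mention) stays in Content, as in the WPF app.
    payload = json.dumps({'Sender': sender, 'Content': text})
    return recipient, payload

channel, blob = parse_outgoing('@Alice hi there!', 'bob')
```

Publishing `blob` on `channel` then corresponds to the `subscriber.PublishAsync(recipient, JsonConvert.SerializeObject(blob))` call in the C# code.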
Also, take a look here,
var received = await subscriber.PublishAsync(recipient,...
We capture the number of clients that received our message. If that number is zero, we show a message stating that the user is offline. In a complete chat platform this would not mean that the user does not exist, merely that the user was not online. Have a look at this behavior here:
Figure 7: An error message showing that the recipient is offline.
These are a few of the common aspects I wanted you to take a look at. Redis has so far been useful in many cases, and it has helped us utilize a platform to build a complex application environment.
Now that we have got all the code running, we can run the program and see how it works. The code has already been shown in the image above, and the problem was also shown in the image in the previous section. You can take a look at it there.
However, one thing I want you to consider is that you must always dispose of the resources you are using, and in the case of this application there is more to do than just a dispose call. You need to unsubscribe from the Redis channel too; I do that in the Closing event handler, which in essence does something like this:
private async void MainWindow_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
await subscriber.UnsubscribeAsync(username.Text.Trim().ToLower());
redis.Close();
}
This is optional, but it can help a lot. One benefit is that Publish will return zero once the user has signed out of the application, so other users can tell that the recipient is offline. Otherwise, it is up to Redis to consider a socket closed after a while, when it runs a check.
Apache Kafka is a great platform for streaming data; it focuses more on streaming messages through a queue. A queue, similarly, can have multiple subscribers and multiple publishers. The primary difference is that in Kafka a message can persist in the queue after it is read, whereas in Redis it is cleared out.
Redis is also a feature-rich platform and supports fast message forwarding. Kafka has to manage and persist the data, and thus can be forgiven some storage and persistence overhead, but Kafka is still fast and provides low latency in many cases. For speed benchmarking, please consider reading a benchmark blog post on the topic.
As for data persistence, Redis supports data storage, where it writes the cached data to disk. The channel queues, however, are cleared as soon as a message is sent to the recipients. While I was trying out the application, I got the following chart on Azure, telling me that none of the messages were stored: each time a message was captured, it was delivered directly and not cached.
Figure 8: Azure portal showing the messages being missed in the channel broadcast.
I would need to replicate this scenario in Apache Kafka and see how it works there, and I might consider writing a comparison, but since I do not have any information on how Kafka treats this, I have less to say about it.
Consider reading this thread on Stack Overflow for a bit more as well.
In this article we merely scratched the complex surface of building a chat application. There are a lot of features missing; you can definitely check the source code on GitHub and apply some changes to it. I might also work on a few changes soon, as I feel they are necessary.
Grouping users and assigning them as members of a group is a great feature and should be implemented as well. Besides this, an application I once worked on had a tabbed-conversation feature. I really liked it, but I did not have time for that complex a chat application.
Apart from these, you need to think about the model Redis supports. How many connections can it support? How many channels can it support? Moreover, how many channels per connection, or connections per channel, can it handle? Also, what are Azure's limitations in this regard? All of these questions can greatly help you understand whether you should consider Redis or not. Also, take a look at the Redis or Apache Kafka section above to find a few ways in which Redis can help, or when Kafka might be a better solution for you!
Lastly, this was never meant to be a great chat application; it is merely a trial of the Redis service and of Azure's hosted Redis Cache implementation. I hope this has helped you in a few ways, and that you learnt how Redis pub/sub works through a demonstration. If anything is unclear, reach out to me in the comments section below, or install and set up everything on your local machine and try it out.
Can't wait to see what you come up with!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
I found this by googling:
After installing a new translator via qApp->installTranslator() you need
to emit some kind of a languageChanged signal. Then you connect this
signal to all your GUI...
we have rebooted the server. Can you check if it works now?
I think external xml files do not work. Can you incorporate the data into your swf?
With Web Runtime? Never.
Do you initiate the object used to get the data once globally or each time separately?
How about a HTML5 Qt app?
No need to learn C++, just Javascript. Or go for Qt Quick it's even easier!
There's no audio streaming for MP3 on Flash Lite 3.
:(
Please check the documentation @
Can't take the credit... Thanks go to Seto!
Check
and also:...
The users do it themselves.
At which state exactly do you get that error message?
Not from Flash Lite.
Are you sure you haven't maybe copy/pasted the filename from somewhere and it includes some extra, perhaps invisible, characters in the end?
Web runtime widgets are meant for simple, website-like apps and data driven applications. That do not require huge amount of processor power from the device. Qt is a full featured, cross platform...
Try going to Tools/Options/Tool chains and remove all manual configurations.
This is an ancient thread, I know, but the issue still arises from time to time and I have not found this solution...
There is either something wrong or missing in your wgz package. Please unzip the one included in the Web TV template and try to see what you have done differently.
Did you check the prerequisites section of the document?
"Prerequisites
• Basic knowledge of web technologies such as XML, JavaScript™, and Cascading Style Sheets (CSS)
• Understanding of WRT
• A...
Please read the documentation and the page where the example can be found. If you have questions after that, I am happy to help ;)
Please try again. It should work now.
What is the error message?
This forum is intended for developers of mobile software. For consumer issues, please try for example Twitter @NokiaHelps
import flash.external.*;
var url = "";
ExternalInterface.call("widget.openURL", url);
import flash.external.*;
ExternalInterface.call("window.close");
This series of workshops has been a bit of a marathon. There has been so much content and so much helpful input. Before I started, I think I would have reckoned I was an intermediate React dev - now, I think I am an intermediate React dev.
My understanding of the whole React ecosystem has been picked up, restructured and placed on new and stronger foundations. I have better clarity about the patterns, questions and expectations I should have of React. I'm finding I'm able to troubleshoot my code and am better equipped to know what to search for when I need a solution to a problem. I have primarily been a backend dev, so this has been really a helpful enhancement of my understanding and appreciation of React.
This is workshop 6 and is all about performance. How can we make our applications more performant?
The secret about performance:
"Performance is all about less code - less lines and less cycles for your browser to loop over that code."
Super Simple Start to ESModules
We are probably never going to be free of bundlers, but we are going to use dynamic module imports. These are available natively in the browser, but with more than about 100 modules we lose the efficiencies of this approach. Luckily, bundlers understand dynamic imports.
To do this with React the syntax is:
const SmileyFace = React.lazy(() => import('./smiley-face'))
We need to wrap the lazy component in a <React.Suspense> boundary and give it a fallback.
We can also prefetch. One way is a magic comment, which instructs Webpack to load the chunk as prefetched code. The other way is to use the useEffect() hook to load the component after the page has rendered.
Cool to see the coverage tool in Chrome Devtools - it shows the proportion of JS and CSS that have been downloaded and used on the page.
Frustratingly, when I tried to implement this on a NextJS project, I found that React.Suspense is not yet supported by ReactDOMServer, so this can't currently be used in an SSR context.
When you have a calculation that is being run in the render cycle, this doesn't need to run if the calculations inputs haven't changed.
We have useMemo, to make sure we don't recalculate when we don't need to. We pass in dependencies, a lot like the useEffect hook.
function Distance({ x, y }) {
  const distance = React.useMemo(() => calculateDistance(x, y), [x, y])
  return (
    <div>
      The distance between {x} and {y} is {distance}.
    </div>
  )
}
It was hard to see where the performance gains here were before we came back to the full group. I understood the concept but couldn't prove it in the dev tools - now I can see how it works. If there is a large calculation, implement useMemo.
This is only useful for synchronously calculated values.
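The same idea can be shown outside React: memoize a pure calculation so repeated calls with unchanged inputs do no extra work. This is plain Python and purely illustrative, not React itself:

```python
from functools import lru_cache

call_count = {'n': 0}

@lru_cache(maxsize=None)
def calculate_distance(x, y):
    call_count['n'] += 1          # track real computations
    return ((x ** 2) + (y ** 2)) ** 0.5

calculate_distance(3, 4)          # computed
calculate_distance(3, 4)          # served from cache, no recomputation
```

Like useMemo's dependency array, the cache key here is the pair of inputs: same inputs, no recalculation.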
React exists in its current form (in large part) because updating the DOM is the slowest part of this process. By separating us from the DOM, React can perform the most surgically optimal updates to the DOM to speed things up for us big-time.
A React component can re-render for any of the following reasons: its props change, its internal state changes, the context values it consumes change, or its parent re-renders.
If you have a really long list, there is a cost to calculating whether each item needs to be rendered or not. Even if you're memoizing, this is a potential issue.
A well-accepted solution to this is called windowing - you only render what is in view and maybe a bit beyond. Everything else is set with empty divs with the correct height. This makes things way faster!
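The windowing arithmetic itself is simple. Here is a minimal sketch (illustrative only; libraries like react-window handle measurement, overscan, and scroll events for you):

```python
def visible_range(scroll_top, viewport_height, item_height, total_items, overscan=2):
    """Indices of the only items worth rendering for the current scroll position."""
    first = max(0, scroll_top // item_height - overscan)
    last = min(total_items - 1,
               (scroll_top + viewport_height) // item_height + overscan)
    return first, last

# 10,000 rows of 40px in a 400px viewport: render roughly 15 rows, not 10,000.
```

Everything outside the returned range is represented only by empty space of the correct height.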
This was a cool library that works with long lists and grids of data - this stops the creation of the DOM elements until they are in view or close to being in view.
react-window is a good library for this.
This was another demonstration on the importance of co-located state. If state isn't required globally, then we should feel free to co-locate that state.
If not every component needs all of the data, we should separate the context in to separate domains.
Another approach is to create slices of the state and memoizing that. Then, passing the memoized component will stop a re-render when unrelated context is updated.
Context triggers a re-render every time the value object updates. The problem here is that sometimes the value doesn't change but is a new instance of that value.
There are a few different ways to optimise this, such as memoizing the value passed to the provider with useMemo, or splitting the state and the dispatch function into separate providers.
Time, it was a-moving on - so this was a demo on the usefulness of the React profiling component.
You can read up on the capabilities of the <React.Profiler /> component in the React docs.
Here's a basic usage example:
<App> <Profiler id="Navigation" onRender={onRenderCallback}> <Navigation {...props} /> </Profiler> <Main {...props} /> </App>
The callback is called with the following arguments: id, phase ("mount" or "update"), actualDuration, baseDuration, startTime, commitTime, and interactions.
It's important to note that unless you build your app using react-dom/profiling and scheduler/tracing-profiling, this component won't do anything.
Kent has written about production performance monitoring here: "React Production Performance Monitoring" | https://www.kevincunningham.co.uk/posts/kcd-react-performance/ | CC-MAIN-2020-29 | refinedweb | 853 | 64.3 |
In the article Calculating Reverse CRC, I described an algorithm to create a four-byte patch sequence that can be inserted into a message to give the message a desired CRC. A reader asked if and how the algorithm could be adapted to return a patch sequence containing only ASCII characters. This article details how to extend the original algorithm to achieve this.
Problem
As described in the original article, one of the properties of CRC is that you can find and append a four-byte sequence to any message and get any desired resulting CRC hash.
For a given input CRC and desired CRC, we can easily check whether the four patch bytes are ASCII characters. The problem is that for many combinations of input CRC and desired CRC, the byte sequence includes non-ASCII characters. The question is how to handle the case when we don't get ASCII characters in our patch sequence.
Observations
Before we go into the solution, we can make some observations that may be beneficial. The binary patch sequence is four bytes, or 32 bits. As we want to limit the patch sequence to ASCII characters, each patch byte needs to be limited to the characters 0-9, A-Z, and a-z. The number of bits we can encode with these characters is roughly 6 (there are actually 62 different character values, but that is close enough to 6 bits for our needs). Using four characters would only give us about 24 bits of information, so we can't represent all possible patch sequences. By adding two additional characters to our patch sequence, we get about 36 bits, which is sufficient.
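These bounds are easy to verify with a couple of lines:

```python
CHARSET = 62                      # 0-9, A-Z, a-z

# Four readable characters cannot cover all 32-bit patch values...
assert CHARSET ** 4 < 2 ** 32     # 14,776,336 < 4,294,967,296

# ...but six characters can, with room to spare.
assert CHARSET ** 6 >= 2 ** 32    # 56,800,235,584 >= 4,294,967,296
```

The gap between 62^4 and 2^32 is why two extra characters are needed.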
Solution
The problem is that the CRC algorithm and the reverse CRC algorithm operate on 8-bit values and can't be changed. So we need to find a way to add the two extra patch characters without changing the nature of the original CRC algorithm.
This can fortunately be done relatively easy. Assume that our patch sequence is six characters represented by the names A-F:
ABCDEF
We know the initial CRC (before the patch bytes are added to the CRC) as well as the desired CRC. We also know that the reverse algorithm operates on four bytes. We can use this knowledge to break the problem in two by introducing a temporary CRC.
If we pick two characters for the substring AB and append them to the original CRC, we get a new CRC value. That new CRC value can then be used in our original algorithm to get the patch bytes CDEF. If we pick the two characters AB carefully, we can get patch bytes CDEF that are ASCII only.
So the problem is now reduced to finding two characters AB such that the patch sequence CDEF contains only ASCII characters.
This problem can be tough to solve analytically, but as we are only trying to find two characters, there are only 62 * 62 = 3844 possible combinations. This is a very small number for trial and error on a computer, and the best approach is a brute-force search, as shown in the following pseudo code:
for (char a in ValidCharset) {
for (char b in ValidCharset) {
Crc32 crc32(msgCrc); // Temporary CRC object
crc32.append(a);
crc32.append(b);
if (ContainsValidChars(crc32.findReverse(desiredCrc))) {
// Return the found patch bytes....
}
}
}
Usage
Finding a patch sequence with a limited character set that generates a desired CRC can be useful, for example, for finding weak passwords in systems that use CRC for encryption. Other uses may include cases where you want to CRC any message but for some reason have constraints on what the generated CRC can be, and the only option for patching is to use printable characters.
Source Code
The complete source code of a CRC implementation that includes a method for finding a six readable character patch sequence for the reverse CRC, as well as some test cases can be found here.
This is an awesome article, thank you. I do however have a question ..
I may be misunderstanding the implementation so please excuse me.
I have a string : abcd - that has a CRC32 hash of 0xccc6120d
testManyAscii returns, as expected 100 results.
but the string returned cannot be used in place of the abcd string I had.
Is this down to the poly in place being different, or am i using the implementation incorrectly?
I added the method to set up the tables with a different polynomial because I had a feeling that it may be needed for you. I tried quickly to find any info on the web but didn't find anything. I'm sure with some more searching it will come up. Maybe there are some other minor differences as well (e.g. the xor on the crc before new data is added.)
Hi, thank you so much for the response and also the addition of the Poly Method :)
Having done a fair bit of reading about the topic, I have found that it is the first and last integers aren't XOR'd for my purposes.
I am facing the challenge of getting this into C# as well as finding the poly that i should be using, so it's a lot to take in at the moment.
Thank you again for such an interesting article :)
Thanks :) Let me know how it goes.
Hi,
I feel really bad for asking as I really appreciate what you have applied in this article, but I am struggling with altering the method to not XOR the first and last CRC32 integers .. can you offer a pointer at all?
It looks like the CRC value is not inverted in PST CRC. so remove all the ~ characters in the file (which inverts the CRC) and initialize crc_ to 0 (the latter is probably not needed for just reversing CRC). Not sure if you tried that already.
Hello, Thank you so much for this, I am learning a great deal from the code, and having a great time doing it. You will have to excuse my ignorance here, I genuinely apologise in advance (or is it too late? :) )
Initialising crc_ to 0 instead of 0xffffffff was tried earlier, I have left that in place.
Removing the ~ from the file stepped me out of my knowledge comfort zone. It threw an error around the destructor function stating that the function already has a body.
If you have a password and expected CRC I can try to see how the algorithm needs to be modified. I didn't find any documentation that described it detailed enough to be sure what needs to be done, but if I can see some expected output it may help to get you on the right track.
Thanks!! I've been playing around a fair bit today, but the results have varied significantly.
An example would be :
Password: abcd
CRC: 0xccc6120d
Sorry for some delay. I updated the zip file with the option to not invert the CRC. You need to comment out the define in the beginning of the Crc32.cpp file. When you do, you get the result you want from your example. Hope this helps.
Hi, there is absolutely no need for you to apologise!! You're helping me enormously!!
I'm trying it at the moment to see if the ASCII strings can be substituted in place of abcd and it isn't working .. for example ... one of the strings returned is from the CRC 0xccc6120d is Bze8PW .... So in an attempt to understand how things go together, I applied the Bze9PW string as a password and it came back with 0x98C67913 as it's CRC hash .. is it me being stupid ?
Try to replace main() with the following (and comment out the INVERSE_CRC define:
#include <stdio.h>
#include <string.h>
int main(void) {
Crc32 crc32;
char startStr[] = "abcd";
crc32.append(startStr, strlen(startStr));
UInt32 crcOrig = crc32.get();
printf("CRC of: %s = 0x%.8x\n", startStr, crcOrig);
crc32.set(0); // Reset CRC
const char* patch = crc32.findReverseAscii(crcOrig);
printf("Patch for 0x%.8x: %s\n", crcOrig, patch);
crc32.set(0); // Reset CRC
crc32.append(patch, strlen(patch));
UInt32 crcPatch = crc32.get();
printf("CRC of patch: %s = 0x%.8x\n", patch, crcPatch);
}
This should output:
CRC of: abcd = 0xccc6120d
Patch for 0xccc6120d: 1sUE2k
CRC of patch: 1sUE2k = 0xccc6120d
Let me know if you get the same result. If you use the same Crc32 object (as the example above) you need to reset the CRC between calculations.
I cannot find any other way to say this than you are a wonderful human being.
You have taught me so very much. a HUGE thank you.
Thanks. I'm happy to help.
Hi Daniel!
THANKS A TON! For the guide as well as the source code in C. I haven't used it yet, but I was pleasantly surprised by your prompt response! I will try this and get back to you once I am able to implement this in my work.
Thanks again! You're really helpful!
Thanks a lot. I tried code and working well.
But it works only for ASCII string. What to do with UNICODE string?
One question is what the desired behavior is, e.g. perhaps allowing characters of a particular language. Another is what type of encoding to use, e.g. UTF-8, UTF-16 etc. What use case did you have in mind?
These are the most common benefits that you get by installing an effective data recovery software. Although, you will get several other benefits by installing the software for data recovery, but the above mentioned are the most common that you will surely get by installing them. winzip activation code
I appreciate everything you have added to my knowledge base.
If you have a password and expected CRC, I can try to see how the algorithm needs to be modified.
need more information
ORACLE AMERICA, INC., Plaintiff,
v.
GOOGLE INC., Defendant.

Case No. 3:10-cv-03561 WHA

GOOGLE'S OPPOSITION TO ORACLE'S RULE 59 MOTION FOR A NEW TRIAL
TABLE OF CONTENTS

I. INTRODUCTION
II. ARGUMENT
    A. The Verdict is Not Contrary to the Clear Weight of the Evidence.
    B. Oracle is not entitled to a new trial based on the fact that Android apps can run on Chrome OS.
        1. Background
        2. The fact that Chrome OS can run Android apps provides no basis for granting a new trial.
    C. The Court's evidentiary and trial management rulings were legally correct and within the bounds of the Court's broad discretion, and Oracle fails to show substantial prejudice as a result of any of the rulings it challenges.
III. CONCLUSION
TABLE OF AUTHORITIES

Federal Cases

Richardson v. Marsh, 481 U.S. 200 (1987)
Ruvalcaba v. City of Los Angeles, 64 F.3d 1323 (9th Cir. 1995)

Federal Rules

FED. R. CIV. P. 26
FED. R. CIV. P. 61
I. INTRODUCTION

Oracle's motion for a new trial asks the Court to undo all of the hard work undertaken by the Court and the parties, and set aside the unanimous verdict of the ten jurors who sat through this two-week retrial of Google's fair use defense. A new trial is an extraordinary remedy and "[t]he trial court may grant a new trial only if the verdict is contrary to the clear weight of the evidence, is based upon false or perjurious evidence, or to prevent a miscarriage of justice." Molski v. M.J. Cable, Inc., 481 F.3d 724, 729 (9th Cir. 2007) (quoting Passantino v. Johnson & Johnson Consumer Prods., 212 F.3d 493, 510 n.15 (9th Cir. 2000)). None of those circumstances are present here. Oracle's motion fails to justify the extraordinary relief Oracle seeks.
First, far from being surprised by an announcement at the 2016 Google I/O conference, Oracle has known since at least the fall of 2015 that Google had created functionality (the Android Runtime for Chrome or "ARC") allowing Android apps to be run on the Chrome Operating System ("Chrome OS"), which, in turn, runs on desktops and laptop computers. Google timely disclosed this functionality in its discovery responses and produced related source code, and three of Oracle's experts addressed it in their expert reports. Thereafter, however, the Court limited the scope of the retrial to Android smartphones and tablets, and did not allow Oracle to broaden the scope to include additional products such as Chrome OS with ARC. Thus, there was no reason for Google to supplement its discovery responses on this topic.
Second, Oracle's challenges to several of the Court's evidentiary rulings fail because the challenged rulings were well within the Court's broad discretion to admit or exclude evidence at trial, and because Oracle fails to show that it was substantially prejudiced as a result of any of the rulings. The Court properly limited the scope of the retrial to issues raised during the first trial and excluded evidence of new products that were not at issue in the first trial. The Court also properly excluded unfairly prejudicial and irrelevant evidence related to witness Stefano Mazzocchi, who was not even identified in Oracle's Rule 26(a) disclosures and whose testimony could have been properly excluded altogether based on Oracle's failure to comply with the disclosure rules. Additionally, the Court properly excluded three Oracle documents on hearsay
grounds. None of these rulings was an abuse of discretion, and none of them resulted in the type of prejudice that would warrant a new trial.

Finally, Oracle's argument that the Court abused its discretion by bifurcating the trial is a non-starter; Oracle itself stipulated to a bifurcated trial after the last trial concluded. In any event, Oracle has not shown and cannot show substantial prejudice from bifurcation, since extensive evidence of Android's financial performance and its alleged impact on Java SE and Java ME was admitted at trial.

Google's fair use defense has been tried twice. Oracle fails to show that it is entitled to a third bite at the apple. Accordingly, and for the reasons described in detail below, the Court should deny Oracle's motion.

II. ARGUMENT
A. The Verdict is Not Contrary to the Clear Weight of the Evidence.

As this Court held in denying Oracle's Rule 50(a) motion for judgment as a matter of law, Oracle "is wrong in saying that no reasonable jury could find against it. Under the law as stated in the final charge and on our trial record, our jury could reasonably have found for either side on the fair use issue." ECF 1988 (Order Denying Rule 50 Motions) at 1. Oracle's Rule 59 motion offers no new argument or authority supporting its assertion that the verdict was against the clear weight of the evidence. Mot. at 1. Thus, for the reasons stated in Google's opposition to Oracle's Rule 50(a) motion, the Court's order denying that motion, and Google's opposition to Oracle's Rule 50(b) motion filed herewith, Oracle is not entitled to a new trial on this basis. ECF 1935, 1988, 2010.
B. Oracle is not entitled to a new trial based on the fact that Android apps can run on Chrome OS.

1. Background
During supplemental discovery, Oracle sought to broaden the case in various ways. Following motion practice, however, the Court limited the scope of the retrial. ECF 1479 (Order re Google's Mot. to Strike) at 1. Thus, Google had no duty to supplement discovery regarding what Oracle now incorrectly (and self-servingly) calls "Marshmallow/Chrome OS," which is an update of the fully disclosed ARC (Android Runtime for Chrome) functionality.
As Oracle notes in its motion, it sought discovery into all Google products incorporating the asserted 37 Java SE API packages at issue. See Mot. at 5-6. In response, Google identified ARC and ARC Welder, among other products. Id. at 6. ARC stands for App Runtime for Chrome, which allows Android applications to run on a device based on Chrome OS, an operating system developed by Google that is separate and different from Android. ARC Welder provided related functionality to allow Android app developers to test and publish those apps to Chrome OS. Oracle acknowledges that Google produced source code for ARC and ARC Welder, as well as other products such as OpenJDK-based Android, Brillo, Google Mobile Services, and Play Store. Mot. at 6. It also concedes that Google presented fact witnesses with knowledge of ARC, including Felix Lin and Anwar Ghuloum. See id. at 6-7. Thus, by way of its discovery efforts, Oracle knew long before the close of discovery that Google provided functionality that allowed Android apps to run on Chrome OS.
Well aware of this functionality, Oracle's experts addressed it in their opening reports. Mr. Zeidman, an Oracle technical expert who addressed the extent to which the 37 Java SE API packages were allegedly incorporated into various different versions of Android as well as other Google products, devoted 17 paragraphs to ARC. See Decl. of Karwande in Support of Google's Opp. to Oracle's Rule 59 Mot. for a New Trial ("Karwande Decl."), Ex. 1 (Zeidman Rep.) ¶¶ 126-43. At the outset of that discussion, Mr. Zeidman noted: "The App Runtime for Chrome (ARC) is a runtime environment created by Google that allows the Android runtime to be used on Chrome OS devices. One of the primary uses for this feature is the ability to run Android apps on the Chrome OS platform." Id. ¶ 126 (footnote omitted). Similarly, Dr. Kemerer, another Oracle expert, devoted a section of his report to ARC:
I have reviewed and my technical staff, at my direction, have reviewed source code files and directories regarding Google's App Runtime for Chrome (ARC) product. When ARC is installed on a Google Chromebook computer, it is able to run Android apps, and thus uses the Android Runtime. Detail about the copying of code from the 37 Java API packages into the Android Runtime in Chrome is set forth in the report of Mr. Zeidman. I have conferred with Mr. Zeidman and believe his method of analysis is sound and his results regarding such code are accurate. Therefore, ARC, operating in connection with Chrome and the Android runtime, necessarily reproduces the code and structure, sequence and organization of the 37 Java API packages.
ECF 1560-10 (Kemerer Opening Rep.) ¶ 55 (emphasis added; footnotes omitted). And Oracle's damages expert Mr. Malackowski also devoted a stand-alone section to ARC, writing as follows:

Google announced the App Runtime for Chrome (ARC) project at the June 2014 I/O Developer Conference. ARC allows Google to bring Android Apps to the Chrome operating system. This means Google is now using Android to occupy the original, traditional market of the Java Platform. In April 2015, Google released an ARC Welder Chrome app that allows a user to run Android Apps on Chrome OS or using the Chrome web browser. ARC Welder allows developers to more easily test Android Apps.
ECF 1560-12 (Malackowski Opening Rep.) ¶ 172 (emphasis added; footnotes omitted). In sum, based on the discovery provided by Google, Oracle's experts contended that Google provided functionality (i.e., ARC) that used the 37 Java SE API packages to allow Android apps to run on Chrome OS, and that Google therefore competed with Sun/Oracle in the traditional marketplace for Java SE. That is the same basis upon which Oracle now argues it is entitled to a new trial.
In addition to discussing ARC, Oracle's experts addressed a variety of other issues that were not a part of the first trial. For example, Dr. Schmidt, Dr. Kemerer, and Mr. Zeidman addressed alleged infringement of Java SE 6 and 7, rather than limiting themselves to the Java SE 1.4 and 5 versions that were the subject of the first trial. And Dr. Kemerer and Mr. Malackowski discussed various Google products beyond smartphones and tablets, including Android Wear, Android TV, Android Auto, and Brillo.
See ECF 1560-10 (Kemerer Opening Rep.) ¶¶ 47-50, 188, 208, 218 and App. G; Karwande Decl., Ex. 4 (Schmidt Opening Rep.) ¶¶ 97-106; id., Ex. 1 (Zeidman Opening Rep.) ¶¶ 45, 73, 106, 120-25. See ECF 1560-12 (Malackowski Opening Rep.) ¶¶ 167-68 (Wear), 169-70 (TV), 171-72 (Auto), and 173 (Brillo).

In response, Google filed a motion to strike portions of the opening expert reports of Oracle's experts. ECF 1454. Following oral argument, the Court ruled that the upcoming trial
will proceed as if we were back in the original trial, but now with the instructions on fair use
handed down by the court of appeals. No new copyrighted works will be allowed. ECF 1479 at
1. The Court therefore limited the asserted works to Java SE 1.4 and 5.0, and also delineated
which specific versions of Android were at issue for purposes of the fair use retrial. Id. at 1-2.
Moreover, the Court made clear that code outside of Android's use in phones and tablets, such as
ARC, would also not be part of the retrial: "Among possibly others, our trial will not include implementations of Android in Android TV, Android Auto, Android Wear, or Brillo." Id. at 2 (emphasis in original). Oracle's subsequent expert reports did not address functionality such as
ARC, and Oracle did not attempt to offer any evidence of ARC at trial.
c.
Oracle now complains that Google recently announced an update to Chrome OS's ability to run Android apps. On the second day of Google's annual developer conference (called Google I/O) in 2016, Google announced that it was bringing the Play Store to Chromebooks, meaning users will now be able to access Android apps and games not just on a phone or a tablet, but also on a laptop running Chrome OS. As this description makes clear, the laptop
in this example is using the Chrome OS operating system, not the Android operating system. To
allow Android apps to run on a Chromebook, Google is not replacing Chrome OS with the
Android operating system; instead, the Android application framework is being placed into a
Linux container that runs on top of the existing Chrome OS stack. Using this technique, an
Android app running on a Chromebook works like a Chrome OS app. ECF 1998-5 (Ex. J to
Mot.) at 8:18-22. Most of the things [the app] expect[s] the Android system to do will be now
provided by Chrome OS . . . . Id. at 8:23-30. Thus, this technology allows Android apps
(available on the Google Play Store) to run on Chrome OS, much like many virtualization
products allow programs written for one platform to run on a different platform. Cf. Sony
Computer Entm't, Inc. v. Connectix Corp., 203 F.3d 596 (9th Cir. 2000) (addressing Connectix's
Virtual Game Station, which allowed games written for the Sony Playstation to run on Mac and
Windows personal computers). The announcement was not a "surprise disclosure" as Oracle claims. Mot. at 1. It simply reflected an updated implementation of ARC, which Oracle had known about since discovery.
Oracle falsely alleges that Google planned its announcement of the latest update to ARC around this lawsuit. Notably, Google has hosted the Google I/O developer conference every year since 2008. The two-to-three day conference is usually held in late May, although it has taken place as early as May 10 (in 2011) and as late as June 29 (in 2012). In January 2016, Google publicly announced that the 2016 I/O conference would take place from May 18-20, 2016. At the time, the Court had not yet issued even a tentative order regarding the number of trial hours per side for the fair use portion of the retrial, and there was no way to predict when Oracle would rest its case at trial. See ECF 1488 (Tentative Trial Plan). Oracle's accusation that Google waited until Oracle rested to publicly announce the latest update of ARC is baseless.
2. The fact that Chrome OS can run Android apps provides no basis for granting a new trial.
Oracle argues that it is entitled to a new trial due to alleged discovery misconduct. Mot. at
2. To prevail on this theory, Oracle must (1) prove by clear and convincing evidence that
Google obtained the fair use verdict through discovery misconduct; and (2) show that Google's
alleged misconduct prevented Oracle from fully and fairly presenting its case. Jones v.
Aero/Chem Corp., 921 F.2d 875, 878-79 (9th Cir. 1990). Oracle fails on both prongs of the test.
Oracle has failed to show that any purported lack of discovery concerning the availability
of Android applications on Chrome OS prevented it from fully and fairly presenting its case.
Oracle ignores that the Court limited the retrial to versions of Android and products that existed
in 2012, and updates to those implementations. ECF 1479 at 1-2. As a result, among other
products, the retrial did not address functionality such as ARC. Id. Indeed, in a later order, the
Court specifically excluded all evidence or expert testimony relating to "Android Wear, Android Auto, Android TV, Brillo, or any other new implementations of Android in devices other than phones or tablets." ECF 1781 (Mem. Op. Re Google's Motion In Limine No. 2 Regarding New Products).
As the Court explained, "[t]he issue in the first phase of this limited retrial is whether Google's use of 37 API packages from Java 2 SE 1.4 and 5.0 in its implementations of Android in phones and tablets constituted a fair use." Id. at 3. Because the jury for the first trial did not
determine whether there was a prima facie case of infringement for other products, the jury
[would] not be asked to consider that question in our [2016] trial. Id. Mirroring the issues that
were decided by the first jury, the Court held that at the retrial "there will be no analysis of whether those new implementations constituted fair use (assuming they infringe)." Id. Because
the new implementations were not at issue in the retrial, any market harm allegedly caused by
those implementations was irrelevant to the fair use analysis of the accused works. Id.
(emphasis added). For the same reason, any evidence concerning the new implementations of Android was irrelevant to whether the accused works superseded the copyrighted works. Id. at 3-4.
Not only did the Court conclude that issues concerning implementations of Android on
products other than phones and tablets were irrelevant, but the Court also further concluded that
expanding the scope of the retrial risked unduly confusing the jury. The Court explained, "we already have a long list of infringing products to impose on our jury and a line must be drawn somewhere to cabin the universe under consideration." Id. at 5; see also Feb. 2, 2016 Tr. at 13:3-8 ("You two have multiplied this case. It's going to be so hard for a jury to understand, to begin with.").
Accordingly, the Court's rulings effectively precluded Oracle from expanding the scope of the retrial to include Android apps running on Chrome OS. Oracle therefore was not prevented from fully and fairly presenting its case.
new evidence does not have the great probative value that Oracle ascribes to it. Mot. at 4.
The premise of Oracles argument is that the ability to run Android apps on a Chromebook shows
that Android is not transformative and that Android has harmed the market for Java SE. See id. at
4-5. Oracles argument is wrong for the reasons explained below; however, even assuming it had
merit, Oracle would not be entitled to a new trial on this basis because it has long known that
Android apps can run on Chromebooks (via ARC). Indeed, Oracles expert reports included the
same argument that Oracle now claims would have allowed it to challenge foundational aspects
of Googles fair use defense. Mot. at 4; see, e.g., ECF 1560-2 (Malackowski Opening Rep.)
172 (ARC allows Google to bring Android Apps to the Chrome operating system. This means
Google is now using Android to occupy the original, traditional market of the Java Platform.).
Yet, despite having been made aware during discovery that ARC allowed developers to port an
Android app to Chrome OS, Mot. at 3, and despite receiving full and complete discovery on
ARC and ARC Welder, Oracle failed even to mention ARC in any briefing or argument regarding
Googles Motion to Strike or Googles Motion in Limine No. 2 re New Products. Nor did Oracle
argue during trial that Google had opened the door to revisiting those rulings based on the
evidence presented at trial. This demonstrates that the alleged evidence that Oracle improperly
accuses Google of withholding did not prevent Oracle from fully and fairly presenting its case.
The parties stipulated to the only expansion of the case that the Court permitted. See ECF 1781, 1488, and 1506 (noting the trial would include subsequent versions of Android released since the first trial, namely Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, and Marshmallow).
There was substantial evidence from which the jury was entitled to conclude that Android transformed the Java SE API declarations. The fact that Google has also created technology to make Android apps available on Chrome OS does not change that conclusion.
Professor Astrachan testified that Google added libraries to Android that are designed specifically for the mobile platform, with features such as location awareness for GPS, features that are not something you would expect on a laptop or desktop computer running Java SE. Tr. 1228:14-16, 1228:20-21. Google also designed a different bytecode architecture, Dalvik, which has smaller bytecodes than you'd find in the virtual machine on a Java SE platform, reducing power consumption and memory usage. Id. at 1228:23-1229:14. Google also selected the 37 Java API packages from Java SE that were most useful to a mobile platform, and then wrote implementing code that was optimized for mobile use. Id. at 1234:3-18. Google combined that code with a Linux kernel that it also optimized for mobile use. Id. at 1236:15-20. All of that created a new context for the declarations/SSO from the 37 Java SE API packages. Id. at 1236:22-25, 1237:7-13. Professor Astrachan testified that the declarations/SSO were "absolutely" transformed when Google created this new context. Id. at Tr. 1288:18-20. Sun, in contrast, was unable to use Java SE to create a smartphone. Id. at 1238:13-19. Oracle's economic expert, Professor Jaffe, testified that it was a feat for Google to establish Android as a new, viable mobile platform, a feat that other companies, including Oracle and Sun, failed to achieve. Id. at 1784:25-1785:17. Professor Jaffe also conceded that Sun's efforts in the smartphone market failed. Id. at 1800:8.
wouldn't expect it to run on a mobile device because it wouldn't use all those API packages. And similarly, if we wrote an application that ran on Android, it would -- it might use some of those 37 independent implementations in the packages, but it might use the accelerometer and the location services, and if it used those, those new libraries that were designed specifically for the Android platform, it wouldn't work on your desktop or laptop computer. So in general, those platforms aren't compatible.
Tr. 1231:8-25. Professor Astrachan was asked whether the Android platform is compatible with
the Java SE platform, and answered that because Java SE includes "maybe a hundred-plus more" Java API packages than are implemented in Android, an application written for the Java SE platform that used those additional packages wouldn't be expected to run on a mobile device. Id. at 1231:8-18. Similarly, an application written for the Android platform that used Android-specific APIs, such as for the accelerometer or the location services, would not run on "your desktop or laptop computer." Id. at 1231:19-25. He therefore concluded that those platforms, i.e., Android and Java SE, "aren't compatible." Id. at 1231:25. As the full context shows, when Professor Astrachan referred to a "mobile device," he was testifying about a device running the Android platform, and when he referred to "your desktop or laptop computer," he was testifying about a computer running the Java SE platform.
Indeed, the same compatibility issue arises with Android applications and Chrome OS.
Chromebooks generally support fewer sensors than are available in a smartphone or tablet, and
Android applications that require hardware unavailable on Chromebooks will not show . . . on
the Play Store for Chromebooks. ECF 1998-10 (Ex. J-1 to Mot.) at 17:54-18:36.
Because the fair use retrial was limited to implementations of Android on smartphones
and tablets, the availability of Android applications on Chrome OS, on Chromebook laptops, was irrelevant to the fourth fair use factor. Moreover, Oracle's economic expert, Dr. Jaffe, did not even consider whether Android had harmed market opportunities for Java SE in the desktop and laptop categories. Tr. 1860:13-24. Dr. Jaffe never spoke with the Java SE team. And as far
as he was aware, Java SE "continues to do just fine." Id. at 1861:13-23; see also id. at 1013:11-20 (videotaped deposition of Oracle 30(b)(6) witness Donald Smith testifying that Java SE Advanced revenue "is growing well" and that "[s]upport revenue is growing well").
3.
Oracle argues that Google failed to supplement its discovery responses to further disclose its updated development of software that makes the Android applications available to devices running Chrome OS. On February 5, 2016, however, the Court ordered that the retrial would proceed "as if we were back in the original trial, but now with the instructions on fair use handed down by the court of appeals." ECF 1479 at 1. Aside from new versions of the phone and tablet implementation of Android, the retrial would not include other versions and implementations of Android since the last operative complaint preceding the last trial, including implementations such as Android TV, Android Auto and Android Wear. Id. at 2.

Following that order, new implementations of and products for Android, beyond the agreed-upon new versions of Android released for phones and tablets, were outside the scope of this case. Thus, they were neither admissible nor reasonably calculated to lead to the discovery of admissible evidence, and, as such, the burden or expense of such discovery outweighed its likely benefit. FED. R. CIV. P. 26(b)(1). In short, after functionality such as ARC was deemed outside the scope of the retrial, there was neither a practical reason nor legal obligation for Google to further supplement related discovery.
C. The Court's evidentiary and trial management rulings were legally correct and within the bounds of the Court's broad discretion, and Oracle fails to show substantial prejudice as a result of any of the rulings it challenges.

Oracle challenges several of the Court's trial and in limine evidentiary rulings, asserting (incorrectly) that those rulings were erroneous and that they entitle Oracle to a new trial. The Court has "broad discretion" in deciding whether to admit or exclude evidence. Ruvalcaba v. City of Los Angeles, 64 F.3d 1323, 1328 (9th Cir. 1995). To justify a new trial, an evidentiary ruling must be an abuse of that broad discretion and must "substantially prejudice[]" the moving party. Id.; Harper v. City of Los Angeles, 533 F.3d 1010, 1030 (9th Cir. 2008); see also FED. R. CIV. P. 61 ("Unless justice requires otherwise, no error in admitting or excluding evidence -- or any other error by the court or a party -- is ground for granting a new trial, for setting aside a verdict, or for vacating, modifying, or otherwise disturbing a judgment or order."). This requires the movant to demonstrate that more probably than not the evidentiary error tainted the verdict. Beckway v.
DeShong, No. C07-5072 TEH, 2012 WL 1355744, at *6 (N.D. Cal. Apr. 18, 2012) (quoting
1.
Oracle challenges the Court's in limine ruling excluding from this fair-use retrial evidence of new Google products or implementations that were not shown to infringe Oracle's copyrights at the first trial. The new products, including Android TV, Android Auto, Android Wear, and Brillo, were not part of the original trial in 2012. See ECF 1781. The Court's exclusion of these products was a proper exercise of its broad discretion to manage the presentation of evidence at trial.

The original trial in 2012 was the first of the so-called "smartphone war" cases tried to a
jury. ECF 1202 (Order re Copyrightability of Certain Replicated Elements of the Java
Application Programming Interface) at 1. The first trial focused on smartphones and tablets, and
the Federal Circuit remanded the case because, amongst other fact disputes that prevented entry
of judgment as a matter of law, the Federal Circuit found that there were reasonable fact disputes
regarding (1) whether and to what extent use of Android in smartphones was transformative, and
(2) the effect of Android on the actual or potential market for a Java-based smartphone:
On balance, we find that due respect for the limit of our appellate function requires that we remand the fair use question for a new trial. First, although it is undisputed that Google's use of the API packages is commercial, the parties disagree on whether its use is transformative. Google argues that it is, because it wrote its own implementing code, created its own virtual machine, and incorporated the packages into a smartphone platform. . . .

Finally, as to market impact, the district court found that Sun and Oracle never successfully developed its own smartphone platform using Java technology. . . . But Oracle argues that, when Google copied the API packages, Oracle was licensing in the mobile and smartphone markets, and that Android's release substantially harmed those commercial opportunities as well as the potential market for a Java smartphone device. Because there are material facts in dispute on this factor as well, remand is necessary.
Oracle Am., Inc. v. Google Inc., 750 F.3d 1339, 1376-1377 (Fed. Cir. 2014) (emphases added)
(internal citations omitted). Following the remand, Oracle attempted to broaden the scope of the retrial to include new Java products and new Android-related products that were not related to smartphones and tablets and were not at issue in the first trial. Supra at 2. The Court rejected Oracle's attempt to broaden the scope of the retrial, and ruled that "[t]he upcoming trial will proceed as if we were back in the original trial, but now with the instructions on fair use handed down by the court of appeals." ECF 1479 at 1 (emphasis added). And the Court later confirmed that the retrial would be limited to "the issue of whether Google's use of 37 API packages from Java 2 SE 1.4 and 5.0 in its implementations of Android in phones and tablets constituted a fair use." ECF 1781 at 3 (emphasis added). This Court's decision to limit the retrial to smartphones and tablets (the only products or implementations at issue in the first trial, and the only products as to which the jury in the first trial found infringement) was consistent with the Federal Circuit's mandate, and well within the Court's broad discretion to manage the trial. Cf. U.S. for Use & Benefit of Greenhalgh v. F.D. Rich Co., 520 F.2d 886, 889 (9th Cir. 1975) ("The trial court . . .").
Furthermore, consistent with the Federal Circuit's mandate, the Court properly limited the issues to be tried to fair use, as opposed to a whole new trial on infringement. The jury in the first trial found that Android in smartphones and tablets infringed Oracle's copyrights, subject only to Google's fair use defense, and, at Oracle's urging, the Federal Circuit instructed this Court to reinstate the jury's infringement verdict. Oracle, 750 F.3d at 1381. The Federal Circuit remanded the case for a limited retrial on fair use only, not on infringement. Id. The new Android-related products and implementations that Oracle sought to inject into this limited retrial were never at issue in the first trial, and they were thus irrelevant to the fair use retrial, which was premised on an infringement verdict that was limited to the use of Android in smartphones and tablets. ECF 1781 at 3 ("There has been no determination that the implementations of Android in other product categories infringe, and the jury will not be asked to consider that question in our trial."). Thus, the Court was correct to exclude them because, under fair-use Factor Four, the effect on the market must be "attributable to the [alleged infringement]." Wright v. Warner
Books, Inc., 953 F.2d 731, 739 (2d Cir. 1991); see also Arica Inst., Inc. v. Palmer, 970 F.2d 1067, 1078 (2d Cir. 1992) ("[T]he relevant market effect is that which stems from defendant's use of plaintiff's expression, not that which stems from defendant's work as a whole.").
Oracle's new trial motion does not address the fact that these new products or implementations were not at issue in the first trial. Instead, Oracle asserts that "[i]n all of the above markets, Android contains the 37 Java API packages." Mot. at 14 (emphasis added). This assertion misses the mark for several reasons. First, Oracle conflates specific products like Brillo with the versions of Android that are used in smartphones and tablets and which were the subject of the first trial. Oracle has never proven that products like Brillo use the SSO and declaring code from the 37 Java SE API packages at issue in the first trial and in this fair-use retrial.
Indeed, when Google explained the lack of proof related to Brillo, Oracle withdrew its claim as to Brillo, conceding that it "will agree not to raise Brillo at this trial." ECF 1612-3 at n.1. Oracle's concession and agreement not to raise Brillo at trial is a waiver of any argument based on alleged harm from Brillo, and demonstrates that Oracle's claims about these new products and implementations are unsupported. And proving this fundamental point is not a mere formality, as Oracle seems to suggest.
Second, even if Oracle could have proven that these new products and implementations
use the SSO and declaring code at issue in the first trial, exclusion would be warranted because
Oracle would still need to prove infringement. Use of some aspect of the SSO or the declaring
code does not constitute per se infringement. In an infringement proceeding, the parties would
have to take discovery into and offer competent, admissible evidence of, amongst other things:
what code is used, what functions the code performs, and whether the use was de minimis. None
of that has been proven with respect to Brillo, Android TV, Android Wear, or Android Auto. The
Court properly recognized that injecting these new products and infringement issues into the
limited retrial on fair use would have further complicated an already complex trial and created a
Oracle conceded this point by arguing in response to Google's motion in limine that Oracle "can prove that KitKat and Lollipop are on Android, Auto TV, and Wear through a few minutes of technical expert testimony or through a short (3-5 page) summary judgment motion." ECF 1612-3 (Oracle's Opp. to Mot. in Limine #2 re New Android Products) at 1 n.1 (emphasis added); see also id. at 5-4.
serious risk that the jury would be unable to grasp the core issues in the case, noting:

That's a different problem from whether or not there is a -- the use of Android in these other markets with different products is -- we have to draw the line somewhere. We have so much for the jury to understand and grasp now, that I despair at how much you lawyers think that they can in four weeks -- that we're going to be able to teach them.

April 14, 2016 Tr. at 115:12-18.

The trial court has "broad discretion" under both its inherent authority and the Federal
Rules of Evidence to manage the conduct of a trial and the evidence presented by the parties. See Navellier v. Sletten, 262 F.3d 923, 941-42 (9th Cir. 2001). Limiting the number of asserted claims and accused products is a legitimate exercise of that broad discretion. See, e.g., Thought, Inc. v. Oracle Corp., No. 12-CV-05601-WHO, 2013 WL 5587559, at *2 (N.D. Cal. Oct. 10, 2013) ("District courts possess the authority to limit patent claimants to a set of representative claims."); Apple, Inc. v. Samsung Elecs. Co., No. 12-CV-00630-LHK, 2014 WL 252045, at *1 (N.D. Cal. Jan. 21, 2014) ("In order to streamline the case for trial, the Court has required the parties to limit their infringement contentions to 5 patents, 10 asserted claims, and 15 accused products per side."). Indeed, before the first trial in this case, this Court ordered the parties to reduce the number of patent claims and prior art references in play to ensure that "only a triable number of these items . . . are placed before the jury." ECF 131 (Order re Schedule for Narrowing Issues for Trial) at 1. Oracle cites no authority holding that the Court's decision to limit the scope of a retrial to the products and implementations that were proven to infringe (subject to Google's fair use defense) in the first trial was an abuse of discretion, let alone an abuse warranting a new trial.
Oracle challenges the Court's exercise of its broad discretion under Rule 403 to exclude two limited aspects of evidence related to late and improperly disclosed witness Stefano Mazzocchi. Mr. Mazzocchi was a member of the Apache Software Foundation at the time he wrote three emails expressing his personal opinions regarding Apache Harmony's implementation of Java. See ECF 1993-27 (TX 5046), ECF 1993-33 (TX 9200), and ECF 1993-34 (TX 9201) (collectively, the "Mazzocchi emails"). Although Oracle was in possession of
these emails since at least May 13, 2011, it never disclosed Mr. Mazzocchi on any of its Rule 26(a)(1) disclosures and failed to call him as a witness in the first trial. Over Google's objections, Oracle was permitted to call Mr. Mazzocchi as a witness at the retrial, and used all three of his emails to suggest to the jury that Apache believed it was doing something wrong by implementing the Java SE API packages in Harmony, and that, by extension, Google knew that its implementation of the Java SE API packages was improper.
The Court could have properly excluded all of this evidence as a result of Oracle's failure to comply with Rule 26(a). See FED. R. CIV. P. 37(c)(1); Yeti by Molly, Ltd. v. Deckers Outdoor Corp., 259 F.3d 1101, 1106 (9th Cir. 2001); ECF 26 (Standing Order) 26. Now, Oracle seeks a new trial because the Court excluded one sentence (two lines) of one of the three Mazzocchi emails, and excluded evidence that, two years after writing the emails, Mr. Mazzocchi became a Google employee, where he does not work on Android. The Court properly excluded this evidence under Federal Rules of Evidence 403 and 701, and, regardless, Oracle has not shown and cannot show that it was substantially prejudiced as a result of the Court's rulings.
a.
After briefing and argument from both parties regarding the admissibility of Mr. Mazzocchi's emails and testimony, and despite Oracle's failure to identify Mr. Mazzocchi on its Rule 26(a)(1) disclosures, the Court permitted Oracle to call Mr. Mazzocchi as a witness and allowed Oracle to use the Mazzocchi emails, but ordered Oracle to redact a single seventeen-word sentence from one of the emails because it was "too inflammatory" and without foundation. Tr. 1588:11-14. That excluded sentence read: "This makes us already doing illegal things. In fact, Android using Harmony code is illegal as well." Id. at 1588:13-14. The Court's ruling was proper.
The Court excluded TX 9200 and TX 9201 from Oracle's case in chief because they were not disclosed pursuant to Federal Rule of Civil Procedure 26(a). Tr. 1588:21-24. After Mr. Mazzocchi's examination by Google, the Court allowed these exhibits in for impeachment. Id. at 1730:20-1731:5.
First, the excluded sentence lacked foundation because Mr. Mazzocchi, a non-lawyer, purported to express a legal opinion on the legality of Android. "A lay witness may not . . . testify as to a legal conclusion." United States v. Crawford, 239 F.3d 1086, 1090 (9th Cir. 2001), as amended (Feb. 14, 2001); FED. R. EVID. 701. Oracle does not dispute that Mr. Mazzocchi is not a lawyer, has no specialized legal training, and has no knowledge of the fair use doctrine. See Tr. 1726:21-1727:11. The Court therefore correctly ruled that there was no foundation for Mr. Mazzocchi's statement. Id. at 1588:11-14. Indeed, throughout the trial, Oracle objected to testimony that it characterized as legal opinions from non-lawyers, including testimony from its own former CEO about Sun's practices with respect to the API packages at issue in this case. See, e.g., id. at 508:2-20. The
Court sustained many of Oracle's objections to Mr. Schwartz's testimony. See id. at 508:2-8; 508:22-509:9; 510:5-6; 517:2-6. Having excluded such opinions from the plaintiff's former CEO, the Court was certainly within its discretion to exclude similar statements from Mr. Mazzocchi.
Second, any probative value of the excluded statement was substantially outweighed by the risk of undue prejudice to Google. The statement had little to no probative value because, as noted above, Mr. Mazzocchi is not a lawyer and had no basis for expressing a legal opinion. Additionally, Mr. Mazzocchi has never worked on Android, Tr. 1586:21-22, and had no factual basis for an opinion about the legality of Android's use of Apache Harmony. And his off-the-cuff comments about Apache and Android cannot fairly be attributed to the entire Apache organization, as Oracle wrongly suggests. Mot. at 15. Mr. Mazzocchi's statements were unduly prejudicial to Google because, as the Court found, his raw, unsupported statement about "illegal things" was "too inflammatory." Tr. 1588:11-14. Furthermore, the excluded sentence was largely cumulative in substance of other statements in the Mazzocchi email that the Court did not exclude. Infra at 18. The Court was therefore well within its broad discretion to exclude the statement under Federal Rule of Evidence 403. See Ruvalcaba, 64 F.3d at 1328.
Third, the Court correctly excluded evidence of Mr. Mazzocchi's employment by Google two years after he wrote the emails in question. Mr. Mazzocchi did not work at Google at the
time he wrote the emails, and he had no authority to speak for Google. His subsequent employment at Google, where he does not work on Android, has no bearing on any issue in this case. Indeed, Oracle abandoned any argument to the contrary when it failed to object to the Court's limiting instruction stating that Oracle offered the Mazzocchi emails to show the state of mind "of somebody within the Apache project." Tr. 1719:9-10 (court's limiting instruction). The emails were not relevant to Google's state of mind, but given Mr. Mazzocchi's current employment at Google, there was a high risk that his past statements could have been incorrectly attributed to Google on this issue. The Court was therefore well within its discretion in excluding this testimony.
b.
Even if the Court had erred in exercising its broad discretion under Rule 403 (which it did not), Oracle would not be entitled to a new trial because it fails to show that the excluded evidence tainted the verdict. First, Oracle cannot show substantial prejudice as to the two lines excluded from one of the emails, because that sentence was largely cumulative of other statements in the Mazzocchi emails that were before the jury, including the following:
"I was working under the assumption that we could ignore the trademark (avoid stating that we are compatible), use the org.apache.java plus classload trick to avoid the java.* namespace and pretend that we don't know of any IP that we infringe until explicitly mentioned. But what I was missing is the fact that the copyright on the API is real and hard to ignore." ECF 1993-27 (TX 5046); Tr. 1717:1-7 (emphasis added).

"Simply by implementing a class with the same signature of another in another namespace and simply by looking at available javadocs could be considered copyright infringement, even if the implementation is clean room." ECF 1993-27 (TX 5046); Tr. 1717:11-15 (emphasis added).

"So, we are, in fact infringing on the spec lead copyright if we distribute something that has not passed the TCK and, *we know that*." ECF 1993-27 (TX 5046); Tr. 1717:18-21 (emphasis added).

"The bigger question in my mind is not about hardware. [W]hat is Oracle going to do with 500 million Java powered cell phones? What is Oracle going to do about Android's ripping off some of (now) their IP and getting away with it?" ECF 1993-34 (TX 9201); Tr. 1732:17-22 (emphasis added).
These statements are cumulative of the excluded sentence's reference to Apache and
Google allegedly doing "illegal things." Oracle repeatedly used the above quoted statements to
argue that those working at Apache and Google knew that their Java implementation was not authorized by Sun. See Tr. 1943:9-24 (using Mazzocchi emails to cross-examine Google's witness); id. (connecting Mazzocchi email to Rubin during closing); id. at 2155:11-13 (highlighting TX 9201 "ripping off" language during closing); id. at 2155:3-5 (urging jury to look at the Mazzocchi emails); id. at 2157:20-21 (referring again to "ripping off" language); id. at 2178:23-2179:2 (another reference to "ripping off" language). The jury plainly did not accept Oracle's suggestion that Mr. Mazzocchi, Apache, and everyone in the industry, including Google, knew that Android was not a fair use. The single, cumulative sentence from one email would not have affected the
jury's verdict.

Second, in addition to disregarding Oracle's argument about the import of the Mazzocchi
emails, the jury reasonably could have credited the testimony from numerous witnesses, including Sun's own CEO, that Apache and GNU were not doing anything improper or unfair by implementing the Java SE API packages without a license from Sun. Id. at 508:22-509:9.
Oracle's complaints about its purported inability to attack Mr. Mazzocchi's credibility are unfounded. Oracle's counsel vigorously cross-examined Mr. Mazzocchi on all of these emails. Oracle attempted to show bias by asking Mr. Mazzocchi whether he spoke with Google's counsel before testifying in Court. Tr. 1724:17-24. The fact that Mr. Mazzocchi became a Google employee two years after writing the email would not have provided any significant additional evidence of bias, and Oracle's suggestion that such a minor piece of evidence would have affected the jury's verdict in this wide-ranging, two-week trial is unsupported by any authority.
In April of 2009, Oracle reached a publicly announced deal to purchase Sun. The deal closed in January 2010, and Sun Microsystems, Inc. changed its name to Oracle America, Inc. Tr. 1186:6-12. Sun is therefore the plaintiff in this case, and out-of-court statements made by Sun
are hearsay unless they fit within an exception to the hearsay rules. See FED. R. EVID. 801-803.

As part of the regulatory process to finalize its acquisition of Sun, Oracle filed a merger notification and related paperwork with the European Commission ("EC"), the executive body of the European Union. Tr. 1311:10-14; 1312:15-17. When asked about this process, Sun's then-CEO Jonathan Schwartz testified that he was not involved in the discussions and that Oracle was "clear, they were the only ones who were to speak with competition authorities." Id. at 597:1-598:13. Later in the trial, Oracle sought to introduce a statement, supposedly attributable to Sun, contained within one of the documents Oracle filed with the EC. Id. at 1313:18-19. To overcome the obvious hearsay problem that existed with offering an out-of-court statement by Sun for the truth of the matter asserted, Oracle argued that the statement could be used to counter Google's evidence that Sun believed Android was permissible and the purported insinuation that Oracle had "cooked up" this lawsuit after acquiring Sun. Tr. 1309:1-6, 1311:20-22. Oracle represented to the Court that it could show (presumably through evidence that was properly disclosed and admissible at trial) that Sun "supplied the answers" within what Ms. Catz, Oracle's CEO, identified as an Oracle document. Id. at 1313:17-19. Oracle's counsel represented that Oracle would provide the drafts to the Court the next day, and that "we produced all of the drafts leading up to this, showing Sun's participation in the drafting." Id. at 1314:1-12. Counsel was mistaken.
Late that night and early the next morning, Oracle produced to Google, for the first time, five drafts of TX 5295, the EC filing to which Ms. Catz referred in her testimony. These drafts were attached to emails exchanged among Sun's and Oracle's lawyers. At the time of production, Oracle did not notify Google that at least four of the drafts it was producing appear to be privileged. Oracle then asserted a waiver of privilege and attempted to proffer a handpicked selection from hundreds of previously withheld documents that it claimed established that Sun's lawyers had drafted the relevant statements in TX 5295. See Tr. 1333:10-18. As the Court remarked, this waiver was "extraordinary." Id. at 1328:16-19; 1329:2. Google had never seen these draft documents before, and did not have access to all of the documents related to the drafting of the EC filing because Oracle did not
produce any other of the hundreds of privileged emails related to the EC filing. The Court rejected this belated and selective proffer of privileged documents and excluded TX 5295. See FED. R. CIV. P. 37(c)(1) ("If a party fails to provide information as required by Rule 26(a), the party is not allowed to use that information . . . at a trial, unless the failure was substantially justified or is harmless."). The Court was well within its discretion in excluding this evidence.
First, regardless of whether Sun or Oracle was the declarant, TX 5295 was hearsay, and the Court would have been correct to exclude it on that basis alone. Sun is the plaintiff in this case, and its out-of-court statements are hearsay unless they fall within a specific exception, which the statements here do not.
Second, Oracle argues that TX 5295 should have been admitted to show Sun's state of mind, but Oracle failed to establish through admissible evidence that Sun was the declarant of the statements in the EC filing. This case is thus different from Wagner v. County of Maricopa, see Mot. at 21, where there was no challenge to whether the hearsay statement at issue was actually attributable to the declarant, and the court did not rule on that issue. 747 F.3d 1048, 1052 (9th Cir. 2013). Here, there was plenty of reason to doubt whether Sun actually authored the statement: it was contained within an Oracle document, which Sun's then-CEO had no memory of, and was made after Oracle had instructed Mr. Schwartz (the CEO of Sun who had been responsible for all decisions made by the company) that Oracle employees were the only ones who were to speak with competition authorities. Tr. 598:2-5, 1311:10-14; 1312:15-17. The fact that Oracle had the authority to order Sun employees, including the CEO, not to speak with the EC is more than sufficient to support the conclusion that Oracle, not Sun, controlled the company and that any statements made by Sun during this period were equally attributable to Oracle.
Indeed, during this same time period, Oracle CEO Larry Ellison appeared on stage at the JavaOne developer conference and informed thousands of people in the audience about what to expect regarding Java in the future, even though Oracle's acquisition of Sun had not officially closed. Karwande Decl., Ex. 8 (TX 2939.1). This evidence shows that Oracle was in control of the company before the acquisition finalized, and that statements nominally made by Sun reflect Oracle's positions as well. Moreover, because of Oracle's late-night production of previously withheld documents, Google had no opportunity to test what was actually contained in the documents within the subject matter waiver. Oracle attempted to use privileged documents, produced at the eleventh hour, to somehow prove that Sun was the declarant of statements within a document that Oracle's own CEO referred to as an Oracle document, without giving Google access to related documents for use in cross-examination. The Court was not obligated to accept such gamesmanship, and it properly excluded TX 5295.
In any event, Oracle was not substantially prejudiced by exclusion of TX 5295. The overwhelming majority of evidence of Sun's pre-acquisition conduct showed that Sun accepted Android as a proper use of the Java SE API packages that was consistent with Sun's long-standing business practices. See, e.g., Tr. 501:8-23; 500:21-23 (Schwartz); 974:9-21, 994:10-11; 974:9-21; 991:18-992:2; 994:6-11 (Bloch); 362:22-363:6 (E. Schmidt); Karwande Decl., Ex. 9 (TX 2352) (Schwartz blog congratulating Google and welcoming Android to the Java community); id., Ex. 10 (TX 7459.1) (Barr blog commending Google and noting that Android shows that "the era of proprietary and closed mobile platform and networks is finally drawing to an end"); id., Ex. 11 (TX 3441) (Schwartz email offering support for Android). The jury was entitled to rely on that evidence in reaching its verdict, and TX 5295 could not reasonably have affected the verdict. The jury also could properly have relied on the complete absence of any witness testimony offered by Oracle regarding Sun's pre-acquisition view of Android. Indeed, given Oracle's emphasis on Sun's state of mind, it is telling that Oracle did not present a single Sun employee who could contradict Mr. Schwartz's sworn testimony and state that the executives in charge at Sun believed Google was doing anything unfair with Android.
The jury understandably could have placed little weight on Oracle's attempt to persuade the jury that Mr. Schwartz believed Android was unfair by offering evidence of Mr. Schwartz's alleged views through Oracle's CEO, Ms. Catz. See Tr. 1307:1-1308:5 (Catz testifying about email exchange with Schwartz); Tr. 1308:19-1309:17 (Catz testifying about conversation with Schwartz regarding Android). Particularly so given Mr. Schwartz's unequivocal testimony that he never suggested that Android's use of declarations/SSO was impermissible. Id. at 508:22-509:9; 556:23-557:5; 558:8-11. One additional post-acquisition statement, supposedly attributable to Sun's lawyers, would not have affected the jury's views of Sun's state of mind.
Courts are rightfully wary when parties create self-serving documents and seek to offer them as business records. Sana v. Hawaiian Cruises, Ltd., 181 F.3d 1041, 1046 (9th Cir. 1999). The Court has wide discretion when determining whether a supposed business record is sufficiently trustworthy to overcome the rule against hearsay. United States v. Olano, 62 F.3d 1180, 1206 (9th Cir. 1995); FED. R. EVID. 803(7)(C) (business record exception not met if opponent shows other circumstances "indicate a lack of trustworthiness"). After considering three internal Oracle presentations (TX 5961, TX 6431, and TX 6470), all created shortly before or after Oracle filed this lawsuit, the Court correctly excluded these documents as "internal self-serving records" and "internal propaganda." Tr. 1357:20-1358:6; 1498:21-23; 1499:20-24.

The authority Oracle relies on is not to the contrary. In U-Haul International, Inc. v. Lumbermens Mutual Casualty Company, the Ninth Circuit found that the district court did not abuse its discretion by finding that computer-generated summaries of payment data stored in the company's databases qualified as a business record. 576 F.3d 1040, 1044 (9th Cir. 2009). This is precisely the type of information the Court indicated it would (and did) allow in as a business record. See Tr. 1357:20-23 ("If it was just a financial statement, I would allow it."); Karwande Decl., Ex. 12 (TX 9133.1) (admitting certain slides from PowerPoint); id., Ex. 13 (TX 4108) (admitting revenue forecasts for Java products).

But that is not the type of information Oracle intended to introduce through TX 5961, TX 6431, and TX 6470. Rather, Oracle sought to introduce self-serving commentary, written shortly before and after the lawsuit was filed in July 2010, discussing Android's alleged harm to the market for Java.
For example, page 21 of TX 5961, which Oracle characterizes as "a spreadsheet of revenue and expenses for the first two quarters of fiscal year 2011 for Java embedded and forecasts for the third quarter," is actually a "FY10 Performance Assessment" that notes "Deteriorated profitability in key OEM's due to Android and Android & non-compliant Java implementations impacting Java revenues." Mot. at 23; ECF 2002-20 (TX 5961) (April 2010 presentation) at 21; see also ECF 2002-24 (TX 6470) at 14, 16 (narrative description of Android's purported impact on Java and legality in August 2010 presentation); ECF 2002-21, 2002-22 (TX 6431,
December 2010 presentation). The Court properly excluded these self-serving presentations as
hearsay not within any established exception to the hearsay rule. See Impact Mktg. Int'l, LLC v. Big O Tires, LLC, No. 2:10-CV-01809-MMD, 2012 WL 2092815, at *3 (D. Nev. June 11, 2012).
Even assuming these three documents were admissible, Oracle was not substantially prejudiced by exclusion of the documents. Mr. Neal Civjan, former Sun executive of sales, and Ms. Catz testified at length about the information in those presentations, including Android's supposed impact on licensing, revenue, business, and projections for future revenue. Tr. 1633:2-1636:13 (Civjan); 1358:8-1363:6 (Catz). The fact that this testimony was not accompanied by self-serving presentations did not "taint[] the verdict." Beckway, 2012 WL 1355744, at *6.
"For convenience, to avoid prejudice, or to expedite and economize, the court may order a separate trial of one or more separate issues [or] claims[.]" FED. R. CIV. P. 42(b). The trial court has broad discretion in deciding how to manage the trial, including whether to bifurcate liability and damages. M2 Software, Inc. v. Madacy Entm't, 421 F.3d 1073, 1088 (9th Cir. 2005).
Oracle fails to provide any factual or legal support for its argument that the Court abused its discretion by bifurcating liability and damages, and it does not even attempt to show that any such abuse of discretion would warrant a new trial here. See Mot. at 19-20. Nor could Oracle plausibly make such an argument, because it stipulated to a bifurcated trial long ago. Following the first trial, Oracle stipulated that "[i]n the event that Oracle's claim based on the SSO of the 37 accused API packages or any portion thereof (the 'SSO Claim') is ultimately submitted to a jury (the 'Future Jury') for an assessment and award of monetary relief, then . . . [p]roceedings with respect to the SSO Claim will be bifurcated, i.e., liability will be tried separately from willfulness and damages." ECF 1159 (Stipulation and Order Regarding Copyright Damages) at 1.
Oracle asserts that bifurcation prevented it from presenting to the jury evidence of Google's "monetization strategy and profits." Mot. at 20. Oracle is wrong. Plenty of evidence
regarding Google's Android business model and financials was admitted at trial. See, e.g., Tr. 404:16-405:4 (Schmidt) (explaining Google's business model for Android); id. at 1752:2-24 (Jaffe) (testifying about Google's business model for Android); id. at 1761:25-1762:23, 1776:20-1777:14 (testifying regarding Google revenue from Android); see also ECF 1993-14 (TX 190); ECF 1993-25 (TX 1061); ECF 1993-29 (TX 5183). Contrary to Oracle's argument, see Mot. at 20, the bifurcated trial structure did not prevent it from offering this evidence or evidence of alleged harm to Java ME at trial. See Tr. 1633:2-1636:13 (Civjan); 1358:8-1363:6 (Catz); Tr. 1772:20-1777:21 (Jaffe). In fact, such evidence was admitted at trial, and Oracle emphasized it in its closing arguments. See, e.g., id. at 2140:10-2142:8 (arguing that Android is "highly commercial"), id. at 2166:8-2167:5 (arguing that Java ME is a derivative work of Java SE), id. at 2171:25-2172:16 (arguing that Oracle lost business due to Android). Oracle offers no explanation for how bifurcation could have altered the jury's verdict in view of this evidence.
Oracle also complains that bifurcation provided a "structural incentive" for the jury to return a defense verdict. Mot. at 20. Of course, that argument could be made about any bifurcation order, and yet bifurcation of liability and damages is a common trial management practice in complex cases. See, e.g., Ciena Corp. v. Corvis Corp., 210 F.R.D. 519, 521 (D. Del. 2002) ("[B]ifurcation of complex patent trials has become common."); J2 Glob. Commc'ns, Inc. v. Protus IP Sols., No. CV 06-00566DDPAJWX, 2009 WL 910701, at *4 (C.D. Cal. Mar. 31, 2009) ("Bifurcation is common in cases where the resolution of one issue may completely resolve a later issue, particularly where the case is otherwise complex."); see also Manual for Complex Litigation (Fourth) 33.27 (2004). Moreover, the Court expressly instructed the jury: "Please do not allow any desire to complete trial sooner to influence your thinking." ECF 1981 (Final Charge to Jury) at 22. Jurors are presumed to have followed the Court's instruction. Richardson v. Marsh, 481 U.S. 200, 206 (1987). Oracle's suggestion to the contrary is unsupported.
III. CONCLUSION
For the foregoing reasons, the Court should deny Oracle's Rule 59 motion for a new trial.
//
//
GOOGLE'S OPPOSITION TO ORACLE'S RULE 59 MOTION FOR A NEW TRIAL
Case No. 3:10-cv-03561 WHA
Dated: July 20, 2016
By:
While investigating the relationship between modules, their versions and mutual dependencies I came across one hard problem: Given a repository of modules (like the one provided by maven), select such configuration of the module versions so all their dependencies are satisfied.
After a little bit of thinking I concluded that this is NP-complete problem. I have even written down a proof.
I guess many LtU members are qualified to review the proof for soundness. Would you be so kind and tell me if there is anything wrong with it? Thanks.
Visit the proof.
The proof doesn't mean much in practice unless natural development of libraries or repositories leads to a situation like the one you describe. My suspicion is that the domain of the Library Versioning Problem simply doesn't allow the generality your conversion from 3SAT requires.
[Edit: A point made later by Matt helps illustrate this concern. It is not the case that supposedly 'compatible' modules like Fx(1,1), Fx(1,2), and Fx(1,3) can export entirely distinct interfaces Ma, Mb, and Mc.]
Otherwise, the structure of the proof looks okay.
I haven't looked at your proof, but similar problems in Hackage are also understood to be NP-complete. I believe that Hackage and Maven-like products offer very similar dependency specification, so I expect the result is general.
I don't know exactly what you mean by "such configuration of the module versions." It sounds like you may mean a maximal consistent subset? If so, you should note that the seemingly much easier problem of "is it possible to install a single package?" is also NP-complete, in general.
It doesn't seem like hard instances occur in practice...
First of all, thank you all for your comments. I'll adjust the text to avoid traps you have been drawn to.
I'd like to argue with the claim that it does not occur much in practice. I have two answers:
Obviously it is not the NP-complete problem itself that manifests in real life; its symptoms do. Besides being bitten by such symptoms in our own NetBeans development, I found a nice article, Preserving Backward Compatibility, about troubles with library reuse in Subversion. The incompatible changes in the Apache Portable Runtime are exactly the symptoms of the NP-complete problem. If there is another library that relies on APR (and I am sure there is), then the parallel use of Subversion (as a library) and such a library is heavily limited (if they each rely on a different incompatible version).
Symptoms of such problems are more common in libraries that are not yet popular enough. Their developers may not yet be trained in keeping backward compatibility. However the more popular your library becomes the more you will care. As a result often used libraries are developed in compatible way. Finding whether module dependencies can be satisfied in a repository with only compatible versions is obviously trivial.
But imagine glibc developers went mad and published an incompatible version. As that library is used by everyone, there would be quite a lot of NP-complete problems to solve.
My proof can also be seen as an advertisement for importance of backward compatibility. While we keep it, we live in a simple world, as soon as we start to release big bang incompatible revisions, we'll encounter the NP complexity.
The incompatible changes in Apache Portable Runtime are exactly the symptoms of the NP-complete problem.
I am not familiar with these incompatible changes. More detail would be appreciated.
The Subversion article you pointed to was very high-level and theoretical, and didn't really discuss much about real examples of "troubles with library reuse in Subversion". About the only issue the article gave was clients using protocols different from Subversion's current standard protocol. Even then, this issue is poorly explained and not developed. You have to wonder whether the author knows what the real problem is or not.
glibc is certainly changed from time to time. Most recently an example would be mallinfo. What you have to realize is that more often than not these changes aren't so rapid (a major reason why glibc is used so universally). When things DO change, it often suggests a change in the structure of the problem domain. Changes like mallinfo are even rarer: design flaws.
Bottom line: No one is interested in solving problems that don't need solving, so show us why this needs solving and can't just be avoided using good development processes complemented by tools using heuristics faithful to those dev processes.
Where's the mallinfo incompatible change? As far as I can read the link, a new method has been added, but the text does not seem to contain any warning about the old method being removed. My understanding is that the method stays in the library and does what it used to do. Just in some new situations, its behaviour is not absolutely correct/valuable.
Perfectly backward compatible. If everyone followed good API design practices this way, there would be no NP-complete problems of this kind to solve.
Btw., when talking about tools, I should mention that automated tools for checking binary backward compatibility are quite useful. There is Sigtest for Java and another tool for C/C++. Those help eliminate the NP problem before it arises.
My understanding is that the method stays in the library and is just useless in some situations (while behaving the same as it behaved in previous versions).
This statement is self-contradictory. Can you clarify?
Where's the mallinfo incompatible change?
Look at the subsystem as a whole.
As the page on backward compatibility that you linked says:
Of course, there is some flexibility here, as at some point you have to be able to say, "This behavior is a bug, so we changed it"; otherwise, there is little point in releasing new versions of your software.
But really, what good is that? If your library's clients contain code to work around a known bug (and most serious programs have such workarounds in them), what happens when you fix the bug? You have broken backward compatibility, that's what. And yet the whole idea of backward compatibility is "Always use the current library; it has the bug fixes you need"; but what about the bug fixes you don't need and don't want? If you have to change your code to use the new library, it's not backward compatible, by definition.
This comes up not so much with libraries as with protocols, where clients and providers often belong to mutually suspicious organizations with completely independent upgrade cycles.
The client geek says: "I asked for service version 2.1, and I should get exactly what 2.1 has always provided."
The server geek replies "But the data delivered by 2.1 is complete @#$*! So we deliver 2.2 when you ask for 2.1; it's only sensible. That's the whole point of minor versions."
Client geek: "You think I don't know that? I already have workarounds in place, and what's more, they're hard-coded into Gadgetron model 2055. You think all my customers are going to field-upgrade it [if it even is field-upgradeable]? They're going to get bad data, and they're going to blame me!"
Server geek goes off grumbling "So they should". But it's the real user who suffers.
If your library's clients contain code to work around a known bug (and most serious programs have such workarounds in them), what happens when you fix the bug? You have broken backward compatibility, that's what.
I'd say what you did was fix a bug in your code, which revealed a bug in the client code. If client code performs 'work-arounds' for a bug without performing a test to see whether the bug exists, that's a bug in the client code.
Server Geek is exactly right to blame Client Geek when Client Geek codes to the implementation rather than to the documentation. Client Geek took advantage of what Server Geek was offering, then violated the contracts describing said service. Client Geek had other options, such as coordinating with Server Geek to fix the problem, or to implement the service without introducing a dependency, or to renegotiate the contract to ensure fully bug-compatible service was maintained for so many years so his clients don't suffer.
Anyhow, there are multiple levels of backward compatibility described on the page linked. Functional compatibility is 'same result' which would suggest bug-compatibility. It seems to me there's a missing level - interface compatibility - that suggests persons who programmed to the interface (preconditions and postconditions) will continue to have results compatible with the documentation. Even more general are source compatibility (can re-compile) and binary compatibility (can link new object without re-compile).
the new open flag, O_PONIES --- unreasonable file system assumptions desired.
(Supposedly b/c KDE developers didn't read POSIX, ever, and assumed ext3 would always be the fs KDE uses).
Here is the problem. Client Geek will never acknowledge, or perhaps even understand, that Server Geek is right.
Blaming customers is wrong strategy to win their hearts. Thus blaming client geek is wrong. The whole fault needs to be on the shoulders of the server geek. Btw. thanks for your(?) comments on the talk page.
Do you go the Old Microsoft route and have an entire division in the company dedicated to making sure Sim City 2000 and Adobe Illustrator work on the next version of Windows despite using undocumented features or whatever else chicanery?
Manager to Structural Engineer: "Sorry, you can't blame the welder that this will likely collapse even if it his fault. Find some way to make this thing stable and sturdy. I heard about this thing software developers do to give customers pink ponies..."
Blaming customers is wrong strategy to win their hearts. Thus blaming client geek is wrong.
Sorry, Jaroslav, but that attitude strikes me a bit like "don't blame the customers who vandalize your store; sure, they broke the social contract for participating in your service, but you won't win their hearts by blaming them!".
You should make it clear to your customers which behavior they can assume will remain consistent in future versions. If they then violate the contract, you can blame them. You can be polite about it. I.e. you can provide a press release to the effect of "yes, this could break some code, but our customers have been told to code to the interface and we believe that most of them have done so. If they coded to the interface, their projects will automatically reap the benefits of this upgrade."
By putting the responsibility where it belongs you send a clear message: pay attention to the documentation.
Chances are, a few people will mess it up the first time. They'll be a little upset at needing to redo some work. A few of them might even abandon you as a service provider, running off to whatever they view as 'greener pastures'. But no pain, no gain, right? You can't keep everyone happy all the time: fact is, if you have multiple customers, you'll win some and lose some based on nearly ANY decision you make. Even forking for different customers can piss customers off that expected to share expenditure for upgrades. And if you have only one customer, you've already set yourself up for oblivion.
By taking a stand, you'll ensure that everyone gets the message clearly. Your service will then have a great deal more freedom to advance and repair itself over time. Even better: you can take advantage of this freedom to give clients better opportunity to propose fixes and changes that remain within specification.
The future couldn't last. We nailed it to the past... -- Savatage
I have nothing against blaming those who misbehave. If you are using a library in a wrong way and it does not work, well, then OK. It won't work for you. If you are vandalizing a store, you are unlikely to buy anything.
However, when I use some library in a strange/unexpected way and it works, then I want this to work in the next version too. If I buy a motorbike in a store and return to apply the warranty, I do not want to hear: "we are no longer in the motorbike business, go away, now we sell only refrigerators". That would be unfair blaming.
Documentation is indeed important. If something is clearly marked as "don't do this", then I should not do it. Defensive design is even better. If a type is not supposed to be subclassed, then it is better to make it final than to write into the documentation that it should not be subclassed.
If a method accepts an int but only in a certain range, then throwing assertion errors is OK. But this has to be done since the introduction of such a method.
However, forgetting to be defensive or to provide a warning in the first version and later trying to justify an incompatible change by saying "this class was not intended for subclassing" or "this parameter was obviously out of range" is clearly not acceptable for me.
Whatever is not forbidden, is allowed. Library designers deserve to be blamed for violating this rule.
Okay. But if the documentation states - or even implies - "do not depend on undocumented behavior", that should be taken as a claim of: "whatever is not explicitly documented, will break later." And library clients deserve to be blamed when their code breaks as a result of depending on undocumented behavior.
I agree with "defensive" programming of the library or protocol - primarily as a basis for security, I favor capability security designs. But not all languages and situations allow this to be done, at least without sacrificing performance and other desirable properties.
When I use some library in strange/unexpected way and it works, then I want this to work in next version too
If you use the library in a strange/unexpected way, and it works, but you for some reason expect the next version to have the same implementation quirks, then you've earned whatever your naivete bought you.
Perhaps you should load the library with the O_PONIES option.
What started as a call for a review of a proof ends up in a layman discussion about how to lead an IT business and respond dogmatically to environmental conditions ( customer requirements, users demands, ...), reflecting behaviors in terms of "right" and "wrong" (!), which reinforces prejudices about the brash naivety of techie positivism.
Is this still the LtU we want to read?
Touché.
I guess I got the answer I was looking for regarding my proof. Thanks.
If there remains anyone who wants to continue to discuss whose fault wrong API usage is, you can do so elsewhere.
Thanks again for your comments.
Client Geek began by coding to the documentation, but the data he got was erroneous ("complete @#$%", as Server Geek says). Let us say Server Geek is providing a stock-pricing service, only in 2.1 all stock prices are too high by 100. Client Geek noticed that GOOG was being priced at 554 instead of 454 (I wish!). Server Geek being a bit slow to fix this problem, Client Geek worked around it by subtracting 100 from all prices and shipped by his deadline. There is, obviously, no way to test whether this bug exists, short of subscribing to two services.
Server Geek, working to his deadlines, was dealing with something else and didn't even prioritize the problem until a few hundred different Client Geeks reported it. Then he fixed it and upped the version number to 2.2, which broke Client Geek's code. It's too late for Client Geek to conditionalize his code at this point, because it has shipped.
The fault here is entirely Server Geek's, but everyone is quick to put the blame on Client Geek. What does that tell us about our own assumptions?
(I just introduced clients and servers as an intuition pump; the issue works the same with field-upgraded libraries invoked by programs that are not field-upgraded.)
The fault here is entirely Server Geek's, but everyone is quick to put the blame on Client Geek. What does that tell us about our own assumptions?
The error in 2.1 is clearly Server Geek's. The work-around is Client Geek's error. The apparent lack of communication between the two entities is certainly an error on both their parts - e.g. the Client Geek didn't establish a contract to maintain a bug-compatible version of the service on a dedicated port.
What does putting the blame entirely on Server Geek tell us about your assumptions?
I have no joke, I would just like to point out that Eclipse's p2 provisioning system apparently uses a SAT solver for dependency resolution, and that there was apparently some consideration of using a SAT solver for OSGi runtime dependency resolution.
Research yum, rpm, apt, or other package management solutions for various ways to deal with this issue, both in logical design and satisfiability optimization (using heuristics). It turns out with heuristics you can usually get good results. By logical design, I mean "the best way an engineer can solve a problem is to avoid it", or at least I like to think so.
The results of the EDOS project could be relevant. They include a statement of NP-completeness, and a review of (the flaws of) the algorithms used by existing package managers (the 'survey and state of the art' section).
The problem seems solvable in O(N), where N is the number of modules versions, as long as there are no cyclic dependencies between modules (assuming bounded dependencies per module). Further, if you start with a DAG and add a cycle forming edge to some module, it would seem to require only a single search over the versions of that module to resolve, and disjoint cycles combine complexity linearly. This seems like an obvious way to characterize the 'bad' dependency graphs and to explain why we don't generally see this in practice.
It isn't a simple dependency-graph problem, Matt, due to the existence of incompatible modules. Incompatibilities, expressed in the problem as difference in "major" version numbers, can force a naive algorithm to backtrack and try again whenever it selects two mutually incompatible modules. It's worth noting that the conversion from 3SAT described in the proof involves no cycles.
In practice, heuristics will probably rapidly resolve the problem in 99%+ of all cases. If simple heuristics can't solve the problem, that should probably raise a flag; one must question how development of a project might lead to such a thing.
I was considering the wrong problem
That an algorithm uses backtracking doesn't imply it is in O(2^n). Backtracking parsers for example can be linearized using memoization.
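To illustrate that point with a toy sketch (not tied to any particular parser): a naive backtracking search over n binary choices can visit up to 2^n branches, but caching results by (position, remaining goal) collapses the work to one computation per distinct state.

```python
from functools import lru_cache

# Toy backtracking search: can some subset of `weights` sum to `target`?
# Without the cache this explores up to 2^n branches; with it, each
# (index, remaining) pair is computed at most once.
weights = (2, 3, 7, 8, 10, 5, 9, 4, 1, 6)

@lru_cache(maxsize=None)
def reachable(i, remaining):
    if remaining == 0:
        return True
    if i == len(weights) or remaining < 0:
        return False
    # backtracking: first try taking weights[i], then try skipping it
    return reachable(i + 1, remaining - weights[i]) or reachable(i + 1, remaining)

print(reachable(0, 21))   # True (e.g. 2 + 3 + 7 + 9)
print(reachable(0, 56))   # False (all weights together sum to 55)
```

The memoized version runs in O(n * target) states instead of O(2^n) calls, which is the same trick packrat parsers use to linearize backtracking.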
The set {A} already satisfies the conditions for being a configuration. Same with {A, B} where A >> B or B >> A or both.
Your definition is not strict enough to exclude non-trivial solutions.
And please! Use your own terminology consistently and don't suddenly start to talk about "modules" when you formally introduced entities you called "APIs".
But there was a 3rd satisfaction condition:
if A>>B(x,y) and A>>B(p,q) then it must be the case that (x == p) and (q == y).
I had to read it a few times before grokking the purpose of that third condition, since it wasn't as clearly worded as the other conditions.
Combined with the conditions requiring equal-major version plus minor-version greater than or equal to the dependency requirement, the trivial solutions are excluded.
The actual error in my remark was that I overlooked that all B(x,y) in M with A >> B(x,y) have to be considered. So just forget about this complaint. Maybe I'll find something else :)
And please! Use your own terminology consistently and don't suddenly start to talk about "modules" when you formally introduced entities you called "APIs".
Improved, thanks.
Fine.
Although a formal definition is necessary for a formal proof, and I was wrong about my initial concerns, the description is very terse and it is not easy to see where the problem actually lies (truth of an assertion is one thing, insight into a problem another; that is the major reason why mathematicians dislike computer-generated proofs for anything but tedious manual labor).
My take on the configuration problem is this.
When we start with a module A we can create the transitive closure of all possible module dependencies. This is a dependency graph which is too big to be useful because it contains all modules in all versions. We need a rule according to which we select a single version of a module. The best criterion is always the one by which we select the module with the highest possible version number.
This is not problematic when dependencies are kept local: when A depends on B and C(x1, _) and B depends on C(x2, _) then a dependency conflict can always be avoided when B doesn't export names of C(x2, _). The interface is kept small. But when B exports C symbols they are also visible in A. So the initial choice of C(x1,_) may be invalid when selecting B with a C(x2, _) dependency. So we have to revise our choices. In the worst case none of the C dependencies of A fits together with the C dependencies of B. We have to expect that all admissible C dependencies of A have to be matched against those of B. This propagates along the whole transitive dependency closure and is the intuitive reason why determining whether A has a valid dependency graph ( configuration ) is NP-complete.
When our dependency model contains only one of the relational operators ">" and ">>" then we might rule out valid configurations in case of ">>" or permit invalid ones in case of ">". Choosing ">" might work under the optimistic assumption of keeping the interface small. The configuration problem becomes linear then. But when we give up on it, we run into NP-completeness and what's worse, we invalidate otherwise valid configurations.
I'd nevertheless recommend to start with the pessimistic assumption even though the computational overhead is considerable and let the library author incrementally improve by manually inserting ">" where this is appropriate.
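The revise-and-retry behaviour described above can be made concrete with a toy resolver. This is an illustrative sketch only; the repository contents and the exact version rule are assumptions, following the major/minor convention used in this thread (a dependency is satisfied by the same name and major version with an equal or higher minor version).

```python
# Hypothetical toy repository: (name, major, minor) -> list of dependencies.
REPO = {
    ("A", 1, 0): [("C", 1, 0)],
    ("B", 1, 0): [("C", 2, 0)],
    ("C", 1, 0): [],
    ("C", 2, 0): [],
    ("T", 1, 0): [("A", 1, 0), ("B", 1, 0)],
}

def candidates(dep):
    name, major, minor = dep
    return [v for v in REPO
            if v[0] == name and v[1] == major and v[2] >= minor]

def resolve(pending, chosen):
    """Backtracking search for a consistent configuration: at most one
    version of each module name may be selected."""
    if not pending:
        return chosen
    dep, rest = pending[0], pending[1:]
    name = dep[0]
    if name in chosen:
        # A version was already picked; it must satisfy this dependency too,
        # otherwise this branch of choices has to be revised.
        return resolve(rest, chosen) if chosen[name] in candidates(dep) else None
    # Try the highest admissible version first, backtrack on failure.
    for v in sorted(candidates(dep), reverse=True):
        result = resolve(rest + REPO[v], {**chosen, name: v})
        if result is not None:
            return result
    return None

print(resolve([("T", 1, 0)], {}))  # -> None: A needs C 1.x while B needs C 2.x
```

In this toy repository T is unsatisfiable, which is exactly the non-local conflict described above: the choice made for A's C dependency is invalidated only later, when B's dependencies come into view.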
There were a few papers in 2006 on the EDOS site about using a SAT solver for checking repository consistency. IIRC, one of them also used a reduction to 3SAT.
It's not clear to me what you're trying to show, so let me give some solutions to different situations.
Let's create a directed graph (V,E), where each vertex in V is a pair (API,version) and an edge in E is a dependency relationship (that is, an edge exists iff two (API,version) modules are compatible).
...issues of dependency and compatibility.
an edge in E is a dependency relationship (this is, an edge exists iff two (API,version) modules are compatible)
You are confusing two different relations.
As dmbarbour said, but did not explain, the mapping of the version-and-dependencies problem to a graph that you propose is wrong.
this is, an edge exists iff two (API,version) modules are compatible
(n, x.y) is compatible vertex with vertex (m, u.v) iff n == m && x == u && y >= v
Dependencies represent the necessary environment of a given module/vertex. That is something else than compatibility.
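The distinction is easy to state in code (a sketch; the triple representation of versions is just for illustration):

```python
def satisfies(provided, required):
    """(n, x.y) satisfies a requirement (m, u.v) iff the names and major
    versions match and the provided minor version is at least v."""
    n, x, y = provided
    m, u, v = required
    return n == m and x == u and y >= v

# Compatibility relates versions of the *same* module...
print(satisfies(("C", 1, 2), ("C", 1, 0)))  # True: newer minor version
print(satisfies(("C", 2, 0), ("C", 1, 0)))  # False: major bump, incompatible
# ...whereas a dependency is an edge from one module to a requirement on a
# *different* module; it is satisfied or not, it is never "compatible".
print(satisfies(("B", 1, 0), ("C", 1, 0)))  # False: different modules entirely
```

So an edge in the dependency graph records what a module needs, while compatibility is the rule for deciding which concrete versions may stand in for a requirement.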
The Gallium project at Inria has interesting results on formal management of software dependencies.
This paper (from the EDOS project that bluestorm has mentioned previously) has interesting results. They also map vertices to package names + versions, and have edges for both dependencies and conflicts (fig. 2).
One of the results of that paper (Theorem 1, p. 11) is:
Theorem 1 (Package installability is an NP-complete problem). Checking whether a single package P can be installed, given a repository R, is NP-complete.
But that's a different problem from "select(ing) such configuration of the module versions so all their dependencies are satisfied". I still think that verifying whether all dependencies in a repository are satisfied can be solved in polynomial time.
Ok, I read and now understand the setup. In your correspondence to SAT you introduce compatible modules F_{u,v} which re-export incompatible modules M_{x,y}. I don't understand how the Fs can be compatible if they re-export incompatible things. Can you clarify this?
For each variable Vk in the SAT problem, there is a distinct pair of modules: Mk(1,0) and Mk(2,0). It's worth noting that Mk is completely distinct from M(k+1) and M(k-1).
Now, the Fi(1,1)..Fi(1,X) corresponds to a given clause in the CNF form. The module T only needs one of those, not all of them. Each clause has a distinct 'i'.
So, supposing there were only two clauses:
(va or ~vb or ~vc) and (~va or ~vb or vc)
This corresponds to:
F1(1,1) >> Ma(1,0)
F1(1,2) >> Mb(2,0)
F1(1,3) >> Mc(2,0)
F2(1,1) >> Ma(2,0)
F2(1,2) >> Mb(2,0)
F2(1,3) >> Mc(1,0)
T(1,0) >> F1(1,0)
T(1,0) >> F2(1,0)
Now, note that there is no F1(1,0) and no F2(1,0). But F1(1,1) and F1(1,2) and F1(1,3) can each satisfy F1(1,0) because they're all "minor" versions of the same module.
An incompatibility will exist if and only if there exists some k such that T(1,0) must export Mk(1,0) and Mk(2,0) simultaneously. This corresponds to 3SAT being unsatisfiable if and only if there exists some k such that vk must be held true and false simultaneously.
Clearly the 3SAT problem can be solved by:
(va and ~vb)
or (va and vc)
or (~vb and vc)
This corresponds to T(1,0) loading modules:
(F1(1,1) and F2(1,2)) (which loads Ma(1,0) and Mb(2,0))
or (F1(1,1) and F2(1,3)) (which loads Ma(1,0) and Mc(1,0))
or (F1(1,2) and F2(1,3)) (which loads Mb(2,0) and Mc(1,0))
Hopefully that clarifies how Fs can be compatible even if some of them export incompatible things.
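At this size the correspondence can be checked by brute force (a sketch; the variable names follow the example above):

```python
from itertools import product

def satisfied(va, vb, vc):
    clause1 = va or (not vb) or (not vc)   # (va or ~vb or ~vc)
    clause2 = (not va) or (not vb) or vc   # (~va or ~vb or vc)
    return clause1 and clause2

# Enumerate all eight assignments of (va, vb, vc).
solutions = [bits for bits in product([False, True], repeat=3)
             if satisfied(*bits)]

# Each family listed above appears among the solutions, e.g. va and not vb:
print((True, False, False) in solutions)   # True
# (va=True, vb=True, vc=False) falsifies the second clause:
print((True, True, False) in solutions)    # False
```

Enumeration also turns up satisfying assignments outside the three listed families (for example all variables false), but every assignment drawn from those families does satisfy both clauses, so T(1,0) can indeed be loaded.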
It's worth noting that Mk is completely distinct from M(k+1) and M(k-1).
Yes, good point. I shouldn't have refered to them as 'incompatible modules' when I really just mean 'different'.
But F1(1,1) and F1(1,2) and F1(1,3) can each satisfy F1(1,0) because they're all "minor" versions of the same module.
My point is that they don't have the same interface, though, because they re-export different things. This would seem to me to make them incompatible. If minor versions are only supposed to be backward compatible (within the same major version), it still looks like this is a bad encoding since F1(1,2) doesn't at least have the re-exports of F1(1,1). What am I missing?
Since you're objecting to the realism of the translation, I agree. See my first post on this subject. Your particular argument against that realism - that F1(1,1), F1(1,2), and F1(1,3) are each exporting largely distinct interfaces and thus shouldn't be 'minor versions' of one another - is a good one.
In Chapter 10, Cooperating with Other APIs of my book I even talk about transitivity of incompatible change. If a re-exported API changes incompatibly, so must all its re-exporters. How could I forget to mention that in the context of the proof!?
Well, I probably have not mentioned that because it does not apply to this proof. The F_{1.1} and F_{1.2} and F_{1.3} depend on completely different M^a, M^b and M^c. It is not the case that there would be F_{1.1} depending on M^a_{1.0} and F_{1.2} depending on M^a_{2.0} (such a 3-OR would contain both v and not(v), making it a tautology, so we can ignore it).
But you are right that: if F_{1.1} is re-exporting some M^a and F_{1.2} is re-exporting another M^b, how can F_{1.2} be compatible with F_{1.1}? Hardly.
That probably does not disqualify the proof, but it may lead us to define some easily decidable condition on real repositories that would prevent conversion of 3SAT to such repositories. I can think of using signature-testing tools. Those can verify that a module in version F_{1.2} is really binary compatible with F_{1.1} and, if not, force its renumbering to F_{2.0} before accepting it into the repository. Congratulations, you probably discovered a new way to fight NP-completeness in the library-versioning area.
PS: Of course, Signature tests are only good if we are talking about binary compatibility, they cannot help us with functional one.
This comment is inspired by a note that Equinox is using the SAT4J solver. The module system behind Equinox has its own specifics, and surprisingly it allows the following setup:
Each M^i_{1.0} would export two packages (think of each M^i as a different implementation of the same API, plus some specifics):
package a; public class API {}
package b; public class Mi {} // the i would be a variable
package b; public class Version10 {}
Each M^i_{2.0} would export compatible package a and incompatible package b:
package a; public class API {}
package b; public class Mi {} // the i would be a variable
package b; public class Version20 {}
Now each F_{1.i} would have just one object:
package f; public class F extends a.API {}
Thus F_{1.3} is quite compatible with F_{1.2}, and it is compatible with F_{1.1}. All have the same class F, which extends a.API, and the signature of a.API is in each case the same. This is true in spite of the fact that any two versions of M^i are themselves different and incompatible.
Moreover, Equinox allows F to import just a part of M^i - only the single package a. As a result it seems that this is a perfectly valid setup for that module system. The question for Equinox to solve is which of all the available M^i implementations to choose. Computing the answer is then NP-complete, hence the need for SAT4J.
If you think this can't be real, then you are probably wrong. I too find some features of the module system behind Equinox weird. But in the end, I am not the author of that module system. Our NetBeans Runtime Container does not allow tricks like this.
Thus the F_{1.3} is quite compatible with F_{1.2} and it is compatible with F_{1.1}
The class F contained in F_{1.3} may be compatible with the class F contained in F_{1.2}, but the class F is not the only thing contained in the module F_{1.i} and I would consider the modules incompatible.
You are right, if we always treat a module as a unit. The module F_{1.3} would then re-export the whole M^i, and this compound set of APIs would differ from the compound set of F_{1.2} re-exporting some other M^j. I am aware of that, and thus I used "quite compatible", not "compatible".
But if a module can be split into smaller parts (and Equinox allows importing and re-exporting just a single package from M^i), then the compound view of F_{1.3} is the same as that of F_{1.2}. They are fully "compatible".
I don't know how useful the partial re-export of a module is, but it is clearly quite dangerous (e.g. it allows the NP-complete problem to really appear in real life).
This relevant paper just showed up on the programming subreddit:.
Seems a solid result, some overdue remarks/questions.
Isn't NP-hardness shown instead of NP-completeness? [Just add that it takes polynomial time to verify a solution.]
The result only holds for a very stringent definition of a module system. [I.e., an assumption that incompatible modules cannot be installed at the same time may not hold for all languages.]
Even if NP-complete, it is very unlikely any 'hard' problem will be encoded, so I expect almost all problems can be solved in microseconds - which isn't a lot for a library problem. [The result just implies that simple DFS may not be the best candidate for calculating a solution, but a DPLL solver (adoption) or a constraint solver will probably calculate a solution very fast.]
Documentation Team/API Documentation
Contents
Definition
Good apidox take some effort to write -- but less effort than repeatedly answering questions about undocumented code.
Preamble
APIDOX are what make a program accessible to other contributors. They are not essential, but they certainly help a great deal for new people who want to work on your code, or better yet, re-use it elsewhere with a minimum of modification.
Look at the KDE documentation to get a feeling of what good apidox look like. There is a consistency of style, a thoroughness which permeates all the documentation. It is possible to learn a lot about KDE just by reading its apidox.
APIDOX Basics
APIDOX are generated by the Doxygen documentation tool. This tool reads the source code for your application (or library) and produces nicely formatted documentation from it. There is a good reference manual available -- but let's hope to make it unnecessary to read, for basic use anyway.
You don't even need to have Doxygen installed to work on apidox for your application. Every few hours, api.sugarlabs.org compiles apidox for all of the Sugar modules. Sugar apidox consist of two types of comments: Python docstrings and Doxygen directives. Docstrings are surrounded by """ (quote, quote, quote).
Doxygen directives are one-line comments preceded by a #. They are short statements that help Doxygen determine the organization of your package.
But documentation can be very straightforward: just write down what a method does, surrounded by """ and """, like this:
def increaseIQ(IQ):
    """
    This method increases the IQ of end users. Therefore, it should be
    called whenever possible (i.e. instead of having idle time, you might
    think of calling this method). You might even insert it into your own
    event loop to ensure it is called as often as possible. If these calls
    decrease the number of new features, it's still no problem to call it.
    """
Note: This process is slightly different from the instruction on the Doxygen site. Standard Doxygen comments are surrounded by /** and */. In order to maintain compatibility with standard python documentation, api.sugarlabs.org runs a preprocessor that converts docstrings into doxygen comments.
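Because the comments remain ordinary Python docstrings, standard Python tooling sees the same text that Doxygen renders. A small sketch (the function name is invented):

```python
def increase_iq(iq):
    """Increase the IQ of end users.

    This docstring is read by the Doxygen preprocessor on
    api.sugarlabs.org, and by standard Python tools alike.
    """
    return iq + 1

# The docstring is available at runtime:
print(increase_iq.__doc__.splitlines()[0])  # Increase the IQ of end users.
```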
For proper apidox, you need to document every "thing" in your program. "Things" here are: Sugar modules, Python packages, Python modules, classes, functions, and variables. Complete apidox look something like this:
# \file example.py
# \brief An example Python program.

# Demo class
class Demo:
    """
    \brief A demo class, it's really just for demonstration.

    The detailed description of the class would appear right here.
    However, as this class is utterly useless when talking about its
    functionality it actually has no detailed description, which is sort
    of a pity, since a couple of lines of documentation would make it
    look like a real documentation. But as this is just an example of
    how the doxygen output might look like, a one-liner has to be enough.

    Insert your documentation here as appropriate. You get the idea now,
    don't you? If not, I can't help it but I certainly won't type in a
    lot of nonsense just to make it look \em real. No, definitely not.
    """

    def __init__(self):
        """The constructor."""

    def foo(self, bar):
        """
        The infamous foo method.

        There's no detailed description necessary for the \em foo()
        function as everybody knows what it does.

        \param bar The \a bar argument is compulsory, never leave it out.
        \return The \a bar input after processing by the \em foo() function.
        """
        pass

    ## protected:
    def spam(self, amount):
        """
        Return an amount of spam.

        \param amount (\c int) The amount of spam.
        \return An amount of spam.
        """
        return amount * "spam"

# Another demo class
class AnotherDemo(Demo):
    """\brief This class is derived from the demo class."""

    def __init__(self):
        pass
You can see here that each nesting level of "things" is documented with a comment -- the module, the class and the method. Things that are private do not need apidox. Mail me (dfarning@sugarlabs.org) to enable apidox generation for your module. And I will.
Writing APIDOX in New Code
1. Document packages completely
Packages are what's most visible to users of your code (in this context, users are developers who are re-using it), and they should be complete. Document each structural bit of the package as you go along. This means:
- Every package should have a package comment.
In the root directory of your package, create a file named Mainpage.dox. The first line must be the @mainpage Doxygen directive followed by your package name. Next is the package comment, wrapped in a docstring. Last are some apidox variables which help define the menu structure.
# @mainpage Sugar
"""
Here is the description of the package named sugar.
"""
//DOXYGEN_SET_PROJECT_NAME=Sugar
//DOXYGEN_NAME=Sugar-module
//DOXYGEN_EXCLUDE
- Every module should have a module comment.
A given module only needs a comment once in your source tree (or within one bunch of files that generate apidox together). Near the top of each module __init__.py file, create a @mainpage <module_name> directive followed by a description of what the module is for and what it defines. Wrap this up in a docstring.
"""
@mainpage Control Panel

Here is a description of the module named controlPanel.
"""
- Every class should have a class comment.
The same caveats apply as with namespace apidox: make sure the class follows its apidox immediately.
- Every function should have a comment.
3. Watch this space!
Watch api.sugarlabs.org for the results of your apidox work. Check the log files for errors -- Doxygen can complain quite loudly.
4. Write a main page for your application.
This is usually done in a separate file in the top-level of a library or application. The file's content is just a single apidox comment that starts with /** @mainpage title; the rest of the file is just a long comment about what the library or application is for.
Introduction to ActionScript 2.0/Properties and Methods
Key concepts:
- Properties and methods
- Global functions and variables
- The main timeline as a MovieClip
- Dot access
- this, this._parent and with
- Property access with []
- Basic MovieClip methods and properties
- Initialisation with constructors
- Object literals
- null and undefined
Properties and methods are the two components that constitute a class. A property is a variable that belongs to a certain class. A method is a function that belongs to a certain class. This chapter is about this important twosome.
What are top-level functions and global variables?
ActionScript provides a huge number of built-in functions. Some of these functions are methods of built-in classes like MovieClip and Math. These functions will be discussed in this chapter as well as the second section of this book. Other built-in functions do not belong to a built-in class. They can be used whenever you want, wherever you want in your file. Our beloved trace() is one of these functions. These functions are called top-level functions or global functions.
A global variable is a variable that can be accessed and changed anywhere you want. The _quality variable is a good example: it affects the quality of the entire SWF. They are provided by Flash.[1]
Breaking news!
We interrupt this chapter to bring you some breaking news!
Since you made it to this chapter, it's time to break the news to you: the main timeline (the timeline you've been working on all along) is a MovieClip instance.
What does this imply?
All objects have properties and methods, remember? Therefore, the main timeline has various built-in properties and methods we have to learn. We will learn some of them in this chapter, and save the rest for the next section.
Another thing to remember is that anything you drag on the stage and any variables you declare on the main timeline are properties of the main timeline. The reason why new properties can be added to the instance is because MovieClip is a dynamic class. We will talk about dynamic classes in the third section.
How can I access the property of another object?
The most common way to access the property of another object is as follows:
objectName.propertyName
Note that in order for this to work, we have to ensure that objectName is a property of what we're working on.
Let's try this out. Open Flash. Create a new MovieClip in the library. Draw an apple in it and drag it anywhere onto the stage. Give it the instance name of 'apple'.[2] Now click on the apple again; on its first frame, type in var juiciness:Number = 5. (Recall: all variables you declare in a timeline are properties of that MovieClip.) Type in the following code:
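For example (assuming the apple instance and its juiciness variable set up above):

```actionscript
trace(apple.juiciness); // 5
```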
In this code, we have traced the juiciness property of the apple using the dot operator.
What are this and _parent?
this is used to address the current location. For example, if you're in the Main Timeline, this.crunchiness will return the crunchiness of the main timeline. Let's try out our apple example again with this.
This does the same thing, except it explicitly states that apple is a property of the current object (the main timeline). As you can see, this is not always necessary.
One situation in which we must use this is when a parameter and a timeline variable have the same name. In this case, one must put this before the timeline variable so that the computer knows you're not talking about the local variable. Here's an example:
In this example, return (someString + " " + this.someString); concatenates the local variable someString, a space character and the timeline variable someString together.
So far, we've only been referencing the apple from the main timeline. Is it possible to reference the main timeline from the apple? The answer: yes, and it involves _parent.
Open the Flash file with the apple again and type in the following code:
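A one-line sketch, placed on the apple's first frame (assuming a crunchiness variable has been declared on the main timeline):

```actionscript
trace(this._parent.crunchiness);
```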
In this example, the apple is contained in the main timeline, so the main timeline is the apple's 'parent'. Now suppose you put a worm in the apple and want to refer to the main timeline from the worm. No problem: you can put in as many layers of _parent as necessary:
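A sketch, this time placed on the worm's first frame (again assuming a crunchiness variable on the main timeline):

```actionscript
trace(this._parent._parent.crunchiness);
```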
The worm's parent is the apple and the apple's parent is the main timeline. Works like magic!
What's with with?
Now suppose the worm goes on a quest to reduce the crunchiness and sweetness of the main timeline and the juiciness of the apple. We can do this:
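A sketch of what that might look like on the worm's timeline (the crunchiness, sweetness and juiciness variables are assumptions carried over from the earlier examples):

```actionscript
this._parent._parent.crunchiness -= 1;
this._parent._parent.sweetness -= 1;
this._parent.juiciness -= 1;
```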
Now suppose we're too lazy to type in 'this._parent' so many times. We can use with to solve this problem:
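A sketch of the same quest using with (same assumptions as before; this._parent gets prepended to each statement in the block):

```actionscript
with (this._parent) {
    _parent.crunchiness -= 1;
    _parent.sweetness -= 1;
    juiciness -= 1;
}
```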
That saves us from having to type in this._parent three times. All the computer has to do to interpret your code is append this._parent in front of each statement in the block.
What if I want to treat the property name as a string?
Sometimes, we need to treat a property name as a string so that we can concatenate several strings to form the property name. Here's an example:
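A sketch (the property and variable names are illustrative; note that the concatenated string must match the property name exactly, including case):

```actionscript
var catFood:String = "fish";
var currentAnimal:String = "cat";
trace(this[currentAnimal + "Food"]); // fish
```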
In this example, we concatenated currentAnimal ('cat') and 'Food' to form the property name catFood, returning the string 'fish'.[3]
How can I access an object's properties through a variable?
In the second chapter, we've learnt that by assigning anything other than a primitive data type to a variable, we are putting a link to that object inside the variable. We can also access an object's properties with a variable:
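A sketch (assuming the apple instance from before; the variable stores a reference, not a copy):

```actionscript
var someApple:MovieClip = apple;
someApple._x = 50;   // moves the original apple
trace(apple._x);     // 50
```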
What are some MovieClip properties and methods that I must know?
The following are some MovieClip properties that will be used throughout the rest of the section.[4]
- _x and _y (both numbers) indicate the position of a MovieClip relative to its parent. Remember that in ActionScript, the higher the value of _y, the lower the position of the MovieClip. The origin (0,0) is located on the top-left corner. This may be counter-intuitive if you are familiar with Cartesian planes.
- _width and _height (both numbers) indicate width (horizontal) and height (vertical) dimension of a MovieClip respectively.
- _visible indicates whether the MovieClip is visible. If _visible is true, the MovieClip is visible and vice versa.
- _currentframe indicates which frame the MovieClip is currently on.
_x, _y, _width, _height and _visible can all be changed by assigning a different value to them. Open the Flash file with the apple again and type in the following code:
apple._x = 100;
apple._y = 200;
apple._width = 100;
apple._height = 100;
You should find noticeable changes in the position and dimensions of the apple (unless, of course, your apple happened to have exactly the same position and dimensions on the stage!)
Unlike the others, _currentframe cannot be changed with assignment statements. Instead, several functions are used to manipulate the current frame:
- play() and stop() will play and stop playing the frames in order respectively. The default is to play, so whenever you want to stop at a frame, you should add the stop() code. For example, if you put nothing on the first five frames, then stop() on the sixth, Flash will play until the sixth frame.
- gotoAndPlay(frame) and gotoAndStop(frame) will go to the specified frame, then play and stop the Flash respectively. If there is a stop() statement on the target frame of gotoAndPlay, Flash will stop, and if there is a play() statement on the target frame of gotoAndStop, Flash will play.
- prevFrame() and nextFrame() send the playhead to the previous and next frame respectively, and stop there unless a play() statement is on the target frame.
More properties and methods will be introduced in the second section.
How can I call a method of another object?
Calling a method of another object is just as easy as accessing its variables. Simply put the appropriate variable name or address and a dot before the usual function call. Let's say we have a second frame on our apple MovieClip with a half-eaten apple.
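A sketch of how this could be wired up (the frame numbers and the 'First frame!' message are assumptions):

```actionscript
// on the worm's first frame
trace("First frame!");

// on the main timeline's first frame
apple.gotoAndStop(2);
```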
In this example, the worm symbol traces 'First frame!' when it is initialised, but the main timeline will quickly make the apple symbol go to the second frame.
What is a constructor?
A constructor function is the most important method of a class. It is used to initialise an instance of a class. Remember the syntax for initialisation that we learnt in the first chapter? Although it worked perfectly for the primitive data types, it is more complicated for the composite data types (which you can't type in directly).
We won't learn how to make our own constructor functions until the third chapter. For now, let's stick to calling constructor functions. The syntax for calling a constructor function is as follows:[5]
var variableName:DataType = new DataType(parameter1, parameter2);
For instance, let's say there's a Dog class and we want to make a new Dog instance. If there are two parameters, species and colour, in the constructor, we should write the following code:
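The call would look like this (assuming such a Dog class exists):

```actionscript
var spot:Dog = new Dog("chihuahua", "black");
```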
This creates a new spot instance of the Dog class with the species 'chihuahua' and the colour 'black'.
Constructor functions can also be applied to the primitive data types when they are initialised. This will be covered in the second section.
It is possible to use a constructor function outside of variable declaration/assignment statements. This will create an instance without a warm and cozy variable home.
When don't I need a constructor?
Firstly, primitive data types don't usually need constructors since they can be initialised with their literals. For example, var someString:String = "Wikibooks" will do, and you don't have to write someString:String = new String("Wikibooks").[6]
We don't need to use the constructor function on the Object class, either. (Remember the Object class? It is the grand-daddy of all classes.) Instead, it can be generated like this:[7]
var variableName:Object = {property1:value1, property2:value2, ..., method1:function1, method2:function2 ...};
The function1, function2, etc., should be replaced with the syntax for putting a function inside a variable (which we've met in the functions chapter). Look at this example:
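For example, creating an object with crunchiness 7 and juiciness 5:

```actionscript
var apple:Object = {crunchiness:7, juiciness:5};
trace(apple.crunchiness); // 7
```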
A new object is created with 7 as its crunchiness and 5 as its juiciness.
What are null and undefined?
In computer science, if a value is null or undefined, there isn't anything in it. For example, the key field in a database table must be unique and non-null, which means it can't be empty. In ActionScript, null and undefined are two flexible, all-purpose values that can be put into variables of any type. A variable which is not yet defined (i.e. you haven't assigned any value to it and haven't called its constructor) is automatically set to undefined. Null and undefined are considered equivalent values with the normal equality operator (i.e. null == undefined returns true), but not the strict equality operator (i.e. null === undefined returns false). Consider the following example:
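A sketch matching that description (variable names taken from the explanation):

```actionscript
var someNumber:Number;
trace(someNumber);           // undefined
someNumber = null;
var someString:String = undefined;
trace(undefined == null);    // true
trace(undefined === null);   // false
```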
In this example, someNumber is at first undefined, then set to null. Next, someString is set to undefined. (That is just to show that undefined can fit into any data type.) Finally, it is found that undefined is equal, but not strictly equal, to null.
One should be more careful when using the ! operator with null and undefined.
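A sketch (variable names taken from the explanation that follows):

```actionscript
var someBoolean:Boolean;
trace(!someBoolean);  // true
var someNumber:Number;
trace(!someNumber);   // true
```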
In the first trace, someBoolean was undefined. !someBoolean means 'someBoolean is not true', which is different from saying 'someBoolean is false'. Since someBoolean was undefined, it was 'not true' (but not false either), so !someBoolean is true. Also note that a number can be used with the ! operator when it's undefined, as shown in the second trace statement.
Notes
- ↑ It is also possible to define your own global variables like this: _global.variableName = value; Notice that global variables are not strongly typed. Defining your own global variables is a deprecated practice and should be used sparingly.
- ↑ You can change the instance name of any MovieClip in the Properties panel.
- ↑ In fact, there is more to this syntax than meets the eye, but let's keep up the suspense until the third section.
- ↑ In ActionScript 3.0, all the leading underscores have been abolished.
- ↑ Instances of the MovieClip and TextField classes are created with special methods. We'll discuss them in the second section.
- ↑ There is a subtle difference between these two. "Wikibooks" is just a primitive string literal, while new String("Wikibooks") is an object of the String wrapper class. We will learn more about this in the second section.
- ↑ Note that the parts enclosed in braces are also considered a literal of the object. | https://en.m.wikibooks.org/wiki/Introduction_to_ActionScript_2.0/Properties_and_Methods | CC-MAIN-2017-39 | refinedweb | 2,148 | 66.84 |
console games in Scala
I've been thinking about rich console applications, the kind of apps that can display things graphically, not just appending lines at the end. Here is some info, enough parts to be able to write Tetris.
ANSI X3.64 control sequences
To display some text at an arbitrary location on a terminal screen, we first need to understand what a terminal actually is. In the middle of the 1960s, companies started selling minicomputers such as the PDP-8, and later the PDP-11 and VAX-11. These were the size of a refrigerator, purchased by "computer labs", and ran operating systems like RT-11 and the original UNIX system that supported many simultaneous users (12 ~ hundreds?). The users connected to a minicomputer using a physical terminal that looks like a monochrome screen and a keyboard. The classic terminal is the VT100, which was introduced in 1978 by DEC.
VT100 supports 80x24 characters, and it was one of the first terminals to adopt the ANSI X3.64 standard for cursor control. In other words, programs can output a character sequence to move the cursor around and display text at an arbitrary location. Modern terminal apps are sometimes called terminal emulators because they emulate the behavior of terminals such as the VT100.
A good reference for the VT100 control sequences can be found at:
CUP (Cursor Position)
ESC [ <y> ; <x> H    CUP    Cursor Position
Cursor moves to <x>; <y> coordinate within the viewport, where <x> is the column of the <y> line
In the above, ESC stands for 0x1B. Here's Scala code to display "hello" at (2, 4):
print("\u001B[4;2Hhello")
CUB (Cursor Backward)
ESC [ <n> D    CUB    Cursor Backward
Cursor backward (Left) by <n>
This is a useful control sequence to implement a progress bar.
(1 to 100) foreach { i =>
  val dots = "." * ((i - 1) / 10)
  print(s"\u001B[100D$i% $dots")
  Thread.sleep(10)
}
Saving cursor position
ESC [ s
With no parameters, performs a save cursor operation like DECSC
ESC [ u
With no parameters, performs a restore cursor operation like DECRC
These can be used to save and restore the current cursor position.
Text formatting
ESC [ <n> m    SGR    Set Graphics Rendition
Set the format of the screen and text as specified by <n>
Using this sequence, we can change the color of the text. For example, 36 is Foreground Cyan, 1 is Bold, and 0 is reset to default.
print("\u001B[36mhello, \u001B[1mhello\u001B[0m")
ED (Erase in Display)
ESC [ <n> J    ED    Erase in Display
Replace all text in the current viewport/screen specified by <n> with space characters
Specifying 2 for <n> means erasing the entire viewport:
print("\u001B[2J")
EL (Erase in Line)
ESC [ <n> K    EL    Erase in Line
Replace all text on the line with the cursor specified by <n> with space characters
Especially when the text is scrolling up and down, it's convenient to be able to erase the entire line. Specifying 2 for <n> does that:
println("\u001B[2K")
SU (Scroll Up)
ESC [ <n> S    SU    Scroll Up
Scroll text up by <n>. Also known as pan down; new lines fill in from the bottom of the screen
Let's say you want to take over the bottom half of the screen, but let the top half be used for scrolling text. Scroll Up sequence can be used to shift the text position upwards.
On REPL, we can do something like:
- Save the cursor position
- Move the cursor to (1, 4)
- Scroll up by 1
- Erase the line
- Print something
- Restore the cursor position
scala> print("\u001B[s\u001B[4;1H\u001B[S\u001B[2Ksomething 1\u001B[u")
scala> print("\u001B[s\u001B[4;1H\u001B[S\u001B[2Ksomething 2\u001B[u")
scala> print("\u001B[s\u001B[4;1H\u001B[S\u001B[2Ksomething 3\u001B[u")
Jansi
On JVM, there's a library called Jansi that provides support for ANSI X3.64 control sequences. When some of the sequences are not available on Windows, it uses system API calls to emulate it.
Here's how we can write the cursor position example using Jansi.
scala> import org.fusesource.jansi.{ AnsiConsole, Ansi }
import org.fusesource.jansi.{AnsiConsole, Ansi}

scala> AnsiConsole.out.print(Ansi.ansi().cursor(6, 10).a("hello"))
hello
Box drawing characters
Another innovation of VT100 was adding custom characters for box drawing. Today, they are part of Unicode box-drawing symbols.
┌───┐
│   │
└───┘
Here's a small app that draws a box and a Tetris block.
package example

import org.fusesource.jansi.{ AnsiConsole, Ansi }

object ConsoleGame extends App {
  val b0 = Ansi.ansi().saveCursorPosition().eraseScreen()
  val b1 = drawbox(b0, 2, 6, 20, 5)
  val b2 = b1
    .bold
    .cursor(7, 10)
    .a("***")
    .cursor(8, 10)
    .a(" * ")
    .reset()
    .restoreCursorPosition()
  AnsiConsole.out.println(b2)

  def drawbox(b: Ansi, x0: Int, y0: Int, w: Int, h: Int): Ansi = {
    val topStr = "┌" + "─" * (w - 2) + "┐"
    val wallStr = "│" + " " * (w - 2) + "│"
    val bottomStr = "└" + "─" * (w - 2) + "┘"
    val walls = (0 until h - 2).foldLeft(b.cursor(y0, x0).a(topStr)) { (b: Ansi, i: Int) =>
      b.cursor(y0 + i + 1, x0).a(wallStr)
    }
    walls.cursor(y0 + h - 1, x0).a(bottomStr)
  }
}
BuilderHelper datatype
A minor annoyance with Jansi is that if you want to compose the drawings, we need to keep passing the Ansi object around in the correct order. This can be solved quickly using the State datatype. Since the name State might get confusing with the game's state, I am going to call it BuilderHelper.
package example

class BuilderHelper[S, A](val run: S => (S, A)) {
  def map[B](f: A => B): BuilderHelper[S, B] = {
    BuilderHelper[S, B] { s0: S =>
      val (s1, a) = run(s0)
      (s1, f(a))
    }
  }

  def flatMap[B](f: A => BuilderHelper[S, B]): BuilderHelper[S, B] = {
    BuilderHelper[S, B] { s0: S =>
      val (s1, a) = run(s0)
      f(a).run(s1)
    }
  }
}

object BuilderHelper {
  def apply[S, A](run: S => (S, A)): BuilderHelper[S, A] = new BuilderHelper(run)
  def unit[S](run: S => S): BuilderHelper[S, Unit] = BuilderHelper(s0 => (run(s0), ()))
}
This lets us refactor the drawing code as follows:
package example

import org.fusesource.jansi.{ AnsiConsole, Ansi }

object ConsoleGame extends App {
  val drawing: BuilderHelper[Ansi, Unit] = for {
    _ <- Draw.saveCursorPosition
    _ <- Draw.eraseScreen
    _ <- Draw.drawBox(2, 4, 20, 5)
    _ <- Draw.drawBlock(10, 5)
    _ <- Draw.restoreCursorPosition
  } yield ()
  val result = drawing.run(Ansi.ansi())._1
  AnsiConsole.out.println(result)
}

object Draw {
  def eraseScreen: BuilderHelper[Ansi, Unit] =
    BuilderHelper.unit { _.eraseScreen() }

  def saveCursorPosition: BuilderHelper[Ansi, Unit] =
    BuilderHelper.unit { _.saveCursorPosition() }

  def restoreCursorPosition: BuilderHelper[Ansi, Unit] =
    BuilderHelper.unit { _.restoreCursorPosition() }

  def drawBlock(x: Int, y: Int): BuilderHelper[Ansi, Unit] =
    BuilderHelper.unit { b: Ansi =>
      b.bold
        .cursor(y, x)
        .a("***")
        .cursor(y + 1, x)
        .a(" * ")
        .reset
    }

  def drawBox(x0: Int, y0: Int, w: Int, h: Int): BuilderHelper[Ansi, Unit] =
    BuilderHelper.unit { b: Ansi =>
      val topStr = "┌" + "─" * (w - 2) + "┐"
      val wallStr = "│" + " " * (w - 2) + "│"
      val bottomStr = "└" + "─" * (w - 2) + "┘"
      val walls = (0 until h - 2).foldLeft(b.cursor(y0, x0).a(topStr)) { (bb: Ansi, i: Int) =>
        bb.cursor(y0 + i + 1, x0).a(wallStr)
      }
      walls.cursor(y0 + h - 1, x0).a(bottomStr)
    }
}
All I am doing here is avoiding the creation of b0, b1, b2, etc., so if this code is confusing you don't have to use BuilderHelper.
Input sequence
Thus far we've looked at control sequences sent by the program, but the same protocol can be used by the terminal to talk to the program via the keyboard. Under ANSI X3.64 compatible mode, the arrow keys on VT100 sent CUU (Cursor Up), CUD (Cursor Down), CUF (Cursor Forward), and CUB (Cursor Back) respectively. This behavior remains the same for terminal emulators such as iTerm2.
In other words, when you hit the Left arrow key, ESC + "[D", or "\u001B[D", is sent to the standard input. We can read bytes off of the standard input one by one and try to parse the control sequence.
var isGameOn = true
var pending = ""
val escStr = "\u001B"
val escBracket = escStr.concat("[")
def clearPending(): Unit = { pending = "" }

while (isGameOn) {
  if (System.in.available > 0) {
    val x = System.in.read.toByte
    if (pending == escBracket) {
      x match {
        case 'A' => println("Up")
        case 'B' => println("Down")
        case 'C' => println("Right")
        case 'D' => println("Left")
        case _   => ()
      }
      clearPending()
    } else if (pending == escStr) {
      if (x == '[') pending = escBracket
      else clearPending()
    } else x match {
      case '\u001B' => pending = escStr
      case 'q'      => isGameOn = false
      // Ctrl+D to quit
      case '\u0004' => isGameOn = false
      case c        => println(c)
    }
  } // if
}
This is not that bad for simple games, but it could get more tricky if the combination gets more advanced, or if we start to take Windows terminals into consideration.
JLine2, maybe
On JVM, there's JLine2 that implements a concept called KeyMap. KeyMap maps a sequence of bytes into an Operation.
Because JLine is meant to be a line editor, like what you see on Bash or the sbt shell with history and tab completion, the operations reflect that. For example, the up arrow is bound to Operation.PREVIOUS_HISTORY. Using JLine2, the code above can be written as follows:
import jline.console.{ ConsoleReader, KeyMap, Operation }

var isGameOn = true
val reader = new ConsoleReader()
val km = KeyMap.keyMaps().get("vi-insert")

while (isGameOn) {
  val c = reader.readBinding(km)
  val k: Either[Operation, String] =
    if (c == Operation.SELF_INSERT) Right(reader.getLastBinding)
    else Left(c match { case op: Operation => op })
  k match {
    case Right("q")                   => isGameOn = false
    case Left(Operation.VI_EOF_MAYBE) => isGameOn = false
    case _                            => println(k)
  }
}
I kind of like the raw simplicity of reading from System.in, but JLine2 looks a bit more cleaned up, so it's up to whichever you are more comfortable with.
Listening to keyboard in the background
What the System.in code makes clear is that waiting for keyboard input is equivalent to reading from a file. Another observation is that most of the microseconds will be spent waiting for the user. So what we want to do is grab user inputs in the background and, when we are ready, periodically pick them up and handle them.
We can do this by writing the keypress events into Apache Kafka. Haha, I am just kidding. Except, not completely. Kafka is a log system that programs can write events into, and other programs can read off of it when they want to.
Here's what I did:
import jline.console.{ ConsoleReader, KeyMap, Operation }
import scala.concurrent.{ blocking, Future, ExecutionContext }
import java.util.concurrent.atomic.AtomicBoolean
import java.util.concurrent.ArrayBlockingQueue

val reader = new ConsoleReader()
val isGameOn = new AtomicBoolean(true)
val keyPressses = new ArrayBlockingQueue[Either[Operation, String]](128)
import ExecutionContext.Implicits._

// inside a background thread
val inputHandling = Future {
  val km = KeyMap.keyMaps().get("vi-insert")
  while (isGameOn.get) {
    blocking {
      val c = reader.readBinding(km)
      val k: Either[Operation, String] =
        if (c == Operation.SELF_INSERT) Right(reader.getLastBinding)
        else Left(c match { case op: Operation => op })
      keyPressses.add(k)
    }
  }
}

// inside main thread
while (isGameOn.get) {
  while (!keyPressses.isEmpty) {
    Option(keyPressses.poll) foreach { k =>
      k match {
        case Right("q")                   => isGameOn.set(false)
        case Left(Operation.VI_EOF_MAYBE) => isGameOn.set(false)
        case _                            => println(k)
      }
    }
  }
  // draw game etc..
  Thread.sleep(100)
}
To spawn a new thread, I am using scala.concurrent.Future with the default global execution context. It blocks for user input, and then appends the key press into an ArrayBlockingQueue.
If you run this, and type Left, Right, 'q', you'd see something like:
Left(BACKWARD_CHAR)
Left(FORWARD_CHAR)
[success] Total time: 3 s
Processing the key presses
We can now move the current block using the key press. To track the position, let's declare a GameState datatype as follows:
case class GameState(pos: (Int, Int))

var gameState: GameState = GameState(pos = (6, 4))
Next we can define a state transition function based on the key press:
def handleKeypress(k: Either[Operation, String], g: GameState): GameState =
  k match {
    case Right("q") | Left(Operation.VI_EOF_MAYBE) =>
      isGameOn.set(false)
      g
    // Left arrow
    case Left(Operation.BACKWARD_CHAR) =>
      val pos0 = g.pos
      g.copy(pos = (pos0._1 - 1, pos0._2))
    // Right arrow
    case Left(Operation.FORWARD_CHAR) =>
      val pos0 = g.pos
      g.copy(pos = (pos0._1 + 1, pos0._2))
    // Down arrow
    case Left(Operation.NEXT_HISTORY) =>
      val pos0 = g.pos
      g.copy(pos = (pos0._1, pos0._2 + 1))
    // Up arrow
    case Left(Operation.PREVIOUS_HISTORY) =>
      g
    case _ =>
      // println(k)
      g
  }
Finally we can call handleKeypress inside the main while loop:
// inside the main thread
while (isGameOn.get) {
  while (!keyPressses.isEmpty) {
    Option(keyPressses.poll) foreach { k =>
      gameState = handleKeypress(k, gameState)
    }
  }
  drawGame(gameState)
  Thread.sleep(100)
}

def drawGame(g: GameState): Unit = {
  val drawing: BuilderHelper[Ansi, Unit] = for {
    _ <- Draw.drawBox(2, 2, 20, 10)
    _ <- Draw.drawBlock(g.pos._1, g.pos._2)
    _ <- Draw.drawText(2, 12, "press 'q' to quit")
  } yield ()
  val result = drawing.run(Ansi.ansi())._1
  AnsiConsole.out.println(result)
}
Running this looks like this:
Combining with logs
Let's see if we can combine this with the Scroll Up technique.
var tick: Int = 0

// inside the main thread
while (isGameOn.get) {
  while (!keyPressses.isEmpty) {
    Option(keyPressses.poll) foreach { k =>
      gameState = handleKeypress(k, gameState)
    }
  }
  tick += 1
  if (tick % 10 == 0) {
    info("something ".concat(tick.toString))
  }
  drawGame(gameState)
  Thread.sleep(100)
}

def info(msg: String): Unit = {
  AnsiConsole.out.println(Ansi.ansi()
    .cursor(5, 1)
    .scrollUp(1)
    .eraseLine()
    .a(msg))
}
Here, I am outputting a log every second at (1, 5) after scrolling the text upwards. This should retain all the logs in the scroll buffer since I am not overwriting them.
I am sure there are lots of other techniques like this using ANSI control sequences. I've used Java libraries like Jansi and JLine2, but there's nothing JVM- or library-dependent in what I've shown.
A note on Windows 10
According to Windows Command-Line: The Evolution of the Windows Command-Line written in June, 2018:
In particular, the Console was lacking many features expected of modern *NIX compatible systems, such as the ability to parse & render ANSI/VT sequences used extensively in the *NIX world for rendering rich, colorful text and text-based UI's. What, then, would be the point of building WSL if the user would not be able to see and use Linux tools correctly?
So, in 2014, a new, small, team was formed, charged with the task of unravelling, understanding, and improving the Console code-base … which by this time was ~28 years old - older than the developers working on it!
It seems like the Console on Windows 10 is now compatible with VT100 control sequences, which also explains why we've been using Microsoft's page as a reference guide.
Summary.
Libraries like Jansi and JLine2 make some of this code nicer to read and write. In addition, they provide a fallback on Windows, but I am not sure how well it works on modern Windows 10 versus older versions.
The code example used in this post is available at.
C# is a programming language based on object-oriented programming concepts. Like other programming languages such as Java, it supports multithreading, so this article provides the details of multithreading in C# with console application examples.
A multithread program contains two or more parts that can run concurrently. Each part of such a program is called thread, and each thread defines a separate path of execution. Thus, multithreading is a specialized form of multitasking.
How does multithreading work?
To use multithreading we have to use the Threading namespace, which is included in System. The System.Threading namespace includes everything we need for multithreading and consists of a number of classes.
System.Threading.Thread is the main class for creating threads and controlling them.
The Thread class has a number of methods. A few interesting methods are shown below:
- Start(): starts the execution of the thread.
- Suspend(): suspends the thread, if the thread is already suspended, nothing happens.
- Resume(): resumes a thread that has been suspended.
- Interrupt(): interrupts a thread that is in the wait, sleep, or join state.
- Join(): blocks the calling thread until the thread terminates.
- Sleep(int x): suspends the thread for a specified amount of time (in milliseconds).
- Abort(): begins the process of terminating the thread. Once the thread terminates, it cannot be restarted by calling Start() again.
- Yield(): is used to yield the execution of the current thread to another thread.
This class also has a number of interesting properties as shown below:
- IsAlive: if true, signifies that thread has been started and has not yet been terminated or aborted
- Name: gets/sets the name of the thread
- Priority: gets/sets the scheduling priority of a thread
- ThreadState: gets a value containing the state of the current thread
- IsBackground: is used to get or set whether the current thread runs in the background.
- ManagedThreadId: is used to get a unique id for the current managed thread.
Let's take a look at a simple threading example:
using System;
using System.Threading;

namespace MultiThreadingInCSharp
{
    class Program
    {
        static void Main(string[] args)
        {
            ThreadStart job = new ThreadStart(ThreadProcess);
            Thread thread = new Thread(job);
            thread.Start();

            for (int i = 0; i < 5; i++)
            {
                Console.WriteLine("Main thread: {0}", i);
                Thread.Sleep(1000);
            }
        }

        static void ThreadProcess()
        {
            for (int i = 0; i < 10; i++)
            {
                Console.WriteLine("ThreadProcess thread: {0}", i);
                Thread.Sleep(500);
            }
        }
    }
}
Output:
Main thread: 0
ThreadProcess thread: 0
ThreadProcess thread: 1
Main thread: 1
ThreadProcess thread: 2
ThreadProcess thread: 3
ThreadProcess thread: 4
Main thread: 2
ThreadProcess thread: 5
Main thread: 3
ThreadProcess thread: 6
ThreadProcess thread: 7
Main thread: 4
ThreadProcess thread: 8
ThreadProcess thread: 9
The code creates a new thread which runs the ThreadProcess method.
Now let's look at another example:

using System;
using System.Threading;

public class Program
{
    public static void Seconday_Thread()
    {
        Console.WriteLine("Secondary thread created");
        Thread current_thread = Thread.CurrentThread;
        Console.WriteLine("The details of the thread are : ");
        Console.WriteLine("Thread Name : " + current_thread.Name);
        Console.WriteLine("Thread State: " + current_thread.ThreadState.ToString());
        Console.WriteLine("Thread priority level:" + current_thread.Priority.ToString());
        Console.WriteLine("Secondary thread terminated");
    }

    public static void Main()
    {
        ThreadStart thr_start_func = new ThreadStart(Seconday_Thread);
        Console.WriteLine("creating the Secondary thread");
        Thread sThread = new Thread(thr_start_func);
        sThread.Name = "Seconday_thread";
        sThread.Start(); // starting the thread
    }
}
In this example, we are creating a new thread called sThread, which when started executes the function called Seconday_Thread(). Note the use of the delegate called ThreadStart that contains the address of the function that needs to be executed when the thread's Start() is called.
Output:
creating the Secondary thread
Secondary thread created
The details of the thread are :
Thread Name : Seconday_thread
Thread State: Running
Thread priority level:Normal
Secondary thread terminated
For creating threading applications, I have mentioned all the methods above, but there are some important methods which are used regularly when implementing threading:
- Thread Join
- Thread Sleep
- Thread Abort
We have already seen an example of Thread.Sleep; now I will show you examples of Thread.Join and Thread.Abort, so you can understand how to implement them.
Thread Join example in C#
Thread.Join() blocks the calling thread until the thread it is called on finishes its work. When Join is attached to a thread, that thread executes to completion first while the other threads halt. We will now see a simple example where we apply the Join method to a thread.
using System;
using System.Threading;

public class Program
{
    public static void Main(string[] args)
    {
        Thread oThread = new Thread(JoinMethod);
        oThread.Start();
        oThread.Join();
        Console.WriteLine("work completed (Main Thread)!");
    }

    static void JoinMethod()
    {
        for (int i = 0; i <= 10; i++)
        {
            Console.WriteLine("work is in progress in JoinMethod !");
        }
    }
}
Output:

work is in progress in JoinMethod !
work is in progress in JoinMethod !
... (repeated 11 times in total)
work completed (Main Thread)!
Thread Abort example in C#
The Abort() method is used to terminate a thread. It works by raising a ThreadAbortException in the target thread.
using System;
using System.Threading;

public class MyThread
{
    public void Thread1()
    {
        for (int i = 0; i < 10; i++)
        {
            Console.WriteLine(i);
            Thread.Sleep(200);
        }
    }
}

public class ThreadExample
{
    public static void Main()
    {
        Console.WriteLine("Start of Main");
        MyThread mt = new MyThread();
        Thread t1 = new Thread(new ThreadStart(mt.Thread1));
        Thread t2 = new Thread(new ThreadStart(mt.Thread1));
        t1.Start();
        t2.Start();
        try
        {
            t1.Abort();
            t2.Abort();
        }
        catch (ThreadAbortException tae)
        {
            Console.WriteLine(tae.ToString());
        }
        Console.WriteLine("End of Main");
    }
}
The output is unpredictable (it may also throw an error) because the threads may still be in the running state when Abort is called.
//This is the code for drawing a square
#include <iostream>
using namespace std;

class square {
public:
    int h, w;

    square() {
        cout << "Square's constructor!!" << endl;
        h = 0;
        w = 0;
    }

    ~square() {
        cout << "Square's destructor!!" << endl;
    }

public:
    void drawsquare() {
        cout << "This program draws a square." << endl;
        cout << "Enter the height and width respectively.." << endl;
        cin >> h;
        cout << endl;
        cin >> w;
        cout << endl;
        for (int i = 0; i < h; i++) {
            cout << endl;
            for (int x = 0; x < w; x++) {
                cout << "*";
            }
        }
        cout << endl << endl;
    }
};

//This is the code for drawing a cross
#include <iostream>
using namespace std;

class cross {
public:
    int h, w;

    cross() {
        cout << "Cross's constructor!!" << endl;
        h = 0;
        w = 0;
    }

    ~cross() {
        cout << "Cross's destructor!!" << endl;
    }

public:
    void drawcross() {
        cout << "This program draws a cross" << endl;
        cout << "Please enter the height and width, respectively.." << endl;
        cin >> h;
        cout << endl;
        cin >> w;
        cout << endl;
        int t = (h / 2) + 1;
        for (int i = 0; i < (h / 2); i++) {
            for (int i = 0; i < (w / 2); i++) {
                cout << " ";
            }
            cout << "*" << endl;
        }
        for (int i = 0; i < w / 2; i++) {
            cout << "*";
        }
        cout << " ";
        for (int i = 0; i < w / 2; i++) {
            cout << "*";
        }
        cout << endl;
        for (int i = 0; i < t; i++) {
            for (int i = 0; i < (w / 2); i++) {
                cout << " ";
            }
            cout << "*" << endl;
        }
    }
};

//Then, this is a driver program for implementation
//Written by Kudayisi tobi, 8th Oct., 2009...all rights reserved
#include "square.cpp"
#include "cross.cpp"
#include <iostream>
using namespace std;

int main() {
    int choice = 0;
    cout << "This is a driver program that tests my new classes, cross and square." << endl;
    cout << "This program draws many shapes on the screen (depending on your choice)" << endl;
    cout << "(0)quit,(1)square,(2)cross....";
    cin >> choice;
    if (choice == 0) {
        return 1;
    }
    if (choice == 1) {
        square x;
        x.drawsquare();
    }
    else if (choice == 2) {
        cross y;
        y.drawcross();
    }
    cout << "Thank you!!" << endl;
    return 0;
}

This code is for beginners who can't decipher their heads from their toes in C++. Hope this helped!!
WF 4.0: Building a Hello World Sequential Workflow
Note: This post is based on Visual Studio 2010 Beta 1, which is the latest version available at the time of writing, so by the time this technology ships there are probably things that will be slightly different.
Start Visual Studio 2010 Beta 1 and create a new Sequential Workflow Console Application.
After you click OK, Visual Studio creates a new WF project, in which there are some things you should know about:
References:
- System.Xaml – Now that there are several technologies based on Xaml, this is a new Assembly in .Net Framework 4.0 that contains Xaml services such as serialization.
- System.Activities is the assembly that contains the implementation of WF 4.0, and as you can guess, the corresponding namespace is System.Activities and anything beneath it.
- System.Activities.Design contains the designers for the activities and the designer re-hosting implementation. Since the new designer is based on WPF, you can also note references to WindowsBase, PresentationCore and PresentationFramework.
- System.ServiceModel contains WCF implementation (as of .Net Framework 3.0) and now System.ServiceModel.Activities contains the activities used for Workflow Services (the integration between WF and WCF).
Sequence.xaml:
By default, workflows are created declaratively in WF 4.0 and represented in .xaml files with no code behind. If you open this file with an Xml editor, you will see it clearly.
<?xml version="1.0" encoding="utf-8" ?>
<p:Activity x:Class="HelloWorld.Sequence1"
xmlns:p=""
xmlns:s="clr-namespace:System;assembly=mscorlib"
xmlns:sad="clr-namespace:System.Activities.Debugger;assembly=System.Activities"
xmlns:sadx="clr-namespace:System.Activities.Design.Xaml;assembly=System.Activities.Design"
xmlns:
<p:Sequence sad:XamlDebuggerXmlReader.
</p:Sequence>
</p:Activity>
The new WF Designer shows an empty sequence, representing an empty block of execution. Note the warning sign that says that the sequence has no child activities.
From the Procedural section in the Toolbox, drag the WriteLine Activity to the design surface.
Now, you get a warning that the Text property is not set. To set it, go to the properties window and open the Expression Editor. In WF 4.0 you use expressions to set values for variables and parameters, and you can use either static values (like in this case) or VB. Yes! Expressions are written in VB.
Type the text you want to display to the console and click OK.
Now, that the simple workflow is completed, you can use Ctrl + F5 to run it like you normally do.
Debugging a Workflow
The debugging experience when debugging a workflow in WF 4.0 is very similar to the debugging experience when using code. You can right-click an activity in the designer and select Insert Breakpoint, or use F9 to do this.
Once you do this and run the workflow, the debugger highlights the current activity using a yellow border (similar to the yellow background for the current statement when debugging code).
Enjoy!
#include <Wt/Ext/Button>
A button.
A button with a label and/or icon, which may be standalone, or be added to a ToolBar (see ToolBar::addButton()) or to a Dialog (see Dialog::addButton()).
The API is similar to the WPushButton API, with the following additional features:
Alias for the activated() signal.
This signal is added for increased API compatibility with WPushButton.
Returns whether this is the default button.
Configure as the default button.
This only applies for buttons that have been added to a dialog using Dialog::addButton().
The default button will be activated when the user presses the Enter (or Return) key in a dialog.
The default button may be rendered with more emphasis (?).
I have a table that is generated elsewhere. The problem I have is that the majority of the field attributes have different amounts of spaces before and after the attribute, for example "____Blackhawk Sub No 1 _______". I have been manually using the Field Calculator, doing !myfield!.lstrip(), !myfield!.rstrip(), and !myfield!.strip() for each field, and that sucks.
Is there a way to strip all the spaces at the front/beginning and at the end?
This table has both number and string fields.
I have been trying with the code below, but I get an error on line 11, so I am thinking my expression is incorrect?
ERROR 000622: Failed to execute (Calculate Field). Parameters are not valid.
import arcpy
from datetime import datetime as d

startTime = d.now()

arcpy.MakeTableView_management("C:\Temp\ParAdminTable.dbf", "parAdmin")

for field in arcpy.ListFields("parAdmin", "*", "String"):
    sqlFieldName = arcpy.AddFieldDelimiters("parAdmin", field)
    calcSql = "' '.join( field.strip().split())"
    arcpy.CalculateField_management("parAdmin", field, calcSql, "PYTHON_9.3", "#")

arcpy.TableToTable_conversion("parAdmin", "C:\Temp", "ParAdmin_Test.dbf")

try:
    print '(Elapsed time: ' + str(d.now() - startTime)[:-3] + ')'
except Exception, e:
    # If an error occurred, print line number and error message
    import traceback, sys
    tb = sys.exc_info()[2]
    print "Line %i" % tb.tb_lineno
    print e.message
You have two problems. 1) ListFields returns field objects and you're attempting to leverage these as strings when you need to leverage the name property of the field object. 2) your expression is missing the delimiters for the field calculation.
I've pasted a corrected copy of your code below along with a couple of alternatives that I thought of off the top of my head.
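The heart of the cleanup (trimming ends and collapsing interior runs of whitespace) is plain Python, independent of arcpy. A minimal standalone sketch, where the helper name is my own and not from the thread:

```python
def collapse_whitespace(value):
    """Strip leading/trailing spaces and collapse interior runs to a single space."""
    # str.split() with no arguments splits on any whitespace and drops empty pieces
    return " ".join(value.split())

print(collapse_whitespace("   Blackhawk Sub No 1    "))  # -> Blackhawk Sub No 1
```

In the field calculator, the same idiom is the expression `" ".join(!myfield!.split())`, and in the script the field name string (`field.name`), not the field object itself, is what CalculateField_management expects.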
Apple's iTunes is the most popular audio and video podcast aggregator. If you want to add your feed to iTunes so that your content is available through the iTunes Music Store, you can adapt the RSS feed that you created with the instructions on pages 379-385 by including iTunes' special elements to describe your data.
You'll first prepare your RSS feed for iTunes, and then add the individual iTunes elements (as described on the following pages).
To prepare your RSS feed for iTunes elements:
Within the rss element, specify the iTunes namespace by adding xmlns:itunes="".
Tips
Remember: a podcast is nothing more than an RSS feed with multimedia enclosures. The iTunes tags are optional, though they do describe your podcast more fully in iTunes itself.
Podcasts on iTunes are currently free.
Namespaces let you add extensions onto RSS without changing RSS itself. There are several different namespaces that are commonly used with RSS, including iTunes and the RSS Media Module (which is used for Flickr's photo RSS feeds), among others.
For more information about namespaces, you might like to consult my book on XML: XML for the World Wide Web: Visual QuickStart Guide, published by Peachpit Press.
Although RSS already has ways of specifying the owner and technical lead on a podcast, iTunes prefers you to use its own tags.
To add contact information about yourself for Apple:
1. After the initial channel element at the top of your RSS feed, type <itunes:owner>.
2. Next, type <itunes:name>you </itunes:name>, where you is your name.
3. Type <itunes:email>your email </itunes:email>, where your email is the address where you would like Apple to contact you if they have any trouble with your podcast or any news to relate.
4. Type </itunes:owner> to complete the contact information.
For example, when you submit a podcast to iTunes, Apple will send you an email letting you know whether or not it was accepted.
In addition, if Apple changes the specifications for iTunes RSS elements, they will notify you at the address you give in the itunes:email element.
The information enclosed in the itunes:owner element is not visible in iTunes. However, it is not private, since anyone can look at the source code of the XML document if they wanted to.
The next set of iTunes elements are those that describe your podcast as a whole. They are used in the podcast's main window in iTunes to tell what your podcast is about.
To add information about your podcast:
Add the following elements to the channel element to describe your podcast further in iTunes: (Use the <tag> content</tag> syntax except for itunes:image which is a single tag that ends in />.) Type <itunes:image to specify the URL of a square image to be used as the cover art for your podcast in iTunes.
If you say your podcast has explicit language, a small explicit icon will appear next to its name in the Name column as well as next to the cover artwork in the iTunes Music Store.
Only use the official categories listed on Apple's site:
You can also add <itunes:block>yes </itunes:block> to completely block a podcast from appearing in the iTunes Music Store, although it probably makes more sense when used to block individual episodes (see page 391).
Although none of the iTunes tags are required for the podcast to appear in the iTunes Music Store, they are required if you want to be featured on the iTunes Home Page.
Note that iTunes uses the value of the regular RSS title, link, and language elements to advise prospective viewers of your podcast's title, URL, and language, respectively.
You can use iTunes' special RSS elements to give information about your podcast's individual episodes.
To describe individual episodes:
Within the item element of the episode in question (in <tag>content</tag> format): Use <itunes:author> to denote the person who created the particular episode. This name will appear in the Artist column. Use <itunes:subtitle> to give a short description for the episode. It will appear in the Description column. Use <itunes:duration> to specify how long the episode lasts, in one of the following formats: HH:MM:SS, H:MM:SS, MM:SS, or M:SS. Use <itunes:keywords> to specify up to 12 keywords that are specific to this particular episode, and not necessarily to the podcast as a whole.
New episodes appear in iTunes in order of their publication date (as specified in the pubDate element), not in the order in which they appear in the RSS feed.
The contents of the title element for an item is used in the Podcast column in the iTunes list (Figure 25.18).
The image in the itunes:image element applies to the entire podcast, not specific episodes. You can add images to individual MP3 podcast episodes (but not video ones). Select the podcast in iTunes, choose Get Info, click the Artwork tab and then click the Add button. Choose the desired JPEG image and then save the changes. Then make sure it is this MP3 file that you upload for your podcast.
The contents of the description element within an item element will appear when a visitor clicks the circled i () in the Description column (which rather confusingly contains what's in itunes: subtitle, not what's in description) in the iTunes list. You can also use <itunes:summary> for containing the circled-i information, but I prefer using the more standard description element which will also be understood by other aggregators.
The contents of the pubDate element is used in the Release Date column in the iTunes list.
You can add <itunes:block>yes </itunes:block> to keep an individual explicit episode from appearing in iTunes, perhaps to avoid getting your entire podcast removed.
You can add <itunes:explicit>yes </itunes:explicit> to individual episodes to alert potential viewers of their content.
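To see how these pieces fit together, here is a sketch of a single episode's item element combining the standard RSS elements with the iTunes ones described above (all titles, dates, URLs, and values are invented placeholders, not from a real feed):

```xml
<item>
  <title>Episode 12: Mixing Audio</title>
  <description>Detailed show notes for episode 12.</description>
  <pubDate>Wed, 15 Feb 2006 12:00:00 GMT</pubDate>
  <enclosure url="http://www.example.com/podcasts/ep12.mp3"
             length="8127310" type="audio/mpeg" />
  <itunes:author>Your Name</itunes:author>
  <itunes:subtitle>A quick tour of basic mixing</itunes:subtitle>
  <itunes:duration>16:45</itunes:duration>
  <itunes:keywords>audio, mixing, tutorial</itunes:keywords>
</item>
```

Remember that the itunes: prefix only resolves if the feed's rss element declares the iTunes namespace as described on the previous pages.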
Hi guys,
I'm very new to the Crimson/Emerald editor and so far I think it rocks… but there is one feature I use a LOT in my current editor (ConText) that I wonder if Crimson has already. I'm not even sure that the feature I use in ConText is intended to be there, but it sure is handy….
the feature I use is when I have text like this:
abc 123 !@#
abc 456 !@#
abc 789 !@#
def 789 !@#
def 456 !@#
def 123 !@#
and I want to create this:
abc-A 123 !@#
abc-A 456 !@#
abc-A 789 !@#
def-A 789 !@#
def-A 456 !@#
def-A 123 !@#
in essence, I've only added "-A" starting at the 4th character in every line. In ConText, this can be done by adding it the first time, turning on columnar blocking (even though it's not a column I'm blocking), and highlighting and cutting the "-A". Now if I insert the clipboard object, it adds the "-A" and drops me down to the next line at the same character position I was on in the previous line prior to the paste command, but without disturbing the text in the rest of the first line or the preceding text in the second line. So I can just paste a whole bunch of times and it creates the output I am looking for, with it automatically dropping me to the next line after the paste… it works really fast and I use this feature a lot for scripting things.
Is there something like this in Crimson/Emerald that I can use…if so, that appears to be the only function I might lose if I switched.
Thanks in advance.
-=]NSG[=-
Retrograding Weekend News 06.29.03
Right. I’ve spent most of this week doing interviews and resume sending so I haven’t had much free time. I’ve just learned my fiancee is going nuts back in the UK, and I realized no matter what I write, it’s going to be a let down compared to last weeks Adam Ryland interview. So I guess it time for me to do an ACTUAL news update. Scary I know. But that’s life. What’s that? The phone is ringing? Oh wait, that’s just Alex phoning it in…
Star Wars Galaxies launches
My best friend spent a good part of yesterday figuring out what he was going to play in this game. I think he’s going as a Bothan scout, but who knows. Half the words that came out of his mouth made me realize how he feels when I talk Sakura Taisen or Shin Megami Tensei. The only words with either of those games he probably recognizes is “Nyarlathotep.”
I really want to insult this game with something like, “If you’re a geeky acne-ridden social retarded deviant, but just can’t couldn’t sit down at the Battletech table at your local High School Cafeteria, or you had to much trouble doing the cyber sex that seems to be required when you play Everquest, or maybe you just wanted some droids and midget furries to kick around, well now you’re in luck!” But I can’t because my best friend likes Star Wars and thus he might be offended. Of course, he doesn’t own any of the movies, wait in line for weeks on end for a ticket to a film he’s just going to bitch about for years after seeing, or sign up as a Jedi for his religion, so I’ll say this: Star Wars fanatics are the creepiest bunch of psychos ever. And this is from a guy who went to Botcon and was asked questions like “Whom is Arcee f*cking: Hot Rod or Springer?” That’s right. I said WHOM. It’s called an English Education degree people and I USE IT. Except in regards to my patented run-on sentences. As you can tell, I really hate Star Wars.
Beta testers were heavily divided on their feelings in this game; most feeling it was released months too soon. Some really enjoyed the online ability and the chance to find other hermits who jacked off to Carrie Fisher in “Return of the Jedi,” conveniently living in denial of the fact she’s not very pretty at all, and instead focused on the fact the only time it’s the most naked skin on a girl they would ever see. Seriously though, the game’s customization abilities is nothing to sneeze at, nor are the exclusives you got for buying the collector’s edition or being one of the first to buy this game. However it doesn’t change the fact that beta testers called this game, “a fairly discordant mess,” or “This is one of the first betas (of many) that I haven’t needed to LOOK for bugs…they jump out and smack you!” So are those beta testers just really jaded? Sadly no, because even hardcore SW fans have said there are bug issues. But the game came out a few days ago, so it’ll be the online players who decide if Sony has once again sent something out before it was fully ready to go, or if the game only needs a minor path or two.
Of course, most players never got a chance to find out. See, thanks to the crippling illness that I’m going to have to call “Star Wars Madness,” and Sony’s underestimation of it, the servers died the first day out due to overloading. And that’s AFTER Sony installed a few more servers for that first day blow out. So yes, within minutes of the game being declared open and people were allowed to play, it died. Many players encountered error messages just trying to register their game. And hardware problems began being reported a plenty.
Sony and Lucas Arts have been quick to down play the problems, pointing out correctly that it’s all due to the massive popularity of Star Wars, but then also ignoring the actual problems with servers and hardware.
And there’s only 1,400 people currently playing. ONE AND A HALF THOUSAND PEOPLE. There’s more people still playing Phantasy Star Online on the damn Dreamcast than that. Look, I’m sure this is a great game for those into Multi-player online RPG’s or Star Wars or whatever, but that little people shouldn’t crash a system. Especially when the creators of Everquest are involved. This just speaks of shoddiness and laziness to me. I know two Beta testers personally. Both have mixed feelings on the game. Both agree Lucas Arts and Sony just should have delayed it another month instead of allowing this first day “Oops” to happen. Still, it’s news, and there’s no such thing as bad publicity. You’ll still never catch me playing it though.
SONY SPONSORS OZZY!
SCEA continues their brainwashing of the world by teaming up with another agent of Satan: Ozzy Osbourne. Okay, I don’t mind either, but it’s just fun to mention Satan and bat-eating whenever Ozzy comes up.
Ticketgoers will get the chance to play PS2 games at the concert, which is meant to prevent brainless headbangers from taking advantage of the mass of drunken, scantily clad and drugged-up chicks in the audience that normally would allow their inebriated state to override their aversion to most of their fellow concert goers. Thankfully seeing these people playing games like Primal and Tomb Raider will instantly sober them up and prevent any horrible breeding accidents from occurring. Speaking of Primal, concertgoers get a chance to win a copy of that game, along with a PS2 jacket. Cause once again, nothing says 'Chick Magnet' like a jacket emblazoned with a PSX logo and also Ozzy's face.
Seriously though. Good PR move for the unstoppable Juggernaut that is the PS2.
River City Ransom comes out for the GBA 11/16
Just buy the game you bastards.
Nintendo Sponsors…some other tour?
Okay. Maybe it’s because I was in England for a year. Maybe it’s because I listen to Henry Rollins, Sisters of Mercy and Gwar. But to me it appears once again Nintendo gets boned.
Nintendo announced it’s doing a 25 city tour that will feature all sorts of Game Cube and GBA games. Oh and allow the hit stylings of, “bands like Evanescence, Cold, Revis, Cauterize, and Finger Eleven.”
Who? No, seriously. Who the f*ck are these guys? I admit I was stuck with crap like Robbie Williams, Busted, Will Young and S Club Juniors, so I’m probably out of the loop. But still: WHO?
Now for an actually important story.
The Entertainment Software Rating Board has announced several upcoming changes to the rating system it uses for games sold in the US. The changes include bolder labels intended to draw consumer attention to content descriptors, the short standardized phrases that alert buyers to content elements that may be of interest or concern. Four new content descriptors are also being added to the 26 already in use: cartoon violence, fantasy violence, intense violence, and sexual violence. Effective September 15, the ESRB will require the placement of the new labels on game boxes and the use of modified logos for the “Mature” (17 and over) and “Adult” (18 and over) ratings that clearly specify the minimum recommended age for those categories.
“This change is designed to ensure that parents can’t miss the critical content information printed on game boxes, which frequently provides greater insight into why a game has received its rating,” said.”
Credit: Gamespot
Jesus. This is like the comic code authority. But it hasn’t gone away. I’m all for keeping six year olds from playing really violent games or nudie hentai games, but I swear I’ve seen the above four descriptors before.
And what if you have a GI Joe video game. Is that cartoon violence? Is it Intense violence? Who decides these things? Remember this is America. A place where Christianity is actually still taken to extreme levels. Where Pokemon and Magic: The Gathering have been defamed as Satanic. Where people actually buy Chick tracts. Parents actually concerned about what their kids are playing don’t need ratings. Resident Evil is not going to have box art or Mr. Rogers riding a pony. But Mr. Rogers is dead so they could probably show him eating that pony. But the point remains, just by looking at a game, you should be able to tell what you’re in for. Is our society’s parenting that bad that they need a nameless faceless organization telling them how to think and raise their kids? Good parents aren’t going to give little Timmy Grand Theft Auto as a present for finally being toilet trained. They’re going to give him Bear in the Big Blue House or a hug. And by the time the kids hit teens anyway, a parent’s not going to know if their kid has a crystal meth lab at a friends house or a secret stash of porn, much less if they have a GASP, Adult video game. Teenagers are sneaky by nature, and putting a big This ain’t for you label’ on something isn’t going to prevent a kid from playing it. Unless they’ve been raised by a cult. It’s going to make them want it more.
Sorry, the above rant is kinda pointless and rehashed. I just hate ratings systems. What’s R over here is for 10 and up in England. Clockwork Orange was censored over in the UK while a Levi’s commercial featuring rat headed people stealing cats is banned in America. Censors get too zealous sometimes. Like me and praising Persona.
RETRO NEWS!
Well yeah, it’s about another convention. But it’s the CLASSIC GAMING EXPO! Now if Chris and Joe got to go to E3, 411 should foot my ticket to Las Vegas for this. After all I am’ the premiere Retrogamer of all Retrogamers. (Sorry, Bebito’s been telling me to be a lot less humble and more of an egotistical jerk.) Plus he should come too. Who wouldn’t want a CGE report from 411’s non-Scott Keith Keith cash cows?
The Classic Gaming Expo is August 9-10 in Las Vegas, and will feature Allan Miller (cofounder of ACTIVISION), Warren Davis (creator of Q-bert), and Tim Kelly and Jay Smith, two guys I don’t really know but they did graphic programming. Hooray.
At least there will be Amiga’s “It Came from the Desert,” some people that still remember the TG-16, and some that love Shining Force as much as I do.
Cheap Plug time
First up, say hello to the new guy! Jeff Watson debuts with a new column, taking the Friday news slot. He’s actually hardcore news without the insane biased commentary that you’ve come to expect from the rest of us. Good to see one of us doesn’t beat the word Journalism into the ground like a drunken Irishman beating his 14 kids because his dole check didn’t arrive on time and so he actually had to sober up for five minutes of his worthless life. Yeah. The Irish suck baby! Who’s got a problem with that? Okay, so I got some emails from Brits complaining about how much I bashed their country, asking me to bash someone else’s country for once before I send their tourism industry into an even bigger toilet than it’s in now. Okay, so I didn’t get any hate mail from Brits and just felt like bashing Ireland. Ireland sucks.
Bryan Berg is a wiener since he only gave me a silver medal this week after once again being the only guy here that lines up exclusive interviews with people in the industry. Just kidding. I’m above paltry ego-boosting awards. His column rightfully rips on pay video game sites, because they suck. And he just had his three-year anniversary. Hopefully he didn’t give her a SILVER MEDAL FOR GIRLFRIENDISM. Just kidding. I’m in a really sarcastic mood today. Bryan Rules. I love Bryan. At least I’ve been on MSNBC…
Lee Lee kisses my ass at least three times in his column. The man loves me and thus I love him. But in a non-sodomizing way. Which is now legal thanks to the Supreme Court. Still doesn’t mean I want any part of it. My Anus is Virginally pure baby! Read Lee. Read Lee or die. Die like Lupe Velez did. (Bonus points if you can remember what famous TV show pilot that very subtle pop culture reference is to.)
And of course, Bebito Jackson. And I still think Hoochie is a better gimmick than the rumour monkey’s bitch…if only because I’m against bestiality and Mrs. Jackson has to be annoyed with poor Bebs wearing a gimp mask and getting whipped and beaten by a dominatrix monkey making ‘You worthless sniveling worm. Remove the lice from my head and eat them like a proper simian!’ noises. Next week, Bebito will be selling videos of these depraved acts on Ebay. Kids, make sure they have the appropriate ESRB label or your parents will think it’s just Disney Cartoons.
My Interview With Adam Ryland. Yeah, you’ve probably read it a few times. But this is a reminder for the very cool contest I’m doing in honour of me being in EWR 4.0. Design my stats for 3.0. Winner gets a free video game from me (No, it’s not EWR. That game’s already free.). Just pick a console other than the Neo-Geo or some import system like the Wonderswan that you want a game for and you’ll get one. Lee’s already forced me to reveal that if you choose the PS1, you get Persona 2: Eternal Punishment, and the other games are just as good. You have until the end of July, so get cracking! I want to be able to do a column for every day of the week when this ends and right now I’ve only got enough for 2-3. MORE MORE MORE!
News Bytes (See, I spelled bites as Bytes. It’s a lame video game pun. LAUGH YOU BASTARDS!! LAUGH!!!!)
Street Fighter Gallery opens in Japan. Women dressed as Chun Li greet you at the door. And people wonder why video gamers have a rep for being losers obsessed with sex.
Columbia House starts selling video games. Prediction for next year: Many under-18-year-olds will sign up, get the discs and never pay, knowing minors can’t actually enter into a contractual obligation, and will continue to sign up again using obviously fake names like Richard Less and Seymour Butts. Columbia House will stay too stupid to notice. And kids will get their hands on BMX XXX through mail order, thus once again proving how f*cking useless it is to have a video game ratings board.
Tomb Raider: Angel of Darkness is considered to be so bad that even in the United Kingdom, where it’s okay to publicly whip out your dick and masturbate over her image because most of their women are fat snaggletoothed crones with a liver the size of Andre the Giant’s (except Elisa. She’s very attractive.), Eidos stock fell by 10% the day it came out in America. Oh, and the EEC has not given final approval to allow the game to be released in Europe yet because one of the five language translations is as bad as the game itself. Which means Lara saying “I’m a whorish tart” in English comes out like “UGFVT^&^*&*&4747” in Spanish or German. Nice to see Core Design and Eidos are still the champions of quality. And of course the UK’s Official PlayStation Magazine will give this game a 10/10 because it’s got a chick with boobs in it.
Shin Megami Tensei 3 is still not announced for a US release even though Atlus’ message board on their US website is pretty much nothing but “Give us this damn game.” Alex predicts they will eventually release the game two years after it was considered cutting edge in Japan, and then half the game will be missing or one of the characters will be changed into a Ballerina and will pirouette crazily across the screen. Okay, only Persona fans will get all the in-jokes that were in there, meaning if you got any of them you’re either super cool or you’re a pathetic obsessive gamer geek. I haven’t decided yet.
QUESTION TO THE READERS
Okay. Why are goth chicks either really fat and ugly and actually think that if vampires exist that one would come and turn them into a vampire as well so they can live a life of dark morbidity, forgetting that the fangs probably couldn’t penetrate all that blubber…or actually super hot? Why does no other subculture seem to have things that are so black or white? God, I’m dreading my trip to Madison’s “The Inferno.” There’s going to be so many overweight BDSM freaks that there won’t be any oxygen for the people who don’t ingest a box of Ho-Ho’s as a morning snack. Discuss.
ALEX FIGHTS THE MAN!
Bebito and I (Yes, Bebs, you’re going down with me) have been trying to get some people whose work we enjoy posted on 411games. For whatever reason, Chris hasn’t responded. Hey, he’s busy and has a life. So maybe he just forgot. So I’ve taken it upon myself to post a Mega Man Network Transmission review by my good friend LiquidCross in here. Hey! It’s a Gamecube game! And a Mega Man game! This is prize shit. And we can always use some more reviews. So here it is.
MEGA MAN NETWORK TRANSMISSION
System: Nintendo Game Cube
Introduction/Story
Mega Man makes his first outing on the Nintendo Gamecube with Mega Man
Network Transmission. This game takes place a month after the events in the
first Mega Man Battle Network game, so if you’ve been living under a rock
and haven’t finished that game yet, you may want to. It’s not absolutely
necessary, though; MMNT is still a highly enjoyable game on its own. The
backstory just makes a lot more sense if you’ve completed MMBN.
The year is 200X, and in this lovely future, just about everyone carries a PET
(PErsonal Terminal); think of it as a very high-tech
PDA/cellphone. Many people also have Navis (short for Net Navigators)
living in their PETs. These Navis are sentient AIs, and often keep their
operators company, as well as helping them with a variety of tasks, such as
helping them with homework, virus busting, running errands on the internet, etc.
Anyway, the problem is that some shadowy virus known only as “Zero” is running
rampant on the internet. Rather than damaging computer systems, however, this
one directly attacks Navis and screws with their core functions (i.e., making
them go crazy, controlling them, etc.). It’s up to Lan and his Navi,
MegaMan.EXE, to put a stop to this.
Gameplay
Seeing as how it’s Mega Man’s 15th anniversary (in the US, anyway), it’s only
fitting that his next-gen console debut be a side-scrolling platformer. Unlike
the Battle Network games, where you control both Lan and MegaMan.EXE, Lan
only has a supporting role here; you talk to him, and see him in his room, but
“jacking in” at various locations is done via the map screen. All the action
takes place on the internet, with MegaMan.EXE running around blasting things.
Capcom did the right thing by letting you use the D-pad to control MegaMan.EXE
here; you can also use the analog stick, but I don’t know why the hell you’d
want to. Analog sticks are horrible for side-scrollers.
Word to the wise: this game is tough. When you start the game, you’re
pretty damn weak. MegaMan.EXE’s Mega Buster (his arm cannon) sucks ass. It does
jack squat for damage, and fires rather slowly. Luckily, later on in the game,
you can upgrade it in three different ways: rapid (speed of firing), attack
(damage inflicted), and charge (upgrading once will give you the ability to
charge, upgrading more will enhance it).
If you don’t want to rely on the pea shooter on MegaMan.EXE’s arm, then you’ve
got a plethora of Battle Chips to choose from. Just like in the MMBN
games, you’ve got a Folder full of 20 chip types you can use in combat. You’ve
also got a Pack, where you store ALL the chips you’ve collected (there’s a total
of 137 different kinds!). The strategy lies in deciding which chips you want to
take with you into the internet by swapping them in and out of your Folder. New
chips can be earned by downing enemies, or by buying them at various stores.
Money (in the form of Capcom’s ubiquitous “zenny”) is found scattered throughout
the internet.
At the top of the screen is your “Custom” meter, which slowly charges over time.
When it’s full, pressing the “Z” trigger will open up your Chip Menu, where you
can select up to five Battle Chips. Lan sends you chips at random, so be
prepared; you may not always get the ones you want right away. After selecting
them, they’ll appear at the bottom left of the screen. The “L” and “R” triggers
will switch between them; the “Y” button uses the active chip. Chips consume MP,
which is the green bar next to your lifebar. MP also recharges over time. A nice
touch is “Standby Mode;” pressing “X” will freeze the game so that you can
switch chips without distraction (much like the weapon select screens in the
original Mega Man and Mega Man X titles).
Even though you can only have 20 different types of Battle Chips in your Folder,
you can have more than one of each chip; for example, you start out with over 20
Cannon chips. This comes in handy if you want to pummel an enemy with powerful
shots. If you ever run out of a chip, don’t worry; when you complete a level
and/or jack out, your chips are refilled.
Unlike the MMBN games, you have more than one life (in this case, they
call them “Backup Chips”). However, if you do get killed, you start at the last
warp point you came through, which can get annoying.
Boss battles are present, naturally. These are marked by pink warp gates, and
if you touch one, Lan will ask if you’re ready for battle yet. This is a good
time to get your chips arranged properly. Once you’re ready, you’ll be
teleported into the boss’ room, a brief cutscene will ensue, then it’s time to
throw down!
One more thing to mention: Program Advances. If you select certain chips in a
certain order, they’ll “merge” into a much more powerful chip. This chip is
temporary; once you use it, it’s gone, as are the chips used to make it (again,
leaving the stage will refill them). A list of Program Advances you’ve found
appears in your Library, in case you forget specific combinations.
Graphics
As you can see, everything’s rendered in full 3D, with cel-shaded characters and
enemies (ignore the Japanese text in the screenshots; Capcom USA just never got
around to updating the shots on the official US page):
The graphics looks pretty slick, and everything moves smoothly; I haven’t run
into any graphical glitches or slowdown as of yet. The backgrounds are very
reminiscent of the inside of a computer, with cooling fans, keypads, and all
manner of electronic things stuck in random places. The cutscenes are cel-shaded
as well, and look fantastic, especially the opening cutscene of MegaMan.EXE
facing the final boss of the first MMBN game.
Sound
This is probably the game’s biggest flaw. The music is cool, but the sound
effects are honestly nothing special. The sound of MegaMan.EXE’s Mega Buster in
particular is pretty weak. The voices are thankfully still in Japanese, with
English subtitles (c’mon, we all know how bad Capcom dubbing is after suffering
through the anime cutscenes in Mega Man 8 and Mega Man X4!).
Overall
If you’re looking for a challenging Mega Man game, this is it. Hardcore fans of
the series (like yours truly) will love this title, as it provides a challenge
rarely seen in video games these days. Plus, it’s hard to go wrong with classic
side-scrolling Mega Man action. Newbies to the Mega Man world are probably
better off starting with something a tad easier, so this game doesn’t put them
off entirely.
If you were expecting a ‘411’ or some points, it’s not here. LiquidCross has also requested not to have his email on here because he’s a xenophobe. No, not the kind Ripley kills. Those are Xenomorphs. He just hates spam.
If you’re still reading…
Same deal as with LC. But this is for good ol’ Charles Platt. He’s a fun read too.
Hardcorpse: The Death of the Gamer Lifestyle
by
Chuck Platt
I have an admission to make. I’m not proud of it. I can’t believe it myself,
but underneath my television, next to my Gamecube, lies my copy of Legend of
Zelda: The Wind Waker, half played. I know in some part of my brain that I’m
missing out on a good, if not great game, another Shigeru Miyamoto product
to lose myself in. Then why do I keep making excuses, playing my Neo Geo
Pocket Color, and even, god forbid, reading instead of throwing myself into
its first-party goodness? Why are Mario Sunshine, Metroid Prime, and too
many other games tossed aside? I was a hardcore gamer, once. So how did I
get here? Some of the answers might surprise you.
In every gamer’s lifecycle there are two givens: a stack of games and a few
magazines. Each one of these components in the gaming food chain has led to
my apathy, and most probably, the apathy of others. It might seem odd, but
the two groups with the most to gain from the hardcore gamer are the very
cause of his death. My first whipping boy will be the game companies
themselves.
On the outside, it would seem that a company would seek to create a cult of
followers for its product and cultivate these zealots to keep them happy.
Yet it seems that, especially in enthusiast circles, companies, video game
or not, want to sell out the cult they so eagerly developed for wide appeal.
Ask wrestling, miniatures, collectible card game, and comic book fans how
often a company has sold out its grassroots support for a shot at the
mainstream. Thus, Sony has no reason to care about quality now that everyone
and their dog has a PS2 (I don’t, but I’m not the Sony type). If a company
can make more profit churning out crappy games, they will. And, as long as
there are mainstream gamers, more Grand Theft Auto, Madden, Tony Hawk, and
Tomb Raider is coming your way. Not that those are bad games, but they are a
symptom of why good and great games get left in Japan, of why difficult or
quirky games go unpurchased, and of why the racks of games at most stores
look like an algebra problem (too many numbers, too many X’s). Which brings
me to the current state of the gaming media.
Oh the halcyon days of decent game magazines. Remember when the editors were
obsessive geeks, counting frame rates and ripping developers for lack of
innovation? When you read a review and KNEW that the reviewer had tread
every inch of game play? Hell, these days I’d settle for staffs that seem to
know that there is a country called Japan and that the majority of games are
made there. Seriously, the toilet humor, pseudo-Maxim
editorial style of Incite, the worst game magazine EVER, has become the
standard. EGM, GamePro, and Game Informer are all but unreadable, filled
with meaningless tripe and not a whit of actual incite into the games they
cover. PLAY, which I actually like, sullies itself with Dave Halverson’s
constant mention of videogame characters having nice boobs or butts. For
those who don’t know, and I feel sorry for you, Dave was the editor of the
greatest game magazine in the history of our species, Game FAN. In the pages
of Game FAN you could expect multipage reviews of quality games, previews of
import games that had no chances of a domestic release, and stunning art and
layout. That was a staff that cared about games, not getting themselves over
by talking about how cool they were. Of course, the way they wrote let you
know, anyways, but being a completist, elitist, quality junkie was treated
like a good thing. Hell, even Next Gen had an editorial direction. It was an
evil, X-Box, PC, and networking future, but it was a direction nonetheless.
Maybe I’m just bitter, but I can’t seem to find any energy to play much
anymore. Is it the game media or the manufacturers? Is it just me? Oh wait,
Wario World is out soon. Sweet! Ummm, so what was I talking about?
Wasn’t that good? No email for him either, just because he never said whether he wanted it on here or not.
YOU MUST HAVE THE ATTENTION SPAN OF…SOMETHING WITH A REALLY GOOD ATTENTION SPAN!
Last up an ADVANCE review from Good ol’ Alex mark 2. I mean Alex Williams. I’ll probably be giving him complete control of ADVANCE (sans the retrograding label) if Chris allows it due to real life time constraints. Here’s hoping as he’s a good kid. And he likes Ikaruga. Even if he wouldn’t kill for a copy of it…
Review: Sega Arcade Gallery (GBA)
Four classic Sega titles now on the GBA. But can you see them well enough
to play them?
Game: Sega Arcade Gallery
System: Game Boy Advance
Genre: Compilation (Action Racing/Shooting)
Developer: Sega
Publisher: THQ
Released: 5/21/03
I’m sure that in some gaming circles, it’s quite fun to bash Sega games
and Sega as a company. “Haha, the Dreamcast is dead. Haha, they’re losing
money. Haha, their mascot games are lame now,” is what I hear a lot when
I go around certain message boards. My answer to the Sega bashing is “Why?”
Sega may have made some mistakes over their company history, but that hasn’t
kept them from pumping out quality games that are new, original, and are
actually addictive.
Today’s review is going to focus on four such games that were recently
released on a compilation cartridge entitled “Sega Arcade Gallery”. These
four titles, After Burner, Space Harrier, Out Run, and Super Hang On, may
be nearly 20 years old, but they still retain the replay value and fun
factor that made them classics to begin with. And NOW, you can take
these four titles with you wherever you go! But the question of the day
is: Do these titles translate well onto your GBA? Read on to find out!
(NOTE: For the purposes of this review, each game will be looked at
separately, rather than breaking the entire thing into Gameplay, Graphics,
etc. Each game will garner a separate mini-review, and at the end, I’ll
give a score to the overall package. This is to ensure each game gets the
attention it deserves.)
————
AFTER BURNER
————
Strapped into the cockpit of your F-14XX, it’s your job to shoot down
the enemy fighter jets. Not much of a story, but you really can’t expect
much of one from a 1985 arcade game, now can you?
Use the control pad to move your plane Up, Down, Left, and Right. Controls
are inverted (Up is Down, Down is Up), but that can be changed in the options
menu. Use the A Button to fire your machine guns when you line up your
crosshairs with the enemy to shoot them down. Or, when you see a white
square appear around an enemy, press B to shoot homing missiles. Be careful
though, for your missile supply is limited. (They will be refilled if you
progress far enough.) Pressing L will center your craft on screen, while
R will speed you up. Pressing both L+R will let you perform a barrel roll.
Gameplay-wise, it’s a bit difficult to maneuver your plane when you
try and dodge planes and enemy fire. Even when you move left and right,
you never move that far from the center of the screen. This fact is complicated
further due to the small screen on the GBA. Not only is it difficult to
maneuver through enemy fire, you can’t even SEE half of it coming at you.
It will look like sometimes your plane will randomly catch fire and crash
or self-destruct when you think you’re clear of the missiles. (NOTE: This
might be moot on the GBA Player, but I do not have one to test that
theory out on.) Luckily, to make up for this, you’re given 5 continues right
off the bat.
Graphically, the game is practically arcade perfect. Every sprite is
converted faithfully to the GBA. Unfortunately, many of these sprites are
VERY small, and difficult to see. You’ll be squinting your eyes looking
for enemy craft to blow up, only to realize that your plane has, once again,
self-destructed.
All the music and sound have been kept from the arcade as well. The
light-rock tracks serve the action very well. The game also supports voices,
as a “Warning” voice warns you of incoming fire. It helps a LITTLE, but
not much.
This game, like the others, doesn’t support too many extras. While you
do have a Sound Test and Music Test option, you can’t change the difficulty,
or the number of lives/continues you start with, which I find displeasing.
Overall, while the game can provide you with a few fun-filled hours, it
seems that the translation is more than the GBA can hold.
Gameplay: 6
Graphics: 9
Sound: 8
Fun Factor: 7
————-
SPACE HARRIER
————-
Step into a mystical world full of dragons and other monsters. Using
your trusty gun and jet pack, you must fly through 18 stages of mayhem
as you shoot down the evil that threatens your home.
Again, use the control pad to move Up, Down, Left, and Right. Like After
Burner, the controls are inverted, but can be changed. Both A & B fire
your gun, while R speeds you up.
Each stage is filled to the brim with not only evil creatures, but also
nasty obstructions that you have to avoid. (Trees, pillars, clouds, etc.)
Enemies will come at you in a pattern each stage. Things become much easier
when you recognize the patterns. At the end of each stage, you’ll face
a boss character, whether it is a two-headed dragon, or a huge mass of
Easter Island heads. You must kill them in order to advance. Also, Stages
5 and 12 are Bonus stages where you ride on the back of a dragon (who looks
like a big cat for some reason). Fun stuff.
Maneuvering is MUCH more forgiving here. You can move your character
anywhere on screen to avoid pillars and the like. Some enemies might appear
a tad small, but enemy fire is HUGE in comparison. You’ll know when giant
blue bullets and massive fireballs come at you. You WILL be able to tell
what you are doing on the small screen, which is a huge plus. For some
reason, at the start of the game, you have infinite lives until the timer
at the top of the screen runs out. After that, you have 5 lives, and five
continues to use.
Again, the game is translated flawlessly. Graphically, all the sprites
made it intact, and of the two shooters, this is the one more easily
recognizable on the GBA. Musically, the game only has one or two tracks that play during
the action levels, but there are a few varied boss tracks at the end of
each level.
Again, Sound & Music tests are your only real options here. Even
so, this is the game out of the collection that you’ll probably be spending
the most time on, given the colorful visuals and interesting concepts.
Gameplay: 9
Graphics: 9
Sound: 8
Fun Factor: 10
——-
OUT RUN
——-
Here’s the first racer out of the bunch. The goal isn’t to come in first
place, or pass the other cars. You’re simply on a high-speed joy ride that
lasts until time runs out.
Controls are fairly simple here. Press A for acceleration, B for the
brakes, and R to switch from Low to High gear. Left and Right will steer
the car, and Down will center you out. There are two more control schemes
you can select in the Options menu.
This highly addictive racing title’s goal is simply to get from Point
A to Point B. To do this, you’ll need to avoid the other cars and trucks
on the road, avoid the many billboards and other objects when you accidentally
get off the road, and negotiate many hazardous turns. You’re given 80 seconds
to start off with. After you start, you’ll need to get to your destination
as fast as possible without running out of time. Luckily, there are Check
Points breaking apart each section that will give you extra time. An interesting
thing about this game is that there are multiple paths to get to the end.
Every so often, the road will split into two, and you’ll have the option
of going either left or right. Each path will give birth to different scenery,
as well as making nearly every drive a unique one.
The controls themselves are pretty responsive. While there’s no driving
wheel to speak of (On a handheld? Please…), moving left and right doesn’t
take a lot of effort. You might have some trouble on the hard left and
right turns, but once you learn to adjust with the Brake button, you’ll
be set.
Again, the graphics have been ported 99% exactly. The other cars and
roadside obstacles are big enough so you can see what you’re doing. There’s
no random crashing here, although the crashes that do happen are extremely
hilarious. (Your car flips over 5 times, dumps you out, lands about a mile
away, and yet you get up and continue driving. If only those were real-life
physics…)
The sound and music have also been ported faithfully. A nice feature
is that before each race, you’re given the option of choosing between three
tracks to listen to while driving. This makes the game’s life a bit longer
considering the music doesn’t get overly monotonous immediately.
Options = Music & Sound Test. Not much else other than the different
control schemes. Other than that, the game is a wonderful alternative to
the other racing titles out there. And it’s over 15 years old!
Gameplay: 8
Graphics: 9
Sound: 9
Fun Factor: 9
————-
SUPER HANG ON
————-
The last game of the bunch, and it’s another racer. This time, you’re
on a motorcycle, and once again, you need to reach the end before time
runs out. Think you got the right stuff?
Controls for this game are also easy to learn. Press A to accelerate,
and B to Brake. Left and Right steer, and Down centers your steering. L
will slow down the steering response, and R will give you a speed boost
when you reach the top speed of 280 MPH. Like Out Run, the controls are
responsive on the GBA.
To be honest, this game is remarkably like Out Run, only with motorcycles.
You begin by choosing one of four courses (each one with a different amount
of stages), and then get down to racing. Your goal is to get to the end of
the race before time runs out. You’re given 50 seconds to start off with,
and you HAVE to make each second count. You’ll have to avoid the other
bikers, and the various objects you’ll find off road. The speed boost is
a unique feature, allowing you to go even faster on straight roads. Just
don’t be stupid and use it on a hard turn. It can lead to trouble, believe
me.
Again, the graphics have been ported faithfully to the handheld, and
you are able to see what you are doing clearly. Also, music and sound have
been ported faithfully as well. Like Out Run, you’re given a choice of
music at the beginning of the game. You get to choose between four songs
this time.
The main options, aside from two extra control schemes, are the same
as the other titles. (Looks like Sega & THQ didn’t spring much for
extras, huh?) In any case, this is another fun title out of the collection.
If you like motorcycles and mock racing, I highly recommend it.
Boy this review is getting redundant rather quick, isn’t it? Let’s wrap
this up so we can all go home. (Unless, well, you ARE home and…well…never
mind.)
Gameplay: 8
Graphics: 9
Sound: 8
Fun Factor: 9
——-
OVERALL
——-
Overall Gameplay: 8
Overall Graphics: 9
Overall Sound: 8
Overall Fun Factor: 9
The 411:
FINAL SCORE: 8.5
Jesus People, it’s PAGE TWELVE! GO OUTSIDE!
And that’s the weekend news. And so it’s not the norm. You got a hodgepodge from FOUR different writers, you got to see Alex do something other than my usual Retro stuff and all the other exclusive stuff that you usually get from me. That just means you’ll be hoping next week you get the usual, as humans generally fear change. But ooh! This was different! DIFFERENT! Spooky spooky spooky! Let the emails of ‘Alex, are you on PCP this week?’ commence.
By Mike Gunderloy
Sometimes it seems to take more code to support a component than to implement its functionality. For example, you might sell a client a server-side component with a license that limits the user to five simultaneous instances of the component. This business decision has development consequences: You now need to come up with a way to enforce that license count. If you’re working in the .NET world, there’s an easy answer. You can use the System.EnterpriseServices namespace to limit the number of simultaneous users without writing a lot of code.
COM+ to the rescue
System.EnterpriseServices is the .NET wrapper around COM+, a part of the Windows operating system that provides various infrastructure-level services to interested applications. These services include automatic transaction management, just-in-time activation, component queuing, and (central to this article) object pooling. .NET components that use COM+ are called serviced components. Here are the typical steps in creating a serviced component:
- Create a class that inherits from System.EnterpriseServices.ServicedComponent.
- Assign a strong name to the assembly containing the class.
- Install the assembly into the Global Assembly Cache (GAC).
- Use the Services Installation tool (Regsvcs.exe) to install the assembly to the COM+ catalog.
A serviced component example
To see how this works, follow along as I create a very simple serviced component. To begin, you’ll need to create a key file to use in assigning a strong name to the assembly. This key file is essential to the cryptographic signing that .NET uses to verify assembly integrity. You can create a key file from the Visual Studio .NET command prompt by running the sn utility:
sn -k trsc.snk
Now, launch Visual Studio .NET and create a new Visual Basic .NET Class Library project, naming it TRSC. Right-click the project and add a reference to the System.EnterpriseServices component. Rename the default Class1.vb to TimeServer.vb and fill in its code:
Imports System
Imports System.EnterpriseServices
<ObjectPooling(CreationTimeout:=10000, _
Enabled:=True, MaxPoolSize:=2)> _
Public Class TimeServer
Inherits ServicedComponent
' Return the current time
Public Function GetTime() As String
GetTime = DateTime.Now.ToLongTimeString()
End Function
' Enable object pooling
Protected Overrides Function CanBePooled() _
As Boolean
CanBePooled = True
End Function
End Class
Obviously, this is an extremely simple class, but the same principles will work with your thousands-of-lines-long business rule extravaganza. Note the scaffolding code to enable object pooling; I’ll come back to that later.
Next, you need to make some changes to the AssemblyInfo.vb file. In particular, you need to change and add some assembly attributes. If you haven’t looked at attributes before, you can think of them as a way to add metadata to your assemblies. The Common Language Runtime (CLR) uses these attributes to determine what to do with your assembly and how it relates to other pieces of software such as COM+. At the top of the AssemblyInfo.vb file, you need to make the System.EnterpriseServices namespace available:
Imports System.EnterpriseServices
And the assembly attributes go at the bottom:
<Assembly: AssemblyVersion("1.0.0.0")>
<Assembly: ApplicationName( _
    "TechRepublic TimeServer")>
<Assembly: Description( _
    "Delivers the current time on demand")>
<Assembly: AssemblyKeyFile("..\..\trsc.snk")>
The first of these assigns the version number for the library. The ApplicationName and Description attributes will help you identify the library in the future. The AssemblyKeyFile attribute locates the file containing the key pair for strong naming.
At this point, you can build the assembly by selecting Build | Build Solution. Then switch to a Visual Studio .NET command prompt and register it, first in the GAC and then with COM+:
gacutil /i TRSC.dll
regsvcs TRSC.dll
Managing the COM+ application
At this point, the serviced component is installed in the COM+ catalog and can be instantiated by client programs or can be administered through the Component Services administrative tool. To launch the tool, choose Start | Programs | Administrative Tools | Component Services. The tool runs in the familiar MMC interface, as shown in Figure A.
Right-click on the TRSC.TimeServer component and you can view its properties, as shown in Figure B. You can now see how the ObjectPooling attribute that you applied to the class is translated into COM+ properties.
Object pooling is the COM+ feature that solves the problem of limited concurrent usage for this library. When you set up an object pool, you’re telling COM+ how many copies of the object it can make available to client applications. You can specify the minimum number of copies of the object to keep in memory at all times (in this case, zero), the maximum number to make available (in this case, two, which I’m assuming as the license count for this demonstration), and the amount of time to wait for an object if the pool is exhausted (here, 10,000 milliseconds, or 10 seconds).
COM+ implements a Pooling Manager that handles the details of object pooling. When the COM+ application is started, the Pooling Manager creates the minimum number of objects and thereafter maintains them in the pool at all times when the application is running. Each time that the Pooling Manager receives a request to create an object, it checks to see whether the object is available in the pool. If the object is available, the Pooling Manager provides an already created object from the pool.
If there’s not an object available, the Pooling Manager will create one, as long as the pool isn’t already at its maximum size. If no object is available and no new object can be created because of the size restriction of the pool, the client requests are queued to receive the first available object from the pool. If an object cannot be made available within the time specified in the CreationTimeOut property, an exception is thrown.
When the client is done with an object, it should invoke the object’s Dispose() method. The Pooling Manager intercepts this request and calls the CanBePooled() method on the object to check if the object is interested in being pooled. If the method returns True, the object is stored in the object pool. On the other hand, if the CanBePooled() method returns False, the object is destroyed forever.
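The acquire/release protocol the Pooling Manager implements can be modeled in a few lines. The sketch below uses Python purely for brevity, and every name in it (ObjectPool, acquire, release) is mine rather than a COM+ API; it mirrors the reuse-create-queue-timeout logic just described, including the CanBePooled() decision at release time.

```python
import threading
import time

class PoolTimeoutError(Exception):
    """Raised when no object becomes available within the creation timeout."""

class ObjectPool:
    def __init__(self, factory, max_size=2, creation_timeout=10.0):
        self.factory = factory                    # creates new pooled objects
        self.max_size = max_size                  # like COM+ MaxPoolSize
        self.creation_timeout = creation_timeout  # like CreationTimeout, in seconds
        self.idle = []                            # objects waiting in the pool
        self.total = 0                            # objects currently in existence
        self.cond = threading.Condition()

    def acquire(self):
        deadline = time.monotonic() + self.creation_timeout
        with self.cond:
            while True:
                if self.idle:                     # reuse an already created object
                    return self.idle.pop()
                if self.total < self.max_size:    # room to create a new one
                    self.total += 1
                    return self.factory()
                remaining = deadline - time.monotonic()
                if remaining <= 0:                # pool exhausted, timed out
                    raise PoolTimeoutError(
                        "activation could not be completed in time")
                self.cond.wait(remaining)         # queue for the next free object

    def release(self, obj, can_be_pooled=True):
        with self.cond:
            if can_be_pooled:                     # CanBePooled() returned True
                self.idle.append(obj)
            else:                                 # object is destroyed forever
                self.total -= 1
            self.cond.notify()
```

With a pool of two and a short timeout, a third concurrent acquire raises the timeout error, just as the client demonstration later in the article shows.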
Pooling in action
To see how this works in practice, you can create a new Visual Basic .NET Windows application (I named mine TRSCClient). To begin, add references to System.EnterpriseServices and to the TRSC.dll file created by compiling the class library. Place a button (btnLaunch) on the default Form1 and add a bit of code behind it:
Private Sub btnLaunch_Click( _
        ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles btnLaunch.Click
    ' Create a new client form
    Dim f As New frmClient()
    f.Show()
End Sub
Next, add a new form, frmClient, to the application. This form should contain a single TextBox control named txtTime. Here’s the code to go behind frmClient:
' Instance of the pooled class
Dim t As TRSC.TimeServer

Private Sub frmClient_Load( _
        ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles MyBase.Load
    Try
        ' Create the pooled object and
        ' execute its GetTime method
        t = New TRSC.TimeServer()
        txtTime.Text = t.GetTime()
    Catch ex As Exception
        MessageBox.Show(ex.Message, "frmClient")
        Me.Close()
    End Try
End Sub

Private Sub frmClient_Closing( _
        ByVal sender As Object, _
        ByVal e As System.ComponentModel.CancelEventArgs) _
        Handles MyBase.Closing
    ' Give up the pooled object
    If Not t Is Nothing Then
        t.Dispose()
    End If
End Sub
Now simply run the application and click the Launch button. This will create a new pooled object, open the client form, and retrieve the displayed time. Repeat this process and you’ll have two client forms open. Now try to launch a third client form. It won’t appear, because the object pool is exhausted. Instead, after the 10-second timeout elapses, you’ll get an error message: “COM+ activation failed because the activation could not be completed in the specified amount of time.” Close one of the first two forms and you’ll be able to launch another instance.
Managing the pool
The nice thing about this technique is that you don't have to recompile the component to adjust the pool size. Instead, you can just go into the properties of the class, which you saw in Figure B. Of course, there's nothing to prevent your customers from doing the same. You'll be depending on the honor system to keep the license count set properly. But if you can't trust your customers to do that, will they respect any other system, or will they try to crack it? The System.EnterpriseServices approach has the advantage of being easy to implement and built right into the operating system, which ultimately means less custom code to harbor bugs.
Building, Maintaining, and Tuning the Box
Understanding Partition Types and Requirements
Partition sizing and configuration for a server is one of those planning tasks that, although mundane, will affect your entire Windows NT network. The ability to run utilities, perform automated upgrades, service applications, and store system dumps depends on how much free space you have on your operating system partition. It's very important to get it right ahead of time because it's very difficult and intrusive to reconfigure a partition. Before you can make intelligent decisions on how many partitions to have and how large they should be, you must understand the different partition types and their requirements.
The system partition contains the hardware-specific files, such as Ntldr, Boot.ini, and Ntdetect.com, needed to load Windows NT. The term system partition is also used by some hardware manufacturers for a partition that holds system hardware configuration utilities and diagnostics in a fast, always-available location, instead of requiring a CD-ROM or set of disks to be loaded. Such a vendor partition is small (less than 50MB), has no drive letter, and requires a special BIOS that makes it accessible before the Windows NT loader takes control.
The OS partition, sometimes called the boot partition, should contain the Windows NT operating system, critical system utilities such as the backup program and logs, and little else. The system utilities are on this partition so that full system functionality can be recovered by restoring the C: partition. This partition needs to be big, much bigger than you'd expect. The partition size you choose now must keep pace with Windows 2000, which is vastly larger than its predecessor. The partition can also incorporate many different services, each taking varying amounts of space.
Some of the unexpected disk hogs are listed here:
Page files—You can move the page file off the OS partition, but if you want a system crash to dump the contents of memory to a disk file, the page file must be at least equal to the real memory size.
Dump files—The partition must be large enough to hold the dump file in addition to the page file. Its default location is %Systemroot%\memory.dmp, where %Systemroot% is the environment variable assigned to the operating system directory (usually c:\Winnt). On a system with 512MB of RAM, these components have already chewed through 1GB—and that's not even considering the operating system. If you want to store more than one copy of the dump file, add another chunk of space equal to the system's memory size.
Large SAM support—A domain controller in a large Windows NT 4 domain requires a significant boost in the Registry Size Limit setting to accommodate the resulting large Security Account Manager (SAM) database. This, in turn, boosts the paged pool requirements, which require an increase in page file size.
Active Directory—Large Windows 2000 domains require a large directory service database, %Systemroot%\Ntds\Ntds.Dit. A large company with 60,000 user accounts, 54,000 workstations, and the full complement of attributes associated with these can have an Ntds.Dit of almost 800MB. You can choose on which partition to locate this, but you should definitely allocate plenty of room.
Spool files—The directory where temporary printing files are stored is in %Systemroot%\System32\Spool\Printers, on the OS partition. If you print large files or have a large number of active printers attached to the server, you need to account for this occupied space.
The operating system—The Windows NT 4 Server operating system directory takes a minimum of 130MB, excluding the page file. Windows 2000 disk requirements vary, from merely voracious for a member server to monstrous for a domain controller in a large enterprise. Big surprise, eh? The Windows 2000 %Systemroot% directory alone takes 600MB installed. This doesn't include the files it creates in \Program Files, and the installation process requires significantly more.
Table 5.1 is a quick reference of conservative disk space recommendations for a server's OS partition.
Table 5.1 Recommended Server OS Partition Sizes
Note that these partition requirements don't say whether a server is a domain controller or a member server. That's because in Windows 2000 it's a simple task to promote a member server to a domain controller. If you're parsimoniously allocating your disk space to a member server, the time will certainly come when you need to promote one in an emergency and can't because you don't have the free space.
Tip: Remember that hardware—especially disk space—is cheap compared to support costs. Disk space is an area where it's much better to err on the conservative side. And these estimates don't include room for expansion.
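Putting the pieces together, a rough sizing rule reduces to a few lines of arithmetic. The Python helper below is my own back-of-the-envelope sketch, not a Microsoft formula; the 0.6GB operating-system figure and the 1GB of working slack are assumptions drawn from the numbers above.

```python
def min_os_partition_gb(ram_gb, dump_copies=1, os_gb=0.6, slack_gb=1.0):
    """Back-of-the-envelope OS partition size: a page file at least equal
    to RAM, one RAM-sized memory.dmp per copy kept, the installed OS,
    and some working slack for spool files, logs, and utilities."""
    page_file_gb = ram_gb
    dump_gb = ram_gb * dump_copies
    return page_file_gb + dump_gb + os_gb + slack_gb

# A 512MB-RAM server keeping one dump copy needs roughly
# 0.5 + 0.5 + 0.6 + 1.0 = 2.6GB before any room for growth.
```

Run it against your biggest planned server before you commit a partition table; the answer is usually larger than the first number you had in mind.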
FAT or NTFS?
When Windows NT Advanced Server 3.1 appeared, administrators argued with fervor over the benefits of formatting the operating system partition with the venerable FAT file system versus NTFS. There are valid points for both sides, but without getting into too much gory detail, the battle has been won. If you spend any time at all looking at the future of Windows NT and examine the space requirements previously discussed, it's pretty obvious that only NTFS is capable of managing the impending space, speed, and security required of a Windows NT server partition of any size. A FAT partition of a decent size would have a cluster size of 32KB, which means that every file would occupy at least 32KB of disk space, regardless of its size.
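The cluster-size penalty is easy to quantify. This quick sketch is mine, with the 32KB cluster figure from the text as its assumption:

```python
import math

CLUSTER = 32 * 1024  # cluster size on a large FAT partition, per the text

def allocated_bytes(file_size, cluster=CLUSTER):
    """Disk space actually consumed: FAT rounds every file up to whole clusters."""
    return math.ceil(file_size / cluster) * cluster if file_size else 0

def slack_bytes(file_size, cluster=CLUSTER):
    """Bytes wasted in the final, partially filled cluster."""
    return allocated_bytes(file_size, cluster) - file_size

# A 2KB configuration file wastes 30KB of slack; ten thousand small
# files like it waste roughly 300MB of the partition.
```

On a server full of small documents, that slack alone can dwarf the space you budgeted for the operating system.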
The FAT file system is best for drives or partitions under approximately 200MB and is faster for sequential I/O. FAT32 can address much larger disks, but it still has none of the security features of NTFS. NTFS works best for file systems above 400MB. NTFS has its own log file for internal consistency; FAT has none. Additionally, an NTFS Version 5 volume is required for the Windows 2000 SYSVOL. Finally, NTFS has detailed security built into it. This is important now for user data, and its granular security is becoming more important as many different Windows NT components that are managed by different groups are being incorporated into the %SYSTEMROOT% directory (usually C:\WINNT).
The FAT32 file system fixes a lot of FAT's problems, especially for larger disk sizes, but it's no more appropriate for a large Windows NT server than FAT is.
Tip: The single most important first step you can take in securing your servers against security attacks is to use NTFS on all partitions.
Windows 2000 will have FAT32 support to ensure compatibility with Windows 98 and the OSR2 release of Windows 95. You'll finally be able to dual boot Windows 98 and Windows 2000, or to actually install Windows 2000 Professional on a system that has an OEM install of Windows 98 on a FAT32 partition. Windows NT 4, however, is incompatible with FAT32. You could get around this incompatibility, if you're a little daring, with a third-party utility called FAT32. FAT32 is a driver for Windows NT 4 that allows the operating system to access a FAT32 volume. The freeware version allows read-only access, and the $39 full version grants full access. This would allow you to dual boot between Windows 98 and Windows NT 4—but why would you want that on a server anyway?
Windows 2000 Startup Options and Automated System Recovery
Windows 2000 now comes with a Win9x-like boot menu, called Advanced Setup, that allows you to choose from a number of startup options:
Note: The following options are documented in Windows NT 5.0 Server Beta 2 ADVSETUP.TXT.
Safe mode—Starts Windows 2000 using only basic files and drivers, without networking. Use this mode if a problem is preventing your computer from starting normally.
Safe mode with networking—Starts Windows 2000 using only basic files and drivers—as with safe mode—but also includes network support.
Safe mode with command prompt—Starts Windows 2000 using only basic files and drivers, and displays only the command prompt.
Last-known good configuration—Starts Windows 2000 using the last-known good configuration. (Important: System configuration changes made after the last successful startup are lost.)
To start your computer using an Advanced Startup option, follow these steps:
Restart your computer.
When the OS Loader V5.0 screen appears, press F8.
Select the Advanced Startup option you want to use, and then press Enter.
Partition Recommendations
Here's a list of recommendations that apply to most partitions:
To ensure that you have enough room to grow (and my, how it does grow!), your OS partition should be no less than 2GB, under any circumstances.
Keep all partition sizes down to what you can fit on one backup job (and, therefore, one restore job). If a partition takes 16 hours to back up, either it's too big or you need to upgrade your backup hardware.
If your fault tolerance setup allows it, keep the page file on the fastest disk possible.
Don't use Windows NT volume sets. A Windows NT volume set is a collection of partitions over a number of disks that allows you to create a large, contiguous volume out of several smaller ones. Once it spans more than one disk, however, you'll have jeopardized all your carefully planned fault tolerance. If one component partition of this volume set is on a non–fault-tolerant volume and it dies, the volume set will fail and your only recourse will be to recover from backups.
Seriously consider a disk defragmenter, and run it from the beginning of the server's life. It has much lower user impact to continuously correct small amounts of defragmentation than to schedule massive defrag sessions. If your fragmentation is bad enough that the defragger can't do a good job, your only choice is to back up, reformat, and restore to start clean before you enable the defrag service. Windows 2000 has built-in disk defragmentation courtesy of Executive Software. It's only manual, however, and is the equivalent of their DisKeeper Lite freeware product. To get fully automated defragmentation—which is obviously a useful feature—you must buy DisKeeper just as with your Windows NT 4 systems.
Building the Box
Now that you have unboxed the hardware, put it together, and performed initial partitioning and formatting, you're ready to install the operating system. This isn't a thorough treatise on how to install Windows 2000; as with everything related to this new version, it's a very big subject so I'll leave that to the doorstop books (so named because they make good doorstops on a windy day). Instead, I want to point out some issues on naming the box and cover some highlights of the SYSPREP utility for automated builds in Windows 2000.
In both Windows NT 4 and Windows 2000, all servers and workstations must have a unique name of up to 15 characters to identify them on the network. This is called the NetBIOS name, computer name, or machine name. In Windows NT 4, this name is the primary way clients locate a Windows NT system; Windows 2000 doesn't care much about it, but you choose one for compatibility with downlevel systems. Unlike DNS, which has a hierarchy separated by periods (myserver.mydepartment.mycompany.com), NetBIOS is a flat namespace. This means that the name myserver must be unique across the entire Windows NT network. This isn't a big deal for a small company whose naming convention can follow the Marx Brothers, but ensuring uniqueness can be a big headache for a large corporation.
Some pointers in choosing workstation and server naming conventions include these:
Choose a unique identifier by which each node on the network can be mapped to a person responsible for it. The simplest way to do this is by naming workstations with an employee number, which can then be looked up in the company's HR database. The key is to be able to look up the name; a workstation named JIMBOB might be acceptable as long as you can make a directory services query to discover that the owner is James Robert Worthington III. Perhaps the simplest method is to name the workstation after the owner's Windows NT account name.
Name your servers alphabetically ahead of your workstations. In Windows NT 4, the domain master browser has a buffer size of 64KB. This limits the number of workstations and servers displayed in the browse list to 2,000–3,000 computers, easily reached in a large Windows NT domain. When this happens, names beyond the limit simply won't appear on the list. Because servers are more heavily browsed than workstations, define a naming convention so that servers are alphabetically ahead of workstations. That way, in a large domain, workstations will drop off the browse list instead of servers.
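The 2,000–3,000 figure follows from simple division. In this sketch the per-entry size is my assumption — the real browse-list record varies with the length of the server comment — but plausible values land squarely in the quoted range:

```python
BROWSE_BUFFER = 64 * 1024  # domain master browser buffer, per the text

def approx_browse_entries(bytes_per_entry):
    """How many computer names fit before the browse list silently truncates."""
    return BROWSE_BUFFER // bytes_per_entry

# Assuming 22-32 bytes per announced name:
# approx_browse_entries(32) -> 2048
# approx_browse_entries(22) -> 2978
```

Once a domain crosses that line, every name sorted past the cutoff simply vanishes from the list — which is exactly why servers should sort first.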
Warning: Beware of the forward-compatibility problem between NetBIOS names and DNS names. In Windows 2000, the NetBIOS restriction will be lifted and network nodes such as servers and workstations will use DNS names to identify themselves. Remember that in Windows NT 4, NetBIOS names may contain special characters. In general, DNS domain and host names have restrictions in their naming that allow only the use of characters a–z, A–Z, 0–9, and "-" (hyphen or minus sign). The use of characters such as the "/," ".," and "_" (slash, period, and underscore) are not allowed.
Servers or workstations with NetBIOS names that don't fit into the DNS naming convention will have problems under Windows 2000. Windows NT 4 will warn you before attempting to create or change a NetBIOS name with a slash (/) or period (.), but it will allow an underscore. The underscore is allowed in Microsoft's implementation of DNS, but it doesn't comply with RFC (Request for Comments) 1035, "Domain names—implementation and specification." This means that if you use an underscore in your naming convention, it will work with Microsoft but probably won't work with other industry-standard DNS servers—and Microsoft could clamp down on this loophole in the future. Even this rudimentary character checking isn't done if you've upgraded from a previous version of Windows NT. The most common name violation is use of the underscore in NetBIOS names. Start stamping out incompatible names now!
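A quick way to start stamping out incompatible names is to screen your inventory programmatically. The following check is my own sketch of the rules above: RFC 1035's letter/digit/hyphen alphabet, no leading or trailing hyphen, and the 15-character NetBIOS limit. It deliberately rejects the underscore, since that works only with Microsoft's DNS implementation.

```python
import re

# One DNS-safe label, capped at the 15-character NetBIOS limit:
# starts and ends with a letter or digit, hyphens allowed in between.
_DNS_SAFE = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9-]{0,13}[A-Za-z0-9])?$")

def dns_safe_netbios_name(name):
    """True if a NetBIOS name will also be a legal RFC 1035 host name."""
    return bool(_DNS_SAFE.match(name))

# dns_safe_netbios_name("JIMBOB")    -> True
# dns_safe_netbios_name("ACCT_SRV1") -> False (underscore)
# dns_safe_netbios_name("WEB.01")    -> False (period)
```

Running a check like this across your machine accounts now is far cheaper than renaming servers in the middle of a Windows 2000 migration.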
Automating a Windows NT build has historically been very time-consuming and error-prone. Automating the final 20% was very tedious—and sometimes almost impossible. Microsoft has had much feedback from its customers to make automating a Windows NT rollout much easier to do. A very popular method has been to build a "template" Windows NT system, and clone it to hundreds of other systems by copying the template system's hard disk to the new systems. Cloning, as it's known, is no longer a no-no, but you should be aware of two issues.
Because you've exactly duplicated the template system, you've also duplicated its security identifier (SID). I won't get into SIDs here, but strictly speaking they are supposed to be unique on a Windows NT network. If you've cloned hundreds of workstations, you'll have hundreds of duplicate SIDs on your network.
Duplicate SIDs cause security problems in a Windows NT network in two situations: workgroups (no Windows NT domains) and removable media formatted with NTFS. Problems with the former happen because user SIDs on one workstation, with no domain component, can be identical to user SIDs on another. For instance, this allows the first-created user account on one workstation to access files of the first-created user account on another workstation. The removable-drive problem occurs because NTFS ACLs depend on the SID for security. If the system is cloned and the security references a local account instead of a domain account, a removable drive can be moved to another cloned machine. The attacker logs on to an account with the same name, such as Administrator (the password needn't be the same), and the data can be read.
So, duplicate SIDs aren't as big a problem in a domain-based environment as everyone thought. There is a basic requirement for cloned systems, though. Windows NT Setup goes through all sorts of hardware detection, so a cloned machine must have the same hardware of the template machine, or funny things start happening.
Note: This hardware exactness applies especially to hard disk controllers and the HAL. In one situation, cloned machines began breaking for no discernable reason. After many weeks of troubleshooting, the team discovered that all the broken machines had a slightly newer version of a hard disk BIOS.
Third-party tools such as Ghost from Symantec to clone systems have been very popular, and now that GhostWalker (a SID-scrambling tool) is also available, systems can be cloned and still have unique SIDs.
For Windows 2000, Microsoft has given in and provided the SYSPREP utility. SYSPREP is used after the template system has been configured, just the way you like it. The utility is run as the last act before shutting down the system. The template system's hard drive will then be cloned by a third-party tool. When the target machines are first booted, SYSPREP generates a unique security ID for the machine and then pops up with a short menu used to generate a few last items such as computer name and domain membership. SYSPREP has switches to do useful things such as run in unattended mode, re-run the Plug and Play detection on first startup, avoid generating a new SID on the computer, and automatically reboot the system when the final configuration is complete.
For system rollouts that will use different kinds of hardware, the old method of unattended setup with a customized answer file is still around. It has been expanded to take care of the differences in Windows 2000, but it's essentially the same as Windows NT 4.
The final, most sophisticated, and most restricted method of rollout is available via the Remote Installation Service of Windows 2000 Server. The "empty" workstation boots, contacts a Remote Installation Server, and downloads and installs the OS. Pretty slick, eh? Before you start jumping up and down, you need to know that you must have your Windows 2000 server infrastructure (including Active Directory) in place and Remote Installation Service up and running. Only then can you install Windows 2000 Professional on it. This option could be useful down the road, most likely after you've upgraded your clients to Windows 2000 Professional and when you're ready to begin replacing existing systems or installing new ones.
The situations where you can automate a rollout fall into three categories: cloning systems with exact hardware, building systems with unattended setup and answer files, and using the new Remote Installation Service to remotely install the Windows 2000 operating system.
Maintaining the Box
Four of the most important areas of maintaining a Windows NT server are backing it up, scanning for viruses, performing hard disk defragmentation, and maintaining software. All of these are discussed in the following sections.
Backing Up Windows NT Servers
Backing up server data and coming up with a system recovery strategy have always been low on a network administrator's list of fun things to do. If you do the job right, people complain about the money spent on tape drives, tapes, and operators. If you don't do the job right—or if you do it right but don't constantly monitor the backup and tune the strategy—you can jeopardize the company, lose your job, or at least get yelled at.
Your Windows NT network's system recovery strategy should be designed at the same time you determine the type of backup hardware and backup media. This is because the implementation of a system recovery strategy depends on the media, while the amount the media is used (therefore influencing the choice of media type) depends on the strategy. This section does not include a thorough analysis of disaster recovery because much fine material has already been written about it (for example, John McMains and Bob Chronister's Windows NT Backup & Recovery). This section does cover the basics of putting a good strategy in place.
Using Storage Management Software
One of the first and most important choices you must make when putting together a backup strategy is that of the storage management software. (We all refer to these things as backup software, but the biggest packages do much more than just backups; storage management software is really more accurate.) After storage management software is in place, it becomes deeply entrenched in the environment because of the software expense, the operator training, the customer training (if customers can perform their own restores), and certainly the backups themselves if the software uses a proprietary format.
The storage management software must be capable of growing with your company's needs. You may not initially need enterprise management utilities for multiple servers, but with good fortune you'll need it in the future. You don't want to be forced to switch storage management software because yours couldn't stretch to fit your growing company's needs.
Another advantage to fully featured storage management software from a major vendor is consistency. Whenever possible, you should have one vendor for all your storage management needs, simplifying management, taking advantage of the integration within the vendor's product line, and reducing your total cost of ownership.
Following these guidelines, you should choose your storage management software from one of these four vendors:
ADSTAR Distributed Storage Manager (ADSM), by IBM
ARCserve, by Computer Associates
Backup Exec, by Seagate Software
Networker, by Legato
I haven't listed these products in any particular order. All are very good and offer a wide range of utilities; they've been the market leaders ever since Windows NT's inception. Ntbackup, the backup utility included with the operating system, is a lobotomized version of Backup Exec, and most of these companies' products have existed for many years for other operating systems.
The following are recommendations to keep in mind when choosing your storage management software:
Clearly define your requirements. Do you want to back up your clients as well as your servers? Do you want your clients to be able to do their own restores? Do you need Web-based administration?
Think big. Choose a solution once that will do all you'll ever need so that you never have to do it again. And do your research carefully: If you haven't looked at storage management solutions recently, you'll be astounded at the depth and breadth of what they can do. Of course, this makes you continually re-examine your requirements. When you realize what these suites can do, you'll be able to think of new ways to handle data you didn't know were possible.
Choose a vendor that supports the widest variety of client operating systems possible—not just Windows NT. You want to be able to back up all your business's clients, not just your Microsoft-based ones.
Make sure the software you're considering also supports all the databases you're using—and those you may ever possibly use in the future. Look carefully into the support so that you're sure you understand what it can and can't do. SQL Server, Oracle, SAP, and Exchange all may require additional software that not every solution may support. For example, if you use the OpenIngres database, of the four mentioned only ARCserve for Windows NT has an agent to back it up.
The storage management software must support the broadest range of backup devices, from 4mm DAT up to tape libraries with terabyte capacities. Again, think about your company's growth and a single-vendor solution.
Choosing a Backup Hardware Format
It's worth noting that when you're perusing the brochures for different backup types, most have split numbers such as 4/8, 7/14, 15/30, 35/70, and so on. This is a capacity description of each tape, uncompressed and with maximum 2:1 hardware compression with the right data (such as bitmaps). Obviously, your mileage will vary depending on the type of data you're backing up. If you have a good mix of data in your backup stream, splitting the difference between the two extremes will probably yield a valid estimate. I've seen a solid 27GB per tape on a 15/30 DLT when backing up standard file server data: Word documents, Excel spreadsheets, Dilbert cartoon archives, and so on.
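The split numbers reduce to one multiplication. A tiny helper (mine, not from any vendor) shows how the observed 27GB on a 15/30 DLT corresponds to roughly 1.8:1 compression on mixed file-server data:

```python
def effective_capacity_gb(native_gb, compression_ratio):
    """Usable tape capacity at a given average compression ratio.
    A '15/30' drive is native_gb=15 at ratios of 1.0 and 2.0."""
    return native_gb * compression_ratio

# effective_capacity_gb(15, 1.8) -> 27.0, matching real-world experience;
# effective_capacity_gb(35, 2.0) -> 70.0, the brochure's best case.
```

Size your tape rotation on a ratio you've measured, not on the brochure's 2:1 ceiling.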
Note: There's a very important trend to be aware of in data storage: The ability to put more data on disk drives has in no way kept up with the ability to back them up. With the exception of network backup servers using autoloaders and tape libraries, file servers of enterprise scale cannot be adequately backed up without using tape autoloaders. You must take this into consideration when you're putting together a purchase order for servers. Don't cut corners on your backup solution; if anything happens—and, of course, it will—you'll be reviled more for not having adequate backups than for the redundant power supply you just had to have.
A number of tape formats are available. You can quickly narrow your choices, however, to one or two formats based on how much you must back up, how much automation you need, what speed you need, and how much you're willing to pay. The next few sections talk about the most popular formats and their strengths and drawbacks.
QIC
The QIC (quarter-inch cartridge) format and capacity has evolved over the years. Originally limited to 100MB or 200MB per tape, it's now capable of up to 8GB using Travan cartridges. QIC has been a mainstay of PC backups for many years. Its throughput can rival that of 4mm, but because it won't scale to larger systems through use of magazines, it's limited to workstations and workgroups.
8mm
8mm tapes are about the size of a deck of cards and use the same helical scan technology found in the family VCR. 8mm tapes back up between 7GB and 14GB, and autofeeders are available to increase their capacity. As with your family VCR, however, the drive demands regular cleaning when subjected to heavy use.
4mm DAT
DAT (digital audio tape) is only about two-thirds the size of an audio cassette. Relatively inexpensive and fast, DAT also uses helical scan technology, in which data is recorded in diagonal stripes with a rotating drum head while a relatively slow tape motor draws the media past the recording head. DAT tapes hold 2GB–24GB depending on the compression achieved by the backup hardware, and they have some popularity in the IA server world. To increase their capacity, DAT tapes are available with loader magazines that can exchange up to 12 tapes without operator intervention. DATs can back up data at about 1GB–3GB per hour. (These are conservative numbers based on actual experience rather than product brochures.)
DLT
DLT (digital linear tape) is a fast and reliable backup medium that uses advanced linear recording technology. DLT technology segments the tape medium into parallel horizontal tracks and records data by streaming the tape across a single stationary head at 100–150 inches per second during read/write operations. Its path design maintains a low constant tension between the tape and read/write head, which minimizes head wear and head clogging. This extends the operational efficiency of the drive as well as its useful life by as much as five times over helical scan devices such as the 4mm and 8mm formats. DLT is the most expensive, per unit, of standard tape backup solutions, but for your money you get the fastest throughput (3GB–9GB/hour real-world), greatest capacity (35GB–70GB in the newest drives), and best reliability (minimum life expectancy of 15,000 hours under worst-case temperature and humidity conditions; the tapes have a life expectancy of 500,000 passes) of all drive types. The current standard is 35GB uncompressed or 70GB with 2:1 compression, and it's been around for a while. On the horizon is 50/100, but it still isn't keeping up with the increase in disk capacity.
DLT cartridges look disconcertingly like old 8-track cartridges. Loaders to handle multiple tapes and increase capacity without operator intervention are also available from many vendors, and these are a necessity if you have a number of servers to back up and limited operators to swap tapes.
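Using the real-world throughput figures quoted above, a quick sanity check of backup windows is easy to sketch. The function below is plain arithmetic; the 3GB/hour and 9GB/hour rates and the 35GB capacity are the chapter's own numbers.

```python
# Rough backup-window estimate from the real-world DLT figures quoted
# above (3-9 GB/hour throughput, 35 GB native capacity).

def backup_hours(volume_gb, throughput_gb_per_hr):
    """Hours needed to stream volume_gb to tape at a given rate."""
    return volume_gb / throughput_gb_per_hr

# A full 35 GB volume at the conservative and optimistic DLT rates:
worst = backup_hours(35, 3)   # ~11.7 hours
best = backup_hours(35, 9)    # ~3.9 hours
print(f"35 GB full backup: {best:.1f}-{worst:.1f} hours")
```

Even at the optimistic rate, a full backup of a large volume won't fit in a lunch hour, which is why full backups get scheduled for nights and weekends.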
Network Backup
One problem with traditional local tape backups is that when you have a lot of servers, you have a lot of tapes and tape drives to manage. An unattended network backup solution moves the backup media to one server specifically designed to back up massive amounts of data as quickly as possible. Truly awesome amounts of storage can be built into supported devices: an ADIC Scalar 1000 DLT library integrated into ADSM can hold up to 5.5TB. Whatever their advantages and disadvantages for server backups, network backups are a huge win for lowering the total cost of ownership in a network. All backup operations are automated, which means that the single most expensive overhead item for operations—the operators themselves—aren't needed for changing tapes at all hours and performing tedious data restores.
Tape Format Recommendations
Table 5.2 shows a comparison of backup methods. I've listed the advantages and disadvantages in relative order of importance, based on a goal of high availability. Your business priorities for your Windows NT network may be different, perhaps compromising on longer restoration times to save hardware money. Keep in mind, however, that mainstream backup hardware and tapes almost always cost far less than the price of tens or hundreds of workers sitting on their hands because the server's down.
My recommendation hands-down is DLT for a shop of any size. Besides speed and reliability, it comes up the winner in an area most people haven't thought about: longevity of the format. If you're backing up design data to be archived for 5 years, or company financials for 10 years, you have to think of the environment at the time of restore. Will a tape drive of the right format still be around? I know of a large company that kept a Digital Equipment Corporation RV20 write once-read many (WORM) optical drive around (and had to pay for maintenance) for years after the technology was obsolete because it was the one piece of hardware that could read their archives.
Table 5.2 A Comparison of Backup Methods
Building a System Recovery Strategy
Before you begin designing a system recovery strategy, you must ask your customers, your managers, your operations staff, and yourself a number of questions.
Gathering Customer Requirements
Here are some questions to ask your customers and their managers:
How long should a file exist before it gets backed up? In a typical backup scenario, any file that has existed more than 24 hours gets backed up. This doesn't mean, however, that it will stay backed up for months or even weeks. If a document is created on a Tuesday and erased on a Thursday, its chances of being recovered after several weeks are less than if it was created on a Thursday and erased on a Tuesday. This seems arbitrary, but due to standard business hours, full backups are best done over the weekend. This period where files greater than 24 hours old on any day of the week are backed up can be described as the high-detail backup period. So be aware: Depending on when they are created and erased, a file's life on backup media will vary. Fortunately, most people's habits keep them from quickly erasing any data they deem valuable. When's the last time you saw disk space utilization on a server volume go down?
How many copies of a file should be kept? Is it important that users can retrieve a specific version of a file? In most file server environments, the number of versions created by several weeks of full backups is often enough, but special applications may require more.
How far back in time should a file be recoverable? Certainly, everyone would like to be able to recover that year-old weekly report of accomplishments for the following year's annual review. In reality, however, most people ask for restores within the last two or three weeks—and most of these ask for recovery from yesterday's backup! Increasing backup media storage time, known as the retention period, beyond three to six months dramatically increases the cost of media.
Should the customer be able to restore the file himself? This would be a nice feature, but server backups don't currently allow users to restore their own files. It's a server backup, so it must be restored by server operations. This could be done in an indirect manner, however, if you have a network backup system such as ADSM. ADSM can back up client workstations with a simple user interface that allows workstation users to restore their own files. If the client includes a personal share on the server as part of the workstation's backup, he will also be able to recover the server share. Of course, this raises a number of other problems, one of which is multiple backups of the same data. The server may back up the same client share as the workstation backups because it has no knowledge of the workstation backups, and vice versa. Another problem is network shares on the user's workstation. After backing up his personal share, the simple marking of a checkbox may allow the user to back up a 1GB department share (which is already being backed up by the server backups).
How long may restoration of the user data to the server take? If the integrity of the data is more important than its restoration speed—and especially if you have lots of data—a network backup product may be acceptable. As availability requirements increase, the backup data storage possibilities move from local tape to online copies, to network mirrored data, to clustering.
Gathering Data Center Requirements
Here are some questions to ask your management and other IS managers:
How long may the server itself be down? If the Windows NT operating system is dead but the user data is fine, you need to be able to recover the operating system quickly and with as little pain as possible. This is a balance between your business needs of server availability and what you're willing to pay for it. If you require instant availability, you should have a clustering or network mirroring solution. Data on the OS partition changes slowly, so if you have partitioned it correctly, a weekly full backup to tape will ensure that the operating system can be rebuilt within 30 minutes or so. If recovery time isn't important and the server receives its data from replication, backups of the operating system partition may not even be necessary.
Do you have SLAs (service-level agreements) signed with your customers stating the maximum recovery time? If you do, you must base your calculations on this. If it turns out that you were hopelessly optimistic in your recovery time estimates, you'll have to crawl back to the negotiating table for more money or renegotiate the recovery times based on realistic expectations with the business.
What level of disaster recovery does the company believe should be implemented? Despite your belief in the importance and irreplaceability of your Windows NT systems and data, the company may not share your convictions. Assuming that your company has an existing disaster recovery plan for its existing systems, the decision needs to be made whether to include the Windows NT systems in it. Certainly the simplest solution is to incorporate Windows NT into an existing DR plan. Disaster recovery can imply several levels, too; when you say "disaster recovery" to a server operator, she may think of two hard drives failing simultaneously. When you say it to a DR planner, he may think of a 747 crash-landing on the roof. These require different levels of planning, and the disaster scenarios must be thought out ahead of time. If your company doesn't have a disaster recovery plan for its computers, run—don't walk—to the bookstore where you bought this, and buy a book on disaster recovery planning! Do something, even if it means carrying one set of tapes home weekly.
Is there an operations staff at each server site? You need operators to change tapes. This is where network backups shine. No operators are needed for network backups to a tape library, only for assistance in catastrophic system recovery. If operators are unavailable or must make limited trips to the site, consider an autoloader tape magazine that can hold a number of tapes. You can schedule various jobs to run on different tapes in the magazine, and you can even include a cleaning tape that runs at scheduled intervals. Of course, in this scenario there's limited disaster recovery potential because the backup tapes are sitting next to the computer!
What are their hours and how busy are they? Even if the computer room or server area is staffed 24x7, you need to consider operator availability when scheduling tape changes, tape drive cleaning, offsite disaster recovery shipments, and so on. Balance this with the need to run backups at off-peak hours, and you have defined certain time windows in which tapes must be handled.
What is the network bandwidth at each server computer room and between these locations? This factor determines whether network backups are practical. In large installations, at least a 100Mbps network backbone is necessary to provide enough bandwidth to back up multiple servers in a practical amount of time and without saturating the network.
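The bandwidth question above yields to back-of-the-envelope arithmetic. The sketch below estimates how long it takes to move a given amount of data over a shared link; the 60% utilization factor is an assumption of mine (you never get the full rated bandwidth in practice), not a figure from the text.

```python
# Back-of-the-envelope network backup window: how long to move a given
# amount of data over a shared link. The utilization factor is an
# assumption; real-world effective throughput varies widely.

def transfer_hours(data_gb, link_mbps, utilization=0.6):
    """Hours to move data_gb over a link_mbps link at the given utilization."""
    effective_mbps = link_mbps * utilization
    seconds = (data_gb * 8 * 1024) / effective_mbps   # GB -> megabits
    return seconds / 3600

# 100 GB of server data over a 100 Mbps backbone vs. a T1 circuit:
print(f"100 Mbps backbone: {transfer_hours(100, 100):.1f} hours")
print(f"T1 (1.544 Mbps):   {transfer_hours(100, 1.544):.0f} hours")
```

The comparison makes the point quickly: a 100Mbps backbone handles 100GB overnight, while the same job over a T1 would take more than a week — network backups of remote sites over WAN circuits are rarely practical.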
Has the recovery plan been tested, and have all operators been trained on it? Too often, the recovery plan isn't really tested until a failure happens.
Have you factored performance degradation into your recovery times? Be very conservative in estimating the amount of time required to recover a system, especially if it's to be restored from a network backup system. If you don't have a dedicated backup network, the length of time required to restore the server depends on the network traffic. Indeed, you may have a service agreement in place that requires data restores to wait until the evening if network traffic is above a certain level. If you run this kind of risk, you shouldn't be using network backups.
If chargeback (billing customers for your services) is not used, how much is the information systems department willing to spend on backups? In most shops I've encountered, the IS department simply eats the cost because the amount billed isn't worth the overhead for journal entries. In other words, what's your budget? Be prepared to go back for more after you've done your research. You could also consider offering a tiered pricing structure based on the level of service being offered.
Do you have any way to automate the review of backup results? You must have a way to notify your operators if a backup has failed. Enterprise-scale storage management software now has automated alerting functions, but unless you're prepared to write your own log-viewing facility, you also have to buy the coordinating systems management software.
How important are all these to you and your customers (i.e., what are they willing to pay for these things)? Every one of these costs money—some much more than others. A 4mm DAT drive can be acquired for $1,000 and will hold between 4GB and 8GB, while an automated tape silo costs well into five figures and holds terabytes of storage. You and your managers need to establish where the balance lies between high server availability and the cost to keep it high. Unfortunately, braggin' rights tend to focus on the 99.9% availability your servers averaged last month; it's harder to crow about how much less your servers cost to maintain!
Types of Backups
Regardless of your choice of backup hardware or media, the type of backup being performed usually falls into three categories:
Full—Everything on the selection list is backed up, period. The archive bit, a property of all PC-based files that indicates whether the contents of the file have changed since it was last set, is unconditionally reset to 0. Full backups are often broken into two categories: weekly and monthly. They perform the same function; the difference is that weekly tapes are recycled after about a month, while monthly tapes are retained for the length of the tape retention period. For example, take a standard configuration where a month equals five weekly cycles and tapes will be retained for one year. In this case, a weekly job will be reused every five weeks, and a monthly job will be reused once a year. Any file that exists during a weekly job and that is less than five weeks old may be recovered. Any file that exists during a monthly job and that is less than a year old may also be recovered.
The advantage is that because all data that was selected was backed up, this is a snapshot in time of an entire disk volume or server (if you built the selection list correctly). With this set of tapes, you should be able to restore a server to some level of service. Full backups are the foundation of the remaining backup types.
The disadvantage is that because everything gets backed up, this chews up a lot of tape. Full backups are the most time-consuming backups. If backups are run across a network, they consume a lot of network bandwidth during this long backup time.
Incremental—Everything on the selection list with the archive bit turned on is backed up. After the backup completes successfully, the archive bit is reset to 0. From a practical point of view, this means that every file that has changed since the last backup (full or incremental) gets backed up. In backup documents, this is usually (but not always) the type of job defined as a "daily."
The main advantage of this type of backup is that it is much speedier than a full backup because, if incrementals are run daily, relatively few files have changed compared to all files on a volume (typically 5% or less on a file server). Incrementals allow versioning: If they run daily and the contents of a file are also changed daily, the file's archive bit gets set to 1 and the incremental job backs up a new version every day.
The main disadvantage of an incremental backup is that it usually cannot be used by itself to restore a volume or server; it must depend on the data from a full backup being restored first. On a system such as a database server, where data files are interrelated and depend on each other's versions, all incrementals must be applied to a restore job. This can be a time-consuming process, and the time required for restoration will eventually outweigh the convenience of a speedy backup. For example: An SQL server's database files are backed up with a full backup on Sunday and incrementals early every morning during the week. If the database becomes corrupted Friday afternoon, the restoration process requires 1) restoring Sunday's full backup, and 2) restoring the five incremental backups taken Monday through Friday morning.
Differential—Everything on the selection list with the archive bit turned on is backed up. Unlike an incremental, the archive bit is not reset to 0. As with an incremental, every file that has changed since the last backup gets backed up. This type of backup job is occasionally used as a "daily" and is much speedier than a full backup.
Differential backups also offer versioning. Unlike an incremental backup, a volume or server can be restored more quickly to a "snapshot in time" with a full backup plus a differential. This is because the backup contains all the changes made since the last full backup; an incremental may contain only changes since the last incremental. (Though it's possible to run differentials after incrementals, it gets very complex and isn't recommended.)
Because the archive bit isn't reset to 0 after a differential is run, the number of files to be backed up grows every day. A system recovery strategy that uses daily differentials consumes more tape than one that uses daily incrementals, but unlike incrementals, you don't have to restore multiple differentials to rebuild a snapshot of the system.
As with an incremental backup, a differential backup usually cannot be used by itself to restore a volume or server; it must depend on the data from a full backup being restored first.
Copy—A copy job is identical to a full backup job, with the exception that the archive bit isn't reset. It is usually used for special jobs such as disaster recovery, where a complete copy of the system is desired but won't normally be available for file restoration.
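The archive-bit semantics of the four backup types above can be captured in a few lines. This is a sketch of the selection-and-reset logic described in the text, not any vendor's actual backup engine; the dict-of-booleans file model is mine.

```python
# Sketch of the archive-bit semantics described above. Each backup type
# selects files by archive bit and either resets the bit or leaves it alone.

def run_backup(files, kind):
    """files: dict of name -> archive_bit. Returns the set of files backed up."""
    if kind in ("full", "copy"):
        selected = set(files)                 # everything on the selection list
    elif kind in ("incremental", "differential"):
        selected = {f for f, bit in files.items() if bit}
    else:
        raise ValueError(kind)
    if kind in ("full", "incremental"):       # only these reset the bit to 0
        for f in selected:
            files[f] = False
    return selected

files = {"a.doc": True, "b.xls": True}
assert run_backup(files, "full") == {"a.doc", "b.xls"}   # snapshot, bits cleared

files["a.doc"] = True                                    # a.doc changes Monday
assert run_backup(files, "differential") == {"a.doc"}    # bit stays set
files["b.xls"] = True                                    # b.xls changes Tuesday
assert run_backup(files, "differential") == {"a.doc", "b.xls"}  # grows daily

assert run_backup(files, "incremental") == {"a.doc", "b.xls"}   # resets bits
assert run_backup(files, "incremental") == set()         # nothing changed since
```

Notice how the second differential contains everything since the full backup, while back-to-back incrementals shrink to nothing — exactly the restore-time trade-off the text describes.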
In a world with unlimited backup storage and tremendous backup speeds, a full backup every day would be the simplest system recovery strategy. Because this would require a tremendous amount of time and tapes, a good system recovery strategy balances retention period, high-detail backup period, disaster recovery, and operator availability while minimizing tape use. It's no wonder that a good system recovery strategy is hard to find.
A Backup Example
As a real-life example, let's answer many of the questions ourselves:
How long should a file exist before it gets backed up? 24 hours.
How many copies of a file should be kept? This isn't as important to the customer as recovery time—let's say three copies.
How far back in time should a file be recoverable? Six months.
How long may restoration of the user data take? Within four hours.
How long may the server be down? Two hours.
What kind of disaster recovery should be implemented? Data will be stored offsite. Because of operational considerations, it will be between three weeks and six months old.
What is the site operations staff's hours? Staff are available 7AM–4PM, Monday through Friday.
What is the network bandwidth at each site and between these sites? The servers are on FDDI rings at each site, and the major sites are linked by an ATM network. Remote sites are connected to the main campus by T1 WAN circuits with apparent bandwidth of 10Mbps.
What kind of backup hardware will be used? 35-70GB DLT, single tape (not changers).
Armed with this information and data on the available backup options, we can put together a system recovery strategy. For the sake of an interesting example, let's assume that ADSM is already being used for workstation backups across most of the company, including remote sites.
Figure 5.1 is an example of a backup schedule for systems with a dedicated DLT tape drive. It covers an entire month and handles full and incremental backups, cleaning, and disaster recovery jobs. The letters A–E in Figure 5.1 stand for the following tape operator duties:
A. Dismount Daily tape. If needed, mount cleaning tape and allow drive to clean heads and eject tape. Mount corresponding coded Weekly tape into drive.
B. Ship third-oldest Weekly tape offsite for storage as Disaster Recovery tape.
C. Dismount Weekly tape. Relabel Weekly tape as Monthly tape, and store. Mount corresponding coded Daily tape into drive.
D. Dismount Weekly tape. Mount corresponding coded Daily tape into drive.
E. Manually clean drive, if necessary.
The following are more complete descriptions of the individual jobs:
Weekly and monthly jobs—Full backups will be taken weekly and be kept for five weeks. Ideally, these long-running backups would be taken over the weekend when there is little user activity. In this case, however, the operator's schedule dictates that they must run on a weekday. (Let's choose Wednesday.) The full backup jobs are called weeklies; monthly jobs will simply be weekly jobs that are removed from the five-week backup cycle and kept onsite.
Daily jobs—Incrementals will run on the remaining nights of the week. They will be kept for two weeks, and they'll be called daily jobs. These jobs will be of two types: Daily Append and Daily Replace. Daily Replace will run once a week after the weekly/monthly and will overwrite (replace) the contents of the tape. Daily Append will run on the remaining days of the week and append its data to the tape. The combination of these two jobs creates a single tape with a week's worth of incremental backups.
Disaster recovery jobs—Disaster recovery jobs can be created in two ways. If you have 24x7 or 24x5 operator support, a DR job can be run one night a week after the nightly job has completed. This requires one more intervention by the operator than described on the calendar, but it ensures that disaster recovery tapes are no more than one week old. If you don't have the luxury of 24-hour operator support or an autoloader, unless you are able to run backup jobs during the day (not recommended due to open files and server load), you must use the simpler method indicated in the backup schedule. In this case, disaster recovery jobs will be the third-oldest weekly job, rotated offsite until the next weekly job is run. The obvious drawback to this method is the age of the data; it's always at least three weeks old. The reason for this method is that a single tape drive doesn't allow you a way to change tapes without operator intervention, and your operator's schedule ensures that no one will be around to perform the change required for the DR job after the regular nightly job. Other possibilities are to substitute a DR job for a daily (losing 24-hour recoverability for one day of the week) or pay an operator overtime one night a week to drive in and swap tapes. Only one set of DR tapes is kept offsite; you may want to increase that to two sets for redundancy.
Cleaning—DLT drives don't need to be cleaned nearly as often as 4mm tapes, and an indicator light tells the operator when it's needed. The CLEAN job on Wednesday that uses a special cleaning tape is optional and needn't be performed unless the cleaning light is on.
Open files—Any file on a Windows NT system that has an exclusive lock on it—whether it's an open Word document or the SQL Server master database—may not be backed up. I say may because most storage management software offers optional open file backup modules, but if you don't get one and a file is open, the backup program will skip it. This has very large and unfortunate consequences (especially for database systems) if you aren't aware of it. That SQL server you've been backing up for six months with the native Windows NT backup tool actually has no worthwhile data on tape, because SQL wasn't shut down before backups and has locked all its data structures open. As a result, the master database and all the database devices have been skipped by the backup program. This is an excellent reason to examine your backup logs on a daily basis, because these errors show up there.
If you must have high availability on your databases and you either won't buy an open file backup module or your software doesn't offer it, a simple process can provide data integrity:
On a nightly basis, just before the scheduled backup time, dump the databases to dump devices. The scheduled job will then be capable of backing up these files.
If possible, perform a full backup of the server with the database shut down. This should ideally be done whenever a physical database device has changed on the system because those are the files that the operating system sees. To minimize the number of database downs, you'll want to change this device as little as possible. The advantage of a full physical backup of the database devices is that in the event of a failure, the rebuilding of the database system will be greatly speeded. SQL Server won't have to be reinstalled and the database devices won't have to be re-created manually.
If you can't bring the database system down for the occasional backup, keep printed copies of database device information. After a failure, you'll have to rebuild them manually before you can load the database dumps from tape.
Restoration of service will be slower with this method because you must reinstall SQL Server and rebuild the database structure before loading your database dumps. If you shut down the database before backups and backed up everything at once, a simple restore job would bring back the entire database and its executables and data structures at once. It's a compromise you must make between availability and restoration speed.
To determine the right number of tapes to purchase for this backup scenario, we need to add up the number of tapes needed for each kind of job, the number of times they run, and factor in the various retention periods. Let's figure it out by job type:
Note: "1/6 tape per daily job" really means that the same tape is used for dailies all week.
Note: In this case, "# of tape sets pulled from rotation" is equal to the retention period of six months because a set of weekly tapes is set aside every month for six months. Likewise, "# of months that weeklies are in rotation" means that you don't need to account for special monthly tapes when you can pull a valid weekly tape (one that's in current backup rotation) off the rack. In most cases, this will be equal to 1.
These formulas may seem like overkill when you can probably work it out with a little head scratching, but they will work in a situation where it gets too large or complicated for seat-of-the-pants reckoning. Having said that, don't forget to add Finagle's Constant of about 10% to cover bad tapes, underestimation of how many tapes you think you'll need, and so on.
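The tape-count arithmetic described above can be expressed as a small function. The parameter values below mirror the example scenario (five-week weekly cycle, one shared daily tape per week kept for two weeks, six monthly sets for the six-month retention period); they are my reading of the scenario, since Table 5.2 and the per-job figures aren't reproduced here. Finagle's Constant is the 10% margin the text recommends.

```python
# Tape-count estimate for the example scenario: weeklies in a five-week
# rotation, one appended daily tape per week kept two weeks, and one
# weekly set pulled aside per month for the six-month retention period.
# Finagle's Constant (10%) covers bad tapes and underestimates.

import math

def tapes_needed(weekly_cycle_weeks=5, daily_tapes_per_week=1,
                 daily_retention_weeks=2, monthly_sets=6,
                 tapes_per_set=1, fudge=0.10):
    weeklies = weekly_cycle_weeks * tapes_per_set
    dailies = daily_tapes_per_week * daily_retention_weeks
    monthlies = monthly_sets * tapes_per_set
    total = weeklies + dailies + monthlies
    return math.ceil(total * (1 + fudge))

print(tapes_needed())   # 13 base tapes -> 15 with the 10% margin
```

As the text says, you could do this on the back of an envelope for one server; the function earns its keep when multi-tape sets and several retention tiers make seat-of-the-pants reckoning unreliable.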
Virus Scanning
It's well known by now that Windows NT is susceptible to viruses. The boot partition and users' files are especially at risk, with much resulting pain and anguish. A server can be infected with a boot sector virus through infected service disks such as system configuration utilities, Windows NT boot disks, BIOS updates, and others. It's vital that you practice safe hex by scanning these disks regularly with an up-to-date virus scanner.
Building a comprehensive virus policy for Windows NT servers is more complicated than it might first appear. The most obvious requirements of a virus scanner are listed here:
Detection and correction—The scanner must be capable of finding and fixing as many viruses as possible.
Unobtrusive operation—The scanner must interfere with normal server operations as little as possible.
Comprehensive notification and reporting—It's very important to have flexibility in how virus alerts are sent. A scanner with a highly configurable alert utility will allow you to distribute file disinfection to local administrators by partition or share.
Selecting and configuring a Windows NT Server virus scanner is the easy part of building a virus scanning policy; figuring out what to do with a virus when you find it is the hard part. Before you sneer, "Automatically clean it off—Duh!", think ahead to the consequences.
Whose responsibility should it be to disinfect viruses? It's pretty clear that the administrations/operations group should keep the boot sector and OS partition clean. What about the user partitions?
What is the process for cleaning a virus off the server? The only way some viruses can be cleaned requires erasing the infected file. How will that be handled? "Dear sir: We erased your critical spreadsheet from the server because it was infected with a virus (even though you could read the data). Have a nice day, MIS." If the virus is on a personal share, the owner can spread it by emailing the file to others. If it's on a group share, it may be infecting many other people. The moral: Automatic correction on user partitions is okay if the file is not deleted, but get the alert about the virus out to the user right away so the user can notify others who may have been infected. If the virus requires that the file be deleted, move it to a special directory and notify the user immediately.
Who gets notified if a virus is detected on a user partition? The server administration staff? If the infected file is on a group share, where does the alert go?
Don't forget that here economies of scale work against you. If you have 1,000 users on a server and have 5 servers, and if 35% of them have at least one virus on their files, that's 1,750 cases that need to be dealt with. If you roll out a virus scanner across multiple servers in a short period, the help desk will be swamped with calls.
Suppose that one day you get all the viruses off the server. You now have a clean server with hundreds or thousands of dirty workstations reinfecting it every day. To work effectively, a comprehensive server virus policy must dovetail with a workstation virus policy. Fortunately, workstation virus scanners have become quite sophisticated and can catch many viruses the instant they're loaded into memory. You also need a process of regularly updating the virus signature database on all servers.
Fragmentation
Yes, Virginia, there is disk fragmentation on NTFS volumes, and it can affect your system's performance. Although they don't fragment as quickly as FAT, NTFS partitions can be badly fragmented by normal operations. Take print spooling, for example. The process of printing a file to a network printer requires that a spool file be created and almost immediately deleted. This process alone, repeated hundreds or thousands of times in the course of a normal working day, can fragment any type of partition.
Disk fragmentation is measured by how many fragments a file is broken into. A completely defragmented file has a fragments per file ratio of 1.0: The file consists of one fragment. Executive Software, the company that literally wrote the book on the subject, believes that any partition with a fragments per file ratio greater than 1.5 is badly fragmented and severely impacts performance. An analysis I did of a Windows NT file server whose disk array had been around for three years yielded a fragments per file ratio of 3.64! Historically, the only thing that could be done about fragmentation was to back up the volume, format it, and reload the data. With the advent of Windows NT 4, however, hooks have been added to the operating system's microkernel to allow real-time defragmentation, and several products are on the market.
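The fragments-per-file ratio just described is a simple average, which a few lines make concrete. The sample volume below is invented for illustration; the 1.0 baseline and the 1.5 "badly fragmented" threshold are from the text.

```python
# Fragments-per-file ratio as described above: total fragments divided
# by total files. 1.0 means every file is contiguous; Executive
# Software's threshold for "badly fragmented" is 1.5.

def fragments_per_file(fragment_counts):
    """fragment_counts: a list of fragment counts, one entry per file."""
    return sum(fragment_counts) / len(fragment_counts)

# A hypothetical volume where most files are contiguous but a couple
# of heavily rewritten files are shredded into many pieces:
volume = [1, 1, 1, 2, 1, 14, 1, 3]
ratio = fragments_per_file(volume)
print(f"{ratio:.2f}")       # 3.00 -- well past the 1.5 threshold
```

Note how a handful of badly shredded files can drag the whole volume's ratio past the threshold even when most files are contiguous.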
Software Maintenance
The task of keeping Microsoft operating system software up-to-date is pretty straightforward once you understand the concepts of hotfixes and Service Packs. Equally important, you should understand how much Microsoft itself trusts each of these updates.
Using Hotfixes
As problems are reported with Windows NT, Microsoft develops fixes for them. (We'll leave the subject of how quickly and how well for another time.) These fixes are called hotfixes, and not much regression and integration testing is done on them.
Hotfixes can be installed in one of two ways. The simplest way is to save the hotfix executable in a temp directory and run it. A more organized way, especially if you have a number of hotfixes, is to run the executable with the /x switch. This will extract the hotfix and its symbol files. You can then install the hotfix with the HOTFIX command. Table 5.3 lists the switches (that is, options) for the HOTFIX command.
Table 5.3 Switches for the HOTFIX Command
There are two other ways to quickly check whether hotfixes have been applied to a system. The first is to check in the %systemroot% directory for hidden directories:
$NTUninstallQxxxxxx$
Here, xxxxxx is the hotfix number.
The second method is to look directly in the registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Hotfix
If the \hotfix key isn't there, or if it's empty, no hotfixes have been installed.
Tip If you come to Microsoft operating systems from a different background, be careful about your assumptions on software maintenance. Unlike IBM mainframe maintenance (which recommends that you keep up with monthly fixes and believes these program update tapes [PUTs] are safe), Microsoft takes a different position. Don't apply hotfixes unless they fix a specific, critical problem you're encountering—and then be prepared for something else that had a dependency on the hotfixed files to possibly break. Microsoft doesn't hide the fact that most hotfixes are only minimally tested and certainly aren't for all the dependencies in the OS. So don't apply a hotfix unless you really need it, and then test it thoroughly in your environment before releasing it to your production servers. As always, make sure you have a good backup of your server before you apply any maintenance of this type.
Using Service Packs
As a large number of hotfixes are accumulated, they'll be rolled up into an overall maintenance package called a Service Pack. Service Packs are infrequent events; there were only five for Windows NT 3.51, and there are four so far for Windows NT 4. Service Packs have also grown in size and sophistication over their history. Besides becoming easier to install, since Windows NT 4 SP3 they've featured an uninstall option. The original files are stored in a hidden directory named %systemroot%\$NTServicePackUninstall$. Unlike individual hotfixes, the updates in a Service Pack are regression and integration tested to ensure that they're more stable than an ad-hoc collection of individual hotfixes. Service Packs are cumulative, which means that updates from previous Service Packs and hotfixes are rolled up into later ones. Service Pack 4 is 100MB in size and has more than 1,200 files.
Service Packs are more thoroughly tested and theoretically are more stable than hotfixes, but again Microsoft's policy has long been, "If you aren't having problems, don't apply it." There are several reasons to cautiously change that mindset. The first is that as the world has begun to pay attention to Windows NT, the security attacks against it have dramatically increased, and therefore so have Microsoft's security hotfixes. Test carefully before you install an individual security hotfix, but applying a Service Pack with a tested suite of these updates is a good idea.
The second reason is that Service Packs have evolved into providing new features as well as fixing existing problems. Password filtering in Windows NT 4 Service Pack 3 is a good example; it provides a higher level of password security if you choose to implement it.
You should read the release notes before installing a Service Pack, and carefully consider all the implications of what you read. For example, Service Pack 4 updates the NTFS file system driver, so a Windows NT 4 system will coexist with NTFS Version 5. Does this affect your disk utilities, such as your defragmenter or your virus scanner? You need to have answers to questions like these before you apply the newest Service Pack.
The Windows NT Service Pack site (for all releases of Windows NT) can be found at this address:
Using System Dump Examinations
When a system bugchecks and generates the infamous blue screen of death (BSOD) and a dump file, symbol files are needed during dump analysis to determine where in the operating system code the problem occurred. The symbol files can be found on the Windows NT Server CD at \support\debug\<processor type>\symbols; the dump analysis tools are at \support\debug\<processor type>. However, you don't have to keep all this straight because the batch stream \support\debug\expndsym.cmd will install the symbol files in the correct place. The syntax is as follows:
Expndsym <CD-ROM_drive_letter> %systemroot%
Here's an example:
expndsym d: c:\winnt
You could easily substitute %systemroot% or even hard code it into any copies you make of the batch stream. This would work as long as you were actually running under the operating system on which you wanted to use the symbol files.
Symbol files take up a lot of space (approximately 100MB) and must be kept up-to-date. To accurately debug a dump with symbol files, every Service Pack or hotfix applied to the OS must have its corresponding symbol file updated in %systemroot%\system32\symbols. When repeated over hundreds of servers, this can obviously be a configuration management nightmare. Fortunately, you don't need to install symbol files on every server. Instead, install them on a single troubleshooting server that runs the OS release and fix level of your production baseline. If you have more than one OS baseline, install that baseline with symbols in another directory. When a bugcheck occurs on a production server, send the dump to the troubleshooting server—you may want to zip it if you have slow links—and perform the dump analysis from there. The DUMPEXAM command is the starting point for reducing the dump:
dumpexam -v -f output_filename -y symbol_search_path crashdumpfile

Here, symbol_search_path is usually c:\winnt\system32\symbols, and crashdumpfile is the dump file to analyze.
Obviously, it's easier to write a batch program to take care of the dump file processing. Listing 5.1 is a simple batch stream called DUMPSHOOT that runs DUMPEXAM, writes it to a text file, then displays it for you.
Listing 5.1. A Batch Stream to Simplify the Dump Processing
@Echo off
if "%1"*==""* goto a
if "%1"*=="/?"* goto help
if "%1"*=="?"* goto help
set dumpfile=%1
goto exam
:a
set dumpfile=memory.dmp
:exam
Echo Attempting to extract information from c:\dumpster\%dumpfile%...
dumpexam -v -f c:\dumpster\dumpshoot.txt -y %systemroot%\symbols c:\dumpster\%dumpfile%
Echo Would you like to view the crash dump analysis? (CTL-C if not!)
pause
notepad c:\dumpster\dumpshoot.txt
goto exit
:help
Echo "DUMPSHOOT <dump file>"
Echo DUMPSHOOT condenses NT crash dumps into a useable format.
Echo DUMPSHOOT invokes DUMPEXAM with the right parameters.
Echo If no dump file is specified, the file is assumed to be
Echo found in C:\DUMPSTER\MEMORY.DMP.
Echo If you specify a dump file, I still assume it's in C:\DUMPSTER.
Echo The output is C:\DUMPSTER\DUMPSHOOT.TXT
:exit
The output from DUMPSHOOT is dumpshoot.txt. Most of the time it contains all the information that Microsoft product support needs to move forward in problem resolution, without having to ship a 128MB dump file to their ftp server.
Software Maintenance Recommendations
Here's a summary of my software maintenance recommendations:
Only apply hotfixes if you really need them, and test them in your environment before putting them in production.
As with anything else brand new, don't be in a rush to install a new Service Pack. Microsoft has had a mixed record on the reliability of its Service Packs; unless you're dying for the updates, wait until they've aged just a little.
Be sure you pull down the hotfix for the right server architecture. The IA architecture hotfixes end in i, and the Alpha ones end in a.
For Windows NT 4, apply maintenance such as hotfixes and Service Packs only after you've installed and configured all the system's software. If you install a Service Pack and then install a software component, the installation process will overwrite the updated components with the original media components. This recommendation therefore leads into the following one.
If you have installed software after applying maintenance on a Windows NT 4 system, reapply the maintenance. This unfortunately means that more OS partition disk space is chewed up because a new uninstall directory is created every time a Service Pack is applied.
Service Packs in Windows 2000 are supposed to be intelligent enough that if you install software after the Service Pack has been applied, you won't have to re-apply the Service Pack.
Print and read the documentation very carefully. This is not some program you want to just install without looking! Even though it has an uninstall option, a Service Pack often changes the basic structure of the SAM database or the registry so that it's not possible to completely back out without restoring from backups.
Take a full backup of your boot partition, and update your Repair Disk, before you install a hotfix.
Don't keep hotfixes older than the most recent Service Pack. At this point, they've been rolled into that pack; if they haven't, it's because the hotfix has been withdrawn. If that's the case, you don't want it on your system anyway!
Monitoring Performance and Tuning the Box
Windows NT is the most self-tuning operating system ever devised for the commodity server market. As a result, there are very few knobs the Windows NT administrator can turn to alter the performance of a Windows NT server—and turning them without restraint will probably degrade the system more than if you had left it alone. However, it's important to understand the performance characteristics of Windows NT and learn where it's most likely to get clogged up. I'll just hit the high points of detecting bottlenecks and tuning a Windows NT server by subsystems, sprinkled with some general rules. For a complete treatment on Windows NT performance, look in the Optimizing Windows NT volume from the Windows NT 3.51 Resource Kit (a similar volume doesn't exist in the 4 Server Resource Kit).
Note: A note on Windows 2000: Even though performance documentation may say 3.51 instead of 5.0 or Windows 2000, 98% of it is perfectly relevant. At all but the deepest level of detail, performance characteristics and bottlenecks of Windows NT are the same from versions 3.5 to 5.0. Indeed, the basics apply almost exactly across any virtual memory operating system, whether IBM, Sun, Compaq, or Microsoft is on the box.
What follows are four of the basic tenets of system performance diagnosis and tuning for all computer systems. The fifth (Task Manager) is specifically for Windows NT systems:
Thou shalt not change more than one system parameter at a time. It's important to look at performance problems logically because you will always have these four dynamic variables interacting with one another in a system—and it's easy to lose track of where you are in a four-variable equation. If, in your hurry to get your boss off your neck, you tune several system parameters at one time to correct a problem, you'll never know exactly what fixed the problem and what didn't. Ask your boss whether he really wants to see the problem appear again because a good problem analysis wasn't done the first time, or whether he'd rather spend some extra time and fix it just once.
There is always a bottleneck; tuning just minimizes it and moves it around. One subsystem will always have more load than another, even if just a little. A bottleneck occurs when a task on the system must wait for a system resource (processor, memory, disk, or network) because it's tied up with another task. Bottleneck equals wait.
One bottleneck may mask another. A heavily loaded system may have several bottlenecks, but until the first bottleneck is corrected, often only one shows. A common example is a database system without enough CPU resources. The processor is pegged (old analog gauge slang, for you new technocrats) at 100%, but disk I/O is at manageable levels. Upgrade the processor or add a second processor, and that bottleneck is removed, allowing the database engine to make I/O requests to its database unhindered by a slow processor. Suddenly the disk I/O goes through the roof! This is something you need to warn your boss about before it happens so that you don't look like an idiot.
The Heisenberg Uncertainty Principle also applies to performance monitoring. To paraphrase an important tenet of quantum mechanics: "You can interfere with the performance of a system simply by monitoring it too closely." (Mr. Heisenberg was specifically referring to the momentum and location of subatomic particles.) Performance Monitor, when recording lots of data over short intervals, can impact the performance of a Windows NT system. The Perfmon utility uses CPU, memory, and disk I/O. If you monitor the server remotely, you reduce these three, but you increase network I/O as the performance data is sent over the network to the computer running Perfmon. This isn't normally enough to worry about, but it's good to be aware of. If, for example, you're remotely collecting log information and have selected the process, memory, logical disk, and network interface objects, a moderate but continuous load has been put on your network interface.
Use the Task Manager. In Windows NT 4 and Windows 2000, the Task Manager (shown in Figure 5.2) has been greatly expanded from its original role as a simple way to shut down unruly applications. Launched from the three-finger salute (Ctrl+Alt+Delete) or by simply clicking on an empty spot of the taskbar with the secondary mouse button, it now has Processes and Performance property sheets that can provide a great deal of detail on the current system status. The Processes property sheet allows you to quickly view processes that were previously more time-consuming to reach; by clicking on the column headers, you can sort for the highest values in each field. The menu item View, Select Columns allows you to add up to 13 objects to monitor and sort. A limitation of this expanded tool is that it can be run only locally. To view system processes remotely, you must use Performance Monitor.

Figure 5.2: The Task Manager.
Because Windows NT is such a self-tuning operating system, performance and tuning often distill into two steps. Step 1 is finding the Windows NT subsystem(s) with the bottleneck, and step 2 is throwing hardware at it! To use an old Detroit saying, "There ain't no substitute for cubic inches." Most of us don't have unlimited hardware budgets, however, so a detailed performance analysis will tell you exactly where the problem lies, will offer the best course of action to fix it, and will provide documentation to support your conclusion when the bean counters get upset.
What about performance and tuning for Windows 2000? There's no need to get worked up over the new release in this area because performance basics are the same for any server. The user interface to find the right knobs has definitely been pushed around, however. Where there's a difference, I'll show how to get there. For example, Performance Monitor from Windows NT 4 has been moved to the MMC as a snap-in (Perfmon.Msc). It functions pretty much the same as its predecessor (see Figure 5.3).
A Windows NT system can be analyzed in four sections: processor, memory, disk I/O, and network I/O. Use this organization to logically investigate any performance problem you encounter on a Windows NT server.
Tuning the Processor
People who don't know much about Windows NT performance always seem to focus on the CPU as the source—and the solution—to server performance problems. Although that isn't true, it's pretty easy to spot CPU bottlenecks.
The following are recommendations of the Performance Monitor processor-related counters to watch:
System: % Total Processor Time consistently near 100%. A snapshot can also be seen from the Windows NT 4 Task Manager Performance property sheet, CPU Usage History section.
Here's an easy way to see what processes are hogging the CPU: In Perfmon, select the Process object. Select the % Processor Time instance associated with the Process object. To the right of these, click the second instance (below "_Total"), and drag the mouse down to include every instance. Now either scroll up and Ctrl+click to remove the Idle instance, or delete it later. Click the Add button. You're now tracking every process on the system by percentage of processor utilization. To make the chart easier to read, click Options, Chart, or the rightmost button on the display. Change the Gallery section from Graph to Histogram, and click OK. Hit the backspace key to enable process highlighting. Perfmon now displays a histogram of all the active processes on the system. You can scroll through them with the up and down arrows, and the instance that's in focus will be highlighted in white. If you haven't deleted the Idle instance, you can do it now by selecting it from the list and pressing the Delete key.
The following are recommendations about how to tune your processor:
Take the doctor's advice: "If it hurts when you do that, don't do that." At least not during prime time. Schedule CPU-hungry jobs for off-hours, when possible. For example, programs that read the SAM of a large Windows NT 4 domain to process user accounts for expired passwords can peg the primary domain controller for quite a while.
Upgrade the processor. If you're considering whether to switch from an Intel architecture to an Alpha, look at the System: Context Switches/sec counter. Don't switch if this counter is the primary source of processor activity; relatively speaking, an Alpha takes as long as an Intel to do context switches. (A context switch occurs when the operating system switches from user mode to kernel mode, or vice versa.) And, of course, you shouldn't make big decisions like this based solely on the System: Context Switches/sec counter!
Add processors if the application in question is multithreaded and can take advantage of multiple processors.
Use fast network cards. A 16-bit network interface card (NIC) uses more CPU than a 32-bit card.
Use bus-mastering SCSI controllers. A bus-mastering controller takes over the process of an I/O request, thus freeing the CPU.
Use the START command. This command has /low, /normal, /high, and /realtime switches to start programs with varying levels of priority. This is the only way to externally influence the priority of individual programs.
You can also tune the foreground application response time with the Performance property sheet, found in the System applet of the Control Panel. In Windows NT 4, it's a three-position slider. Figure 5.4 shows how it looks in Windows 2000.

Figure 5.4: Foreground application response in Windows 2000.
This alters the following:
SYSTEM\CurrentControlSet\Control\PriorityControl\Win32PrioritySeparation
This value ranges from 2 (highest foreground priority) to 0 (all programs have equal processor priority).
Tip It's interesting to note that in Windows NT Server 4, this value is tuned to give foreground applications priority over the background applications that are, after all, the main business of a server. The reasoning may be that if you do run a program from the console, it's not done casually, so you want good response time. You should consider setting this value to None. In Windows NT Server 5.0, it's correctly set to Background Services.
Understanding Memory Performance
Memory, not processor utilization, is the first thing administrators should look at when a Windows NT system is experiencing performance problems.
Paging is a necessary evil—bad, but unavoidable. Okay, to be fair, it's an integral part of memory management, so "bad" may be an overstatement, but avoid it as much as possible. If you're reading this book, you've probably heard the term "paging" for a while, have seen it occurring with Perfmon, and can even convince your boss that you know what it means—but you probably would hate to be cornered into defining it or explaining the concept to a new hire. Here's a (hopefully) simple explanation.
Windows NT uses a demand-paged virtual memory model. That's four adjectives attached to one noun, so it deserves explanation. Windows NT is a 32-bit operating system, so programs that run on it can address 2^32 bytes, or 4GB, in a flat address space. ("Flat" means that there are no segments or other compromises to worry about, as in previous versions of Windows.) The upper 2GB is reserved for system code, so the lower 2GB is available for user programs. The basic problem is obvious: You can run a program that may try to load data up near a 4GB memory address (location), but you probably don't have 4GB of physical RAM shoehorned inside your servers. This is where the term "virtual" comes in. Windows NT juggles its limited amount of memory resources by pulling data into main memory when it's asked for, writing it out to disk when it has been written to in memory, and reclaiming memory by writing the least recently used data to a page file. This process is called paging. For efficiency's sake, this data is moved around in chunks called page frames that are 4KB in size for Intel systems and 8KB for Alpha systems. Virtual memory is how the operating system lies to everyone and everything that asks for memory by saying, "Sure, no problem, I have room in memory for you!" and then scurrying around under the covers to page data in and out of main memory to provide it. (A good definition I heard for virtual memory is that the word that follows it is a lie.) Good virtual memory managers are masterful at maximizing the use of system memory and automatically adjusting their actions to outside conditions.
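The address space arithmetic above can be checked in a few lines. This is just the numbers from the text restated as a sketch, nothing NT-specific:

```python
GB = 2 ** 30

address_space = 2 ** 32      # a 32-bit flat address space
system_reserved = 2 * GB     # upper 2GB reserved for system code
user_space = address_space - system_reserved

print(address_space // GB)   # 4 -- total addressable GB
print(user_space // GB)      # 2 -- GB left for user programs

# Page frame sizes from the text:
intel_page = 4 * 1024        # 4KB pages on Intel
alpha_page = 8 * 1024        # 8KB pages on Alpha
```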
So, the page file (PAGEFILE.SYS) is the space on your hard disk that Windows NT uses as if it were actually memory. Why is it a problem if the system pages out to the page file? (Paging to get data from disk is unavoidable, so it doesn't matter in this discussion.) Isn't that how it's designed? Well, yes, it is, but it's slow. How slow is it, you may ask? Average computer memory today has an access time of 50 nanoseconds, or 5 x 10^-8 seconds. Very fast disk access time today is about 6 milliseconds, or 6 x 10^-3 seconds. This means that memory is 120,000 times as fast as disk!
A good analogy is to increase the time scale to something we're more comfortable with. A Windows NT program executing in the CPU asks for data. If that data is already in main memory, let's say it takes 1 second to return it. If the data it needs is out on disk, it will have to wait almost a day and a half to get the data it needs to continue. Now, the virtual memory manager mitigates this wait by passing control to other programs that don't have to wait, but it's obviously a tremendous performance impact. When paging rates go too high, the system gets caught in a vicious cycle of declining resources and begins "thrashing."
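The ratio and the "day and a half" analogy both fall out of the access times just quoted. A quick sketch of the arithmetic:

```python
memory_access = 50e-9   # 50 nanoseconds, typical RAM of the day
disk_access = 6e-3      # 6 milliseconds, a fast disk

ratio = disk_access / memory_access
print(round(ratio))     # 120000 -- memory is 120,000 times as fast

# Rescale to human time: if a memory hit took 1 second,
# a disk hit would take 120,000 seconds...
wait_days = ratio * 1.0 / 86_400
print(wait_days)        # about 1.39 days -- "almost a day and a half"
```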
The two best ways to avoid paging are to add physical memory and to tune your applications (especially database applications) carefully to balance their needs with the operating system's needs. Unfortunately, the most obvious Performance Monitor counter, Memory Pages/Sec, can be misleading, as explained here.
The following are recommendations of what Performance Monitor memory-related counters to watch:
Memory Available Bytes consistently less than 4MB (Intel) or less than 8MB (Alpha). A snapshot of this can also be seen from the Windows NT 4 Task Manager Performance property sheet, Physical Memory section, Available counter. As an indicator of the amount of free memory pages in the system, when this value drops below 1,000 pages (4MB in an Intel system using 4K pages), the system paging rate increases in an attempt to free up memory. This was also seen in Windows NT 3.51 from the WINMSD utility, Memory section, as memory load. In that utility, a memory load of 0 corresponds to 1,100 or more available pages, and a memory load of 100 corresponds to 100 or fewer available pages. Values in between scale linearly; for example, a memory load of 25 indicates that about 3MB are available, and a memory load of 75 means that only about 1MB is available. Several shareware or freeware memory load monitors can be found to monitor this important indicator.
Memory Available bytes decreasing over time. This indicates a memory leak condition, where a process requests memory but never releases it—there's a bug in an application. To determine the culprit, monitor the Private Bytes counter of the Process object, and watch for an increasing value that never goes down. (Actually, the term "memory leak" is a misnomer; memory isn't leaking out of the system—it's being kept by a process.)
Paging File: % Usage, % Usage Peak is near 100%. Don't let the page file grow, as it will have a significant impact on system performance. All disk I/O ceases during page file growth. Not only that, but page file growth very likely causes fragmentation of the page file. This means that during normal paging operations, the operating system will have to jump the physical read/write heads all over the disk instead of one contiguous area. The simplest way to avoid this is to make the page file larger than its default size of physical memory plus 12MB, especially on memory-constrained systems. The next simplest way, after the page file has already become fragmented, is to move it to another partition, reboot, defrag the original partition, and move the page file back. This will create a contiguous page file.
Memory Committed Bytes is greater than RAM. Memory Committed Bytes is the amount of virtual memory actually being used. If the system is using more virtual memory than exists in physical memory, it may be paging heavily. Watch paging objects such as Memory Pages/Sec and Memory Page Faults/Sec for heavy usage. The Task Manager equivalent of Memory Committed Bytes can be found in its Performance property sheet, Commit Charge section, Total counter.
If Memory Committed Bytes approaches Memory Commit Limit, and if the page file size has already reached the maximum size as defined in Control Panel, System, there are simply no more pages available in memory or in the page file. If the system reaches this point, it's already paging like a banshee in an attempt to service its memory demands. The Task Manager equivalent of Memory Commit Limit can be found in its Performance property sheet, Commit Charge section, Limit counter. The ratio of Committed Bytes to the Commit Limit is what the %Committed Bytes In Use counter reports; a number less than 80% is good.
Memory Pages/sec can be a misleading counter. For performance reasons, in NT 4 Memory Pages/Sec was moved from the memory subsystem to the file subsystem. Instead of detecting actual page faults in memory, it simply increments every time a non-cached read (i.e., from disk) occurs. This makes the counter somewhat unreliable in a file server where many open file activities take place and very unreliable where a database server (that manages its own memory) may be doing much database I/O.
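The WINMSD memory-load scale described in the first recommendation above is just a straight-line mapping between 1,100 and 100 available pages. Here's a sketch of that interpolation (my own helper functions, not the utility's actual formula):

```python
def available_pages(memory_load):
    """Map a WINMSD memory load (0-100) to available pages, assuming a
    linear scale from 1,100 pages (load 0) down to 100 pages (load 100)."""
    return 1100 - 10 * memory_load

def available_mb(memory_load, page_kb=4):
    """Convert that page count to megabytes; 4KB pages on Intel."""
    return available_pages(memory_load) * page_kb / 1024

print(round(available_mb(25)))  # 3 -- load 25 means about 3MB available
print(round(available_mb(75)))  # 1 -- load 75 means about 1MB available
```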
The following are recommendations on how to optimize your memory performance:
Add physical memory. Generally, the best thing you can do to boost Windows NT performance is to add memory. Lack of memory is by far the most common cause of performance problems on Windows NT systems. If your boss corners you in your office and asks why server XYZ is so slow—and you didn't even know XYZ existed—answer, "It's low on memory," and you'll probably be right. You can approximate (or guess) how much memory you need by looking at the page file(s) and using the following line of reasoning: If you had a system with no memory constraints, it would almost never page and the page file utilization would approach zero. You don't, so the operating system needs some number of megabytes in the page file to back its memory requests. The worst-case amount can be found in the Paging File % Usage Max counter of the Paging File object. So, if the system in question has a page file of 100MB and the Paging File % Usage Max counter is 75%, at its most heavily loaded point the system required 75MB more than it had available in physical memory. Therefore, adding 75MB of physical memory would be a good guess. Of course, the Paging File % Usage Max counter measures an instantaneous maximum, so if an operator quickly launched and then canceled three big utilities from the server console during your monitoring period, the value will be too high. On the other hand, if you already have a processor or I/O bottleneck, the value may be too low. As I said, it's just a guess.
If one application is the troublemaker, run it during off-peak hours. Remember that it will have to share time with long-running system utilities such as backups, anti-virus scanners, and defragmenters.
If the page file utilization hits 100% and its size is less than the maximum set in Control Panel, System, Performance, Virtual Memory, Paging File, the page file will extend itself. You don't want this to happen, for several reasons. First, all system I/O will halt while the page file extension occurs. Second, the odds are good that no contiguous space will be available after the page file, so it will become fragmented. This means that whenever the system becomes heavily loaded enough to use the extra space, the disk heads must jump around the disk simply to page, adding extra baggage to a system already in trouble. Set the initial page file size sufficiently large when the system is built or recently defragmented so that it won't need to extend. Disk space is cheaper than a fragmented page file.
If the system in question is a BDC of a large Windows NT 4 domain, consider converting it to a member server. All Windows NT 4 domain controllers have a SAM database that is stored in paged pool memory. This means that when a domain controller authenticates a user's logon, it must page the entire contents of its SAM into main memory to get the account's credentials. If it doesn't perform any more authentications for a while, the dirty pages will get reused for other programs; however, authentications on a domain controller usually happen frequently enough that this doesn't happen. So, a domain controller has a chunk of main memory semi-permanently allocated for user authentications. How much memory is used depends on the size of the SAM.
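The memory-sizing reasoning from the first recommendation in this list reduces to one multiplication. A sketch, using the 100MB/75% example from the text:

```python
def extra_memory_guess(pagefile_mb, usage_max_percent):
    """Worst-case page file use approximates the physical memory shortfall.
    usage_max_percent is the Paging File % Usage Max counter value."""
    return pagefile_mb * usage_max_percent / 100

# 100MB page file, 75% peak usage: the system was 75MB short of RAM
# at its worst moment, so adding about 75MB is a reasonable first guess.
print(extra_memory_guess(100, 75))  # 75.0
```

Remember the caveats from the text: the counter records an instantaneous maximum, so a burst of console activity inflates the guess, and an existing processor or I/O bottleneck deflates it.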
Tuning Disk I/O
Because of the mechanical nature of hard disk drives, the mass storage subsystem is always the slowest of the four subsystems in a computer. As we've seen so far in this section, it's 120,000 times slower than memory. As a result, all sorts of elaborate data caching and buffering schemes have been devised to minimize the disk's performance penalty. This subsystem can be the most important area you tune. If you have a 500MHz processor but your hard drives came from a salvage sale, you've effectively put a ball and chain around its leg whenever the system has to page!
When working with hard disk drives, a good analogy to use is that of an LP-playing jukebox. Inside the drive's case are one or more constantly spinning aluminum-alloy platters, arranged one on top of another in a stack. When you are at work on your computer, you enter commands through your keyboard or mouse. The hard drive's actuator arm—much like a jukebox's tone arm—responds to these commands and moves to the proper place on the platter. When it arrives, the drive's read/write head—like the needle on the tone arm—locates the information you've requested and reads or writes data.
The following are recommendations of which Performance Monitor disk I/O-related counters to watch:
Physical Disk % Disk Time counter consistently at or near 67%. This is the percentage of time that this particular disk drive is busy servicing read or write requests.
Physical Disk Queue Length > 2. Any time the queue length on an I/O device is greater than 1 or 2, it indicates significant congestion for the device.
The following are recommendations of how to tune your disk I/O:
Minimize head movement. The slowest actions of a hard drive are the time expended waiting for the disk's actuator arm to move its read/write heads to the correct track (the seek time)—and once it's there, you must wait for the correct sector to come under the heads (rotational latency) so that data can be read or written. There isn't much you can do about rotational latency, but you can minimize head movement. The most effective way to minimize head movement is to defragment your disk and to make sure that your page file(s) are contiguous. You can tell if the page file is fragmented by looking at the text mode results of a disk analysis from DisKeeper. The operating system won't allow disk defragmenters to defragment the page file, so you must do it yourself. The technique is simple: Create a page file on another partition and remove the original, reboot, and recreate the original configuration.
The second way to minimize head movement is to think about what kind of data is on the disk. Place large, heavily accessed files on different physical disks to prevent the heads from jumping back and forth between two tracks. For example, let's say that you create an SQL server with the operating system on disk 0, the database device on disk 1, and the transaction log on disk 2. A little later, you discover that the database device is both heavily accessed and isn't large enough, so you extend it with a second database device on disk 2. You now have a case of head contention on disk 2. The read/write heads focus on the heavily accessed database device (at the inside of the physical disk platters, because it was created last), with constant interruptions from the transaction log (at the outside of the physical disk platter because it was created first). The transaction log is written to in small bursts whenever a transaction is made to the database device.
The heads continuously bounce back and forth across the full extent of the disk. For the same reasons, you shouldn't install Office components on the same physical disk if you're not using RAID.
This won't apply in systems where the disk subsystem has been striped in a RAID 0 or RAID 5 configuration. Data is evenly striped across the physical RAID set regardless of where it appears to be on a logical partition.
Use NTFS compression sparingly. Disk compression is a great way to squeeze more data onto a disk. It's also a great way to increase the average percentage of processor utilization and to fragment the disk. I recommend that compression be used for low-access document shares and to temporarily buy back disk space when a server's data drive is almost full. In Windows NT 4, compressed files on disk must be decompressed by the server before the data is sent to the client. Windows 2000 Professional will support compression over the wire, which keeps the data compressed until it reaches the client where it is then decompressed. This offers two big benefits: It offloads the CPU cycles required for decompression from the server to the client, and it decreases network bandwidth. Compression load on a processor will be less of an issue if you're buying a new server with the latest high-speed processors.
Use fast disks, controllers, and technology. Almost all modern disks and controllers supplied with servers are SCSI. Fibre Channel technology (133MB/sec) is faster than Wide Ultra-2 SCSI (80MB/sec), which is faster than Wide UltraSCSI (40MB/sec), which is faster than Fast Wide SCSI (20MB/sec), which is faster than Fast SCSI (10MB/sec), which is faster than SCSI-2 (5MB/sec), which, finally, is faster than IDE (2.5MB/sec).
Use mirroring to speed up read requests. The I/O subsystem can issue concurrent reads to a pair of mirrored disks, assuming your disk controller can handle asynchronous I/O.
If you are using a RAID 5 array, increasing the number of drives in the array will increase its performance.
Tuning Network I/O
Network I/O is the subsystem through which the server moves data to its users. This is the server's window to the world. You may have spent money on the fastest server in the world, but if you use a cheap NIC, it will look just as slow as your oldest servers. Here are some recommendations to help your network I/O:
The more bits, the better. The number of bits in a NIC's description refers to the size of the data path, so more is better. 32-bit NICs are faster than 16-bit NICs, which are faster than 8-bit NICs. A caution to this is that you should match the NIC to the bus. If you have a PCI (32-bit) bus, you should use a 32-bit card. An EISA bus will support 8-, 16-, and 32-bit NICs, but if you follow the previously stated rule, a 32-bit NIC will be the best performer.
Install the Network Monitor Agent service—but leave it in Manual mode. If you have Network Monitor Agent installed, a very useful Network Interface object will be added to Performance Monitor. This provides 17 different counters on the virtual interface to the network. I say "virtual" because, in addition to any physical NICs you have installed, it also includes an instance for every virtual RAS adapter you have defined on your system. For the NIC that you're probably interested in, however, it monitors the physical network interface. Leave the service turned off until you need it to reduce system overhead.
The server should always have a faster network interface than its clients. A server is a focal point of network traffic and should therefore have the bandwidth to service many clients at one time. This means that if your clients all have 10BaseT, the server should have 100BaseT. If the clients are running at 100, the server should have an FDDI interface.
The following are recommendations for what Performance Monitor network I/O-related performance counters to watch:
Network Interface Bytes Total/sec is useful to figure out how much throughput the card is getting compared to a theoretical maximum. For instance, a bytes total/sec of 655,360 for a 10BaseT NIC on standard Ethernet is shown here:
(655,360 bytes/sec) × (8 bits/byte) ÷ (1024 bits/Kbit) ÷ (1024 Kbits/Mbit) = 5 Mbits/sec
Because the theoretical bandwidth for standard Ethernet is 10Mbits/sec, this card is running at 50% of its theoretical maximum. In reality, it's much closer to its maximum because the Ethernet collision rate begins to rise dramatically when network utilization rises above 66%.
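The conversion above is simple enough to script as a quick sanity check. The sketch below assumes the same definitions used in the text: standard Ethernet's theoretical bandwidth is 10 Mbits/sec, and the reading comes from the Network Interface Bytes Total/sec counter.

```javascript
// Convert a Bytes Total/sec counter reading into megabits per second,
// then into a percentage of the link's theoretical bandwidth.
function toMbps(bytesPerSec) {
  return (bytesPerSec * 8) / (1024 * 1024); // bytes -> bits -> Kbits -> Mbits
}

function utilizationPercent(bytesPerSec, linkMbps) {
  return (toMbps(bytesPerSec) / linkMbps) * 100;
}

// The example from the text: 655,360 bytes/sec on a 10BaseT card.
console.log(toMbps(655360));                 // 5 Mbits/sec
console.log(utilizationPercent(655360, 10)); // 50 (% of theoretical maximum)
```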
Broadcasts/sec or Multicasts/sec is greater than 100/sec. A certain number of network broadcasts or multicasts are normal; for example, DHCP requests from clients are broadcasts. However, excessive broadcasts or multicasts are bad because every card on the network segment must examine the broadcast/multicast packet to see whether it's destined for its client. This means that the NIC must generate an interrupt on its clients' CPU and allow the packet to be passed up to the transport for examination. This can cause serious processor utilization problems.
Network Segment % Network Utilization should be considered when things start slowing down to the point at which they are no longer acceptable. Some say that this point is around 40%–50%. Then the network is the bottleneck.
The following are recommendations for how to tune your network I/O:
Analyze network I/O based on the OSI model. (For more information on the 7-layer OSI model, see.) This allows you to look at network I/O performance problems from the bottom up.
Consider the following at Layer 1 (the Physical Layer): Is the network overloaded? Is the NIC handling too much data? Are there excessive network broadcasts that the NIC must receive and analyze?
Consider the following at Layer 4 (the Transport Layer): Is your primary protocol first in the network binding order? If it isn't, you've unnecessarily increased the average connection time to other network nodes. Figure 5.5 shows the most common situation on a Windows NT 4 system. When you request a connection to shared resources on a remote station, the local workstation redirector submits a TDI connect request to all transports simultaneously; when any one of the transport drivers completes the request successfully, the redirector waits until all higher-priority transports return. For example: The primary protocol for your network is TCP/IP, and that's the only protocol most of your workstations are running. You have TCP/IP and also NetBEUI installed on your server because you must still service the occasional NetBEUI workstation. NetBEUI is first in the network binding order. When the server attempts a session setup with another network resource, the server must wait for NetBEUI to time out before completing the TCP/IP session setup.Figure 5.5: Poor binding order for a TCP/IP network
Consider the following at Layer 5 (the Session Layer): The Server service's responsibility is to establish sessions with remote stations and receive server message block (SMB) request messages from those stations. SMB requests are typically used to request the Server service to perform I/O—such as open, read, or write on a device or file located on the Windows NT Server station.
You can configure the Server service's resource allocation and associated nonpaged memory pool usage. In Windows 2000, it's buried in the Network Control Panel applet, Local Area Connection properties, then File And Print Sharing For Microsoft Networks Properties (see Figure 5.6).
You may want to consider a specific setting, depending on factors such as how many users will be accessing the system and the amount of memory in the system. The amount of memory allocated to the Server service differs dramatically based on your choice:
The Minimize Memory Used level is meant to accommodate up to 10 remote users simultaneously using Windows NT Server.
The Balance option supports up to 64 remote users.
Maximize Throughput for File Sharing allocates the maximum memory for file-sharing applications. You should use this setting for Windows NT servers on a large network. With this option set, file cache access takes priority over user application access to memory. This is the default setting.
Maximize Throughput for Network Applications optimizes server memory for distributed applications that do their own memory caching, such as Microsoft SQL Server. With this option set, user application access to memory takes priority over file cache access.
Tuning Database Servers
Database servers deserve special mention here because they are so often accused of poor performance. Using the following rules will help you understand the performance characteristics of a database server.
Most of the time, poor performance isn't the server's fault—it's the application's design at fault. It's much easier to write inefficient relational database queries than to mess up the tuning of a Windows NT system. Unfortunately, you will almost always have to prove beyond the shadow of a doubt that the system is performing adequately before the application developers will go back and begin optimizing their code. It's all too common to be forced into throwing hardware at a poorly designed application.
Allocate enough memory for Windows NT, and then give the rest to the database. It may seem obvious, but after the operating system, the most important entity in a database server is the database engine. Most databases designed for Windows NT have a parameter to reserve physical memory for their own use—and most databases voraciously gobble up every byte you can give them. Exactly how many bytes to give them is an inexact process, but the general process for Windows NT 4 is listed here:
Give 24MB to Windows NT and the rest to the database.
Watch Windows NT's paging rate. If under normal conditions Windows NT pages excessively (consistently more than 30–40 pages/sec), give it more memory by reserving less for the database. Keep doing this until the average paging rate is manageable. Because the Performance Monitor Memory Pages/Sec object can't separate database paging from operating system paging, to get Windows NT's paging rate you must subtract from that number the sum of SQL Server Page Reads/Sec and SQL Server Single Page Writes/Sec.
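The bookkeeping above is simple subtraction; the sketch below mirrors the Performance Monitor counter names from the text, and the sample numbers are made up for illustration.

```javascript
// Operating-system paging rate = total Memory Pages/sec minus the pages
// SQL Server itself accounted for (its page reads plus single page writes).
function osPagingRate(totalPagesPerSec, sqlPageReadsPerSec, sqlSinglePageWritesPerSec) {
  return totalPagesPerSec - (sqlPageReadsPerSec + sqlSinglePageWritesPerSec);
}

// Example: 55 total pages/sec, of which SQL Server accounts for 30.
console.log(osPagingRate(55, 20, 10)); // 25 -> under the 30-40 pages/sec worry threshold
```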
Watch the database's paging rate. Again, you want to minimize database paging because it incurs a high performance penalty. If the database paging rate is too high (consistently more than 10–20 pages/sec), you must add physical memory to the system.
Database servers are the biggest beneficiaries of multiple processors. By adding a second processor to a uniprocessor database server that's a bit processor-bound, you may almost double your throughput. Adding additional processors will continue to boost performance, but the single biggest gain will come from adding a second processor. Don't forget: In addition to running the system setup utility, in Windows NT 4 you must update the Windows NT OS to a multiprocessor kernel and HAL before the additional processor will be recognized. The UPTOMP.EXE utility in the Windows NT Resource Kit automates this process.
Watch the database server subsystem loads. The load on database server subsystems is listed here, in order:
Processor
Memory
I/O subsystem
Network I/O
Processor, memory, and disk I/O are heavily used by a database, while network I/O is relatively low. This is because a well-designed client/server database passes only the database query and the query results over the wire. The operations to form the query are done on the client, and the execution of the query is done on the server.
Even though you use a logical approach to performance analysis, there are so many variables out of your control that, in the end, there's still some black art to it. You must look at your systems regularly, understand the applications they are running, and be able to read the tea leaves to come up with a feel for a system's performance problems.
Tuning Control Panel Settings
The Control Panel is the place to go for 90% of a Windows NT 4 server's general tuning. The other 10% are sketchily documented or undocumented keys and values in the Registry. In Windows 2000, you can forget almost everything you learned about where controls are located in the user interface; most have changed out of recognition. Fortunately, beta feedback has pointed this out to Microsoft, so the help system has a specific section on how to find the new ways to do old tasks. What follows are tips on Control Panel settings to make managing a server a bit easier:
The Console—In the Layout tab, change the screen buffer size height to 999. This will give you a scrollable command prompt window that will display the last 999 lines of data or commands. In Windows 2000, the easiest way to reach this is to launch a command prompt from the start menu, click the icon in the upper-left corner, choose Properties, and then select the Layout tab.
Tip In a command prompt window, you can view the buffer of your previously entered commands by pressing F7.
Network—Review your bindings to be sure that you have removed or disabled all unnecessary protocols.
Server—Update the description field with pertinent information about the server. This might include the server model, the owning organization, the purpose, and the location. In Windows 2000, the Description field is buried in Control Panel, Computer Management. Right-click the uppermost icon labeled Computer Management, choose Properties, and then choose Networking ID.
Services—Review the services. Do they all need to be started? For example, the Messenger service can be disabled on most servers because they rarely need to receive a message sent via NET SEND to the console. In Windows 2000, there are a ton of new services; services administration has also moved to the Computer Management tool.
System—The System applet controls basic functionality (such as startup and shutdown options), the paging file, and general performance options. In the System applet, you'll find the following tabs:
Figure 5.7: Foreground application boost
Startup/Shutdown tab—Set the Show List timer to 5 seconds. On a dedicated Windows NT server, there's no choice to be made other than the base video mode. Ensure that all check boxes in the Recovery section are checked.
Performance tab—In Windows NT 4, consider sliding the Foreground Application Boost slider to None (see Figure 5.7). The setting for this control can be argued two ways. The first theory is simpler: A server's primary purpose is to serve its network customers, so foreground applications should always take the back seat to the customer's needs. The second one proposes that if an operator does need to do something on a server console, it's for a very good reason and is worth taking cycles away from paying customers to get good response time. Foreground boost set to None on a heavily loaded, bottlenecked server could result in very slow response time for a console operator. In Windows 2000, this boost control changes to a radio button, shown previously in Figure 5.3.
Display—Don't use a screen saver. If you do, set it to a simple one such as Marquee or a blank screen. I know it doesn't look nearly as cool as a row of monitors running 3D textured flags, but elaborate screen savers chew up CPU for no good reason. If you must have some kind of a high-tech screen saver to impress your boss when he visits the computer room, choose Beziers and back the speed down a bit.
Summary
This chapter covers many of the basic practical matters in assembling a server and then keeping it in good working order. It's a really big subject, so I've skimmed over some intimate details. Instead, I've included lots of important points in these areas to help keep you on a straight course as you wade through all those intimate details. Server performance, backup media and jobs, software maintenance—there are hundreds of pitfalls you can encounter. This chapter has laid out principles you can use to avoid them.
About the Author
Sean Deuby is a Senior Systems Engineer with Intel Corporation, where he focuses on large-scale Windows 2000 and Windows NT Server issues. Before joining Intel, he was a technical lead in the Information Systems & Services NT Server Engineering Group of Texas Instruments (TI). In that role he was a principal architect of TI's 17-country, 40,000 account enterprise NT network. Sean has been a charter member of the Technical Review Board of Windows NT Magazine, has published several articles in the magazine, and is a contributing author to the Windows NT Magazine Administrator's Survival Guide. He speaks on NT Server and Windows 2000 topics at computer conferences around the world. His domain design white paper for TI, "MS Windows NT Server Domain Strategy" has been published monthly on the Microsoft TechNet CD since 1996. Sean has been a Microsoft Certified Systems Engineer since 1996 and a Certified Professional in Microsoft Windows NT Server and Windows NT Workstation since 1993. | http://technet.microsoft.com/en-us/library/bb727090(d=printer).aspx | CC-MAIN-2013-48 | refinedweb | 18,928 | 60.75 |
It would be a lot better if you put the cap on the same side
as the part. The inductance will be lower that way. The
idea of a bypass cap is to create a low impedance source
at the bypass point. Putting the extra via inductance in
doesn't help that. Also, at higher frequencies if you don't
connect the cap before you make a connection to power and
return ("GND"), it's gonna get on 'em.
Depending on your frequencies, the currents involved, the
susceptibility of other circuits, you might get away with it,
but it isn't optimum. Unless there is a compelling reason
not to, I'd put the cap on the same side as the part, as close
as you can get it.
The next question is, "How do you pick a decoupling cap?"
-- --------------------------------------------------------------- Mark Randol, RF Measurements Engineer | Motorola SPS, Inc. (602)413-8052 Voice | M/S EL379 (602)413-4150 FAX | 2100 E. Elliot Road ryvw50@email.sps.mot.com | Tempe, AZ 85284 --------------------------------------------------------------- | http://www.qsl.net/wb6tpu/si-list2/pre99/1350.html | CC-MAIN-2015-35 | refinedweb | 170 | 75.2 |
I have pictures of objects in my static files that I would like to display. However for some objects I don't have a picture and for those I would like to display an image saying "no pic available". I therefore have a field called pictures which is set to 1 for the object with a picture available and 0 for those with no picture available.
I have made a template tag that should be able to insert the correct picture but I'm facing a problem.
template tag file:
def static_picture(id_internal, picture):
if picture == 1:
return '"' + "{% static" + ' "' + 'img/pictures/' + id_internal + '.jpg' + '"' + ' %}' + '"'
else:
return '"' + "{% static" + ' "' + 'img/pictures/picture_missing.jpg' + '"' + " %}" + '"'
<img src="{{ object.id_internal|static_picture:object.picture }}" class="img-responsive">
<img src="{% static "img/pictures/339-10026.jpg" %}" class="img-responsive">
It does not work like that: after the {{ }} expression is evaluated, there is no further template evaluation, so you just see the literal string. You need to edit your code.
The template engine (which translates the {{ }} expressions into HTML) evaluates your template function and inserts its return value in place. After that, Django sends this to the browser:
<img src="{% static "img/pictures/339-10026.jpg" %}" class="img-responsive">
but the browser does not understand this. So your function has to return the full path.
{% static ... %} just prefixes your path with the path to the static folder. This path is available in your settings. Settings are accessible through the settings module in django.conf.
from django.conf import settings

def static_picture(id_internal, picture):
    if picture == 1:
        return '"{}/img/pictures/{}.jpg"'.format(settings.STATIC_URL, id_internal)
    else:
        return '"{}/img/pictures/picture_missing.jpg"'.format(settings.STATIC_URL)
This returns the full path to the file.
<img src="{{ object.id_internal|static_picture:object.picture }}" class="img-responsive">
<img src="path/to/static_folder/img/picturesXXX.jpg" class="img-responsive">
| https://codedump.io/share/HcQghuxihk9J/1/using-django-custom-template-tag-to-display-photos | CC-MAIN-2017-09 | refinedweb | 288 | 51.65 |
Node is one of the premier frameworks for microservice architecture today. The microservice pattern allows developers to compartmentalize individual components of a larger application infrastructure. Because each component runs independently, you can upgrade or modify components without impacting the larger application. Each component exposes an interface to external consumers who are blind to any internal logic the service does.
One of the challenges of working in a microservice environment is the process of one service finding another to call it. Each service runs as an application and has an address that the calling service must find before it can call any functions on its interface.
Seneca is a microservice toolkit that helps alleviate this headache by providing the foundations of a microservice application. Seneca markets itself as a toolkit for Node that leaves you free to worry about the business logic in your application.
In this tutorial, you will use Okta to provide authentication for your users.

Scaffold the Node Microservices Project
First, create a new directory for your application.
mkdir restaurant-application cd restaurant-application
Next, create folders for your services and the code for the back end of the application. Seneca refers to the code behind the services as plugins.
mkdir lib mkdir services
You will need a folder for any public files such as views, javascript, or CSS that the app will deliver to the client.
mkdir public mkdir public/js mkdir public/views
Now that you set up your directory structure, you can add the files you will need to run the application.
In the lib folder, add the following four files: cart.js, order.js, payment.js, restaurant.js. Next, in the services folder, add the corresponding service for each of the four files you just created: cart-service.js, order-service.js, payment-service.js, restaurant-service.js. In addition to these four, you will also need to add web-app.js to the services folder to handle the external communication from the users. In your root directory, add the index.js file, which will handle the startup for your application.

For this application, you will use pugjs to render your views. In the public/views folder, add your _layout.pug view, from which your app will extend all other views. Next add views for cart.pug, confirmation.pug, home.pug, and login.pug. You will also need a javascript file for the client to use in the public/js folder. For simplicity, call this file utils.js.

Next, run the command npm init to go through the npm setup wizard. This wizard will prompt you for the application name, license, and other information. If you wish to use the default values, use npm init -y. When you open your code editor, your file system should look like this:
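Assembled from the files named above, the tree looks like this:

```text
restaurant-application/
├── index.js
├── lib/
│   ├── cart.js
│   ├── order.js
│   ├── payment.js
│   └── restaurant.js
├── public/
│   ├── js/
│   │   └── utils.js
│   └── views/
│       ├── _layout.pug
│       ├── cart.pug
│       ├── confirmation.pug
│       ├── home.pug
│       └── login.pug
└── services/
    ├── cart-service.js
    ├── order-service.js
    ├── payment-service.js
    ├── restaurant-service.js
    └── web-app.js
```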
With your application initialized for Node, you can now add the dependencies you will need for the project. Install these packages from the Node Package Manager. From the terminal, issue the commands below to install the packages.
First, add the Okta OIDC middleware and the Nodejs SDK. These packages make interfacing with Okta as easy as possible. You will use the Okta Node SDK as well as the OIDC middleware.
npm install @okta/oidc-middleware
npm install @okta/okta-sdk-nodejs
Next, add the express and express-session packages to the project. Express is a web framework for Node that provides a feature-rich experience for creating mobile or web applications. Express-Session will help handle your session parameters.

npm install express
npm install express-session
Install Pug to render your views. You may choose a different view engine, but Pug is very simple and easy to use.

npm install pug
Body parser handles extracting the data from forms or JSON.

npm install body-parser
To keep your secrets out of your code and repository, you’ll use dotenv, which reads values from a .env file as if they were environment variables.

npm install dotenv
And of course, you will need Seneca to manage the communication between your services. You will also use Seneca-Web to include middleware for Seneca on your web application level and the seneca-web-adapter-express to interface Seneca with express. Seneca provides adapters for other frameworks such as Hapi, but you will not need those for this project.
npm install seneca
npm install seneca-web
npm install seneca-web-adapter-express

Create Your Node Microservices
Setting up the services is the easiest part of the job. This is where most of the magic happens. First, add the following to cart-service.js:

require('seneca')()
  .use('../lib/cart')
  .listen(10202)
That’s it. You create an instance of Seneca, tell it which file(s) to use, and tell it what port to listen on. It may seem like magic, but that’s because it is.
The file lib/cart is called a plugin in Seneca parlance. I will go over what the cart plugin looks like later.
Next, create the other three services to perform the same action. Each service will use a designated plugin and listen on a different port.
First, set up the order service. It will listen on port 10204.
require('seneca')()
  .use('../lib/order')
  .listen(10204)
Next, add the code for the payment service, which listens on port 10203.
require('seneca')()
  .use('../lib/payment')
  .listen(10203)
Finally, you can add the code for the restaurant service to listen on port 10201.
require('seneca')()
  .use('../lib/restaurant')
  .listen(10201)
You will come back to the web-app service when it's time to write the web application. For now, take a look at the payment plugin to see how Seneca works. Add the following code to the lib/payment.js file:

module.exports = function (options) {
  var seneca = this
  var plugin = 'payment'

  seneca.add({ role: plugin, cmd: 'pay' }, pay)

  function pay(args, done) {
    // TODO integrate with your credit card vendor
    done(null, { success: true });
  }

  return { name: plugin };
}
Here you are exporting a very simple plugin that exposes one method, pay, to external consumers. In this tutorial, you have not integrated with any kind of payment service. In your application, you will need to integrate with your payment provider.
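To see the registration mechanics without starting a listener, you can exercise the plugin against a stub. Note the stub below is purely illustrative — it is not how Seneca works internally — and the plugin body is inlined only so the example is self-contained.

```javascript
// A minimal stand-in for the Seneca instance the plugin binds to as `this`.
// It simply records each pattern/handler pair the plugin registers.
function StubSeneca() {
  this.actions = [];
}
StubSeneca.prototype.add = function (pattern, handler) {
  this.actions.push({ pattern: pattern, handler: handler });
};

// The payment plugin from lib/payment.js, inlined for the demo.
var paymentPlugin = function (options) {
  var seneca = this;
  var plugin = 'payment';
  seneca.add({ role: plugin, cmd: 'pay' }, pay);
  function pay(args, done) {
    done(null, { success: true });
  }
  return { name: plugin };
};

var stub = new StubSeneca();
var meta = paymentPlugin.call(stub, {});
console.log(meta.name);               // 'payment'
console.log(stub.actions[0].pattern); // { role: 'payment', cmd: 'pay' }

// Invoke the registered handler the way Seneca would on a matched message:
stub.actions[0].handler({}, function (err, result) {
  console.log(result.success);        // true
});
```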
When building microservices with Seneca, there are several ways you can set them up. Let me take a moment to show you a couple of ways you could handle calls to the payment service.
The first thing you would need to do is add the pay() function to Seneca. To do so, you would use the add method with a JSON object called a "pattern". A pattern contains some parameters that Seneca will use to try to match the request with a given action. In your very simple payment plugin, you only have pay(). The following two sets of JSON would both result in hitting the pay function.
{ role: 'payment', cmd: 'pay', creditCardNumber: '0000-0000-0000-1234', creditCard: true }
and
{ role: 'payment', cmd: 'pay', paypalAccountId: 'xyz123', paypal: true }
In your pay() function, you would need to check if creditCard were true, and, if so, check if paypal were also true. If both were false, perhaps your application would reject it. If both were populated, the application wouldn't know what to do.
Seneca’s patterns handle this for you. You can extend your pay pattern to look for both. Take a look at the code below. While you don’t need this level of sophistication for the app you are creating currently, it is helpful to keep this idea in mind as your work through the project. Below you will see how you might handle splitting PayPal and credit card logic later.
seneca.add({ role: plugin, cmd: 'pay' }, reject)
seneca.add({ role: plugin, cmd: 'pay', paypal: true }, payByPaypal)
seneca.add({ role: plugin, cmd: 'pay', creditCard: true }, payByCreditCard)
In the above instance, if paypal and creditCard were both set to true, creditCard would win since they share the same number of properties that match and creditCard comes before paypal alphabetically. This is an important concept in Seneca: patterns are unique and Seneca has a rigid system for breaking ties.
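The specificity-plus-alphabetical tie-break can be made concrete with a toy matcher. This is an illustration of the rule only — Seneca's actual matching is done by its patrun library, which is more involved.

```javascript
// A candidate pattern matches if every one of its properties appears in the
// message with the same value; more matching properties = more specific.
function matchScore(pattern, msg) {
  var keys = Object.keys(pattern);
  for (var i = 0; i < keys.length; i++) {
    if (msg[keys[i]] !== pattern[keys[i]]) return -1; // not a match
  }
  return keys.length;
}

// Pick the best pattern: highest score wins; ties go to the pattern whose
// sorted property list sorts first alphabetically (creditCard < paypal).
function pick(patterns, msg) {
  var best = null, bestScore = -1, bestKey = '';
  patterns.forEach(function (p) {
    var score = matchScore(p.pattern, msg);
    var key = Object.keys(p.pattern).sort().join(',');
    if (score > bestScore || (score === bestScore && key < bestKey)) {
      best = p; bestScore = score; bestKey = key;
    }
  });
  return best;
}

var patterns = [
  { name: 'reject',          pattern: { role: 'payment', cmd: 'pay' } },
  { name: 'payByPaypal',     pattern: { role: 'payment', cmd: 'pay', paypal: true } },
  { name: 'payByCreditCard', pattern: { role: 'payment', cmd: 'pay', creditCard: true } }
];

// Both flags set: the two three-property patterns tie, and creditCard wins.
console.log(pick(patterns, { role: 'payment', cmd: 'pay', paypal: true, creditCard: true }).name);
// -> 'payByCreditCard'
```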
With this logic, under pay, you don't have to reject the payment but rather you could add branching and tie into the payByPaypal and payByCreditCard functions.
Why would you do that?
Imagine you want to extend your existing simple pay() method at a later date to explicitly require consumers to choose credit cards or PayPal. You send out a reminder to users of the service that starting on January 1st, you will no longer accept the request without a flag for PayPal or credit card. On January 1st, none of your users bothered updating and now everyone is receiving the rejection message. You can simply change how the default pay pattern is handled by replacing reject with your pay() method that contains the branching.
Finally, a done() function would be passed in as a callback that determines what the application should do when the pattern is matched. So far in your application, you are just returning success: true to the consumer, which is fine for testing. But you will need to integrate with your payment vendor or pay for all this food yourself.
Next up, we have the restaurant microservice. The restaurant-service uses the restaurant plugin, of course. Here you have three exposed functions. The get() will return a restaurant and its menu. menu() will return the menu of a given restaurant. Finally, the item() function packages up and returns a menu item for a consumer.

The code for lib/restaurant.js is as follows:
module.exports = function (options) {
  var seneca = this;
  var plugin = 'restaurant';

  seneca.add({ role: plugin, cmd: 'get' }, get);
  seneca.add({ role: plugin, cmd: 'menu' }, menu);
  seneca.add({ role: plugin, cmd: 'item' }, item);

  function get(args, done) {
    if (args.id) {
      return done(null, getRestaurant(args.id));
    } else {
      return done(null, restaurants);
    }
  }

  function item(args, done) {
    var restaurantId = args.restaurantId;
    var itemId = args.itemId;
    var restaurant = getRestaurant(restaurantId);
    var desc = restaurant.menu.filter(function (obj, idx) {
      return obj.itemId == itemId;
    })[0];
    var value = { item: desc, restaurant: restaurant };
    return done(null, value);
  }

  function menu(args, done) {
    var menu = getRestaurant(args.id).menu;
    return done(null, menu);
  }

  function getRestaurant(id) {
    return restaurants.filter(function (r, idx) {
      return r.id === id;
    })[0];
  }

  return { name: plugin };
};

// Seed this array with your restaurant and menu data.
var restaurants = [];
You are following the convention of using
cmd to perform the operation and
role to denote what service that operation belongs to. But there is nothing special about
role or
cmd. They could easily have been
you and
i. What you need to remember is that when you call your service from somewhere else, you must match that same pattern.
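To see how pattern matching like this works conceptually, here is a tiny, dependency-free sketch of role/cmd dispatch. It is an illustration only; Seneca's real matcher is far more capable (it picks the most specific pattern, for example), and the add/act names are simply borrowed from its API:

```javascript
// Minimal illustration of pattern-to-handler dispatch (not Seneca's actual code).
// A pattern is a plain object like { role: 'restaurant', cmd: 'menu' }.
const handlers = [];

function add(pattern, handler) {
  handlers.push({ pattern, handler });
}

function act(msg, done) {
  // Find the first registered pattern whose every key/value matches the message.
  const entry = handlers.find(({ pattern }) =>
    Object.keys(pattern).every((k) => msg[k] === pattern[k])
  );
  if (!entry) return done(new Error('no matching pattern'));
  entry.handler(msg, done);
}

// The key names are arbitrary: 'role' and 'cmd' are just a convention.
add({ role: 'restaurant', cmd: 'menu' }, (msg, done) =>
  done(null, { menu: ['Lobster'], id: msg.id })
);

act({ role: 'restaurant', cmd: 'menu', id: '1' }, (err, result) => {
  console.log(result.menu[0]); // Lobster
});
```

The takeaway is that extra keys in the message (like id above) do not prevent a match; only the keys named in the registered pattern must agree.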
The remaining services,
cart and
order, work in much the same fashion.
Cart exposes functionality for
add and
remove, which add and remove items from a given cart;
clear, which removes all items from a cart; and
get, which, of course, gets a cart.
Here is the code for the
lib/cart.js file:
module.exports = function (options) {
  var seneca = this;
  var plugin = 'cart';

  seneca.add({ role: plugin, cmd: 'get' }, get);
  seneca.add({ role: plugin, cmd: 'add' }, add);
  seneca.add({ role: plugin, cmd: 'remove' }, remove);
  seneca.add({ role: plugin, cmd: 'clear' }, clear);

  function get(args, done) {
    return done(null, getCart(args.userId));
  }

  function add(args, done) {
    var cart = getCart(args.userId);
    if (!cart) {
      cart = createCart(args.userId);
    }
    cart.items.push({
      itemId: args.itemId,
      restaurantId: args.restaurantId,
      restaurantName: args.restaurantName,
      itemName: args.itemName,
      itemPrice: args.itemPrice
    });
    // toFixed() returns a string, so convert back to a number when rounding.
    cart.total = +(cart.total + +args.itemPrice).toFixed(2);
    return done(null, cart);
  }

  function remove(args, done) {
    var cart = getCart(args.userId);
    var item = cart.items.filter(function (obj, idx) {
      return (
        obj.itemId == args.itemId &&
        obj.restaurantId == args.restaurantId
      );
    })[0];
    // Only adjust the cart if the item was actually found.
    if (item) {
      cart.items.splice(cart.items.indexOf(item), 1);
      cart.total = +(cart.total - item.itemPrice).toFixed(2);
    }
    return done(null, cart);
  }

  function clear(args, done) {
    var cart = getCart(args.userId);
    if (!cart) {
      cart = createCart(args.userId);
    }
    cart.items = [];
    cart.total = 0.00;
    done(null, cart);
  }

  function getCart(userId) {
    var cart = carts.filter(function (obj, idx) {
      return obj.userId === userId;
    })[0];
    if (!cart) cart = createCart(userId);
    return cart;
  }

  function createCart(userId) {
    var cart = { userId: userId, total: 0.0, items: [] };
    carts.push(cart);
    return cart;
  }

  return { name: plugin };
};

var carts = [];
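A side note on the arithmetic in the cart code: accumulating a binary floating-point total and rounding with toFixed(2) can drift. A common alternative, sketched below under the assumption that prices are well-formed two-decimal strings (the helper names toCents and formatCents are mine, not the tutorial's), is to track money in integer cents and format only for display:

```javascript
// Sketch: track money as integer cents to avoid binary floating-point drift.
function toCents(priceString) {
  // '19.99' -> 1999; assumes well-formed two-decimal price strings.
  return Math.round(parseFloat(priceString) * 100);
}

function formatCents(cents) {
  // 3328 -> '33.28'
  return (cents / 100).toFixed(2);
}

let totalCents = 0;
['19.99', '12.99', '0.10', '0.20'].forEach((p) => { totalCents += toCents(p); });

console.log(formatCents(totalCents)); // 33.28
```

With plain floats, 0.1 + 0.2 already yields 0.30000000000000004, which is exactly the kind of error integer cents avoid.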
The
order plugin exposes a
placeOrder() method, which takes a
cart object and packages it into an array of
orders, one per restaurant. You will need to complete the
sendOrder function to integrate with the restaurants you are doing business with.
The code for the
lib/order.js file is:
Build Your Node Web Application
module.exports = function(options) {
  var seneca = this;
  var plugin = 'order';

  seneca.add({ role: plugin, cmd: 'placeOrder' }, placeOrder);

  function placeOrder(args, done) {
    var orders = packageOrders(args.cart);
    for (var i = 0; i < orders.length; i++) {
      sendOrder(orders[i]);
    }
    done(null, { success: true, orders: orders });
  }

  function packageOrders(cart) {
    var orders = [];
    for (var i = 0; i < cart.items.length; i++) {
      var item = cart.items[i];
      var order = orders.filter(function(obj, idx) {
        return obj.restaurantId == item.restaurantId;
      })[0];
      if (!order) {
        order = { restaurantId: item.restaurantId, items: [item] };
        orders.push(order);
      } else {
        order.items.push(item);
      }
    }
    return orders;
  }

  function sendOrder(order) {
    // TODO: integrate with the restaurants you do business with
    return true;
  }

  return { name: plugin };
};
Now you can begin to set up the web application in the
web-app.js file. First, let’s make sure you have all your includes.
var Express = require('express');
var session = require('express-session');
var Seneca = require('seneca');
var Web = require('seneca-web');
var seneca = Seneca();
var ExpressOIDC = require('@okta/oidc-middleware').ExpressOIDC;
var path = require('path');
var bodyparser = require('body-parser');
As you saw earlier, you will need Express and Express Session, Seneca, the Okta Middleware, and a couple of utilities in path and body-parser.
First, tell Seneca to use the Express adapter. Below your
require() statements, add:
var senecaWebConfig = {
  context: Express(),
  adapter: require('seneca-web-adapter-express'),
  options: { parseBody: false, includeRequest: true, includeResponse: true }
};

seneca.use(Web, senecaWebConfig)
  .client({ port: '10201', pin: 'role:restaurant' })
  .client({ port: '10202', pin: 'role:cart' })
  .client({ port: '10203', pin: 'role:payment' })
  .client({ port: '10204', pin: 'role:order' });
Here you will also tell Seneca where the other microservices are located. In your instance, all the services are running on the same IP and therefore you only need to identify the port. If that changes, you can locate the microservice by domain or IP if necessary.
Add the following to your
web-app.js file to set up Express.
seneca.ready(() => {
  const app = seneca.export('web/context')();
  app.use(Express.static('public'));
  app.use(bodyparser.json());
  app.use(bodyparser.urlencoded());
  app.set('views', path.join(__dirname, '../public/views'));
  app.set('view engine', 'pug');
});
Typically you would use Express directly, but since you want to leverage the Seneca middleware you will need to use the instance of Express you assigned to Seneca. The rest of this should look fairly straightforward. You are telling Express where the static files are—in this case, the
public folder—and that you are using both
json and
urlencoded for
post methods. You are also telling Pug where to find the views.
The next thing you want to do is set up Okta to handle the authentication. Okta is an extremely powerful single sign-on provider.
If you don’t have an Okta account, you will need to sign up first. Then log in and navigate to the Applications area of the site. Click on Add Application and follow the wizard. The wizard will prompt you to select an application type. For this project, choose Web.
Next, the wizard will bring you to the Create New Application - Settings page. Give your application a name you will remember. For this example, you can use “Restaurant Application”. Ensure your base domain for Login redirect URIs is the same as your Node application. In your example, you are set to listen to
localhost:3000, but if your environment dictates you use a different port, you’ll need to use that. You can leave the rest of the settings set to their defaults. Click Done and proceed to the application screen.
To connect your application to Okta, you will need a few pieces of information. First, make sure you grab the Client ID and Client secret from the General tab of your application’s page. The secret will be obfuscated but there is an “eye” icon that will allow you to see it and a clipboard icon that will allow you to copy it to your clipboard.
With this information, you can now return to the application and connect to Okta. Open your
web-app.js file and add the following code.
const oktaSettings = {
  clientId: process.env.OKTA_CLIENTID,
  clientSecret: process.env.OKTA_CLIENTSECRET,
  url: process.env.OKTA_URL_BASE,
  appBaseUrl: process.env.OKTA_APP_BASE_URL
};

const oidc = new ExpressOIDC({
  issuer: oktaSettings.url + '/oauth2/default',
  client_id: oktaSettings.clientId,
  client_secret: oktaSettings.clientSecret,
  appBaseUrl: oktaSettings.appBaseUrl,
  scope: 'openid profile',
  routes: {
    login: { path: '/users/login' },
    callback: { path: '/authorization-code/callback', defaultRedirect: '/' }
  }
});

app.use(
  session({
    secret: "ladhnsfolnjaerovklnoisag093q4jgpijbfimdposjg5904mbgomcpasjdg'pomp;m",
    resave: true,
    saveUninitialized: false
  })
);
app.use(oidc.router);
For the session secret, you can use any long and sufficiently random string. The login route will be the URL that users will be given to log in to. Here you are using
users/login, which seems as natural as any route. When a user navigates to
users/login, Okta will step in and display the hosted login page to them.
You are using the scopes
openid and
profile. OpenID will provide the basic authentication details, and the profile will supply fields like the username. You will use the Okta username to identify the user with their cart later on. You can learn more about scopes in Okta’s documentation.
To keep your Okta credentials out of the code, add a file at the root of the project called
.env that will store these values for you. The contents will be:
OKTA_CLIENTID={yourClientId} OKTA_CLIENTSECRET={yourClientSecret} OKTA_URL_BASE={yourOktaDomain} OKTA_APP_BASE_URL=
Replace the placeholder values in curly braces with your values from Okta. At the very top of your
web-app.js file (before all other
require() statements), add:
require('dotenv').config();
This will ensure the Node application reads the values from the
.env file as environment variables when the application starts up.
The final thing you need to do is set up your routes and start the web application. In the
web-app.js file, add the following:
app.get('/', ensureAuthenticated, function (request, response) {
  var cart;
  var restaurants;
  var user = request.userContext.userinfo;
  var username = request.userContext.userinfo.preferred_username;
  seneca
    .act('role:restaurant', { cmd: 'get', userId: username }, function (err, msg) {
      restaurants = msg;
    })
    .act('role:cart', { cmd: 'get', userId: username }, function (err, msg) {
      cart = msg;
    })
    .ready(function () {
      return response.render('home', { user: user, restaurants: restaurants, cart: cart });
    });
});

app.get('/login', function (request, response) {
  return response.render('login');
});

app.get('/users/logout', (request, response, next) => {
  request.logout();
  response.redirect('/');
});

app.get('/cart', ensureAuthenticated, function (request, response) {
  var username = request.userContext.userinfo.preferred_username;
  var user = request.userContext.userinfo;
  seneca.act('role:cart', { cmd: 'get', userId: username }, function (err, msg) {
    return response.render('cart', { user: user, cart: msg });
  });
});

app.post('/cart', ensureAuthenticated, function (request, response) {
  var username = request.userContext.userinfo.preferred_username;
  var restaurantId = request.body.restaurantId;
  var itemId = request.body.itemId;
  var val;
  seneca
    .act('role:restaurant', { cmd: 'item', itemId: itemId, restaurantId: restaurantId }, function (err, msg) {
      val = msg;
    })
    .ready(function () {
      seneca.act('role:cart', {
        cmd: 'add',
        userId: username,
        restaurantName: val.restaurant.name,
        itemName: val.item.name,
        itemPrice: val.item.price,
        itemId: val.item.itemId,
        restaurantId: val.item.restaurantId
      }, function (err, msg) {
        return response.status(200).send(msg);
      });
    });
});

app.delete('/cart', ensureAuthenticated, function (request, response) {
  var username = request.userContext.userinfo.preferred_username;
  var restaurantId = request.body.restaurantId;
  var itemId = request.body.itemId;
  seneca.act('role:cart', {
    cmd: 'remove',
    userId: username,
    restaurantId: restaurantId,
    itemId: itemId
  }, function (err, msg) {
    return response.status(200).send(msg);
  });
});

app.post('/order', ensureAuthenticated, function (request, response) {
  var username = request.userContext.userinfo.preferred_username;
  var total;
  var result;
  seneca.act('role:cart', { cmd: 'get', userId: username }, function (err, msg) {
    total = msg.total;
  });
  seneca
    .act('role:payment', { cmd: 'pay', total: total }, function (err, msg) {
      result = msg;
    })
    .ready(function () {
      if (result.success) {
        seneca.act('role:cart', { cmd: 'clear', userId: username }, function () {
          return response.redirect(302, '/confirmation');
        });
      } else {
        return response.status(200).send('Card Declined');
      }
    });
});

app.get('/confirmation', ensureAuthenticated, function (request, response) {
  var username = request.userContext.userinfo.preferred_username;
  var user = request.userContext.userinfo;
  seneca.act('role:cart', { cmd: 'get', userId: username }, function (err, msg) {
    return response.render('confirmation', { user: user, cart: msg });
  });
});

app.listen(3000);
The calls here perform each of the tasks that could be requested by the client. There are a few important notes about these calls that you should keep in mind.
First, each page should be provided with a user model if the user is logged in. The
userContext in the
request object (populated by the Okta middleware) is where the user can be obtained.
Secondly, this is where you begin to really use Seneca. To communicate between your services, you call
seneca.act() and provide the pattern to be matched. There is no need to know where the service is living, as you have already defined that in the service files themselves.
An important note here is that Seneca allows you to chain the
act() method. It will not process those calls in series, so you can’t predict when you will receive results back. If you need to wait for a call to finish before moving on to the next, you can use
ready() to do something after the
act() is complete.
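You can demonstrate this out-of-order completion without Seneca at all. In the plain-Node sketch below (fetchCart and fetchMenu are made-up stand-ins for act() calls), the slower call finishes last regardless of submission order, and a simple counter plays the role that ready() plays for Seneca:

```javascript
// Two async operations with different latencies finish out of submission order.
function fetchCart(done) { setTimeout(() => done(null, 'cart'), 30); }
function fetchMenu(done) { setTimeout(() => done(null, 'menu'), 5); }

const results = [];
let pending = 2;

function barrier() {
  // Runs the final step only after BOTH callbacks have fired,
  // which is conceptually what ready() gives you.
  pending -= 1;
  if (pending === 0) {
    console.log(results.join(',')); // menu,cart (note the reversed order)
  }
}

fetchCart((err, v) => { results.push(v); barrier(); });
fetchMenu((err, v) => { results.push(v); barrier(); });
```

Even though fetchCart was submitted first, its result arrives second; any code that assumes submission order here would be buggy.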
Finally, you will be using the
ensureAuthenticated() function as middleware in your routes. It checks that Okta authenticated the user before allowing them to proceed to the intended page. If Okta hasn’t authenticated the user, the program redirects them to your login page. You can add the following code to your
web-app.js file to help with this.
function ensureAuthenticated(request, response, next) {
  if (!request.userContext) {
    return response.status(401).redirect('../login');
  }
  next();
}
Now you’re ready to test out your microservices.

Start the Node Microservices for Debugging
Next, you’re going to move on to the client-side work. First, spawn each process to ensure all your services are running. Open your
index.js file and add the following code to it.
Build the Node Client Application
var fs = require('fs');
var spawn = require('child_process').spawn;

var services = ['web-app', 'restaurant-service', 'cart-service', 'payment-service', 'order-service'];

services.forEach(function (service) {
  var proc = spawn('node', ['./services/' + service + '.js']);
  proc.stdout.pipe(process.stdout);
  proc.stderr.pipe(process.stderr);
});
You will use the Pug view engine for your pages. Typically, you will start with the layout page or pages. In this application, you only have one layout page. I named mine
_layout so it appears at the top of my views folder, but you can call it whatever you would like.
block variables
doctype html
html(lang='en')
  head
    meta(charset='utf-8')
    meta(name='viewport' content='width=device-width, initial-scale=1, shrink-to-fit=no')
    script(src="" integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo=" crossorigin="anonymous")
    script(src="" integrity="sha384-xrRywqdh3PHs8keKZN+8zzc5TX0GRTLCcmivcbNJWm2rs5C8PRhcEn3czEjhAO9o" crossorigin="anonymous")
    link(href="")
    title #{title}
  body
    div.d-flex.flex-column.flex-md-row.align-items-center.p-3.px-md-4.mb-3.bg-white.border-bottom.box-shadow
      h5.my-0.mr-md-auto Yum Yum Eats
      nav.my-2.my-md-0.mr-md-3
        if user == undefined
          a.p-2.text-dark(href="/users/login") Log In
        else
          a.p-2.text-dark(href="/users/logout") Logout
          a.p4.btn.btn-primary(href="/cart")
            span.fas.fa-shopping-cart.mr-3
            span.badge.badge-light(id='cart-button') #{cart.items.length}
    .container
      block content
    footer.pt-4.my-md-5.pt-md-5.border-top
      div.row.text-center
        div.col-8.col-md
          | Built with #[a(href='') Express.js], login powered by #[a(href='') Okta].
The layout page provides all the common logic on each of the pages. Your header and footer sections are on this page, as well as any scripts that will be used on all of your pages. Here you have brought in Bootstrap from a CDN and Font Awesome for icons.
The code includes branching in the layout page to check for the user in the model. This signals to the view whether it should show the login or logout buttons.
Next, set up the
home.pug page. This page will act as the main display for the restaurants and menus.
extends _layout

block variables
  - var title = 'Restauranter - Eat Now'

block content
  h2.text-center #{title}
  .row
    .col-lg-3
      div(role="tablist" id="list-tab").list-group.restaurants
        each restaurant, i in restaurants
          - href = '#r' + restaurant.id
          - controls = 'r' + restaurant.id
          - id = 'list-' + restaurant.id + '-list'
          a.list-group-item.list-group-item-action(data-toggle='list', href=href, role='tab' aria-controls=controls, id=id) #{restaurant.name}
    .col-lg-9
      .tab-content(id='nav-tabContent')
        each restaurant, i in restaurants
          - id = 'r' + restaurant.id
          - labeledBy = 'list-' + restaurant.id + '-list'
          .tab-pane.fade(id=id, role='tabpanel' aria-labelledby=labeledBy)
            .row
              each item, j in restaurant.menu
                .col-lg-2
                  span #{item.name}
                .col-lg-6
                  span #{item.description}
                .col-lg-2
                  span #{item.price}
                .col-lg-2
                  - func = 'addToCart(' + item.restaurantId + ',' + item.itemId + ')';
                  button(onClick = func) Add To Cart
  script(src="..\\js\\utils.js")
The
home page is where you get a chance to see some of the strength of Pug. The
home page extends the
_layout page and delivers the content unique to
home in the
block content section. Back on the
_layout page, there is a
block content section that dictates where the content will render. Here, your content is just a few tabs with your restaurants. When a user clicks the tab, the app displays a menu with an
Add To Cart button.
This code also adds a
script to the body of this page. You don’t need this script on every page, so you will only include it on pages that rely on it. All the necessary client-side JavaScript is in the
utils.js file in the
public\js folder. Add this as its content:
function updateCart(cart) {
  var count = cart.items.length;
  $('#cart-button').text(count);
}

function removeFromCart(restaurantId, itemId, rowNumber) {
  var data = { restaurantId: restaurantId, itemId: itemId };
  $.ajax({
    type: 'DELETE',
    url: 'cart',
    data: data,
    success: function (cart) {
      if (cart.items.length == 0) {
        window.location.href = 'cart';
      }
      updateCart(cart);
      $('#row-' + rowNumber).remove();
      $('#total-price').text('Total Price $ ' + cart.total);
    }
  });
}

function addToCart(restaurantId, itemId) {
  var data = { restaurantId: restaurantId, itemId: itemId };
  $.ajax({
    type: 'POST',
    url: 'cart',
    data: data,
    success: function (cart) {
      updateCart(cart);
    }
  });
}
The
utils javascript provides logic for adding and removing items from the cart and updating the view based on the updated cart.
Once the user has added the items to their cart, they will need to review the order and check out. This all happens in your
views/cart.pug view.
extends _layout

block variables
  - var title = 'Restauranter - Eat Now'

block content
  h2.p-2 Cart
  if cart.items.length == 0
    p Your cart is empty. Please check out our
      a(href="/") menus.
  else
    table.table.table-striped
      thead
        tr
          th Restaurant
          th Item
          th Price
          th Remove
      tbody
        each item, i in cart.items
          - c = 'row-' + i;
          tr(id=c)
            td #{ item.restaurantName }
            td #{ item.itemName }
            td #{ item.itemPrice }
            td
              - removeFunc = 'removeFromCart(' + item.restaurantId + ',' + item.itemId + ',' + i + ')'
              a.fa.fa-trash-alt(href="#", onClick = removeFunc)
      tfoot
        tr
          td(colspan="3")
            span.float-right(id='total-price') Total Price $ #{cart.total.toFixed(2)}
          td
            form(action="order", method="post")
              button.btn.btn-primary Order
  script(src="..\\js\\utils.js")
Once again, you are extending the
_layout page. The content itself is a table with the items the user added with a final opportunity to remove an item from the cart.
The program shows the
views/confirmation.pug page after the user submits their order and it succeeds. The page displays a thank-you message to the user.
extends _layout

block variables
  - var title = 'Restauranter - Eat Now'

block content
  h2 Thank You for ordering!
Finally, you can complete your login page. One of the best things about Okta is that you don’t need to implement a lot of login logic. You’ll recall that you set
routes/login in your Okta setup to
users/login. This presents the
users/login link to your user, and when they click it, they will be piped into the Okta login logic.
extends _layout

block variables
  - var title = 'Login'

block content
  p Hey there, in order to access this page, please
    a(href="/users/login") Login here.
The last thing you will need is base data for your application to use. In a real application, you might get this from a data store of some sort. For demonstration purposes, you’ll just create data in memory. At the bottom of your
restaurant.js file, add:
var restaurants = [
  {
    id: '1',
    name: "Joe's Seafood Joint",
    menu: [
      { restaurantId: 1, itemId: 1, name: 'Stuffed Flounder', price: '19.99', description: 'Daily Catch Flounder wrapped around Bay Area Crab chunks drizzled with an Imperial Sauce over rice with asparagus.' },
      { restaurantId: 1, itemId: 2, name: 'Striped Bass', price: '17.99', description: 'No better rockfish than that right out of the Chesapeake Bay. Garnished with Lemon and sided with fried potatoes and fresh green beans.' },
      { restaurantId: 1, itemId: 3, name: 'Lobster', price: '29.99', description: 'Maine Lobster brought in fresh this morning served with butter and hush puppies.' }
    ]
  },
  {
    id: '2',
    name: 'The BBQ Place',
    menu: [
      { restaurantId: 2, itemId: 1, name: 'Pulled Pork', price: '12.99', description: 'Slow cooked pork shoulder with our famous vinegar barbeque sauce served with hush puppies and carrots.' },
      { restaurantId: 2, itemId: 2, name: 'Smokey Smoked Brisket', price: '17.99', description: 'Smoked for 2 whole days in our custom built smoker on premise. Juicy and tender brisket served with fries and coleslaw.' },
      { restaurantId: 2, itemId: 3, name: 'Half Rack of Ribs', price: '29.99', description: 'Slathered in barbeque sauce and served with french fries and coleslaw.' }
    ]
  },
  {
    id: '3',
    name: 'Sandwiches R Us',
    menu: [
      { restaurantId: 3, itemId: 1, name: 'Tuna Wrap', price: '12.99', description: 'Fresh caught Hatteras yellowfin tuna wrapped in a wheat tortilla. Comes with chips or fries and a fried pickle.' },
      { restaurantId: 3, itemId: 2, name: 'BLT', price: '10.99', description: "The classic BLT made with real local bacon from my buddy's farm" },
      { restaurantId: 3, itemId: 3, name: 'Pigs in a Blanket', price: '12.99', description: 'Two mini pancakes as bread, bacon, sausage, and eggs. Drizzled in syrup and slathered in butter.' }
    ]
  }
];
At this point, your application is ready to go. You can start it by running debug in your IDE or by using the command
node index.js. Since your user won’t be authenticated yet, the first page they will see is the request to log in. They can do so using their Okta account. Afterward, the app will take them to the menu page.
Originally published by Nickolas Fisher at https://morioh.com/p/6de13d44b013
1. Overview
In this codelab, you'll learn how to monitor your app's performance during a feature rollout. Our sample app will have basic functionality, and it's set up to display a different background image based on a Firebase Remote Config flag. We'll go over instrumenting traces to monitor the app's performance, rolling out a configuration change to the app, monitoring the effect and seeing how we can improve the performance.
What you'll learn
- How to add Firebase Performance Monitoring to your mobile app to get out-of-the-box metrics (like app start time and slow or frozen frames)
- How to add custom traces to understand critical code paths of your user journeys
- How to use the Performance Monitoring dashboard to understand your metrics and track important changes like the rollout of a feature
- How to setup performance alerts to monitor your key metrics
- How to roll out a Firebase Remote Config change
Prerequisites
- Android Studio 4.0 or higher
- An Android emulator with API level 16 or higher.
- Java version 8 or higher
- A basic understanding of Firebase Remote Config
2. Set up the sample project
Download the code
Run the following command to clone the sample code for this codelab. This will create a folder called
codelab-perf-rc-android on your machine:
$ git clone
If you don't have Git on your machine, you can also download the code directly from GitHub.
Import the project under the
firebase-perf-rc-android-start folder into Android Studio. You will probably see some runtime exceptions or maybe a warning about a missing
google-services.json file. We'll correct this in the next section.
In this codelab, you'll use the Firebase Assistant plugin to register your Android app with a Firebase project and add the necessary Firebase config files, plugins, and dependencies to your project.
- Click Connect.
- Open Android Studio. In the Assistant pane, follow the prompts to connect your app to your Firebase project.
You should now see the
google-services.json file in the module (app-level) directory of your app, and your app should now compile. In Android Studio, click Run > Run ‘app' to build and run the app on your Android emulator.
When the app is running, you should first see a splash screen like this:
Then, after a few seconds, the main page with the default image will display:
What's happening under the hood?
The splash screen is implemented in SplashScreenActivity and does the following:
- In
onCreate(), we initialize Firebase Remote Config settings and fetch the config values that you'll set in the Remote Config dashboard later in this codelab.
- In
executeTasksBasedOnRC(), we read the config value of the
seasonal_image_url flag. If a URL is provided by the config value, we download the image synchronously.
- Once the download is complete, the app navigates to MainActivity and calls
finish() to end
SplashScreenActivity.
In
MainActivity, if
seasonal_image_url is defined through Remote Config, the feature will be enabled and the downloaded image will be displayed as the background of the main page. Otherwise, the default image (shown above) will be displayed.
4. Set up Remote Config
Now that your app is running, you can set up the new feature flag.
- In the left panel of the Firebase console, locate the Engage section, then click Remote Config.
- Click the Create configuration button to open the configuration form and add
seasonal_image_url as the parameter key.
- Click Add description, then enter this description:
Shows a seasonal image (replaces default) in the main page when the restaurant list is empty.
- Click Add new -> Conditional value -> Create new condition.
- For the condition name, enter
Seasonal image rollout.
- For the
Applies if... section, select
User in random percentile <= 0%. (You want to leave the feature disabled until you're ready to roll out in a later step.)
- Click Create condition. You will use this condition later to roll out the new feature to your users.
- Open the Create your first parameter form and locate the Value for Seasonal image rollout field. Enter the URL where the seasonal image will be downloaded:
- Leave the default value as an empty string. This means the default image in the codebase will be shown rather than an image downloaded from a URL.
- Click Save.
You can see that the new config is created as a draft.
- Click Publish changes and confirm the changes at the top to update your app.
5. Add monitoring for the data loading time
Your app pre-loads some data prior to showing
MainActivity and displays a splash screen to hide this process. You don't want your users to wait too long on this screen, so normally it's beneficial to monitor how long the splash screen is displayed.
Firebase Performance Monitoring provides a way to do just that. You can instrument custom code traces to monitor the performance of specific code in your app – like the loading time for data and the processing time of your new feature.
To track how long the splash screen is displayed, you'll add a custom code trace to
SplashScreenActivity, which is the
Activity that implements the splash screen.
- Initialize, create, and start a custom code trace named
splash_screen_trace:
SplashScreenActivity.java
// ...
import com.google.firebase.perf.FirebasePerformance;
import com.google.firebase.perf.metrics.Trace;
// ...

public class SplashScreenActivity extends AppCompatActivity {
    private static final String TAG = "SplashScreenActivity";
    private static final String SEASONAL_IMAGE_URL_RC_FLAG = "seasonal_image_url";

    // TODO: Initialize splash_screen_trace
    private final Trace splashScreenTrace = FirebasePerformance.startTrace("splash_screen_trace");

    // ...
}
- End the trace in the
onDestroy() method of
SplashScreenActivity:
SplashScreenActivity.java
@Override
protected void onDestroy() {
    super.onDestroy();

    // TODO: Stop the splash_screen_trace here
    splashScreenTrace.stop();
}
Since your new feature downloads and processes an image, you'll add a second custom code trace that will track the additional time your feature has added to
SplashScreenActivity.
- Initialize, create, and start a custom code trace named
splash_seasonal_image_processing:
SplashScreenActivity.java
private void executeTasksBasedOnRC(FirebaseRemoteConfig rcConfig) {
    String seasonalImageUrl = rcConfig.getString(SEASONAL_IMAGE_URL_RC_FLAG);
    Log.d(TAG, SEASONAL_IMAGE_URL_RC_FLAG + ": " + seasonalImageUrl);

    if (!seasonalImageUrl.isEmpty()) {
        // TODO: Start the splash_seasonal_image_processing here
        final Trace seasonalImageProcessingTrace = FirebasePerformance
                .startTrace("splash_seasonal_image_processing");

        // ...
    }
}
- End the trace in both the
onLoadFailed() and
onResourceReady() methods of the
RequestListener:
SplashScreenActivity.java
Glide.with(SplashScreenActivity.this.getApplicationContext())
        .asBitmap()
        .load(seasonalImageUrl)
        .signature(new ObjectKey(Utils.getCacheUUID()))
        .listener(new RequestListener<Bitmap>() {
            @Override
            public boolean onLoadFailed(
                    @Nullable GlideException e, Object model,
                    Target<Bitmap> target, boolean isFirstResource) {
                // TODO: Stop the splash_seasonal_image_processing here
                seasonalImageProcessingTrace.stop();
                launchMainActivity();
                return true;
            }

            @Override
            public boolean onResourceReady(Bitmap resource, Object model,
                    Target<Bitmap> target, DataSource dataSource,
                    boolean isFirstResource) {
                // TODO: Stop the splash_seasonal_image_processing here
                seasonalImageProcessingTrace.stop();
                launchMainActivity();
                return true;
            }
        })
        .preload();
Now that you've added custom code traces to track the splash screen duration (
splash_screen_trace) and the processing time of the new feature (
splash_seasonal_image_processing), run the app in Android Studio again. You should see a logging message that contains
Logging trace metric: splash_screen_trace, followed by the duration of the trace. You won't see a log message for
splash_seasonal_image_processing because you haven't enabled the new feature yet.
6. Add a custom attribute to the trace
For custom code traces, Performance Monitoring automatically logs default attributes (common metadata like app version, country, device, etc.) so that you can filter the data for the trace in the Firebase console. You can also add and monitor custom attributes.
In your app, you've just added two custom code traces to monitor the splash screen duration and the processing time of the new feature. A factor that might affect these durations is whether the displayed image is the default image or if the image has to be downloaded from a URL. And who knows – you might eventually have different URLs from which you download an image.
So, let's add a custom attribute representing the seasonal image URL to these custom code traces. That way, you can filter duration data by these values later on.
- Add the custom attribute (
seasonal_image_url_attribute) for
splash_screen_trace at the beginning of the
executeTasksBasedOnRC method:
SplashScreenActivity.java
private void executeTasksBasedOnRC(FirebaseRemoteConfig rcConfig) {
    String seasonalImageUrl = rcConfig.getString(SEASONAL_IMAGE_URL_RC_FLAG);
    Log.d(TAG, SEASONAL_IMAGE_URL_RC_FLAG + ": " + seasonalImageUrl);

    // TODO: Add a custom attribute "seasonal_image_url_attribute" to splash_screen_trace
    if (seasonalImageUrl.isEmpty()) {
        splashScreenTrace.putAttribute("seasonal_image_url_attribute", "unset");
    } else {
        splashScreenTrace.putAttribute("seasonal_image_url_attribute", seasonalImageUrl);
    }

    // ...
}
- Add the same custom attribute for
splash_seasonal_image_processing right after the
startTrace("splash_seasonal_image_processing") call:
SplashScreenActivity.java
if (!seasonalImageUrl.isEmpty()) {
    // TODO: Start the splash_seasonal_image_processing here
    final Trace seasonalImageProcessingTrace = FirebasePerformance
            .startTrace("splash_seasonal_image_processing");

    // TODO: Add a custom attribute "seasonal_image_url_attribute" to splash_seasonal_image_processing
    seasonalImageProcessingTrace
            .putAttribute("seasonal_image_url_attribute", seasonalImageUrl);

    // ...
}
Now that you've added a custom attribute (
seasonal_image_url_attribute) for both of your custom traces (
splash_screen_trace and
splash_seasonal_image_processing), run the app in Android Studio again. You should see a logging message that contains
Setting attribute 'seasonal_image_url_attribute' to 'unset' on trace 'splash_screen_trace'. You have not yet enabled the Remote Config parameter seasonalImageUrl which is why the attribute value is
unset.
The Performance Monitoring SDK will collect the trace data and send them to Firebase. You can view the data in the Performance dashboard of the Firebase console, which we'll explain in detail in the next step of the codelab.
7. Configure your Performance Monitoring dashboard
Configure your dashboard to monitor your feature
In the Firebase console, select the project that has your Friendly Eats app.
In the left panel, locate the Release & Monitor section, then click Performance.
You should see your Performance dashboard with your very first data points in your metrics board! The Performance Monitoring SDK collects performance data from your app and displays it within minutes of collection.
This metrics board is where you can track key metrics for your app. The default view includes the duration of your app start time trace, but you can add the metrics that you care about most. Since you're tracking the new feature that you added, you can tailor your dashboard to display the duration of the custom code trace
splash_screen_trace.
- Click on one of the empty Select a metric boxes.
- In the dialog window, select the trace type of Custom traces and the trace name
splash_screen_trace.
- Click Select metric, and you should see the duration of
splash_screen_traceadded to your dashboard!
You can use these same steps to add other metrics that you care about so that you can quickly see how their performance changes over time and even with different releases.
The metrics board is a powerful tool to track the performance of key metrics experienced by your users. For this codelab, you have a small set of data in a narrow time range, so you'll be using other dashboard views that will help you understand the performance of the feature rollout.
8. Roll out your feature
Now that you've set up your monitoring, you're ready to roll out the Firebase Remote Config change (
seasonal_image_url) that you set up earlier.
To roll out a change, you'll go back to the Remote Config page in the Firebase console to increase the user percentile of your targeting condition. Normally, you would roll out new features to a small portion of the users and increase it only when you are confident that there are no issues with it. In this codelab, though, you're the only users of the app, so you can change the percentile to 100%.
- Click the Conditions tab at the top of the page.
- Click the
Seasonal image rolloutcondition that you added earlier.
- Change the percentile to 100%.
- Click Save condition.
- Click Publish changes and confirm the changes.
Back in Android Studio, restart the app in your emulator to see the new feature. After the splash screen, you should see the new empty state main screen!
9. Check performance changes
Now let's check out the performance of splash screen loading using the Performance dashboard in the Firebase console. In this step of the codelab, you'll use different parts of the dashboard to view performance data.
- On the main Dashboard tab, scroll down to the traces table, then click the Custom traces tab. In this table, you'll see the custom code traces you added earlier plus some out-of-the-box traces.
- Now that you have enabled the new feature, look for the custom code trace
splash_seasonal_image_processing, which measured the time it took to download and process the image. From the trace's Duration value, you can see that this download and processing takes a significant amount of time.
- Since you have data for
splash_seasonal_image_processing, you can add the duration of this trace to your metrics board at the top of the Dashboard tab.
Similar to before, click on one of the empty Select a metric boxes. In the dialog window, select the trace type Custom traces and the trace name
splash_seasonal_image_processing. Finally, click Select metric to add this metric to the metrics board.
- To further confirm the differences, you can take a closer look at the data for
splash_screen_trace. Click on the
splash_screen_tracecard in the metrics board, then click View metric details.
- In the details page, you'll see a list of attributes on the bottom left, including the custom attribute you created earlier. Click the custom attribute
seasonal_image_url_attributeto view the splash screen duration for each seasonal image URL on the right:
- Your splash screen duration values will probably be a bit different than those in the screenshot above, but you should have a longer duration when the image is downloaded from a URL versus using the default image (represented by "unset").
In this codelab, the reason for this longer duration might be straightforward, but in a real app, it may not be so obvious. The collected duration data will come from different devices, running the app in various network connection conditions, and these conditions could be worse than your expectation. Let's look at how you'd investigate this issue if this were a real world situation.
- Click Performance at the top of the page to go back to the Dashboard main tab:
- In the traces table at the bottom of the page, click the Network requests tab. In this table, you'll see all the network requests from your app aggregated into URL patterns, including the
images.unsplash.com/**URL pattern. If you compare the value of this response time to the overall time it takes for image download and processing (i.e., the duration of the
splash_seasonal_image_processingtrace), you can see that a large amount of the time is spent on downloading the image.
Performance findings
Using Firebase Performance Monitoring, you saw the following impact on the end users with the new feature enabled:
- The time spent on
SplashScreenActivityhas increased.
- The duration for
splash_seasonal_image_processingwas very large.
- The delay was due to the response time for the image download and the corresponding processing time needed for the image.
In the next step, you'll mitigate the impact on performance by rolling back the feature and identifying how you can improve the implementation of the feature.
10. Roll back the feature
Increasing your users' wait time during the splash screen is not desirable. One of the key benefits of Remote Config is the ability to pause and reverse your rollout without having to release another version to your users. This allows you to quickly react to issues (like the performance issues that you discovered in the last step) and minimize the number of unhappy users.
As a fast mitigation, you'll reset the rollout percentile back to
0 so that all your users will see the default image again:
- Go back to the Remote Config page in the Firebase console.
- Click on Conditions at the top of the page.
- Click on the
Seasonal image rolloutcondition you added earlier.
- Change the percentile to 0%.
- Click Save condition.
- Click Publish changes and confirm the changes.
Restart the app in Android Studio, and you should see the original empty state main screen:
11. Fix the performance issues
You discovered earlier in the codelab that downloading an image for your splash screen was causing the slowdown for your app. Taking a closer look at the downloaded image, you see that you're using the original resolution of the image, which was over 2MB! One quick fix for your performance issue is to reduce the quality to a more appropriate resolution so that the image takes less time to download.
Roll out the Remote Config value again
- Go back to the Remote Config page in the Firebase console.
- Click the Edit icon for the
seasonal_image_urlparameter.
- Update the Value for Seasonal image rollout to, then click Save.
- Click on the Conditions tab at the top of the page.
- Click on Seasonal image rollout, then set the percentile back to 100%.
- Click Save condition.
- Click on the Publish changes button.
12. Test the fix and set up alerts
Run the app locally
With the new config value set to use a different download image URL, run the app again. This time, you should notice that the time spent on the splash screen is shorter than before.
View the performance of the changes
Return to the Performance dashboard in the Firebase console to see how the metrics look.
- This time you'll use the traces table to navigate into the details page. Down in the traces table, in the Custom traces tab, click the custom trace
splash_seasonal_image_processingto see a more detailed view of its duration metric again.
- Click the custom attribute
seasonal_image_url_attributeto see the breakdown of the custom attributes again. If you hover over the URLs, you will see a value that matches the new URL for the reduced-size image:(with the
?w=640at the end). The duration value associated with this image is considerably shorter than the value for the previous image and more acceptable for your users!
- Now that you have improved the performance of your splash screen, you can set up alerts to notify you when a trace exceeds a threshold that you set. Open the Performance dashboard and click the overflow menu (three dot) icon for splash_screen_trace and click Alert settings.
- Click the toggle to enable the Duration alert. Set the threshold value to be a little above the value you were seeing so that if your splash_screen_trace exceeds the threshold, you will receive an email.
- Click Save to create your alert. Scroll down to the traces table, then click the Custom traces tab to see that your alert is enabled!
13. Congratulations!
Congratulations! You enabled the Firebase Performance Monitoring SDK and collected traces to measure the performance of a new feature! You monitored key performance metrics for the rollout of a new feature and reacted quickly when a performance issue was discovered. This was all possible with the ability to make config changes with Remote Config and monitor for performance issues in real time.
What we've covered
- Adding the Firebase Performance Monitoring SDK to your app
- Adding a custom code trace to your code to measure a specific feature
- Setting up a Remote Config parameter and conditional value to control/rollout a new feature
- Understanding how to use the performance monitoring dashboard to identify issues during a rollout
- Setting up performance alerts to notify you when your app's performance crosses a threshold that you set | https://firebase.google.com/codelabs/feature-rollout-performance?hl=en | CC-MAIN-2022-33 | refinedweb | 3,152 | 53.41 |
Hi everyone, I just wrote this code as a solution for the reverse words question.
Actually, it is working fine on my pc, while it is not accepted by the online judge, claiming that on input " 1" my code produces "1 " as an output.
What I'm doing wrong?
`public class Solution { public String reverseWords(String s) { String result = ""; if (s.equals("")) return s; else { String [] splitted = s.split(" "); if (splitted.length == 1){ return splitted[0]; } else { for (int i = splitted.length; i > 0; i--){ result += splitted[i-1]+ " "; } } } return result; } }`
Thank to everyone willing to help my undestand this weird behavior :) | https://discuss.leetcode.com/topic/1828/on-my-pc-the-solution-works-perfectly-the-online-judge-complains-what-i-m-missing | CC-MAIN-2018-05 | refinedweb | 102 | 68.16 |
Improving the Rubinius Bytecode Compiler
The Rubinius bytecode compiler is the gateway to all the magic that makes your Ruby code run. As you probably know, the Rubinius virtual machine is a bytecode interpreter. The Rubinius JIT compiler also processes bytecode, converting it into native machine code. Without bytecode, we'd be dead in the water. Recently, I've been working on improving the Rubinius bytecode compiler.
In this post, I'll explain what I've been doing and how it relates to getting Rubinius ready for 1.0. We recently released version 0.12 and we're going to be doing releases about every two weeks. If you haven't built Rubinius yet to check it out, head over to our download page and get started!
Principles
Before we get into code examples, let's lay the groundwork and review some general principles for writing software. We learn early in computer science that programming involves a series of trade-offs. There's always the the classic trade-off between execution time and storage space. If we write a function that saves each value it computes, it can return the saved value instead of re-computing it, but saving the value requires using memory or disk space. A function that computes millions of values would potentially need millions of storage locations.
While speed versus space is probably the most well-known classic trade-off, there are others. With a nod to Philip Kapleau, here are the three pillars of developing Rubinius: Code Quality, Compatibility, and Performance, or QCP. These interact in complex ways and there's no simple way to prioritize them. By compatibility, I mean conforming to the same behavior as MRI (or Matz's Ruby).
These three characteristics are interdependent. Clean, high-quality code enables working more quickly on compatibility. When code is behaving correctly, it's easier to profile and improve performance. Good performance, in turn, can make it easier to identify and fix compatibility issues. Quality code is easier to understand and work with when improving performance. These feedback loops push each of these code characteristics forward. The converse is also true. At times, it may be tempting to sacrifice quality to improve performance, but quick and dirty never pays off in the end.
To sum up, these are the goals we're pursuing: simplify and improve the bytecode compiler code, improve performance, make it simpler and easier to bootstrap Rubinius, and ultimately make it easier to fix compatibility issues running fantastic Ruby software, like your Rails applications.
Parsers and Compilers
There's a certain mystique that surrounds compilers. Rather than just accept the mystique, let's take a peek behind the unicorns and dragons and check out the simple gears and pistons that make it all work. In general terms, a compiler is a process for converting data from one form into another. In the specific case of Ruby, the compiler converts text in the form of Ruby syntax, into operations performed by the computer's CPU.
In discussing compilers, two rather distinct operations are often lumped together, namely, parsing and code generation. Parsing is the process of converting the source code into a data structure that the compiler can process to produce code that the computer, or a virtual machine, can execute, to perform computation. There are specific issues with each operation so let's look at them separately.
A. The Parser
Humans are the most adept and complex parsers in existence. One natural language can choke up a powerful computer, but humans typically handle one and sometimes several languages with relative ease. Not just the words or sounds of a language either, but also the intonations, facial expressions and body language that accompany even the simplest communications.
Compared to natural languages, programming languages are very simple, but even the simplest languages can be challenging to parse. Syntactically, Ruby is a rich and complex language. We love it for the expressive programs we can write—but parsing Ruby is hard. Every Ruby parser in the Ruby implementations that I know of are based on Matz's parser. From the beginning, Rubinius has used a directly imported version of Matz's parser with a few minor modifications.
Detour—Bootstrapping
Before continuing, we need to take a short detour. I'll be writing a post on the Rubinius bootstrapping process in the future, but I'll start by briefly describing part of the problem here.
The Rubinius bytecode compiler, and most of the Ruby core library (classes like
Array and
Hash), are written in pure Ruby—and herein lies the challenge. The Rubinius VM interprets bytecode. To run the Rubinius compiler, it needs to translate the Ruby source code for the compiler into bytecode, but to do so, Rubinius needs to run the compiler. You can probably see where this is heading: around and around without getting much done.
That is the essence of the problem of bootstrapping. To break this loop, we need to insert a process that does not depend on Rubinius. In other words, we need something that's not Rubinius to compile the core library and bytecode compiler so that Rubinius can load the bytecode and run the compiler. Then Rubinius can compile its own compiler and core library.
One way to do this would be to load the Ruby source code in MRI and use the ParseTree gem to extract the MRI parse tree as a recursive array of symbols and values, something also known as an S-expression or sexp. But, since you can't change the MRI parser without impacting its run time behavior, Evan Phoenix wrote the sydparse gem.
This gem is essentially a C extension combining the MRI parser with the sexp generation code from ParseTree. The gem enabled Evan to modify the parser if desired and get the parse results as a sexp. The sydparse gem is basically built into Rubinius currently under the vm/parser directory.
For example, try the following code:
$ bin/rbx irb(main):001:0> "a = 1".to_sexp => s(:lasgn, :a, s(:lit, 1))
So, with the
String#to_sexp method, we have something that will take Ruby code and transform it to a data structure that should be fairly easy for a computer to process. And that is precisely how the early Rubinius bytecode compiler worked: it processed sexps into bytecode. It didn't matter whether the sexps are output by sydparse running in MRI or by the Rubinius built-in parser. With that, a big part of bootstrapping was solved.
Back to the Main Road
So we've got a compiler that processes sexps to bytecode and it runs in both MRI and Rubinius. Everything sounds copacetic—what's the trouble, you ask? Well, compiling Ruby is hard, too.
To make the process more tractable, Evan rewrote the compiler so that the sexps are processed into an abstract syntax tree (AST) of Ruby objects. Each object has a
#bytecode method. To generate bytecode, start at the root of the tree and visit each node, calling the
bytecode method for the node.
For perspective, let's enumerate all the stages in the compiler. I'm glossing over a few details here, but you'll see the basic picture:
- parse tree: A tree of C data structs created by the MRI parser.
- sexp: A recursive array of symbols and values created by processing the MRI parse tree.
- rewritten sexp: The form of the sexp after certain structures are normalized to make converting the sexp simpler.
- AST: The abstract syntax tree of Ruby objects created by processing the rewritten sexp.
- bytecode: A stream of instructions that the Rubinius VM can execute.
- compiled method: The bytecode packaged with some additional information like the names of local variables and the amount of stack space needed when the compiled method runs. A typical Ruby source code file compiles to a tree of compiled methods.
- compiled file: The Rubinius VM can execute the compiled methods directly. But to avoid having to recompile them, the tree of compiled methods is serialized to a compiled file on disk. Rubinius can read the compiled file to recreate the tree of compiled methods in memory.
That's a significant number of stages and it's not hard to see where we can simplify to improve the process. Those sexp stages appear to just be passing data along. In fact, they require creating a lot of additional objects, time to process, and some seriously complex code to process them. The latter has a significant, negative impact on code quality.
S-expressions are just data, dumb and brittle. They only have form (e.g.
[:x, :y] versus
[:x, [:y]]) and position (e.g.
[:alias, :x, :y] versus
[:alias, :y, :x]) to encode information. If you wanted to add, for example, a line number to every sexp, you'd need to put the information in a form and at a particular position in every sexp, or you'd need to wrap every sexp in one that encodes the line number. The simplicity of sexps is beguiling, but you know what they say when all you have are s-expressions... Everything looks like function application.
On the other hand, we have this rich, object-oriented language that makes it trivially easy to create objects that conform to a consistent interface. We want to get from the Ruby text to Ruby objects as quickly, and simply, as possible.
One option would be to write the compiler so that it processes the MRI parse tree directly into bytecode. Unfortunately, that would require either rewriting the compiler in C (not gonna happen) or making the C data structs available in Ruby. The latter option sounds promising. We could translate the parse tree directly into an AST.
Melbourne
Enter Melbourne, a C extension that can run in MRI or Rubinius and process the MRI parse tree directly into an AST. In case you were wondering, it was named in honor of Evan's sydparse gem, and some rowdy Aussie developers we know.
In Melbourne, each node of the MRI parse tree is processed using a mechanism we all know and love: a method call. At each parse tree node, all children are processed by recursively calling the
process_parse_tree function (see lib/ext/melbourne/visitor.cpp). At a leaf node, there are no children to create, so an appropriately named Ruby method is called on the
Rubinius::Melbourne instance that is passed to
process_parse_tree.
Once all of a node's children are created, a method is called for the parent node, passing along the objects that were already created for the node's children. You can see the result of this process by running the following command:
$ bin/rbx -r compiler-ng -e '"a = 1".to_ast.ascii_graph' LocalVariableAssignment @line: 1 @name: a FixnumLiteral @line: 1 @value: 1
In lib/melbourne/processor.rb, you can see code like the following. Whenever a local variable assignment parse tree node (lasgn) is processed, this method is called and the LocalVariableAssignment AST node is created. Pretty straightforward.
def process_lasgn(line, name, value) AST::LocalVariableAssignment.new line, name, value end
One of my goals for improving the code quality in the compiler was to replace the use of conditionals by creating more explicit nodes in the AST. Sometimes code suffers from what I'd call conditionalitis, or inflammation of your conditionals (sounds painful, huh?). That's where you maintain a lot of state and use conditionals to figure out what to do.
An alternative is to create different forms of things that just do what they are supposed to do. Simply avoiding using conditionals is not really an option, but where they're used can have a big impact on code quality. It's can be hard to understand by just looking at the code why one branch or the other would be taken when the program is running, but when the conditional results in different forms being created, it's much easier to comprehend how the program will behave
For example, in MRI
a.b = 1 and
a \[b\] = 1 are parsed into the same parse tree node. But there are enough things that need to be done differently in the two cases that just creating different forms can make these tasks simpler and more explicit. The code in lib/melbourne/processor.rb for processing an attrasgn node looks like this:
def process_attrasgn(line, receiver, name, arguments) if name == :[]= AST::ElementAssignment.new line, receiver, arguments else AST::AttributeAssignment.new line, receiver, name, arguments end end
B. The Compiler
By this point we have a fully formed AST that faithfully represents the Ruby code we started with. Emitting bytecode now is almost anticlimactic. Not really, of course, because there is plenty of work left to do. Just keeping track of local variables in methods, blocks and evals could take up a whole post, but the basic idea is really simple. Start at the root node and call the
#bytecode method, walking down the tree until every node has been visited. All the details (and there are a lot of them) can be found in the
#bytecode methods on the AST nodes (see lib/compiler-ng/ast).
Besides generating bytecode, there are other interesting things you can do simply by defining methods on the AST nodes and visiting them. I'll give you two examples. First, let's look at
defined?.
# defined.rb class A class B end end def x puts "hey there" A end p defined? A::B p defined? x::B
(If you run this code in Ruby, you should see the following output.)
$ ruby defined.rb "constant" hey there "constant"
This code illustrates something about
defined?. It doesn't just check internal data structures. In some cases, like
x::B, it must do some evaluation.
In Rubinius, we simply add a
defined(g) method to any relevant AST node that takes an instance of the bytecode generator object and emits bytecode appropriate for evaluating the expression passed to
defined?. Here is the full code for the
Defined AST node.
class Defined < Node attr_accessor :expression def initialize(line, expr) @line = line @expression = expr end def bytecode(g) pos(g) @expression.defined(g) end end
Rather than emitting bytecode, it's possible to simply evaluate the AST, performing actions when visiting each node. Rubinius has an evaluator for being able to write bits of code for which there is no simple Ruby syntax. The evaluator makes it possible to write directly in Rubinius assembly language (i.e. in the operations that the VM directly executes). Consider this example:
# asm.rb def hello(name) Rubinius.asm(name) do |name| push :self run name string_dup push_literal "hello, " string_dup string_append send :p, 1, true end end hello "world"
If you run this in Rubinius, you should see the following output:
$ bin/rbx asm.rb "hello, world"
Of course, this example is much easier to write in plain old Ruby. However, we use this facility in the Ruby core library. For example, in the
Class#new method.
The basic idea is that a tree of Ruby objects representing Ruby source code is a powerful tool that can be easily extended to accomplish various tasks.
Compiler, Part Deux
Let's revisit the list we presented earlier of compiler stages. How do we coordinate those stages? What if we need to insert a stage? Well, by creating more things we can name, and on which we can define behavior.
You can see the various stages laid out in lib/compiler-ng/stages.rb. By default, each stage know which stage follows it. You create an instance of the compiler by specifying the starting stage and the ending stage. Each stage has an interface with the preceding and following stage, with the compiler object itself, and with the object that performs the work for that stage. For example, if the compiler will compile a String to a CompiledMethod, it will consist of the StringParser, Generator, Encoder, and Packager stages.
The goal was to create an API for the compiler that made it simple to use programatically in a variety of circumstances. For example, using the compiler internally to compile a Ruby file when
#require is called or in a command line script like lib/bin/compile-ng.rb. The command line script may give the option to show the data structures as the compiler is processing them. Here's an example:
$ cat var.rb a = 1 p a $ bin/rbx compile-ng -AB var.rb Script @name: __script__ @file: "var.rb" Block @line: 1 @array: LocalVariableAssignment @line: 1 @name: a FixnumLiteral @line: 1 @value: 1 SendWithArguments @line: 2 @name: p @privately: true Self @line: 2 ActualArguments @line: 2 @array: LocalVariableAccess @line: 2 @name: a ============= :__script__ ============== Arguments: 0 required, 0 total Locals: 1: a Stack size: 3 Lines to IP: 1: 0-4, 2: 4-14 0000: meta_push_1 0001: set_local 0 # a 0003: pop 0004: push_self 0005: push_local 0 # a 0007: allow_private 0008: send_stack :p, 1 0011: pop 0012: push_true 0013: ret ---------------------------------------- $ bin/rbx var.rbc 1
Take a look at
bin/rbx compile-ng -h for more options and try this out on your Ruby code. It opens up a pretty exciting world of understanding.
A Final Note
Melbourne and the new compiler will both be enabled in the up-coming 0.13 release, so anywhere I've used compile-ng or compiler-ng in this post, will become just compile or compiler. Until then, all the code exists in the Rubinius Github repository, so you don't have to wait to check it out.
Thoughts? Questions? Leave them here!
Share your thoughts with @engineyard on Twitter | https://blog.engineyard.com/2009/improving-the-rubinius-bytecode-compiler | CC-MAIN-2015-35 | refinedweb | 2,945 | 63.29 |
IRC log of tagmem on 2003-11-10
Timestamps are in UTC.
19:25:59 [RRSAgent]
RRSAgent has joined #tagmem
19:26:01 [Zakim]
Zakim has joined #tagmem
19:26:04 [Ian]
zakim, this will be TAG
19:26:04 [Zakim]
ok, Ian; I see TAG_Weekly()2:30PM scheduled to start in 4 minutes
19:26:28 [Ian]
Ian has changed the topic to:
19:51:58 [Stuart]
Stuart has joined #tagmem
19:59:10 [Zakim]
TAG_Weekly()2:30PM has now started
19:59:17 [Zakim]
+Norm
20:00:31 [TBray]
TBray has joined #tagmem
20:00:55 [Zakim]
+??P1
20:01:13 [Stuart]
zakim, ??P1 is me
20:01:13 [Zakim]
+Stuart; got it
20:01:26 [Zakim]
+DOrchard
20:01:54 [Zakim]
+Tim_Bray
20:01:59 [Ian]
zakim, call Ian-BOS
20:01:59 [Zakim]
ok, Ian; the call is being made
20:02:00 [Zakim]
+Ian
20:02:34 [DaveO]
DaveO has joined #tagmem
20:02:44 [Ian]
Regrets: CL
20:02:52 [Ian]
At risk: PC
20:03:14 [Ian]
NW: I may have to drop off suddenly; excuse me in advance.
20:03:28 [Zakim]
+Roy
20:03:43 [Ian]
[RF back in 10 mins]
20:03:48 [Zakim]
-Roy
20:04:27 [Stuart]
zakim, who is here?
20:04:27 [Zakim]
On the phone I see Norm, Stuart, DOrchard, Tim_Bray, Ian
20:04:28 [Zakim]
On IRC I see DaveO, TBray, Stuart, Zakim, RRSAgent, Ian, Norm
20:05:09 [DanC]
DanC has joined #tagmem
20:05:16 [Zakim]
+DanC
20:05:39 [Zakim]
+??P4
20:06:18 [Stuart]
zakim, ??p4 is PaulC
20:06:18 [Zakim]
+PaulC; got it
20:06:46 [Stuart]
zakim, who is hre
20:06:46 [Zakim]
I don't understand 'who is hre', Stuart
20:06:52 [Stuart]
zakim, who is here?
20:06:52 [Zakim]
On the phone I see Norm, Stuart, DOrchard, Tim_Bray, Ian, DanC, PaulC
20:06:53 [Zakim]
On IRC I see DanC, DaveO, TBray, Stuart, Zakim, RRSAgent, Ian, Norm
20:07:34 [Ian]
Roll call: PC, SW, DC, IJ, NW, DO, TB.
20:07:40 [timbl]
timbl has joined #tagmem
20:07:43 [Ian]
Regrets: CL.
20:07:45 [Ian]
TBL arriving
20:07:48 [Ian]
RF back in a few.
20:07:58 [Ian]
Resolved to accept minutes of 27 Oct teleconf
20:08:05 [Ian]
20:08:14 [Ian]
Accept the minutes of the 3 Nov teleconference?
20:08:20 [Ian]
NW: I skimmed, looks ok.
20:08:32 [DanC]
interesting... XMLVersioning-41
20:08:50 [Ian]
DO: Please do not accept the 3 Nov minutes as accurate record.
20:08:55 [Ian]
DO: I will review them over the next few days.
20:09:10 [Zakim]
+TimBL
20:09:18 [Ian]
Accept this agenda?
20:09:28 [Ian]
20:10:14 [Ian]
zakim, ??P4 is Paul
20:10:14 [Zakim]
sorry, Ian, I do not recognize a party named '??P4'
20:10:19 [Ian]
zakim, mute PaulC
20:10:19 [Zakim]
PaulC should now be muted
20:10:27 [Ian]
zakim, unmute PaulC
20:10:27 [Zakim]
PaulC should no longer be muted
20:11:49 [Ian]
---
20:11:54 [Ian]
Tech Plenary expectations
20:12:07 [Ian]
SW: Meeting M + T is lesser of all evils; maximizes head count.
20:12:36 [Ian]
NW, TBL: We have conflicts.
20:12:46 [TBray]
q+
20:12:51 [Ian]
DC: I'd suggest Th and Fri instead
20:13:18 [Ian]
TBray: I suggest that instead we use the whole time to liaise.
20:13:28 [Ian]
...with other groups who will be meeting there.
20:13:56 [TBray]
q-
20:14:48 [Norm]
q+
20:14:50 [Ian]
SW: I have the feeling no good fit for NW, or perhaps CL except for Friday.
20:14:58 [Ian]
IJ has conflicts Th/Fri
20:15:03 [Ian]
ack DanC
20:15:03 [Zakim]
DanC, you wanted to express a preference for Th/Fr TAG and to 2nd bray's proposal, ammended to include a social time, such as dinner
20:15:41 [Ian]
DC: If the TAG meets, I'll be there. If we don't spend a whole day meeting, I'd like to distribute task of organizing liaison and also a TAG social meeting.
20:15:46 [Ian]
NW: I'm happy to do as TB suggests.
20:15:48 [timbl]
q+ dave paul
20:15:51 [Ian]
ack Norm
20:16:11 [Stuart]
ack dave
20:17:18 [Ian]
DO: I'd like to use some ftf time to go over findings (e.g., extensibility finding)
20:17:43 [Ian]
DO: Maybe we could set aside one day to look at TAG material.
20:17:43 [TBray]
q+
20:17:45 [Stuart]
ack paul
20:17:59 [Ian]
PC: Heads-up for scheduling on the margin.
20:18:20 [Ian]
PC: Recall that there may be new TAG participants.
20:18:37 [Ian]
PC: It would be appropriate to have a ftf meeting early in the year to get them engaged.
20:19:01 [Ian]
PC: My original proposal was that we not meet in March, but rather ftf in February to bring on new folks.
20:19:10 [Stuart]
ack TBray
20:19:50 [Ian]
TBray: For me the tech plenary is a valuable opportunity to liaise. It's not easy for me to go to the south of France for just one day.
20:20:25 [Ian]
TBray: So I am willing to go if we either have a first-rate ftf meeting or to do real liaison work.
20:21:35 [Zakim]
+Roy
20:21:43 [Ian]
TBL: I would be happy to attend the RDF core meeting one day and a tag ftf the other day.
20:22:20 [DanC]
s/RDF Core/RDF Interest/
20:22:25 [DaveO]
q+
20:24:36 [Ian]
ack DaveO
20:24:42 [Ian]
DO: I'd like the TAG to meet the week of the TP.
20:25:16 [Ian]
DO: TAG mtg in jan/feb looking tough for me.
20:25:33 [Ian]
DO: I have a strong pref for week of TP.
20:25:41 [TBray]
q+
20:26:05 [Ian]
SW: Propose to use the TP week for liaisons, and TAG ftf meeting on Tuesday.
20:26:32 [Ian]
SW: I will arrange liaisons (with help) with other groups during the week.
20:26:36 [Ian]
Brainstorm list of groups:
20:26:43 [Ian]
HTML WG (xlink)
20:26:47 [Ian]
I18N (charmod)
20:27:13 [Ian]
Web Services (wsdl + REST, issue 37)
20:27:38 [Ian]
PC: Meet with Schema about extensibility.
20:28:23 [DaveO]
I agree with Stuart's proposal and vote yes.
20:28:46 [Ian]
20:30:19 [Ian]
Proposal:
20:30:28 [Ian]
- TAG meeting Tuesday
20:30:35 [Ian]
- Arrange to liaise with other groups around that.
20:30:39 [Ian]
PC: XML Core on ID?
20:30:57 [Ian]
IJ: When is the binary xml workshop?
20:31:30 [timbl]
q+ to point out that this is all reather dependent on the other groupos being able to make time on their schedules
20:31:47 [Ian]
TB, TBL, NW: Like the proposal.
20:31:52 [DanC]
DC too
20:32:15 [Ian]
Resolved: Adopt proposal for meeting during tech plenary week.
20:33:05 [Ian]
PC: Do we plan to invite old and new participants at the first meeting of new folks?
20:33:12 [Ian]
SW, DO: Yes
20:34:22 [Ian]
TBL: What about a video conf earlier than tech plenary? E.g., January.
20:34:47 [Ian]
SW: One reason to wait until Feb is for new participants.
20:35:37 [Ian]
Action SW: Explore possibility of TAG videolink mtg in February, with help from PC.
20:35:51 [Ian]
q+
20:35:54 [Ian]
ack TBray
20:35:56 [Ian]
ack timbl
20:35:56 [Zakim]
timbl, you wanted to point out that this is all reather dependent on the other groupos being able to make time on their schedules
20:36:07 [Ian]
q-
20:36:09 [Ian]
=====
20:36:19 [Ian]
1.2 TAG Nov face-to-face meeting agenda
20:36:29 [TBray]
We'll be flexible and available, I'd hope that they would too.
20:36:30 [Ian]
Meeting and agenda page
20:36:37 [Ian]
20:36:52 [Ian]
SW: My proposal was to have people send written reviews of the arch doc.
20:37:03 [TBray]
BTW, we'd be available for liaisons on Tuesday too, right?
20:37:21 [Ian]
DC: I can't read anything new between now and meeting.
20:37:24 [Ian]
q+
20:38:09 [Ian]
IJ: I was planning to have next editor's draft tomorrow.
20:38:24 [Ian]
DC: That will be counter-productive in my opinion.
20:38:45 [Ian]
DC: Please get endorsement before you ask for objections.
20:39:08 [Ian]
[27 Oct draft:
]
20:39:44 [DanC]
s/in my opinion/for my purposes/
20:40:21 [Ian]
RF: I will do a review of 11 Nov draft.
20:40:27 [Norm]
So will I
20:40:30 [Ian]
TBL: I expect to download before getting on plane.
20:40:47 [timbl]
Trouble is, I was going to edit slides on the plane.
20:41:14 [Ian]
DC: Re ftf agenda and last call decision: I think it would be great to say at the meeting "Yes, this doc is ready for last call." I think that we are likely to make more edits.
20:41:53 [Norm]
q+
20:41:56 [Ian]
TBray: I'd like to have a TAG decision on the substance of my request.
20:41:57 [Ian]
q-
20:42:34 [Ian]
DO: I can agree to no more major structural changes, but not to point on new material (since NW and I have been working on extensibility and versioning material).
20:42:38 [Ian]
ack Norm
20:42:54 [Ian]
NW: I am unhappy with the current extensibility section and would like it fixed.
20:43:31 [Ian]
q+
20:44:03 [Ian]
TBray: I think that abstractcomponentrefs is not cooked enough to be in the arch doc.
20:44:39 [DaveO]
grumble. I did my action item to create material in abstractcomponentrefs for inclusion in the web arch....
20:45:08 [TBray]
yeah, but it's a way harder issue.
20:45:56 [Norm]
q+
20:46:19 [TBray]
ack Ian
20:46:20 [Ian]
IJ: I don't have need to make big structural changes; I suspect TAG may want to at FTF meeting.
20:46:20 [Ian]
q-
20:46:21 [TBray]
ack Norm
20:46:24 [timbl]
q+ paul
20:46:38 [Ian]
NW: My comment was that nobody on the TAG should make substantial changes except for versinoing sectino.
20:46:40 [Ian]
ack paul
20:46:46 [Ian]
PC: I think the TAG needs to be date-driven.
20:46:54 [Ian]
q+
20:47:05 [TBray]
+1 to Norm's formulation
20:47:28 [Ian]
ack Ian
20:47:47 [Ian]
IJ: I would like to walk through my announced intentions before I make a complete commitment.
20:47:54 [Ian]
PC: I think we need to be date-driven at this point.
20:48:06 [timbl]
Ian, did the
rep'n diagram have text in the "representation" box?
20:49:48 [Ian]
DC: I am not yet satisfied that the TAG ftf meeting is clear enough about which document we'll be discussing.
20:49:49 [timbl]
I think the diagram is misleading now.
20:51:01 [DanC]
W3C process calls for ftf agendas 2 weeks in advance. I expect documents to stabilize in at that time. I gather I'm not gonna get what I want this time.
20:53:05 [Ian]
REsolved: If IJ finishes draft by tomorrow, we will review that at the ftf meeting.
20:53:16 [TBray]
not now
20:53:37 [DanC]
I can't seem to find my last end-to-end review... I'm pretty sure it was a bit before 1Aug.
20:54:10 [Ian]
[TAG will review AC meeting slides at ftf meeting]
20:54:29 [Ian]
===================================
20:54:49 [Ian]
2.2 XML Versioning (XMLVersioning-41)
20:54:58 [Ian]
Proposal from DO:
20:55:07 [Ian]
20:55:14 [Ian]
Proposal from IJ:
20:55:23 [Ian]
20:55:54 [timbl]
20:56:13 [Ian]
20:56:54 [timbl]
The latter is Ian's shortened version for arch doc
20:56:55 [Ian]
[IJ summarizes]
20:57:08 [Ian]
DO: We talked about use of namespaces names on the thread.
20:57:49 [Ian]
IJ: See status section for my expectations regarding namespaces.
20:58:04 [DanC]
status section of what?
20:58:52 [TBray]
4.6.2 of
20:59:40 [DaveO]
q+
20:59:50 [Stuart]
q+ paul
20:59:53 [Norm]
q+ to note that Ian said "only make backwards compatible" but left that out of his proposed text
21:00:03 [Stuart]
q- paul
21:00:13 [TBray]
q+ to agree with Stuart's comment that the level of detail in webarch and the walsh/orchard draft is violently different
21:00:18 [Ian_]
Ian_ has joined #tagmem
21:00:27 [timbl]
q+ to note that the ownership and change issues with nmaepsaces are similar to te problems with document in general, and expectation shoudl be set.
21:00:54 [Ian_]
DC: Not all namespaces have owners. Delegated ownership is a special case.
21:00:55 [Ian_]
DC: I'd prefer to generalize rather than limit scope.
21:01:00 [Ian_]
DC: The general point is that the Web community agrees on what URIs mean. This is just one case of that.
21:01:09 [Stuart]
ack danC
21:01:09 [Zakim]
DanC, you wanted to note problems with "Only namespace owner can change namespace"
21:01:28 [Ian_]
IJ: I wanted to address issue of "changing namespaces" by saying "Document your change expectations"
21:01:44 [Ian_]
TBL: I think we can include the specific case of http; you lose a lot of power in generalizing.
21:01:51 [Ian_]
DO: What about URN?
21:01:58 [Ian_]
TBL: What if they use a UUID? Depends on the URN scheme.
21:02:34 [Ian_]
NW: The URI Scheme shouldn't have any bearing on this discussion.
21:03:01 [Ian_]
TBL: HTTP allows you to own a URI, through DNS delegation, you have a right to declare what it means. In those circumstances, it makes sense to state your change expectations.
21:03:19 [Stuart]
ack DaveO
21:03:29 [Ian_]
[TBL seems to assert IJ's proposal to include a good practice note to document change policy]
21:03:37 [Ian_]
s/assert/support
21:04:10 [Ian_]
DO: One of the problems I had with IJ's proposal is that it didn't include all of the good practice notes that were in our text. In particular, requiring a processing model for extensions.
21:04:20 [DanC]
[good practice notes are fine in specific cases of general principles; but if we can't say what the general principle is, we haven't done our job]
21:04:28 [Ian_]
q+ to respond to DO
21:05:29 [timbl]
s/User agent/agent/
21:06:03 [DanC]
is there some reason to rush this discussion?
21:06:38 [Norm]
I want some text in the 11 Nov webarch draft.
21:06:55 [DanC]
ah; I see, thx Norm.
21:06:59 [Ian_]
q- Ian_
21:07:08 [Ian_]
DO: I think these strategies need to be called out even more.
21:07:14 [Stuart]
ack PaulC
21:07:18 [Zakim]
+Roy_Fielding
21:07:21 [Zakim]
-Roy
21:07:24 [Ian_]
PC: I have not yet read IJ's proposal since he sent Friday.
21:07:48 [Ian_]
PC: Stability of namespaces should appear in finding.
21:08:19 [Roy]
Roy has joined #tagmem
21:08:25 [Ian_]
PC: I would support more advice on namespace change policies.
21:08:25 [timbl]
q+ to say yes there is something.
21:08:58 [Ian_]
PC: There seems to be a tremendous amount of content on single-namespace languages; less on multiple namespace strategies.
21:09:07 [Ian_]
PC: Is the finding focused on a single namespace problem?
21:09:22 [timbl]
q+ to say yes there is something.
: Note that a condition of documents reaching CR status will be that the clauses 2 and 3 will no longer be usable, to give the specification the necessary stability.
21:09:29 [Ian_]
DO: That is one of the splits in the finding. The finding doesn't go into enough detail on pros and cons of extension strategies.
21:10:48 [Ian_]
PC: I was just pointing to IJ's point on stability.
21:11:19 [DaveO]
ian, that's somewhat incorrect. "details on pros and cons of extension strategies on the use of multiple namespaces".
21:11:20 [Ian_]
PC: I think we have to seriously consider talking about mixed namespace docs since that's one of our issues.
21:12:27 [Ian_]
TBL: namespace policy for W3C specs is linked from W3C Guide.
21:12:44 [Ian_]
TBL: The requirement is to indicate change policy; also when namespace becomes fixed (at CR).
21:13:20 [DanC]
it is policy.
21:13:35 [Ian_]
PC: We could include W3C policy as an example in arch doc.
21:13:43 [Norm]
ack norm
21:13:43 [Zakim]
Norm, you wanted to note that Ian said "only make backwards compatible" but left that out of his proposed text
21:14:07 [Ian_]
NW: Warning about putting namespace material in section on namespaces.
21:14:20 [Ian_]
[IJ expects to include xrefs]
21:14:34 [Ian_]
NW: For draft tomorrow, I'd like for us to err on the side of including more text rather than less.
21:15:09 [Ian_]
NW: The one critical piece not in IJ's proposal is forwards/backwards, closed/open systems, development times.
21:15:35 [Stuart]
ack TBray
21:15:35 [Zakim]
TBray, you wanted to agree with Stuart's comment that the level of detail in webarch and the walsh/orchard draft is violently different
21:15:48 [Ian_]
TBray: I don't think the community is close on semantics or even desirability of mixed namespace docs. I don't think we can go there yet.
21:16:14 [Ian_]
TBray: I have just read IJ's text. I agree with IJ's point that the level of detail of DO/NW text is greater than rest of arch doc.
21:16:32 [Ian_]
TBray: I would by and large be ok with IJ's text.
21:16:52 [Ian_]
TBray: I think IJ has come close to an 80/20 point.
21:17:19 [Ian_]
TBray: On for/back compatibility, I don't know that it is required to be included. I agree that the finding should have the details since these are complex issues.
21:17:49 [Ian_]
TBray: I am concerned that if you talk about f/b compatibility, you fall over the slippery slope that might require 8 pages of details.
21:18:03 [Ian_]
TBray: Perhaps mention f/b compatibility as an example of what's important, with a link to the finding.
21:18:34 [Ian_]
DO: Do you think additional material is required to be sufficient?
21:18:52 [Ian_]
TBray: IJ's draft is close to being sufficient. It's fine for the arch doc to point off to findings for more detail.
21:19:45 [Stuart]
ack timbl
21:19:45 [Zakim]
timbl, you wanted to note that the ownership and change issues with nmaepsaces are similar to te problems with document in general, and expectation shoudl be set. and to say yes
21:19:48 [Zakim]
... there is something. and to say yes there is something.
: Note that a condition of documents reaching CR status will be that the clauses 2 and 3
21:19:52 [Zakim]
... will no longer be usable, to give the specification the necessary stability.
21:20:04 [Ian_]
TBray: I don't think IJ's draft is seriously lacking anything. Mention of f/b compatibility a good idea.
21:20:42 [Ian_]
TBL: On the issue of mixed namespaces, it may be worth saying that if you are designing a mixed name doc in XML right now, no general solution. But that if you do so for RDF, there is a well-defined solution.
21:21:30 [timbl]
There is a well-defined solution for mixing of RDF ontologies.
21:21:58 [timbl]
RDF does not provdie a solution for how to mix arbitrary XML namespaces for non-RDF applications.
21:24:11 [Ian_]
DO: I propose to work with IJ to find a middle ground.
21:24:11 [Ian_]
DC
21:24:21 [Ian_]
DC: It's ok for me if last call draft says nothing about versioning.
21:24:22 [Stuart]
ack DanC
21:24:44 [Norm]
q+
21:24:47 [Ian_]
TBray: I"d be happier with IJ's most recent draft rather than nothing.
21:25:06 [Ian_]
DC: The tactic of putting more text in and cutting back is not working for me.
21:25:33 [Ian_]
NW: I would like the arch doc to include some text in the arch doc.
21:25:49 [TBray]
q+
21:26:25 [DaveO]
q+
21:26:25 [Ian_]
NW: I am happier with IJ's text than nothing; but I'd like to work with IJ to include a few more things in tomorrow's draft, and discuss at ftf meeting.
21:26:42 [timbl]
I would be OK with skipping versioning for the arch doc last call in the interests of expediancy of consesnsus of tag. Would be happier with ian's current text, if consesnus of tag.
21:26:48 [Ian_]
NW: My slightly preferred solution is to add all of DO/NW good practice notes for discussion at ftf meeting.
21:27:01 [Zakim]
-PaulC
21:27:02 [Ian_]
PC: I have to go; I'm flexible on solution.
21:27:06 [Norm]
q?
21:27:08 [Norm]
ack norm
21:27:27 [Ian_]
TBray: I am sympathetic for a subgroup to work on some text for inclusion in tomorrow's draft.
21:27:36 [Ian_]
TBray: I am not excited about adding a lot more stuff.
21:28:06 [Norm]
q?
21:28:17 [Stuart]
ack TBray
21:28:18 [Ian_]
TBray: note that I'm a big fan of the finding. But I think we need to stick closer to IJ's level of detail and length.
21:28:43 [Ian_]
DO: I would be disappointed if IJ's draft was the extent of material that was included in the arch doc.
21:29:24 [DanC]
I got lost somewhere; In Vancouver, we had a list of the issues that were critical path for last call for a "backward looking" last call. Now versioning seems to be in there. I guess I'll have to pay more attention.
21:29:26 [Ian_]
DO: I believe more material needs to be in the arch doc (in particular good practice notes); the arch doc will go through Rec track. I think that things that don't go through the Rec track will not be taken as seriously, not get as much review, etc.
21:29:34 [Stuart]
q+
21:29:37 [Ian_]
ack DaveO
21:30:37 [Stuart]
ack Stuart
21:30:45 [Ian_]
SW: If the TAG agrees that we consider versioning that important, we can put a separate doc through the Rec track.
21:31:58 [timbl]
q+ to ask about timing
21:32:40 [Ian_]
DO: I think the middle ground for this text is closer to the middle.
21:32:46 [Ian_]
q+
21:32:52 [TBray]
For the record: IMHO Ian's text is better leaving this uncovered, but Ian is coming close to the 80/20 point and I don't want to see it get much longer than that
21:33:12 [Ian_]
DC: So there's no principles in here about versioning.
21:33:56 [Stuart]
ack DanC
21:33:58 [Ian_]
TBL: Perhaps we need to get into sync on the timing of this.
21:34:05 [TBray]
I think procedurally the right thing to do is let Stuart and Norm/Dave saw off what they can by tomorrow.
21:34:17 [TBray]
er s/Stuart/Ian/
21:34:17 [Ian_]
TBL: My assumption is that we will dot I's and cross T's if we are to be on last call track soon.
21:34:23 [Stuart]
s/Stuart/Ian/?
21:35:06 [Ian_]
TBL: We are going to find small things we want to clean up in the existing text.
21:35:16 [DanC]
is that a question from the chair? NO! we are *not* anywhere near "last call sign off". I think 2/3rds of the current draft isn't endorsed by various tag memebers.
21:35:21 [Ian_]
TBL: The versioning text is interesting, but i need to look more closely at the text.
21:36:16 [Ian_]
TBL: In any case, we need a disclaimer that we are not done by virtue of going to last call.
21:36:27 [Ian_]
TBL: We will need a place to put ongoing ideas for the next draft.
21:36:37 [Norm]
As I said before, I will be sorely disappointed if we don't say something about this topic in V1.0 of the webarch document.
21:36:55 [DanC]
I hear you norm, but I'm not clear why.
21:37:33 [Ian_]
q+
21:37:46 [Ian_]
ack timbl
21:37:46 [Zakim]
timbl, you wanted to ask about timing
21:38:14 [Ian_]
TBray: I hear some consensus to hand this off to DO/NW/IJ to come up with something short enough and includes enough material.
21:38:47 [Ian_]
ADJOURNED
21:38:54 [DaveO]
and I'll want to have an ad-hoc group on abstract component refs
21:38:55 [Ian_]
RRSAgent, stop | http://www.w3.org/2003/11/10-tagmem-irc.html | CC-MAIN-2015-40 | refinedweb | 4,394 | 78.28 |
On 02/02/08 16:27, Klaus Schmidinger wrote: > In a crude attempt to run VDR's Transfer-Mode without using a cRemux > (and thus avoiding all the extra buffering and processing) I am > trying to send the payload of the TS packets directly to the device. > > The attached patch implements cDevice::PlayTS() and handles video > and audio packets with fixed PIDs (just for testing). > > I. Nevermind, I just found it myself: it must be +5 instead of +4 in inline int TsPayloadOffset(const uchar *Data) { return (Data[3] & ADAPT_FIELD) ? Data[4] + 5 : 4; } Now it works - and Transfer-Mode never switched as fast as this :-) Klaus | http://www.linuxtv.org/pipermail/vdr/2008-February/015405.html | CC-MAIN-2015-22 | refinedweb | 107 | 55.27 |
how i want to develop a online bit by bit examination process as part of my project in this i am stuck at how to store multiple choice questions options and correct option for the question.this is the first project i am doing
I have only one day to visit the Jaipur..
I have only one day to visit the Jaipur.. Hi, I have only a day to travel in Jaipur ..hence, bit worried about what to see first
question
question Good morning sir,
i need a jsp and mysql code to track attendance immediately,and if you have any idea about how to take absentees please send me that too
question
question i have a jsp project ATS-HR ,i am using eclipse for back.../) is not available.**
how tdo i solve this.please help me to run my project
please give me one demo of how to create and run a jsp project in eclipse
question
question if you have any idea about automatic updation in mysql + jsp.i wrote a code for retrieving attendance.but i don't know how to display absentees.for example,assume that one employee is forgot to mark attendance
question
question Gud morning sir,
I have asked u some question regarding jsp in saturaday for that i didnot find any answere in which u send me the some... label and registeration label in this two labels.If one we have selected
hi
hi how i get answer of my question which is asked by me for few minutes ago.....rply
Javascript Question about converting Celcius to Fahrenheit
Javascript Question about converting Celcius to Fahrenheit How...? |
I really have difficulties with it. Can you help me.
Do I have to put var celsius and Fahrenheit after opening javascript?
Do I have to prompt... and the remaining should be in 2nd column....how can i seperate the two columns using a line....can u help me????
Hi Friend,
You can use the following
question about database
question about database sir i have made a drop down button using... down button..how can i do this....i have taken many fields such as type,ward id... their values from database so that they appear in drop down i have made
question
question Sir,
How to stream video on one computer which is playing on another PC in LAN using java + socket / RMI . if you have any idea about that please help me and give the source code
JSP Question - JSP-Servlet
JSP Question hi i have different word and excel files in different pages.i want to store it in one workbook or print it in pdf format.hows this possible ? Thanks
How to forward the control from one jsp to another?
How to forward the control from one jsp to another? Hi!
This is Prasad Jandrajupalli.
I have the 3 JSP's, but I want communicate with each... is not communicate with the Third JSP.
I want forward the control from first jsp to second
question
; hi ,
i think it is easy on executeQuery();
sql query is select uname...question good afternoon,
how to get user name and password... in Jdbc -jsp page .
that's it.
chandu
Hi... - Java Beginners
Hi... Hi friends,
I hv two jsp page one is aa.jsp & bb.jsp... Friend,
Please clarify your question.
Thanks Hi frnd,
Your asking about bb.jsp in aa.jsp.So can use jsp include tag.
If you use jsp include tag
question
question hi good morning.
how to retrive the data from one form of jsp to another form.please help me
Hi Friend,
Try the following code:
1)form1.jsp:
<html>
<form method="post" action="form2.jsp">
<
hi roseindia - JSP-Servlet
hi roseindia defination of jsp? JSP is just a server side programming where one can embed both html and Java coding in a single page... the ease of connecting it to any database. Actually I must have told
jsp question
jsp question sir plz tell me how can I create a page which is similar to feedback form in which if one option is empty then other option is inaccessible.
for example it consists of name address etc.
if name field is not filled
About Jsp
About Jsp Hello sir, I am developing online Quiz project in jsp using MySql Database. I want to know that How I will show Questions from database One by One on Same page and also want to calculate Result for the User
I have to retrieve these data from the field table
I have to retrieve these data from the field table Hi. I have... as single values like chennai as one value, trichy as one value. and i have... chennai,trichy,kanchipuram for a single record. I have to retrieve these data from belowsp servlet question
jsp servlet question I have an HTML form which has a couple of radio... table there is only one column for Gender, how do I insert male OR female... lists. I have created a servlet to connect to access database and process the form
Hi
Hi how to read collection obj from jsp to servlet and from jsp - jsp?
Hi Friend,
Please visit the following link:
Thanks
question about applet
question about applet how to run java applet on wed browser
Hi Friend,
Please visit the following link:
Applet Tutorials
Thanks
i have a problem in spring - Spring
i have a problem in spring spring Aop: how to configure proxyfactorybean in xml file for providing advices for one particular class
i have a problem to do this question...pls help me..
i have a problem to do this question...pls help me.. Write a program....
Hi Friend,
Try this:
import java.util.*;
public class ReverseNumber... reversedNumber = 0;
for (int i = 0; i <= num; i
Crate a Popup Menu in Java
;
Here, you will learn how to create a Popup menu in
Java. Popup... right click on the frame:
This program illustrates you about the creation of the
popup menu. Following methods and APIs have been used in this program
I have problem in my Project
I have problem in my Project Dear Sir,
i have problem in my project about Jtable
i have EDIT JButton whenevery i was click on edit he is display all data from database but i want to select any row
hi..
hi.. I want upload the image using jsp. When i browse the file then pass that file to another jsp it was going on perfect. But when i read...);
for(int i=0;i<arr.length-1;i++) {
newstr=newstr?
question
question i need a simple javascript or jsp code to send message from one mail to another mail
question
question hai..........
i have one doubt can i have two actions for one button.
please reply me soon
thanks in advance
question
question how to put every data base connections (selection,insertion ,updation and so on) in one jsp page and how to import that in other pages of one project
question
question good afternoon sir,
i have a project in jsp which contains attendance marking.i could mark present successfully but i want to list absentees.i need a jsp code to insert absent,half day leave in to data base while user
question
Action in JSP more than one actions done in a jsp in struts2 and controll go in different java class
Have a look at the following link:
Sruts Tutorials
question
question how to list absentees in a web project ATS +jsp+mysql.if you have any idea please send me
question
question how to loop table rows in javascript ,jsp ,i want to list four images in each rows from a file
question
question how to get system current time only in jsp. i don,t need date with time,need time only
question
question i need to select some details form database and that will displays on the same jsp page which is already contains some other details ,mustn't go to next page while i am clicking a link. have any option in jsp or java
question
question i need a simple jsp + mysql code to display all absentees.and please send,if have any option to send any notification message to user who forgot to mark attendance
java question
java question I am converting a .net website into java one.
Can you...
-webapp
- common
- jsp
- css
- images
- javascripts
I have all these files.
I am trying to make header.jsp, footer.jsp, menu.jsp etc
question
question i need to mark employees as half day absent who came after 10'o clock in my web project,so i need to know who are all late entry.if you have any idea using jsp and mysql please help me
question
question Good Morning Sir,
i have two frames client and server .server have two button search and send with one text field and client have one scroll pane using java swing .If i am clicking a search button then i want
question
question Hi good morning sir,
how to send an error message to an html page after checking the database for different users.for eg:your user name...,jsp and simple java script.i don't know aj
A question
to execute a jsp program?
I have package named package1 . This contains a class named class1.
I created directory as
folder "Webroot"
The Webroot folder contains 2 subfolders named "WEB-INF" and "src" and 2 jsp pages.
The src folder contains
question
()'
what is the wrong in this ,it returned zero rows but have row like :
leave... table.
i wan't to list employees on leave in current date.please help me,send me the correct code.using mysql+jsp
question
attributes.
,very thanks but i didn't get clearly give me one example...question i am using this code in my project
response.sendRedirect("LoginForm1.jsp?msg=Username or Password is incorrect!");
but i wan't to hide
question
()'
but have one row in leavetable like-
Leave(L) 100 John 2011-06-28
i
Hi... - Java Beginners
Hi... Hi,
I want make a date fields means
user click date... code;
if any one need just mail me: fightclub_ceo@sify.com
Hi... will be displayed 26-09-2008 means dd-mm-yyyy format
if u understood my question
question
question Sir,
If i have a frame with one text field and one button search using java swing .If i am clicking a search button then i want to display the file chooser and i want to display the selected file on the text field
question paper - JSP-Servlet
question paper I am doing a project in jsp of creating question paper from question bank.
Here we have to generate a paper based on the language like c,c++.
we want to know how to generate the paper by considering two
question
question Sir,
If i have a frame with one text field and two button search and send using java swing .If i am clicking a search button then i want to display the file chooser and i want to display the selected file on the text
how can i use one dbase conection in serveral pages - JSP-Servlet
how can i use one dbase conection in serveral pages Hi!
Thanks... connections many times whereever i need in several pages. How can I avoid this....that means i want to write only one time and i want to call that in other pages | http://roseindia.net/tutorialhelp/comment/6127 | CC-MAIN-2014-42 | refinedweb | 1,920 | 72.05 |
And lucky for us, there are a few good libraries out there – for ease of use.
WIRING MPR121 SENSOR WITH ARDUINO:
Most of the MPR121 will have the following pinout:
- GND – connect that to your Arduino ground pin
- VCC – connect to the Arduino 3.3V (NOTE: 3.3V not 5V !!)
- SDA – connect to the I2C SDA pin – in Arduino Uno its A4
- SCL – connect to the I2C SCL pin – in Arduino Uno its A5
- IRQ – this is the trigger pin, and in our example will connect it to D4 on the arduino Uno
A few words on the 3.3V tolerance of the MPR121: at its core (without any support circuitry) it is a 3.3V chip. But due to the way I2C works at the hardware level, it is safe to connect SDA/SCL directly to the Arduino Uno. If you experience issues reading the data, though, you might need to use a logic level shifter.

From my personal experience I have never encountered any issues with these breakout boards. And if you buy one from Adafruit, for example, the board has extra circuitry to support 5V tolerance.
Now on the other side of the breakout board we have 12 pins that you can connect to anything you like: soda cans, fruit, aluminum foil and many other things. So plug in some jumper wires for now, and let's get some code going.
In order to communicate with the MPR121 we will need to download a library. There are a few good ones out there; me personally, I prefer the one from Bare Conductive – here is the link to the library on GitHub.
Download the ZIP file and extract the MPR121 folder out of it into the "libraries" folder of your Arduino IDE working environment. If you did this while your IDE was already open, you may have to close and reopen it in order to see the examples in the menu.
Click on "File" -> "Examples", scroll till you see "MPR121" and choose the "SimpleTouch" example from it.
SAMPLE CODE:
#include <MPR121.h>
#include <Wire.h>

#define numElectrodes 12

void setup()
{
  Serial.begin(115200);
  while(!Serial); // only needed if you want serial feedback with the
                  // Arduino Leonardo or Bare Touch Board

  Wire.begin();

  if(!MPR121.begin(0x5A)){
    Serial.println("error setting up MPR121");
    switch(MPR121.getError()){
      case NO_ERROR:
        Serial.println("no error");
        break;
      case ADDRESS_UNKNOWN:
        Serial.println("incorrect address");
        break;
      case READBACK_FAIL:
        Serial.println("readback failure");
        break;
      case OVERCURRENT_FLAG:
        Serial.println("overcurrent on REXT pin");
        break;
      case OUT_OF_RANGE:
        Serial.println("electrode out of range");
        break;
      case NOT_INITED:
        Serial.println("not initialised");
        break;
      default:
        Serial.println("unknown error");
        break;
    }
    while(1);
  }

  // pin 4 is the MPR121 interrupt on the Bare Touch Board
  MPR121.setInterruptPin(4);

  // this is the touch threshold - setting it low makes it more like a proximity trigger
  // default value is 40 for touch
  MPR121.setTouchThreshold(40);

  // this is the release threshold - must ALWAYS be smaller than the touch threshold
  // default value is 20 for touch
  MPR121.setReleaseThreshold(20);

  // initial data update
  MPR121.updateTouchData();
}

void loop()
{
  if(MPR121.touchStatusChanged()){
    MPR121.updateTouchData();
    for(int i=0; i<numElectrodes; i++){
      if(MPR121.isNewTouch(i)){
        Serial.print("electrode ");
        Serial.print(i, DEC);
        Serial.println(" was just touched");
      }
      else if(MPR121.isNewRelease(i)){
        Serial.print("electrode ");
        Serial.print(i, DEC);
        Serial.println(" was just released");
      }
    }
  }
}
Let’s go over the important parts of this code:
#include <MPR121.h>
#include <Wire.h>
#define numElectrodes 12
This includes the MPR121 library and the Wire library (needed for I2C), and defines the number of electrode pins on the board itself – it’s used later for looping.
In the setup
Serial.begin(115200); – This gets serial communication working. The usual default baud rate is 9600; I personally prefer working at 115200.
Wire.begin(); – This gets the Wire library started – it’s needed for the I2C protocol.
if(!MPR121.begin(0x5A)) – This initiates the MPR121 library. NOTE: I replaced the example's default I2C address 0x5C with 0x5A, which is the default on most of the breakout boards.
The error-handling block after MPR121.begin() will execute only if we had issues initiating the MPR121: it outputs the error type and then loops forever (while(1)), not allowing any other part of the code to run.
MPR121.setInterruptPin(4); – sets the IRQ – or trigger – pin to D4 of the Arduino.
MPR121.setTouchThreshold(40); – sets the touch threshold to 40; valid values are 0 to 255.
MPR121.setReleaseThreshold(20); – sets the release threshold to 20; valid values are 0 to 255.
NOTE: this value must always be smaller than the touch threshold.
MPR121.updateTouchData(); – updates the sensor data so we have starting values.
Now for the loop itself:
if(MPR121.touchStatusChanged()) – when this is true, we know we have new data from the sensor. The value changes according to the IRQ pin – this is why we need it.
MPR121.updateTouchData(); – We again update the data from the sensor, to get the current state of each of the input pins.
for(int i=0; i<numElectrodes; i++) – we loop over the number of electrodes we defined in the beginning.
if(MPR121.isNewTouch(i)){
Serial.print(“electrode “);
Serial.print(i, DEC);
Serial.println(” was just touched”);
We check whether there is a new touch on each of the pins. If so, we print the output.
else if(MPR121.isNewRelease(i)){
Serial.print(“electrode “);
Serial.print(i, DEC);
Serial.println(” was just released”);
}
And we also check whether there were any new releases. After you upload the code and open the serial monitor, any touch or release of the jumper wires will trigger an output of what was touched or released.
Now that we know how things work, let’s make something useful. I removed all the comments from the code, added an LED on pin 3, and added two more if statements:
I connected a touch pad made out of cardboard painted with conductive paint from Bare Conductive. The first “button” will switch the LED on once it is touched, and the second “button” will turn it off once it is released. Upload the code and we have a two-button switching mechanism.
CODE TO TURN ON AND OFF LED’s:
// Reconstructed sketch (the original listing was garbled in extraction):
// LED on pin 3, electrode 0 switches it on when touched,
// electrode 1 switches it off when released.
#include <MPR121.h>
#include <Wire.h>

#define ledPin 3

void setup()
{
  Wire.begin();
  if(!MPR121.begin(0x5A)){
    while(1);  // could not find the sensor - stop here
  }
  MPR121.setInterruptPin(4);
  MPR121.setTouchThreshold(40);
  MPR121.setReleaseThreshold(20);
  MPR121.updateTouchData();
  pinMode(ledPin, OUTPUT);
}

void loop()
{
  if(MPR121.touchStatusChanged()){
    MPR121.updateTouchData();
    if(MPR121.isNewTouch(0)){
      digitalWrite(ledPin, HIGH);  // first "button" switches the LED on
    }
    if(MPR121.isNewRelease(1)){
      digitalWrite(ledPin, LOW);   // second "button" turns it off
    }
  }
}
The library has many other very cool features which I’m not going to cover in full; I suggest you look at the library’s header (.h) file to see what the options are.
DEMO VIDEO:
A few last words on how to change the I2C address of the MPR121. The address is set by where the ADR (address) pin is connected. On most of the common breakout boards you will find on the internet, the address pin is connected to ground via a pad on the back; cutting that pad in the middle with a knife disconnects it, and tying the address pin to another line – 3.3V, SDA or SCL – selects a different address (0x5B, 0x5C or 0x5D respectively).
So, what are you going to make with this cool touch sensor? Leave your thoughts, feedback and questions below 🙂
- Make any surface touch sensitive with MPR121 and Arduino - June 6, 2018
| http://www.gadgetronicx.com/interface-touch-sensor-mpr121-arduino/ | CC-MAIN-2018-34 | refinedweb | 1,151 | 71.75 |
Carole Hochman: Carole Hochman has been designing intimate apparel for more than 30 years. What began as a young girl's dream to become a New York City fashion designer has today evolved into a global powerhouse company. Carole is the President and Creative Director of the Carole Hochman Design Group, which manufactures not only the unsurpassed Carole Hochman brand of homewear and sleepwear, but also owns OnGossamer and the licenses to several exceptional lingerie and sleepwear collections including Oscar de la Renta, Ralph Lauren, Betsey Johnson and Lilly Pulitzer.
Jamatex, Inc: a designer and importer of juniors' and ladies' sleepwear, loungewear and lingerie. They wholesale and retail online. They additionally design and import terry cloth products such as towels and robes. Address: 140 Mount Olivet Road, Oxford, Pennsylvania 19363, USA. Phone: 610 932 7050. Contact: Pam Mcdowell / internet sales
Komar (Charles Komar & Sons Inc.): established in 1908, Komar is a privately held intimate apparel company that markets and distributes women's sleepwear, loungewear, bras, and lingerie. From their earliest origins, an unwavering commitment to provide comfortable, quality products remains unchanged, echoing the words of their founder that "you only have one reputation". Komar maintains a leadership role in the sleepwear market through innovation and design. Komar is a progressive company with a global sourcing network that includes factories in fourteen countries, and state-of-the-art distribution centers. Komar is a global resource defined by a heritage of quality, innovation and exquisite design. This enduring heritage is found in all their brands.
The Lingerie Center: wholesaler of women's bras, lingerie, sleepwear and intimates. They deal in major brand names at 60-70% of the regular retail price. They are also importers of private label products. All their merchandise is first quality and brand new. If you have any first-quality closeouts in lingerie etc., they are interested in buying from you. Address: 6959 Harwin Drive Suite 109, Houston, TX 77036. Ph: 713-784-5533 Fax: 713-784-5534 sales@thelingeriecenter.com Contact: Mustafa Faizullabhoy
Monalizas Lingerie Ltd. (Canada): they sell to many boutique-type stores in the USA. They can supply a small sample order initially. Take advantage of great exchange rates on the Canadian dollar. They accept Visa or MasterCard; all shipments will be made via UPS. Address: 2283 West 41st Avenue, Vancouver V6M 2A3, Canada. Phone: 604-266-4598 sshahban@shaw.com Contact: Moe Shahban
MRT Textile Inc. Address: 350 Fifth Ave. #4908, New York, New York 10118, USA. Phone: 646-674-1073 or 646-674-1074 Fax: 646-674-1075 info@newcreationapparel.com Contact: Sigrid Peterson
Papinelle (Australia):
Spring Import Wholesale Clothing: wholesalers of junior apparel and sleepwear for women. Clothing line includes: knit tops, hooded tees, and zip hoodies. Intimate apparel line includes: chemises, nightdresses, sleepwear sets, and pajamas. Production services also available. Based out of Brooklyn, NY. Address: 991 Flushing Avenue, Brooklyn, NY 11206 Phone: (718) 381-1888 Fax: (718) 412-3223 E-mail: support@springimportusa.com
Texere Silk: Texere Silk is a business unit of NEMG, an import company out of Farmington, Connecticut. They offer apparel and home textiles under their own brand names as well as offer contract manufacturing for US companies. They strictly adhere to values of social and ecological responsibility and promote just business practices. Address: 20 Batterson Park Road, Farmington, CT 06032 Phone: 860 679 0258; 860 679 0480 Fax: 860 679 0490 E-mail: araza@nemercantile.com Contact: Ali Raza
Women's Fashion Industry Directory
If you can not find enough sleepwear wholesalers in the Apparel Search clothing wholesaler directory, you are welcome to search for additional wholesale distributors of sleepwear by searching our Wholesale Clothing Search Engine. This wholesale search directory specializes in wholesale apparel and fashion including sleepwear, nightwear, pajamas, camisoles, lingerie etc. You can find everything from flannel pajamas to sexy lingerie.
Thank you for viewing our women's sleepwear.
Clothes Guide
Silk
Lounge Pants
Silk Pajamas
Sleep Shirts
Sleep Pants
Sleepwear
Pajamas
| https://www.apparelsearch.com/wholesale_clothing/womens/womens_wholesale_sleepwear.htm | CC-MAIN-2020-16 | refinedweb | 703 | 55.34 |
Introduction:
ASP.NET MVC allows you to break/partition your large application into smaller parts. You can use an Area in ASP.NET MVC (C# and VB.NET) projects easily. But I have found an issue regarding Areas in the ASP.NET MVC 3 Tools Update when working with an ASP.NET MVC 3 (VB.NET) project. Note that I am saying ASP.NET MVC 3 Tools Update and ASP.NET MVC 3 (VB.NET), because this issue is specific to both of them. You will not find this issue in ASP.NET MVC 2 (VB.NET), ASP.NET MVC 3 (C#), ASP.NET MVC 3 RTM (not Tools Update) (VB.NET), etc., projects. In this article, I will show you the issue with Areas and also show you a quick solution.
Description:
To see this issue in action, create an ASP.NET MVC 3 (VB.NET) project. Then, add an Area to this project. Next, add a controller inside this Area. Now run the application and navigate to this controller action. You will see the 'The resource cannot be found' error screen.
Something is going wrong in this project. Now follow the same process as above, but with ASP.NET MVC 3 (C#). You will find that everything works well in the ASP.NET MVC 3 (C#) project. Now compare your ASP.NET MVC 3 (C#) Area's controller namespace with the ASP.NET MVC 3 (VB.NET) one:
ASP.NET MVC 3 (C#): the Area controller is declared in namespace MyApplication.Areas.MyArea.Controllers
ASP.NET MVC 3 (VB.NET): the Area controller ends up in namespace MyApplication
The above comparison shows that an ASP.NET MVC 3 (VB.NET) project adds the wrong namespace to a controller inside an Area. You may be wondering why the controller's namespace is wrong in ASP.NET MVC 3 (VB.NET) but correct in an ASP.NET MVC 3 (C#) project. The reason is that every route registered in an Area (in the MyAreaAreaRegistration file) has DataTokens["Namespaces"] equal to a string array with one element whose value is MyApplication.Areas.MyArea.*, which tells the framework to look in namespaces starting with ApplicationName.Areas.MyArea before looking anywhere else, and DataTokens["UseNamespaceFallback"] equal to false, which tells the framework to look only in the specified namespaces. Since the controller's namespace in the ASP.NET MVC 3 (VB.NET) project is MyApplication, it shows you the 'The resource cannot be found' screen. So the quick way to make your ASP.NET MVC 3 (VB.NET) project work is to replace the MyApplication namespace with the MyApplication.Areas.MyArea.Controllers namespace. Run your ASP.NET MVC 3 (VB.NET) application again, and you will find that everything works just fine.
Again note that this issue is only specific to ASP.NET MVC 3 (VB.NET) and ASP.NET MVC 3 Tools Update.
Summary:
With Areas in ASP.NET MVC, you can easily break/partition your large application into smaller units. In this article, I showed you an issue regarding Areas in an ASP.NET MVC 3 (VB.NET) project, along with a quick solution. Hopefully you will enjoy this article too.
Great work. Thanks for providing a work around.
| http://weblogs.asp.net/imranbaloch/archive/2011/08/16/area-issue-in-asp-net-mvc-3-vb-net.aspx | CC-MAIN-2013-20 | refinedweb | 532 | 72.83 |
S3Anonymize
Table of Contents
S3Anonymize is a tool to remove sensitive information from a record (and related records) based on configurable rules. It is primarily intended for person data, but can be re-used for any type of record.
Important Notes
- S3Anonymize removes the specified record details permanently and irrevocably (it essentially overwrites them), even if Sahana is otherwise configured to archive data instead of deleting them
- S3Anonymize requires both UPDATE and DELETE permission for the target master record, but no particular permission for any related records (except DELETE for deletion)
- S3Anonymize will audit "anonymize" for the target master record (not for the entire cascade)
Configuring Rules
Rules are configured using
s3db.configure for the target table. The rules format looks like this:
s3db.configure("pr_person",
               anonymize = {
                   # A name and title for the rule set:
                   "name": "default",
                   "title": "Names, IDs, Reference Numbers, Contact Information, Addresses",

                   # Rules how to clean up fields in the master record:
                   "fields": {"first_name": ("set", "-"),   # Set field to this value
                              "last_name": ("set", "-"),
                              "pe_label": anonymous_id,     # Callable returning a new field value
                              "date_of_birth": obscure_dob,
                              "comments": "remove",         # Set field value to None
                              },

                   # Rules for related records:
                   "cascade": [("dvr_case", {"key": "person_id",  # Foreign key in the related table
                                             "match": "id",       # Match this key of the parent table
                                             # Field rules for the related table
                                             "fields": {"comments": "remove",
                                                        },
                                             }),
                               ("pr_contact", {"key": "pe_id",
                                               "match": "pe_id",
                                               "fields": {"contact_description": "remove",
                                                          "value": ("set", ""),
                                                          "comments": "remove",
                                                          },
                                               # Delete the related records after cleanup (default False)
                                               "delete": True,
                                               }),
                               ],
                   },
               )
- in cascading rules, the key + match properties can be replaced by a lookup property to configure a callable with the signature lookup(table, rows, tablename) that returns a set of relevant record IDs in the related table
- cascading rules can be nested (selection rules refer to the table under which the cascade is listed, not to the outermost master table)
- standard field rules are:
  - "remove" sets the field value to None
  - "reset" sets the field value to the field default
  - ("set", value) sets the field value to the specified value
- field rules can also be callables with the signature rule(master_id, field, current_value) that return the new value for the field
- field rules must produce valid records (i.e. the resulting value must pass database constraints and validators)
- after applying field rules, S3Anonymize will execute update_super and onaccept like any other CRUD method
- records in related tables will additionally be deleted if "delete": True is specified (which makes sense if the field rules remove all useful information from those records anyway)
- if cascading records are to be deleted, this will additionally execute ondelete (as last step)
- the master record itself is not automatically deletable (so that the user can verify the result before deleting it manually)
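To illustrate how the field rules behave, here is a small standalone sketch. This is plain Python, not Sahana/S3 code: apply_field_rules and its handling of defaults are my own illustration of the rule formats described above, applied to an ordinary dict.

```python
def apply_field_rules(record, rules, defaults=None):
    """Apply S3Anonymize-style field rules to a plain dict (illustration only).

    "remove"        -> set the field to None
    "reset"         -> set the field to its default
    ("set", value)  -> set the field to value
    callable        -> rule(master_id, field, current_value) returns the new value
    """
    defaults = defaults or {}
    for field, rule in rules.items():
        current = record.get(field)
        if rule == "remove":
            record[field] = None
        elif rule == "reset":
            record[field] = defaults.get(field)
        elif isinstance(rule, tuple) and rule[0] == "set":
            record[field] = rule[1]
        elif callable(rule):
            record[field] = rule(record.get("id"), field, current)
    return record
```

The real implementation additionally runs update_super/onaccept and handles the cascade and deletions, which this sketch deliberately leaves out.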
Instead of a single set of rules, it is possible to configure multiple rule sets as list:
s3db.configure("pr_person",
               anonymize = [{...first rule set...},
                            {...second rule set...},
                            ],
               )
...each with its own name and title. These rule sets will later be selectable in the GUI, so that the user can choose to only remove some, but not other data from the record (see screenshot below).
GUI and REST Method
S3AnonymizeWidget
To embed S3Anonymize in the GUI, it comes with a special widget class S3AnonymizeWidget and a UI script (s3.ui.anonymize.js).
S3AnonymizeWidget produces an action button/link (with a hidden dialog) that can be embedded in the record view (e.g. in postp in place of the delete-button):
def postp(r, output): if r.record and not r.component and r.method in (None, "update", "read") and isinstance(output, dict): buttons = output.get("buttons") or {} from s3 import S3AnonymizeWidget buttons["delete_btn"] = S3AnonymizeWidget.widget(r, _class="action-btn anonymize-btn") output["buttons"] = buttons return output
The _class parameter can be used to control the appearance of the link. The widget function will automatically embed the UI dialog and script, and authorize the link.
Clicking on the link brings up a dialog like this:
In this dialog, the user can choose all or some of the configured rule sets, then confirm the action and submit the form.
Back-end Function
S3Anonymize implements the S3Method interface and can thus be configured as a REST method for a resource using s3db.set_method.
Apart from that, S3Anonymize comes with a generic cascade() method that can be used to implement other anonymize/cleanup routines:
S3Anonymize.cascade(table, record_ids, rules)
...where:
- table is the target Table
- record_ids is a set or list of record IDs to anonymize
- rules is a single dict of rules as described above
This function returns nothing, but will raise in case of an error.
Important: S3Anonymize.cascade() does not check any permissions itself, i.e. this must be implemented by the caller instead.
Attachments: s3anonymize.png (62.8 KB)
| https://eden.sahanafoundation.org/wiki/S3/S3Anonymize | CC-MAIN-2022-21 | refinedweb | 784 | 54.05 |
Most people want to do a lot more to pictures than just display them and crop them. If you do a lot of digital photography, you may want to remove the "red-eye" caused by your camera flash. You might also want to convert pictures to black and white for printing, highlight certain objects, and so on.
To do these things, you must work with the individual pixels that make up the image. The media module represents pixels using the RGB color model discussed in the sidebar on page 72. Module media provides a Color type and more than 100 predefined Color values. Several of them are listed in Figure 4.3, on page 62; black is represented as "no blue, no green, no red," white is the maximum possible amount of all three, and other colors lie somewhere in between.
The media module provides functions for getting and changing the colors in pixels (see Figure 4.9, on the next page) and for manipulating colors themselves (see Figure 4.10, on page 70).
To see how these functions are used, let's go through all the pixels in Madeleine's cropped and named picture and make it look like it was taken at sunset. To do this, we're going to remove some of the blue and some of the green from each pixel, making the picture darker and redder.2
2. We're not actually adding any red, but reducing the amount of blue and green will fool the eye into thinking we have.
Function                   Description
get_red(pixel)             Gets the red component of pixel
set_red(pixel, value)      Sets the red component of pixel to value
get_blue(pixel)            Gets the blue component of pixel
set_blue(pixel, value)     Sets the blue component of pixel to value
get_green(pixel)           Gets the green component of pixel
set_green(pixel, value)    Sets the green component of pixel to value
get_color(pixel)           Gets the color of pixel
set_color(pixel, color)    Sets the color of pixel to color

Figure 4.9: Pixel-manipulation functions
Download modules/sunset.py
import media

pic = media.load_picture('pic207.jpg')
media.show(pic)

for p in media.get_pixels(pic):
    new_blue = int(0.7 * media.get_blue(p))
    new_green = int(0.7 * media.get_green(p))
    media.set_blue(p, new_blue)
    media.set_green(p, new_green)

media.show(pic)

Some things to note:
• Color values are integers, so we need to convert the result of multiplying the blue and green by 0.7 using the function int.
• The for loop does something to each pixel in the picture. We will talk about for loops in detail in Section 5.4, Processing List Items, on page 89, but just reading the code aloud will give you the idea that it associates each pixel in turn with the variable p, extracts the blue and green components, calculates new values for them, and then resets the values in the pixel.
Try this code on a picture of your own, and see how convincing the result is.
Function                          Description
darken(color)                     Returns a color slightly darker than color
lighten(color)                    Returns a color slightly lighter than color
create_color(red, green, blue)    Returns color (red, green, blue)
distance(c1, c2)                  Returns how far apart colors c1 and c2 are

Figure 4.10: Color functions
Another use for modules in real-world Python programming is to make sure that programs don't just run but also produce the right answers. In science, for example, the programs you use to analyze experimental data must be at least as reliable as the lab equipment you used to collect that data, or there's no point running the experiment. The programs that run CAT scanners and other medical equipment must be even more reliable, since lives depend on them. As it happens, the tools used to make sure that these programs are behaving correctly can also be used by instructors to grade students' assignments and by students to check their programs before submitting them.
Checking that software is doing the right thing is called quality assurance, or QA. Over the last fifty years, programmers have learned that quality isn't some kind of magic pixie dust that you can sprinkle on a program after it has been written. Quality has to be designed in, and software must be tested and retested to check that it meets standards.
The good news is that putting effort into QA actually makes you more productive overall. The reason can be seen in Boehm's curve in Figure 4.11, on the following page. The later you find a bug, the more expensive it is to fix, so catching bugs early reduces overall effort.
Most good programmers today don't just test their software while writing it; they build their tests so that other people can rerun them months later and a dozen time zones away. This takes a little more time up front but makes programmers more productive overall, since every hour invested in preventing bugs saves two, three, or ten frustrating hours tracking bugs down.
Figure 4.11: Boehm's curve
One popular testing library for Python is called Nose, which can be downloaded for free at. To show how it works, we will use it to test our temperature module. To start, create a new Python file called test_temperature.py. The name is important: when Nose runs, it automatically looks for files whose names start with the letters test_. The second part of the name is up to us—we could call it test_hagrid.py if we wanted to—but a sensible name will make it easier for other people to find things in our code.
Every Nose test module should contain the following:
• Statements to import Nose and the module to be tested
• Functions that actually test our module
• A function call to trigger execution of those test functions
Like the name of the test module, the names of the test functions must start with test_. Using the structure outlined earlier, our first sketch of a testing module looks like this:
Download modules/structure.py
import nose
import temperature

def test_to_celsius():
    '''Test function for to_celsius'''
    pass # we'll fill this in later

def test_above_freezing():
    '''Test function for above_freezing.'''
    pass # we'll fill this in too

if __name__ == '__main__':
    nose.runmodule()
In the red-green-blue (or RGB) color system, each pixel in a picture has a certain amount of the three primary colors in it, and each color component is specified by a number in the range 0-255 (which is the range of numbers that can be represented in a single 8-bit byte).
By tradition, RGB values are represented in hexadecimal, or base-16, rather than in the usual base-10 decimal system. The "digits" in hexadecimal are the usual 0-9, plus the letters A-F (or a-f). This means that the number after 9 is not 10, but A; the number after A is B, and so on, up to F, which is followed by 10. Counting continues to 1F, which is followed by 20, and so on, up to FF (which is 15×16 + 15, or 255, in decimal).
An RGB color is therefore six hexadecimal digits: two for red, two for green, and two for blue. Black is therefore #000000 (no color of any kind), while white is #FFFFFF (all colors saturated), and #008080 is a bluish-green (no red, half-strength green, half-strength blue).
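Python's built-in int function accepts a base argument, so the sidebar's base-16 arithmetic is easy to check. The hex_to_rgb helper below is my own small illustration (it is not part of the media module) for splitting a color string like those above into its components:

```python
def hex_to_rgb(color):
    """Split a color like '#008080' into its (red, green, blue) parts."""
    red = int(color[1:3], 16)     # two hex digits for red
    green = int(color[3:5], 16)   # two hex digits for green
    blue = int(color[5:7], 16)    # two hex digits for blue
    return (red, green, blue)
```

For example, hex_to_rgb('#008080') gives no red and half-strength green and blue, matching the bluish-green described above.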
For now, each test function contains nothing except a docstring and a pass statement. As the name suggests, this does nothing—it's just a placeholder to remind ourselves that we need to come back and write some more code.
If you run the test module, the output starts with two dots to say that two tests have run successfully. (If a test fails, Nose prints an "F" instead to attract attention to the problem.) The summary after the dashed line tells us that Nose found and ran two tests, that it took less than a millisecond to do so, and that everything was OK:
Download modules/structure.out

..
----------------------------------------------------------------------
Ran 2 tests in 0.002s

OK
Two successful tests isn't surprising, since our functions don't actually test anything yet. The next step is to fill them in so that they actually do something useful. The goal of testing is to confirm that our code works properly; for to_celsius, this means that given a value in Fahrenheit, the function produces the corresponding value in Celsius.
It's clearly not practical to try every possible value—after all, there are a lot of real numbers. Instead, we select a few representative values and make sure the function does the right thing for them.
For example, let's make sure that the round-off version of to_celsius from Section 4.2, Providing Help, on page 59 returns the right result for two reference values: 32 Fahrenheit (0 Celsius) and 212 Fahrenheit (100 Celsius). Just to be on the safe side, we should also check a value that doesn't translate so neatly. For example, 100 Fahrenheit is 37.777... Celsius, so our function should return 38 (since it's rounding off).
We can execute each test by comparing the actual value returned by the function with the expected value that it's supposed to return. In this case, we use an assert statement to let Nose know that to_celsius(100) should be 38:
Download modules/assert.py
import nose
from temp_with_doc import to_celsius

def test_freezing():
    '''Test freezing point.'''
    assert to_celsius(32) == 0

def test_boiling():
    '''Test boiling point.'''
    assert to_celsius(212) == 100

def test_roundoff():
    '''Test that roundoff works.'''
    assert to_celsius(100) == 38

if __name__ == '__main__':
    nose.runmodule()
When the code is executed, each test will have one of three outcomes:
• Pass. The actual value matches the expected value.
• Fail. The actual value is different from the expected value.
• Error. Something went wrong inside the test itself; in other words, the test code contains a bug. In this case, the test doesn't tell us anything about the system being tested.
Run the test module; the output should be as follows:
Download modules/outcome.out
...
----------------------------------------------------------------------
Ran 3 tests in 0.002s

OK
As before, the dots tell us that the tests are passing.
Just to prove that Nose is doing the right thing, let's compare to_celsius's result with 37.8 instead:
Download modules/assert2.py
import nose
from temp_with_doc import to_celsius

def test_to_celsius():
    '''Test function for to_celsius'''
    assert to_celsius(100) == 37.8

if __name__ == '__main__':
    nose.runmodule()
This causes the test case to fail, so the dot corresponding to it is replaced by an "F," an error message is printed, and the number of failures is listed in place of OK:
Download modules/fail.out
F
======================================================================
FAIL: Test function for to_celsius
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/python25/lib/site-packages/nose/case.py", line 202, in runTest
    self.test(*self.arg)
  File "assert2.py", line 6, in test_to_celsius
    assert to_celsius(100) == 37.8
AssertionError

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (failures=1)
The error message tells us that the failure happened in test_to_celsius on line 6. That is helpful, but the reason for failure can be made even clearer by adding a description of what is being tested to each assert statement.
Download modules/assert3.py
import nose
from temp_with_doc import to_celsius

def test_to_celsius():
    '''Test function for to_celsius'''
    assert to_celsius(100) == 37.8, 'Returning an unrounded result'

if __name__ == '__main__':
    nose.runmodule()
That message is then included in the output:
Download modules/fail_comment.out
F
======================================================================
FAIL: Test function for to_celsius
----------------------------------------------------------------------
Traceback (most recent call last):
  File "c:\Python25\Lib\site-packages\nose\case.py", line 202, in runTest
    self.test(*self.arg)
  File "assert3.py", line 6, in test_to_celsius
    assert to_celsius(100) == 37.8, 'Returning an unrounded result'
AssertionError: Returning an unrounded result

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (failures=1)
Having tested test_to_celsius with one value, we need to decide whether any other test cases are needed. The description of that test case states that it is a positive value, which implies that we may also want to test our code with a value of 0 or a negative value. The real question is whether our code will behave differently for those values. Since all we're doing is some simple arithmetic, we probably don't need to bother; in future chapters, though, we will see functions that are complicated enough to need several tests each.
Let's move on to test_above_freezing. The function it is supposed to test, above_freezing, is supposed to return True for any temperature above freezing, so let's make sure it does the right thing for 89.4. We should also check that it does the right thing for a temperature below freezing, so we'll add a check for -42.
Finally, we should also test that the function does the right thing for the dividing case, when the temperature is exactly freezing. Values like this are often called boundary cases, since they lie on the boundary between
two different possible behaviors of the function. Experience shows that boundary cases are much more likely to contain bugs than other cases, so it's always worth figuring out what they are and testing them.
The test module, including comments, is now complete:
Download modules/test_freezing.py
import nose
from temp_with_doc import above_freezing

def test_above_freezing():
    '''Test function for above_freezing.'''
    assert above_freezing(89.4), 'A temperature above freezing.'
    assert not above_freezing(-42), 'A temperature below freezing.'
    assert not above_freezing(0), 'A temperature at freezing.'

if __name__ == '__main__':
    nose.runmodule()
When we run it, its output is as follows:
Download modules/test_freezing.out

.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
Whoops—Nose believes that only one test was run, even though there are three assert statements in the file. The reason is that as far as Nose is concerned, each function is one test. If some of those functions want to check several things, that's their business. The problem with this is that as soon as one assertion fails, Python stops executing the function it's in. As a result, if the first check in test_above_freezing failed, we wouldn't get any information from the ones after it. It is therefore generally a good idea to write lots of small test functions, each of which only checks a small number of things, rather than putting dozens of assertions in each function.
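For example, the three checks above could be split so that each function verifies exactly one thing; then a failure in one check no longer hides the others. The sketch below uses a minimal stand-in for above_freezing (my own, so the example runs on its own, assuming temperatures in Celsius):

```python
def above_freezing(celsius):
    # Minimal stand-in for the book's temperature module:
    # freezing is 0 degrees Celsius.
    return celsius > 0

def test_above_freezing_warm():
    '''A temperature above freezing.'''
    assert above_freezing(89.4), 'A temperature above freezing.'

def test_above_freezing_cold():
    '''A temperature below freezing.'''
    assert not above_freezing(-42), 'A temperature below freezing.'

def test_above_freezing_boundary():
    '''The boundary case: a temperature exactly at freezing.'''
    assert not above_freezing(0), 'A temperature at freezing.'
```

Nose would now report three tests instead of one, and a failure in the boundary case would not stop the warm and cold checks from running.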
| https://www.pythonstudio.us/computer-science/pixels-and-colors.html | CC-MAIN-2019-51 | refinedweb | 2,367 | 62.68 |
There is an awful lot of confusion and hype about threads. The enthusiasm for threads is largely driven by the growth of multi-core hardware, while the headaches are a natural consequence of the difficulties posed by threaded programs. Throughout this article, it is important to remember that though the basic concept of a thread is quite simple, threaded programming is very hard. Even when a solution to a thread problem seems obvious, you must remain skeptical. The best practice for threaded programming is relentless scrutiny and careful testing - do not rely solely on your intuition, since it is very often wrong.
Threads provide two essential services: parallelism and concurrency. Of the two, parallelism is by far the simpler, and it makes for the best starting place. The basic idea behind parallelism is that sometimes we can get things done faster by doing several of them at once. We do this all the time in our day-to-day lives. As an example, say you are cooking a meal of fish and rice. You are very hungry, so you want to eat as soon as possible. Therefore, you start the rice pot and put the fish in the oven so that both get cooked at the same time. This is far faster than first cooking the fish and then cooking the rice:
<----Cook Fish----> + <-----Cook Rice----->
-- or --
<----Cook Fish---->
<-----Cook Rice----->
This basic strategy also works in programming. Suppose you want to sum two large arrays of integers. Here is an obvious first attempt:
void sum(inout int[] dest, int[] a, int[] b)
{
for(int i=0; i<a.length; i++)
{
dest[i] = a[i] + b[i];
}
}
Of course, it really doesn't matter what order we sum the integers in. We could just as easily write something like this as well:
void sum(inout int[] dest, int[] a, int[] b)
{
// Iterate in reverse order
for(int i=a.length-1; i>=0; i--)
{
dest[i] = a[i] + b[i];
}
}
In fact, we could even sum up each integer at the same time. Here is how we could do it with threads:
void sum(inout int[] dest, int[] a, int[] b)
{
Thread first_half = new Thread(
{
for(int i=0; i<a.length / 2; i++)
{
dest[i] = a[i] + b[i];
}
});
Thread second_half = new Thread(
{
for(int i=a.length / 2; i<a.length; i++)
{
dest[i] = a[i] + b[i];
}
});
//Sum the arrays
first_half.start();
second_half.start();
//Wait for threads to complete
first_half.join();
second_half.join();
}
In theory, this version should run twice as fast as either of the first two examples, because we are summing both halves of the array at the same time. In practice, however, you will probably need a multi-core system to see any noticeable difference. Even then, there are still costs associated with creating and starting a thread, which will affect the overall performance of this method.
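The same structure can be sketched in Python (a hypothetical translation, not from the article). Note that in CPython the global interpreter lock prevents a real speedup for pure-Python loops; the point here is only to show the shape of splitting independent work across two threads.

```python
# Each thread sums one half of the arrays, then we join both threads.
import threading

def parallel_sum(dest, a, b):
    mid = len(a) // 2

    def sum_range(lo, hi):
        for i in range(lo, hi):
            dest[i] = a[i] + b[i]

    first_half = threading.Thread(target=sum_range, args=(0, mid))
    second_half = threading.Thread(target=sum_range, args=(mid, len(a)))
    # Sum the arrays
    first_half.start()
    second_half.start()
    # Wait for threads to complete
    first_half.join()
    second_half.join()

a = list(range(10))
b = list(range(10, 20))
dest = [0] * 10
parallel_sum(dest, a, b)
print(dest)   # [10, 12, 14, 16, 18, 20, 22, 24, 26, 28]
```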
Parallelism works here because all of the data is independent. Now suppose that sum looked like this:
void sum(inout int[] dest, int[] a, int[] b)
{
for(int i=1; i<a.length; i++)
{
dest[i] = dest[i-1] + a[i] + b[i];
}
}
In this case the order does matter, and it is no longer easily parallelizable. In general, whenever you can think of some way to split a problem into a series of independent sub-problems, you can parallelize without too much trouble.
Of course, parallelism is only the first part of threading. The hard part is concurrency. It is very easy to confuse these two concepts, so pay close attention. Concurrency occurs when a program contains multiple states which communicate. Throughout our day we run into problems with concurrency all the time. Whenever we need to communicate or coordinate with other people, the basic principles of concurrency are there working to make our lives difficult.
A familiar concurrent activity is juggling. While juggling, each ball is in one of three states, held, thrown or falling. To keep the balls moving, each of your hands needs to be in one of several states. Either you are waiting to catch a ball, holding a ball or throwing a ball into the air. To make things even more complicated, you need to move your left and right hands independent of each other, yet still coordinate their movements.
(The original ASCII diagram of a ball passing between two hands did not survive formatting and is omitted here.)
Virtually anyone with hands can juggle one ball. Two is a bit more difficult, but still not too bad. Three is impossible without practice. As the number of balls increases, so does the complexity of the hand motions. The deft manipulations required to keep many objects falling through the air constantly are very impressive. Each ball is moving in its own direction, and the juggler must watch them all.
Of course, if we want to carry multiple balls there are easier ways to do it than juggling. We could simply carry them one at a time and be done with it. It may not be as flashy, and it may not be as fast, but it is definitely safer. This is often the case with concurrency in programs.
In another everyday example, suppose you are taking two classes - one on mathematics, and the other on chemistry. Each class has a textbook which you must read by some time in the near future. One solution is to simply read the mathematics textbook all the way through first, then finish the chemistry text. The problem with this is that you might have outstanding assignments in both classes, so you would miss some homework and perhaps receive a bad grade in chemistry.
<-----Math--|--> <------Chemistry----->
|
Chemistry Homework Due
You could also try the opposite, read your chemistry then your mathematics, but then you would be forgetting your math homework.
<----Chemistry--|-><-----Math----->
|
Math Homework Due
Neither of these situations is desirable. In the cooking example, we were able to solve our problem by doing two things simultaneously. Here that isn't an option (unless you can somehow figure out how to use one eye to read the chemistry book and the other to read the math book). The solution most people use is to split up the reading into parts. Typically your early homework isn't going to require you to have read the textbook completely. So you read, say, 20 pages of chemistry, then 20 pages of math, until you are done. You must be able to remember your current page in each book for this trick to work, but fortunately that is not too difficult in this case. Your new reading schedule now looks something like this:
<-Chem-><-Math->|<-Chem-><-Math-> ...
|
Homework Due
You are effectively reading the two texts concurrently - but not in parallel. This type of concurrency is usually very helpful for structuring programs. Unlike the juggling example, here we are not going out of our way to make things more complicated. Programming concurrent programs with threads is a bit like juggling, while this second example is like something completely different: Fibers.
Fibers are like threads in that they execute some function independently of the rest of the program. The key difference is that fibers are not parallel. You get to control when and where they are executed. Here is an example:
private import
tango.core.Thread,
tango.io.Stdout;
void main()
{
//Create a new fiber using the inline delegate syntax.
Fiber f = new Fiber(
{
Stdout.print("-- I am here").newline;
Fiber.yield();
Stdout.print("-- Now I am here!").newline;
});
Stdout.print("Running f once").newline;
f.call();
Stdout.print("Running f again").newline;
f.call();
}
This is basically a hello world program. When you run it, it will print out the following:
Running f once
-- I am here
Running f again
-- Now I am here
For those who are well versed in functional programming, a Fiber is sort of like "call-with-current-continuation". When you execute the fiber, it runs until it yields. The next time you run it, it will pick up right where it left off. Here is another more sophisticated example:
void main()
{
Fiber f = new Fiber(
{
for(int i=1; i<10000; i++)
{
Stdout.print("i = ", i).newline;
Fiber.yield();
}
});
Stdout("First call").newline;
f.call();
Stdout("Second call").newline;
f.call();
Stdout("Third call").newline;
f.call();
}
Which will print out:
First call
i = 1
Second call
i = 2
Third call
i = 3
At first this may seem like a strange concept, but there really isn't much to it. If it seems almost too simple, you've probably figured it out.
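For comparison, Python's generators provide the same "pick up where you left off" behavior as the D fibers above (a sketch for illustration; the article's own examples use Tango's Fiber class).

```python
# A generator suspends at yield and resumes on the next call to next(),
# much like Fiber.yield() and Fiber.call() in the D examples.
def fiber():
    print("-- I am here")
    yield           # like Fiber.yield(): suspend here, resume later
    print("-- Now I am here!")

f = fiber()
print("Running f once")
next(f)             # runs until the first yield
print("Running f again")
try:
    next(f)         # resumes after the yield and runs to the end
except StopIteration:
    pass
```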
Suppose you have a set of objects, like a list, and you want to iterate over all of them. How do you do it?
This seemingly vague question has at least a hundred different answers in each programming language. Classically speaking, we call this "enumeration". Loops and recursion are just two of the ways we can enumerate things, but there are always others. If we want to think in object-oriented terms, we could rephrase this problem using design patterns: what design pattern do you use to traverse the elements of a container? There are two classic patterns that solve this problem, iterators and visitors. Of the two, iterators work more like loops and visitors work more like recursion. We shall look at both of these methods separately before examining an even better method based on concurrency. But first, let's review:
Iterators are by far the most common of the three strategies we are going to examine. C++'s STL and Java's container libraries widely employ this technique. Going back to the list example, suppose you wish to provide code to enumerate all the elements in the list sequentially. You might write something like the following:
class Node
{
Node next;
int value;
}
public class ListIterator
{
private Node cur;
this(Node n)
{ cur = n;
}
public bool done()
{ return cur is null;
}
public void next()
{
if(!done)
{
cur = cur.next;
}
}
public int value()
{
if(done)
{
return -1;
}
return cur.value;
}
}
public class List
{
private Node head;
public ListIterator getIterator()
{
return new ListIterator(head);
}
...
}
In this situation, we provide one simple interface for performing all sorts of operations on a list. This is good object oriented design, and it works well in many situations. A common problem is to test if a given container has some object inside it. To do this with a linked list, you would typically iterate over every element in the list until you either reached the end of the container or found what you were looking for. Using the iterator we just wrote, this is simple:
// Tests if the list l contains an element with value n.
bool contains(List l, int n)
{
for(ListIterator it = l.getIterator; !it.done; it.next)
{
if(it.value == n)
{
return true; // found it - no need to continue iteration
}
}
return false; //Not in the list.
}
Of course, iterators aren't perfect. While their interface is very clean, the code itself is usually very messy. In this example, the choice of a linked list was quite deliberate. What if we wanted to iterate over something more complicated? Suppose you had a binary tree, and you wanted to do an inorder traversal of each node, how would you write an inorder-tree-iterator? This is a very difficult problem, and there aren't really any good solutions.
Fortunately, this is where the second design pattern comes in. Visitors are a nice way to travel through the elements of a container without writing complicated and inefficient iterator code. This is a very common technique in the functional programming world, where it is often known as iteration by first-class functions. The basic idea is that we pass a function pointer (a delegate) to the container object, which then recursively applies the function to each element in the container. To see how it works, let's write a binary tree inorder traversal using visitors:
public class BinaryTree
{
private BinaryTree left, right;
private int value;
public void inorderTraversal(void delegate(int) visitor)
{
//Visit left tree
if(left !is null)
left.inorderTraversal(visitor);
//Apply the visitor
visitor(value);
//Visit right tree
if(right !is null)
right.inorderTraversal(visitor);
}
...
}
And to see how this works, here is a simple test function to sum all the elements in the tree:
int sum_tree(BinaryTree tree)
{
int s = 0;
tree.inorderTraversal((int v) { s += v; });
return s;
}
This code for the traversal is very simple, but as before it comes at a price. While our implementation has gotten simpler, the interface is much more limited than before. What if we tried to implement "contains" as we did for the list, only this time operating on trees? Let's take a look at the results (and it won't be pretty...)
bool contains(BinaryTree tree, int n)
{
bool r = false;
tree.inorderTraversal((int v)
{
if(n == v)
r = true;
});
return r;
}
This is not very efficient at all. If n occurs early during the traversal, it is impossible to abort. The poor computer is stuck visiting each node in the tree, even though it is a total waste of time. Of course, the visitor could be modified to support some early-out escape code, but that doesn't really solve the underlying problem. What if you wanted to skip the first 3 elements of the tree? Or what if you wanted to check only every other element? The contingencies and additional edge cases just stack up.
Of these two patterns, neither is perfect. Iterators suffer from a clumsy implementation, while visitors have a terrible interface. If we look for the source of the problem, we can see why this is happening. With iterators, it is the user that is in charge, while with visitors it is the container. Here is a diagram which illustrates the chain of execution:
Iterators:
Client ----> resume +---Client---->
\ /
call \ / return
*---Server--*
Visitors:
*---Client---*
call / \ return
/ \
--Server--> resume +--Server---->
Whoever gets control of the stack, gets to write clean code. The other guy is stuck hacking out horrible state machines and fake stacks to get the job done. The solution to this problem is to simply give both sides their own stack - that way everyone gets to write clean code. With Fibers, this is possible. Let's look at that tree example one more time:
//to be written...
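The original text leaves this example unwritten. As a stand-in, here is a hypothetical sketch using Python generators in place of D fibers: the traversal keeps its own (implicit) stack, so both sides get to write clean code.

```python
# Generators give the traversal its own resumable "stack", so the
# container code stays recursive and clean, while the client keeps
# control and can abort early — the best of both patterns.
class BinaryTree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def inorder(self):
        # Clean recursive traversal, as in the visitor version...
        if self.left is not None:
            yield from self.left.inorder()
        yield self.value
        if self.right is not None:
            yield from self.right.inorder()

def contains(tree, n):
    # ...and a clean client loop that can return early, as with iterators.
    for v in tree.inorder():
        if v == n:
            return True
    return False

tree = BinaryTree(2, BinaryTree(1), BinaryTree(4, BinaryTree(3)))
print(list(tree.inorder()))   # [1, 2, 3, 4]
print(contains(tree, 3))      # True
```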
Concurrency can be the source of various problems.
If a counter is modified from more than one thread, problems can occur. The problem is that most actions are not atomic; they can be interrupted by a thread switch. For example, incrementing a counter means loading the value from memory into a CPU register, incrementing it, and storing it back into memory. If another thread runs before the first one stores the modified value, the interrupting thread can also load, modify and store the value. When the first thread resumes, it overwrites the value in memory, so the modification made by the second thread is lost.
What makes this problem hard is that it occurs very seldom. There is only a very short time slice in which one thread interrupting the other will cause this data corruption.
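The lost-update scenario and its standard fix can be sketched in Python (a hypothetical example): guarding the load/modify/store sequence with a lock makes the increment effectively atomic with respect to the other threads.

```python
# Without the lock, concurrent load/add/store sequences can interleave
# and lose updates; with it, each increment is one critical section.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # protects the load / modify / store sequence
            counter += 1

threads = [threading.Thread(target=increment, args=(50000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 200000 -- no updates lost
```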
If locking mechanisms are used to protect against concurrency problems, this can lead to deadlocks. A deadlock is a scenario where one lock waits for another lock, which in turn waits on the first. In a deadlock scenario it can be very hard to find out what is happening and why some threads are blocked.
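One common way to avoid the deadlock just described (a sketch, not from the original text) is to always acquire locks in a single global order, so no two threads can each hold the lock the other needs.

```python
# Acquiring both locks in a fixed order (here, ordered by id) prevents
# the circular wait that causes deadlock, regardless of caller order.
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer_ordered(first, second):
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            pass  # ...critical section touching both resources...

t1 = threading.Thread(target=transfer_ordered, args=(lock_a, lock_b))
t2 = threading.Thread(target=transfer_ordered, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print("no deadlock")
```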
The D programming language supports the synchronized keyword. Sometimes this is not sufficient to solve a programming problem. For these kinds of problems, see the package tango.core.sync.
Docs: Mutex
A mutex is a lock with boolean state. The keyword synchronized can do a mutual exclusion for parts of code on a given object (an object reference is used as a mutex), but the lock cannot be taken or released manually.
Mutex offers the lock(), unlock() and a tryLock() method.
A Mutex can also be used with the synchronized keyword.
{
mutex.lock();
scope(exit) mutex.unlock();
// ...
}
is equivalent to this syntax
synchronized(mutex){
// ...
}
Note: On Windows this implementation used CriticalSection on Posix pthread_mutex is used.
Docs: Semaphore
What is it?
In general, a semaphore is like a mutex, but instead of a boolean variable its state is an integer counter. The semaphore can be initialized with a number of "permits".
A call to wait() decrements the permit counter if it is greater than zero. If the counter is already zero (no permits available), the call to wait() will block until a permit will be available. This is equivalent to the lock() call on the Mutex.
Releasing a permit is done with the notify() methods. The call to notify() does not block. If there are threads waiting for a permit, one of them (blocking in the wait() call) will be awaken. The notify() is equivalent to unlock() in case of Mutex.
There are also the variants wait(Interval), which takes a timeout, and tryWait(), which never blocks and instead returns whether a permit was taken.
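The permit-counter behavior can be demonstrated with Python's semaphore (a sketch for comparison; the article describes Tango's API, where acquire() corresponds to wait() and release() to notify()).

```python
# A semaphore is a counter, not a boolean: it can be taken as many
# times as there are permits before any caller would block.
import threading

sem = threading.Semaphore(2)   # two permits available

assert sem.acquire(blocking=False) is True    # permit 1 taken
assert sem.acquire(blocking=False) is True    # permit 2 taken
assert sem.acquire(blocking=False) is False   # no permits left: would block
sem.release()                                 # give one permit back
assert sem.acquire(blocking=False) is True    # ...and it can be taken again
print("semaphore behaves as a counter, not a boolean")
```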
Docs: Condition
A condition variable is a mechanism that allows the testing and signalling of a condition from within a synchronized block or while holding a mutex lock.
The condition variable is associated with a mutex.
import tango.core.sync.Mutex;
import tango.core.sync.Condition;
Mutex m = new Mutex;
Condition c = new Condition(m);
synchronized(m) {
(new Thread({
doSomething(); // Executes while wait() blocks
c.notify(); // Causes one (in this case, the only) caller of wait() to return
// ...
})).start();
c.wait(); // atomically m.unlock(), block until notify or notifyAll, m.lock() before returning.
}
The wait is blocked until c.notify(), or c.notifyAll() is called.
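The same wait/notify pattern looks like this with Python's Condition (a sketch; Tango's Condition, shown above, has an analogous API).

```python
# wait() atomically releases the lock, blocks until notify(), and
# re-acquires the lock before returning -- as in the Tango example.
import threading

m = threading.Condition()
result = []

def worker():
    with m:
        result.append(42)   # executes while the main thread waits
        m.notify()          # wake one caller of wait()

with m:
    threading.Thread(target=worker).start()
    while not result:
        m.wait()            # releases m, blocks, re-acquires m
print(result[0])   # 42
```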
Docs: Barrier
A barrier across which threads may only travel in groups of a specific size.
Docs: ReadWriteMutex
A primitive for maintaining shared read access and mutually exclusive write access.
Docs: tango.core.Thread
What is TLS?
The support in Thread is untyped and offers these functions:
class Thread{
// ...
static uint createLocal(){ ... }
static void deleteLocal( uint key ){ ... }
static void* getLocal( uint key ){ ... }
static void* setLocal( uint key, void* val ){ ... }
// ...
}
First, it is necessary to create a storage slot with createLocal(). The result is the key, which can be used in every thread to access the thread-local storage. Upon creation this storage is available in every thread, but it is empty, holding a null value.
From then on, every thread can use Thread.setLocal(key, val) to store untyped pointers (void*) and retrieve them with Thread.getLocal(key). Each thread stores its own value.
For releasing the resources used for TLS, call Thread.deleteLocal(key).
There is also another way to use TLS, by using the ThreadLocal(T) class template. With this class, TLS can be used in a typed manner.
import tango.core.Thread;
alias ThreadLocal!(TLSData) MyData;
MyData data = new MyData( getSomeInitialTLSData() );
// retrieve
TLSData d = data.val();
// store
data.val( getSomeTLSData() );
Note: Only one instance of ThreadLocal is used for all threads.
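Python's threading.local offers the same per-thread storage idea in a typed-feeling way, much like the ThreadLocal template above (a sketch for comparison, not part of Tango).

```python
# One threading.local instance is shared by all threads, but each
# thread sees only the value it stored -- as with ThreadLocal.
import threading

tls = threading.local()
seen = {}

def worker(name):
    tls.value = name            # each thread stores its own value...
    seen[name] = tls.value      # ...and reads back only what it stored

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b", "c")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(seen)   # {'a': 'a', 'b': 'b', 'c': 'c'}
```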
Docs: ThreadGroup
A ThreadGroup is a simple container for threads. Besides add(), remove() and create(), it offers opApply() to allow a foreach over all threads, and it also has a joinAll() method.
auto tg = new ThreadGroup();
tg.create({
// work load for first thread
});
tg.create({
// work load for second thread
});
tg.joinAll();
Methods, Blocks, & Sorting
Practice Makes Perfect
You won’t become a Master Method Maker ‘til you make a mess of methods. (Say that three times fast.)
def by_five?(n)
  return n % 5 == 0
end
The example above is just a reminder on how to define a method.
Instructions
1.
Define a greeter method that takes a single string parameter, name, and returns a string greeting that person. Make sure to use return and don't use puts.
2.
Define a by_three? method that takes a single integer parameter, number, and returns true if that number is evenly divisible by three and false if not.
Linear search is the simplest searching technique. In this technique, the items are checked one by one. The procedure is applicable even to an unsorted data set. Linear search is also known as sequential search. It is called linear because its time complexity is of the order of n, i.e. O(n).
Input: A list of data: 20 4 89 75 10 23 45 69, and the search key 10
Output: Item found at location: 4
linearSearch(array, size, key)
Input − An sorted array, size of the array and the search key
Output − location of the key (if found), otherwise wrong location.
Begin
   for i := 0 to size - 1 do
      if array[i] = key then
         return i
   done
   return invalid location
End
#include<iostream>
using namespace std;

int linSearch(int array[], int size, int key) {
   for(int i = 0; i < size; i++) {
      if(array[i] == key)   //search key in each place of the array
         return i;          //location where key is found for the first time
   }
   return -1;   //when the key is not in the list
}

int main() {
   int n, searchKey, loc;
   cout << "Enter number of items: ";
   cin >> n;
   int arr[n];   //create an array of size n
   cout << "Enter items: " << endl;
   for(int i = 0; i < n; i++) {
      cin >> arr[i];
   }
   cout << "Enter search key to search in the list: ";
   cin >> searchKey;
   if((loc = linSearch(arr, n, searchKey)) >= 0)
      cout << "Item found at location: " << loc << endl;
   else
      cout << "Item is not found in the list." << endl;
}
Enter number of items: 8
Enter items:
20 4 89 75 10 23 45 69
Enter search key to search in the list: 10
Item found at location: 4
I was wondering if this would be the best way to fix this. The
alternative would be to turn off the XML declaration and add an html
type character encoding metadata statement.
If you think adding the namespace is the best way to go I'm happy to
do a patch, but it'd save me time if you could tell me where the this
part of the output document is being generated.
paul
On 08/03/06, Thorsten Scherler <thorsten.scherler@wyona.com> wrote:
> El mié, 08-03-2006 a las 13:09 +0930, Paul Bolger escribió:
> >.
>
> Can you provide a patch?
>
> salu2
> --
> Thorsten Scherler
> COO Spain
> Wyona Inc. - Open Source Content Management - Apache Lenya
>
> thorsten.scherler@wyona.com thorsten@apache.org
>
colorama 0.2.7
Cross-platform colored terminal text.
- Download and docs:
-
- Development:
-
- Discussion group:
Makes ANSI escape character sequences, for producing colored terminal text and cursor positioning, work under MS Windows, where they would otherwise show up as gobbledygook in your output. (See also the related Termcolor package.)
Usage
Initialisation
Applications should initialise Colorama using:
from colorama import init
init()
If you are on Windows, the call to init() will start filtering ANSI escape sequences out of any text sent to stdout or stderr, and will replace them with equivalent Win32 calls.
Calling init() has no effect on other platforms (unless you request other optional functionality; see keyword args below). The intention is that applications can call init() unconditionally on all platforms, after which ANSI output should just work.
Status & Known Problems
I've personally tested it on WinXP (CMD, Console2), Ubuntu (gnome-terminal, xterm), and OSX.
Some presumably valid ANSI sequences aren’t recognised (see details below) but to my knowledge nobody has yet complained about this. Puzzling.
See outstanding issues and wishlist at the project's issue tracker.
Recognised ANSI Sequences
ANSI sequences generally take the form ESC [ params command; the spaces have just been inserted here to make it easy to read. For example:
# clear the screen
ESC [ mode J    # clear the screen. Only mode 2 (clear entire screen)
                # is supported. It should be easy to add other modes,
                # let me know if that would be useful.
Multiple numeric params to the 'm' command can be combined into a single sequence. Other ANSI sequences are currently neither processed nor stripped. It would be cool to add them though. Let me know if it would be useful for you, via the issues on Google Code.
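The raw sequences colorama translates can be emitted directly on any ANSI-capable terminal. The following is an illustrative sketch, not part of colorama itself; the parameter values (36, 45, 1) are arbitrary example codes.

```python
# Build an SGR ("Select Graphic Rendition") sequence: ESC [ p1 ; p2 ... m
CSI = "\x1b["                      # ESC [ introduces an ANSI sequence

def sgr(*params):
    # Combine multiple numeric params for the 'm' command into one sequence.
    return CSI + ";".join(str(p) for p in params) + "m"

colored = sgr(36, 45, 1) + "bright cyan on magenta" + sgr(0)
print(repr(colored))   # repr() makes the escape characters visible
```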
Development.
Created by Jonathan Hartley, tartley@tartley.com
Thanks
- Downloads (All Versions):
Filtering with Multiple Genre List
Remember that when we had only one Genre per Movie, it was easy to quick filter, by adding a [QuickFilter] attribute to GenreId field.
Let's try to do similar in MovieColumns.cs:
[ColumnsScript("MovieDB.Movie")] [BasedOnRow(typeof(Entities.MovieRow))] public class MovieColumns { //... [Width(200), GenreListFormatter, QuickFilter] public List<Int32> GenreList { get; set; } }
As soon as you type a Genre into Genres you'll have this error:
As of Serenity 2.6.3, LinkingSetRelation will automatically handle equality filter for its field, so you won't get this error and it will just work. Anyway, it's still recommended to follow steps below as it is a good sample for defining custom list requests and handling them when required.
ListHandler tried to filter by GenreList field, but as there is no such column in database, we got this error.
So, now we have to handle it somehow.
Declaring MovieListRequest Type
As we are going to do something non-standard, e.g. filtering by values in a linking set table, we need to prevent ListHandler from filtering itself on GenreList property.
We could process the request Criteria object (which is similar to an expression tree) using a visitor and handle GenreList ourself, but it would be a bit complex. So i'll take a simpler road for now.
Let's take a subclass of standard ListRequest object and add our Genres filter parameter there. Add a MovieListRequest.cs file next to MovieRepository.cs:
namespace MovieTutorial.MovieDB { using Serenity.Services; using System.Collections.Generic; public class MovieListRequest : ListRequest { public List<int> Genres { get; set; } } }
We added a Genres property to our list request object, which will hold the optional Genres we want movies to be filtered on.
Modifying Repository/Endpoint for New Request Type
For our list handler and service to use our new list request type, need to do changes in a few places.
Start with MovieRepository.cs:
public class MovieRepository { //... public ListResponse<MyRow> List(IDbConnection connection, MovieListRequest request) { return new MyListHandler().Process(connection, request); } //... private class MyListHandler : ListRequestHandler<MyRow, MovieListRequest> { } }
We changed ListRequest to MovieListRequest in List method and added a generic parameter to MyListHandler, to use our new type instead of ListRequest.
And another little change in MovieEndpoint.cs, which is the actual web service:
public class MovieController : ServiceEndpoint { //... public ListResponse<MyRow> List(IDbConnection connection, MovieListRequest request) { return new MyRepository().List(connection, request); } }
Now it's time to build and transform templates, so our MovieListRequest object and related service methods will be available at the client side.
Moving Quick Filter to Genres Parameter
We still have the same error as quick filter is not aware of the parameter we just added to list request type and still uses the Criteria parameter.
Need to intercept quick filter item and move the genre list to Genres property of our MovieListRequest.
Edit MovieGrid.ts:
export class MovieGrid extends Serenity.EntityGrid<MovieRow, any> { //... protected getQuickFilters() { let items = super.getQuickFilters(); var genreListFilter = Q.first(items, x => x.field == MovieRow.Fields.GenreList); genreListFilter.handler = h => { var request = (h.request as MovieListRequest); var values = (h.widget as Serenity.LookupEditor).values; request.Genres = values.map(x => parseInt(x, 10)); h.handled = true; }; return items; } }
getQuickFilters is a method that is called to get a list of quick filter objects for this grid type.
By default grid enumerates properties with [QuickFilter] attributes in MovieColumns.cs and creates suitable quick filter objects for them.
We start by getting list of QuickFilter objects from super class.
let items = super.getQuickFilters();
Then locate the quick filter object for GenreList property:
var genreListFilter = Q.first(items, x => x.field == MovieRow.Fields.GenreList);
Actually there is only one quick filter now, but we want to play safe.
Next step is to set the handler method. This is where a quick filter object reads the editor value and applies it to request's Criteria (if multiple) or EqualityFilter (if single value) parameters, just before its submitted to list service.
genreListFilter.handler = h => {
Then we get a reference to current ListRequest being prepared:
var request = (h.request as MovieListRequest);
And read the current value in lookup editor:
var values = (h.widget as Serenity.LookupEditor).values;
Set it in request.Genres property:
request.Genres = values.map(x => parseInt(x, 10));
As values is a list of strings, we needed to convert them to integers.
Last step is to set handled to true, to disable default behavior of quick filter object, so it won't set Criteria or EqualityFilter itself:
h.handled = true;
Now we'll no longer have Invalid Column Name GenreList error but Genres filter is not applied server side yet.
Handling Genre Filtering In Repository
Modify MyListHandler in MovieRepository.cs like below:
private class MyListHandler : ListRequestHandler<MyRow, MovieListRequest> { protected override void ApplyFilters(SqlQuery query) { base.ApplyFilters(query); if (!Request.Genres.IsEmptyOrNull()) { var mg = Entities.MovieGenresRow.Fields.As("mg"); query.Where(Criteria.Exists( query.SubQuery() .From(mg) .Select("1") .Where( mg.MovieId == fld.MovieId && mg.GenreId.In(Request.Genres)) .ToString())); } } }
ApplyFilters is a method that is called to apply filters specified in list request's Criteria and EqualityFilter parameters. This is a good place to apply our custom filter.
We first check if Request.Genres is null or an empty list. If so no filtering needs to be done.
Next, we get a reference to MovieGenresRow's fields with alias mg.
var mg = Entities.MovieGenresRow.Fields.As("mg");
Here it needs some explanation, as we didn't cover Serenity entity system yet.
Let's start by not aliasing MovieGenresRow.Fields:
var x = MovieGenresRow.Fields; new SqlQuery() .From(x) .Select(x.MovieId) .Select(x.GenreId);
If we wrote a query like above, its SQL output would be something like this:
SELECT t0.MovieId, t0.GenreId FROM MovieGenres t0
Unless told otherwise, Serenity always assigns t0 to a row's primary table. Even if we named MovieGenresRow.Fields as variable x, it's alias will still be t0.
Because when compiled, x won't be there and Serenity has no way to know its variable name. Serenity entity system doesn't use an expression tree like in LINQ to SQL or Entity Framework. It makes use of very simple string / query builders.
So, if wanted it to use x as an alias, we'd have to write it explicitly:
var x = MovieGenresRow.Fields.As("x"); new SqlQuery() .From(x) .Select(x.MovieId) .Select(x.GenreId);
...results at:
SELECT x.MovieId, x.GenreId FROM MovieGenres x
In MyListHandler, which is for MovieRow entities, t0 is already used for MovieRow fields. So, to prevent clashes with MovieGenresRow fields (which is named fld), I had to assign MovieGenresRow an alias, mg.
var mg = Entities.MovieGenresRow.Fields.As("mg");
What I'm trying to achieve is a query like this (just the way we'd do it in bare SQL):
SELECT t0.MovieId, t0.Title, ...
FROM Movies t0
WHERE EXISTS (
    SELECT 1
    FROM MovieGenres mg
    WHERE mg.MovieId = t0.MovieId
    AND mg.GenreId IN (1, 3, 5, 7)
)
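This EXISTS shape can be sanity-checked outside the ORM against any SQL engine. The sketch below uses sqlite3 with hypothetical table contents purely for illustration; the schema and data are not from the tutorial.

```python
# Verify the EXISTS-with-IN query shape on an in-memory database.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Movies (MovieId INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE MovieGenres (MovieId INTEGER, GenreId INTEGER);
INSERT INTO Movies VALUES (1, 'Fight Club'), (2, 'The Matrix');
INSERT INTO MovieGenres VALUES (1, 9), (2, 1), (2, 5);
""")
rows = con.execute("""
SELECT t0.MovieId, t0.Title
FROM Movies t0
WHERE EXISTS (
    SELECT 1 FROM MovieGenres mg
    WHERE mg.MovieId = t0.MovieId AND mg.GenreId IN (1, 3, 5, 7)
)
""").fetchall()
print(rows)   # [(2, 'The Matrix')] -- only the movie with a matching genre
```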
So i'm adding a WHERE filter to main query with Where method, using an EXISTS criteria:
query.Where(Criteria.Exists(
Then starting to write the subquery:
query.SubQuery() .From(mg) .Select("1")
And adding the where statement for subquery:
.Where( mg.MovieId == fld.MovieId && mg.GenreId.In(Request.Genres))
Here fld actually contains the alias t0 for MovieRow fields.
As Criteria.Exists method expects a simple string, i had to use .ToString() at the end, to convert subquery to a string:
Yes, i should add one overload that accepts a subquery... noted.
.ToString()));
It might look a bit alien at start, but by time you'll understand that Serenity query system matches SQL almost 99%. It can't be the exact SQL as we have to work in a different language, C#.
Now our filtering for the GenreList property works perfectly.
ESP8266 + Radio + Relay Working Fine
Hi everyone,
I noticed that I couldn't find a relay example working with the ESP8266MOD. I was messing with it over the last few days and figured out that the following sketch works. The big fix was that in the sketch you use pin 0 for the relay, but physically attach the relay to pin D3. This works great with the WEMOS ESP8266.
```
// Enable debug prints to serial monitor
#define MY_DEBUG
#define MY_NODE_ID 97 //esp8266relay
// Enable and select radio type attached
#define MY_RADIO_NRF24
//#define MY_RADIO_RFM69
// Enable repeater functionality for this node
#define MY_REPEATER_FEATURE
#include <MySensors.h>
#define RELAY_1());
}
}
```
@Newzwaver I think that makes sense. D3 on Wemos D1 Minis is GPIO0. I usually use the Dn notation, because I find it easier to understand.
Cannot find namespace system.web.security
- Tuesday, May 17, 2005 5:17 AMHello,
I am using beta2 team edition. But when I try to add
"using system.web.security;" in the code, I got compile error:
Error 1 The type or namespace name 'Security' does not exist in the namespace 'System.Web' (are you missing an assembly reference?) C:\Documents and Settings\Administrator\my documents\visual studio 2005\Projects\WebSite1\CustomMemberShipLibrary\CustomMembershipProvider.cs 4 18 CustomMemberShipLibraryAnd then when I try to add System.Web.Security to the reference, I cannot find it from the list. What is the problem?
thanks,
All Replies
- Tuesday, May 17, 2005 6:50 AM
What type of project are you creating? Windows/web? For web projects System.Web is automatically included.
If you want to reference this dll specifically for other projects, you can add the 'System.Web' dll from the list in the 'add reference' dialog box.
thanks,
akhila
- Wednesday, May 18, 2005 3:41 AM
The project is a web project. Because I am working on the subclass, I separate my subclass from the web application. Actually, it is a class library.
It is strange that I got this compile-time error and cannot even find System.Web.Security in the reference list. The references under System.Web only include:
System.Web
System.Web.Mobile
System.Web.RegularExpressions
System.Web.Services
but no System.Web.Security!!!
Does my IDE have a problem?
- Wednesday, May 18, 2005 5:14 AM
I think I found the answer.
The problem is I didn't add System.Web to the references. So the environment doesn't recognize System.Web.Security when I add the code: using System.Web.Security;
But I am still confused about why I cannot find System.Web.Security in the reference list (when you try to add a reference, a pop-up window titled "Add Reference" shows up, and System.Web.Security is not in the list).
- Wednesday, May 18, 2005 9:51 PM
I think you are confusing the usages of namespace and reference. This is understandable, as they both follow similar formats, and oftentimes can be equivalent.
The reference option allows you to build against a particular assembly, much like you'd link against a particular import library in a native build. In this case, you needed to build against System.Web.Dll.
System.Web.Dll is an assembly that contains several namespaces, including System.Web.Security (and System.Web.HttpCookie, and hundreds of others). To confuse matters, more than one assembly can have implementations in the same namespace.
This relationship is documented in the "Requirements" section for the particular entities in the namespace. So for instance
System.Web.Security.DefaultAuthenticationEventArgs class:
[...]
Requirements
Namespace: System.Web.Security
Platforms: Windows 2000, Windows XP Professional, Windows Server 2003 family
Assembly: System.Web (in System.Web.dll)
So in this case, if you wanted to use this class, you'd reference System.Web.dll (either with the Add Reference... option in VS, or with /r on the command line), and use the System.Web.Security namespace (using System.Web.Security) in your source.
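Concretely, that might look like this (an illustrative sketch — the class name is made up, but MembershipProvider does live in the System.Web.Security namespace inside System.Web.dll):

```cs
// Build with a reference to the assembly, e.g.:
//   csc /target:library /r:System.Web.dll CustomMembershipProvider.cs

using System.Web.Security;   // namespace (contained in System.Web.dll)

public class CustomMembershipProvider : MembershipProvider
{
    // override the abstract members here ...
}
```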
Does that make sense?
- Thursday, May 19, 2005 3:58 AM
Thank you for your reply.
In your opinion, when I open the reference-adding window, each component listed under the .NET tab is an individual assembly. Am I right? So System.Web and System.Web.Mobile are different assemblies.
Where can I find a diagram that depicts the hierarchical relationship between all the assemblies?
Thanks again.
- Thursday, December 13, 2007 8:31 PM
Thank you AngryRichard, you rescued me! Very helpful point.
- Friday, August 06, 2010 11:47 PM
Nothing worked; I ended up using the fully qualified name System.Web.Security.Membership...
Respectfully - Yovav Gad
- Friday, March 16, 2012 2:38 PM
Try adding a reference to System.Web.ApplicationServices. You should then be able to add your using System.Web.Security.
Cheers! | http://social.msdn.microsoft.com/Forums/en-US/vstsprofiler/thread/92024d61-209b-4eb5-82a7-ffcbb642f01a | CC-MAIN-2013-20 | refinedweb | 647 | 54.08 |
Thanks Jeni -
That is exactly what I wanted to know.
Using MSXML 4.0 you can do this (where @att="myns:hello") to return "hello"...
<xsl:stylesheet version="1.0"
xmlns:xsl=""
xmlns:
<xsl:template
<xsl:value-of
</xsl:template>
</xsl:stylesheet>
I'm using .Net Xslt however and I don't think it supports the extension
functions :S
Steven.
-----Original Message-----
From: Jeni Tennison [mailto:jeni@xxxxxxxxxxxxxxxx]
Sent: 20 August 2002 12:38
To: Steven Livingstone
Cc: xsl-list@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: [xsl] Qualified Attrib Value
Hi Steven,
> How do I get the value of the "test" attrib (contains a value
> qualified in myprefix associated namespace) within this fragment
> without the prefix? (not using string manipulation, but proper Xpath).
>
> <el test="myprefix:val" />
XSLT 1.0 doesn't support schemas, so an XSLT 1.0 processor doesn't know
that the test attribute contains a qualified name, from which it should
be able to extract a local part and a namespace URI. As far as the XSLT
1.0 processor is concerned, the test attribute contains a string. So if
you want to get information from that string then you
*have* to use string manipulation:
substring-after(@test, ':')
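For example, wrapped in a template matching the element from the question (a sketch using the names above):

```xml
<xsl:template match="el">
  <!-- "myprefix:val" becomes "val" -->
  <xsl:value-of select="substring-after(@test, ':')"/>
</xsl:template>
```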
When XPath 2.0 comes around, if you work with a processor that supports
W3C XML Schema and you have a schema that says that the test attribute
is of type xs:QName then the test attribute's "typed value" will be the
qualified name. You can get the typed value of a node with the data()
function. You can then extract the local part of the QName with the
get-local-name-from-QName() function, so use:
get-local-name-from-QName(data(@test))
[Each time I think about using these get-property-from-dataType()
functions I want to scream.]
You'll still be able to use the former method in XPath 2.0, and that
gives you the benefit of not relying on someone using a W3C XML
Schema-aware XSLT processor (which I imagine will be rare beasts) nor on
the schema being available when you do the transformation (a risky
assumption in a networked environment), but the latter will deal
comfortably with the situation where the qualified name in the test
attribute doesn't have a prefix, whereas the string manipulation method
returns an empty string in that case.
Cheers,
Jeni
---
Jeni Tennison
Section 8.9 The Java Collections Framework
One very helpful component of the Java standard library is the extensive Java Collections Framework that provides implementations of many commonly used data structures. Some of them we have already discussed in Chapter 5. In this section, we only give a very high-level overview and leave out many fine-grained details that the avid reader can obtain from the official documentation.
The collections framework is organized as a class hierarchy that provides interfaces for many common abstract data types and several implementations of them. The top interface is called Collection<T>, which essentially stands for a multi-set, i.e. something similar to a set except for the fact that elements can appear multiple times in it. It is different from a list in that elements are not ordered. A collection provides, among other things, the following operations: add, remove, (check if it) contains (an element), size, iterate (over all elements):
interface Collection<E> {
    boolean add(E elem);      // ensure that the collection contains elem after the call
    boolean remove(E elem);   // remove a single instance of elem if the collection contains it
    void clear();             // remove all elements
    int size();               // get size
    Iterator<E> iterator();   // get an iterator
    // more methods ...
}
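A short example with the real java.util types, which follow the same shape as this sketch:

```java
import java.util.ArrayList;
import java.util.Collection;

public class CollectionDemo {
    public static void main(String[] args) {
        Collection<String> bag = new ArrayList<>();
        bag.add("x");
        bag.add("y");
        bag.add("x");                          // duplicates are allowed in a multi-set
        System.out.println(bag.size());        // 3
        bag.remove("x");                       // removes only a single instance
        System.out.println(bag.size());        // 2
        System.out.println(bag.contains("x")); // true — one "x" remains
    }
}
```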
From the base interface Collection, more detailed interfaces and classes are derived: lists, sets, queues, and deques (double-ended queues). Queues are data structures to which elements can be added and from which they can be removed. Depending on the kind of queue, this happens in a first-in first-out style, or according to a completely different policy as in priority queues. We will not go into further detail about queues here. In this text, we focus on lists and sets because we have already discussed some of them. We have discussed array lists in Subsection 5.1.1 and doubly-linked lists in Subsection 5.1.3. The Java implementation follows our discussion quite closely. The tree sets are self-balancing binary search trees, and hash sets are based on the techniques discussed in Section 5.3.
Lists extend collections by imposing an order on the inserted elements. Adding will append to the list; removing will remove the first matching instance from the list. Additionally, one can also obtain the i-th element of a list.
interface List<E> extends Collection<E> {
    E get(int index);
    // more methods ...
}
Sets don't add any new relevant methods but enforce "set semantics": most importantly, adding an element that is already contained a second time does not change the set. Furthermore, there is no get(int index) method because sets are not ordered.
Subsection 8.9.1 Maps
A map assigns to elements from one set, the so-called keys, an element from another set, called the value set. As with mathematical maps, each key can be assigned at most one value. Sets can be seen as a special case of maps where the value set is a singleton set (containing only one element).
In the Java collection framework, maps are not collections but form a class hierarchy of their own, with the interface Map<K, V> at its top.
interface Map<K, V> {
    V put(K key, V value);       // associate key with value;
                                 // return the old association or null
    boolean remove(K key);       // remove the association for key
    int size();                  // number of keys in the map
    V get(K key);                // get the value for key, or null
    boolean containsKey(K key);  // check if there's an association for key
}
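For instance, using the standard java.util.HashMap implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class MapDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);
        ages.put("alice", 31);                 // re-associating a key overwrites the old value
        System.out.println(ages.size());       // 2 — "alice" is only one key
        System.out.println(ages.get("alice")); // 31
        System.out.println(ages.get("carol")); // null — no association
    }
}
```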
Subsection 8.9.2 Hash Sets
We discuss data structures and algorithms for hashing in Section 5.3. Here, we discuss specific details that are relevant to using the hash sets and maps in the Java collection framework.
When adding an object o to a hash table (either a set or a map), Java invokes o.hashCode() (which is defined in Object) to compute the hash code for the object. Since Java collections also use equals when searching for an entry in a collection, equals and hashCode have to be consistent in the following way:
Definition 8.9.2. hashCode Consistency.
Assume that q and p are both objects of a class T. Then, whenever p.equals(q) == true, it must hold that p.hashCode() == q.hashCode().
Note that the opposite is in general not always true, since it would imply that T is isomorphic to int.
As a rule of thumb, whenever you want to override equals or hashCode, you should also override the other and check that both are consistent with respect to Definition 8.9.2.
To understand why it is important to maintain hashCode consistency, consider this class which violates hashCode consistency:
public class Fraction {
    private int numerator, denominator;

    public boolean equals(Object o) {
        if (!(o instanceof Fraction)) return false;
        Fraction f = (Fraction) o;
        return this.numerator * f.denominator == f.numerator * this.denominator;
    }

    public int hashCode() {
        return this.numerator;
    }
}
The two fractions 1/2 and 2/4 are clearly equal according to the implementation of equals in Fraction. However, in each hash table longer than 2, they end up in different buckets, which has the effect that after inserting 1/2, contains(new Fraction(2, 4)) will return false.
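One way to restore consistency is to hash the fraction in lowest terms, so that equal fractions always hash alike. This is only a sketch (it assumes denominators are never zero):

```java
import java.util.HashSet;

public class Fraction {
    private final int numerator, denominator;

    public Fraction(int numerator, int denominator) {
        this.numerator = numerator;
        this.denominator = denominator;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Fraction)) return false;
        Fraction f = (Fraction) o;
        // Compare in long arithmetic to avoid overflow.
        return (long) this.numerator * f.denominator
            == (long) f.numerator * this.denominator;
    }

    @Override
    public int hashCode() {
        if (numerator == 0) return 0;          // every 0/x equals 0/y
        int g = gcd(Math.abs(numerator), Math.abs(denominator));
        int n = numerator / g, d = denominator / g;
        if (d < 0) { n = -n; d = -d; }         // normalize the sign into the numerator
        return 31 * n + d;                     // equal fractions now hash alike
    }

    private static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    public static void main(String[] args) {
        HashSet<Fraction> set = new HashSet<>();
        set.add(new Fraction(1, 2));
        System.out.println(set.contains(new Fraction(2, 4))); // true
    }
}
```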
Subsection 8.9.3 Iteration
One aspect that is particularly noteworthy about collections is iteration. Each collection can create iterators, which are objects that capture the state of an iteration over all elements in a collection. Based on the particular collection (e.g. list, set, etc.) the order of iteration is defined or not. An iterator provides three main methods:
For example, the iterator of an array list could look like this:
public class ArrayList<E> implements List<E> {
    private E[] elements;
    private int size;
    // ...
    public Iterator<E> iterator() {
        return new Iterator<E>() {
            private int i = 0;
            public boolean hasNext() { return i < size; }
            public E next() { return elements[i++]; }
            // ...
        };
    }
}
Since iteration is very frequent, Java offers a special for-loop to iterate over collections (and also arrays):
Set<Vec2> s = ...;
for (Vec2 v : s) {
    System.out.println(v.length());
}
This is (more or less) equivalent to this more verbose piece of code:
Set<Vec2> s = ...;
for (Iterator<Vec2> it = s.iterator(); it.hasNext(); ) {
    Vec2 v = it.next();
    System.out.println(v.length());
}
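The iterator's remove method is also the only safe way to delete elements from a collection while iterating over it. For example:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class RemoveDemo {
    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>(List.of(1, 2, 3, 4, 5));
        for (Iterator<Integer> it = nums.iterator(); it.hasNext(); ) {
            if (it.next() % 2 == 0) {
                it.remove();   // removes the element last returned by next()
            }
        }
        System.out.println(nums); // [1, 3, 5]
    }
}
```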
Subsection 8.9.4 Genericity
The collections framework extensively uses genericity in the form of type variables. These provide an upper bound on the type of the objects that can be added to the collection. For example:
Set<Vec2> s = new HashSet<Vec2>();
The type Set<Vec2> makes sure that only objects that are a Vec2 can be added to s. Accordingly, when e.g. iterating over the set, the type variable ensures that the type of each object in the set is at least Vec2.
Because type variables were only added to Java later, they have not made it into Java byte code. The Java compiler replaces type variables by Object when compiling the Java program into Java byte code, in a process called type erasure. This is the cause of some inconsistencies that occasionally cause trouble. Most prominently, Java has been designed with covariant arrays, i.e. you can write Vec2[] o = new PolarVec2[10];. This requires the language to perform dynamic type checks when you assign to an array. For example, when writing o[0] = new CartesianVec2(...); an ArrayStoreException has to be thrown. This in turn requires that the array knows its element type at run time to be able to perform this check. However, since type variables are erased to Object, you cannot create arrays with type variables, i.e. the following is not possible:
T[] a = new T[10];
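A common workaround, used internally by classes such as java.util.ArrayList, is to allocate an Object[] and cast it, suppressing the resulting unchecked-cast warning. A sketch:

```java
public class Buffer<T> {
    private final T[] elements;

    @SuppressWarnings("unchecked")
    public Buffer(int capacity) {
        // Safe in practice because this class only ever stores T's
        // through its type-checked interface below.
        this.elements = (T[]) new Object[capacity];
    }

    public void set(int i, T value) { elements[i] = value; }
    public T get(int i) { return elements[i]; }

    public static void main(String[] args) {
        Buffer<String> b = new Buffer<>(4);
        b.set(0, "hello");
        System.out.println(b.get(0)); // hello
    }
}
```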
January 17th, 2006 at 8:03 am
Peter Nederlof also had some thoughts on this; I think that's even prettier…
January 17th, 2006 at 8:48 am
Not bad, but I have a few problems with Peter's approach. That it modifies Function may or may not be a problem (to me it's a problem). More significant is that it fires off parent constructors automatically - what if you don't want the parent constructor called? Plus it doesn't support Mixins.
Of course if you do want to extend Function, you could apply Troels approach in the same way as Peter and have the advantage of not having to reassign the Function object e.g.;
That way you still have control over the constructor and support for Mixins.
January 17th, 2006 at 10:08 am
I realized that if the constructor is empty, the regex will fail on IE (works fine in Moz). Simply throwing an if (aMatch != null) check around the line which copies the constructor does the trick.
One more thing. Modifying the prototype actually changes already instantiated objects. Modifying the parent prototype doesn't. This might lead to some confusion.
January 17th, 2006 at 10:19 am
That’s one I hadn’t run into. Updated the example re your suggestion.
January 17th, 2006 at 3:08 pm
January 17th, 2006 at 3:28 pm
I think the use of the regex is a mistake. It is too fragile. It fails if you use anonymous functions like:
var Animal = function() { this.species = “animal”; }
Also, I’m not conviced it is that useful. With it, you
can call the superclass constructor as “this.Animal()”
Without it you’d use “Animal()” instead.
I’d suggest reserving a fixed property name, such as superclassConstructor, or just superclass, to hold the reference to the superclass constructor. Then you don’t need to rely on Function.prototype.toString() and regexp hackery to figure out the name of the superclass constructor. You just say:
descendant.prototype.superclass = parent;
January 17th, 2006 at 4:21 pm
Really want ecmascript v4..
Flashs’ actionscript 2.0 is based upon it, QuickTime has it.
But none of the browsers. :/
January 17th, 2006 at 6:07 pm
There’s a much simpler way of doing inheritance if you don’t need multiple inheritance (mixins)–which, if you’re used to Java, you’ll instinctively avoid anyway. This approach is described in the link fabio posted above:
The only disadvantage I see here is that you’re not able to pass different arguments to the superclass’s constructor for each instance of the subclass you create. Am I missing something?
January 17th, 2006 at 7:44 pm
That’s a fair point. At the same time I would argue that using anonymous functions this way would only happen if you’re manipulating data / DOM on the fly. In other words this probably wouldn’t be a way to “publish” an API for other programmers to use and unlikely you’d be wanting to extend it (perhaps it might inherit from something else but I doubt vice versa).
There might also be issues with trying to extend build in classes - if I remember right IE doesn’t regard RegExp to be a class in the same way as Mozilla.
All that said and I know that regex looks very hacky but, under normal circumstances it’s going to do the right thing. Effectively it’s filling in for what typeof “fails” to do with user defined Objects.
Guess I’ve got a gut problem with that approach. Part of it is if you’re bringing different scripts together where people have used different conventions and perhaps another is simply too much thinking in terms of classic inheritance - having something so fundamental defined only be convention makes me nervous.
This may sound odd but there is some reason why I didn’t like that approach, after reading it long ago in Javascript: The Definitive Guide, but have forgotten the train of thought that brought me there.
It’s got something to do with this;
Poodle.prototype = new Dog();
alert(Poodle.prototype.constructor);
Here the constructor is the Animal constructor. With Troels' approach you get to keep the Poodle constructor in the prototype. Quite why that's important though, I've long since forgotten ;) Perhaps it just seemed like a nasty side-effect. Will get back if I think of it.
January 17th, 2006 at 9:15 pm
Hmm yeah. It’s a little counter-intuitive (but then what about prototypes is intutive?), but I don’t see what problems this could cause either.
In particular, why ask for Poodle.prototype.constructor when you can just ask for Poodle? If your goal is to be able to access the constructor from an instance, well, that's a different story. But like yourself, I can't think of a situation where you would need to.
January 18th, 2006 at 7:58 am
Still pondering and so far the only real reason I can think of is when you need to work out what user defined class an instance was created from like;
That’s something you need in the absence of “instanceof” but that may no longer be an issue these days - I don’t know the history of various ECMAScript implementations and when instanceof was supported, but it’s in Firefox 1.5 and seems of have been implemented in “Javascript 1.4″.
Otherwise nice link here discussing some of this.
Anyway - it’s now bugging me ;)
January 18th, 2006 at 5:28 pm
HarryF wrote:
Actually, anonymous functions are probably even more common in production library code that is published. That is because it is good practice to put it in a namespace. For example:
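A sketch of what such namespaced constructors look like, and why the toString-based name extraction finds nothing to match (AppSpace is a made-up name):

```javascript
var AppSpace = {};

// An anonymous function assigned to a namespace property:
AppSpace.Animal = function(name) {
    this.name = name;
};

// The regex-based extraction has nothing to capture, because the source
// text reads "function(name)" with no name after the "function" keyword:
var aMatch = AppSpace.Animal.toString().match(/\s*function (.*)\(/);
console.log(aMatch); // null
```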
This kind of namespacing is standard with module libraries such as JSAN and (I think) Dojo. Your code won’t interoperate with code from those repositories.
January 19th, 2006 at 5:10 am
That’s a very good point that I hadn’t considered. I guess it would be possible to support that in a different implementation, where you pass a string identifying the parent rather than a reference to the function object - something like;
But that’s starting to look very hacky and there’s probably more cases there I haven’t considered.
Anyway - thanks for pointing that out.
January 19th, 2006 at 7:30 am
Can I just mention that all of the code samples in this article are disappearing as soon as the page is loaded when I view this in IE 6.0 on WinXP Pro?
Not my first choice of browser or OS, but this is a corporate build!
January 19th, 2006 at 7:54 am
On the contrary, I recall reading only a few days ago (can’t remember where) that anonymous functions were a good way of simulating Java’s ‘Private’ access modifier, since it prevented third parties calling such functions easily.
On a separate note, I can't see why you would go to all of the bother of setting up inheritance like this, then choose to not use it occasionally, as was suggested. A poodle is always a dog, and an animal, so if this feature is needed it suggests to me that the object model has been badly designed.
January 19th, 2006 at 6:28 pm
In general, I don’t really trust anything that relies on string manipulation and string manipulation isn’t known to be extremely fast (though in this case, that code doesn’t actually run too much).
I think Kevin Yank’s implementation is fine and the .constructor problem is the exact one addressed by KevLinDev’s article.
I think this implementation is interesting because it’s conceptually better because when inheriting from objects, it just copies their “.prototype” properties over rather than assign their prototype to a new object (Dog.prototype = new Animal()). Hence, as Kevin points out, you can call a constructor with parameters. I would argue that it offers other benefits b/c the descendant’s methods don’t point to a random superclass object but instead do the method/property lookup correctly via the proto-chain. However, in order to do this, you have to loop through every property on the superclass’s .prototype object, which could be a little slow (but it’s only done once so not a big deal).
January 19th, 2006 at 6:43 pm
David Flanagan makes a very relevant point.
A client asked me to whisk some thick client dhtml together in a hurry (basically a netvibes.com clone), so I took a deeper look at some of the emerging javascript frameworks.
The copyPrototype as shown by harry doesn’t work with name.space.foo = function … style, which is rather annoying.
Even if it worked, the constructor contains dots in its name, so you can't really create a function by the same name.
Anyway - As harry describes, calling overridden methods is possible through the magic of a little known construct called apply. This same construct can be used to call parent constructors. It may seem a bit clunky, but if you ever used PHP4 to write OO code, you’ll recognize that you are essentially calling a static method from within an object instance - works the same way.
To conclude, the following works:
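A minimal sketch of this pattern — calling the parent constructor with apply (the names here are illustrative):

```javascript
function Animal(name) {
    this.name = name;
}
Animal.prototype.speak = function () {
    return this.name + " makes a sound";
};

function Dog(name) {
    Animal.apply(this, arguments); // run the parent constructor on this instance
}
Dog.prototype = new Animal();
Dog.prototype.constructor = Dog;

var d = new Dog("Rex");
console.log(d.speak()); // "Rex makes a sound"
```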
Once you got used to the .apply(this, arguments) it’s rather harmless, and there is no need for wierd hacks and extending the build-in objects’ prototype.
PS: omnicity - yes, very annoyingly this page is broken in IE …
January 19th, 2006 at 8:38 pm
Thanks for reporting this! We’re on it…
January 24th, 2006 at 1:37 am
Is it just me or does it seem that the comments are leading straight back to Crockford?
Funny that I could never really grasp Crockford's approach until reading through this thread; copying, testing and renaming the examples (to 'inherits' no less).. only to re-read Crockford to find this:
January 27th, 2006 at 5:11 am
[…] While browsing around, I came across this very good article, which presents several kinds of inheritance in JavaScript and details one in particular. Personally, I prefer the Crockford methods, but this approach must have its merits too. In any case, it demonstrates once again the power of JavaScript, which I find decidedly very close to Ruby. […]
January 28th, 2006 at 3:02 pm
MochiKit doesn’t have an approach to inheritance. The *only* inheritance in MochiKit is to create the various Error constructors so that “instanceof Error” can be used to detect one (useful for Deferreds), and that does inheritance in the idiomatic JavaScript way.
Everything else in MochiKit uses more functional patterns. In my experience with languages like Python is that you really just don’t need inheritance for much, and that’s four times as true in JavaScript.
I’m not entirely sure what you mean by the 60kloc reference. MochiKit is less than 8kloc in total with comments, and Base is less than 1.5. The functionality in Base is pretty shallow — it’s mostly the minimum required functionality in order to make the rest of MochiKit possible. If you were to have your way, what would be there? Or more importantly, what *wouldn’t*?
MochiKit originally was going to have an inheritance method, but it turned out to be YAGNI. I wrote a bunch of code that make single inheritance work (while maintaining the functionality of instanceof), but I just didn’t need it at all in any production or library code.. so I tossed it. Someone else was interested in it, so it lives on here though:
February 8th, 2006 at 2:19 am
I recently had an idea about this and was wondering what your thoughts were. The implementation is as follows:
I like it because I can build a nice prototype chain without calling the SuperClass constructor JUST to get the inheritance (gets messy if the constructor actually does something or requires args). It also gets around the obj.constructor problem.
This doesn’t allow (directly) for multiple inheritance, but it could also be used in conjunction with a class augmentation/copy approach (and I tend to shy away from multiple inheritance anyway).
I’ve posted the idea on my blog at
March 6th, 2006 at 6:01 pm
[…] Read the article SitePoint Blogs » Javascript Inheritance […]
March 9th, 2006 at 1:44 pm
I’ve come up with a way of doing inheritance that suites my needs. I created a helper function Class() that builds the class based on objects passed in. It’s super simple to use and the code is clean. I’ve posted code and details on my blog:
Feedback is welcome.
March 18th, 2006 at 4:10 am
I believe inheritance is most easily achieved using the following:
Any comments to johnny_barry@hotmail.com
The ideas above could be packaged into functions, but I think it is better to get used to how ECMAScript works. Function.prototype.call() and Function.prototype.apply() are great functions!
March 23rd, 2006 at 4:48 pm
[…] I want to achieve the above without resorting to global functions to build prototype chains […]
April 1st, 2006 at 3:25 pm
[…] If you’re like me, however, you’re still rather troubled by prototype’s claim of “class-driven development” in Javascript. In fact, you’re probably a little fuzzy on the differences between object-oriented and prototype-based languages (hey, is that why they called it prototype?), and, if you’ve investigated them, you find approaches to enabling classical object-oriented development in Javascript rather unintuitive. (See here, here, and here, all good stuff, but rather painful to use). […]
July 18th, 2006 at 4:38 pm
Everyone, the above code of copyPrototype() in the article will NOT work in Internet Explorer (6.0) but works just fine in Firefox up to 1.5.0.4 in the following case:
The reason is that the regexp in copyPrototype used to match the parent's constructor will also match the space after (and before) the constructor's name. Hence instead of grabbing "Animal" only, the pattern will also grab "Animal " instead. Firefox, however, resolves this just fine. IE, on the other hand, will give out a nasty and almost unidentifiable result in the call this.Animal() in the Dog's constructor. This bug took me almost 2 days to slowly work through and discover.
So here is the new fixed snippet of copyPrototype() that will work with any type of function declaration:
What I have added here is aMatch[1].replace(/^\s*|\s*$/g,"") to trim the space before and after the function name before the assignment. This updated code has proved to work well in my test cases.
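Putting the null check and the trim together, the amended helper might read (a sketch; Animal/Dog follow the article's examples):

```javascript
function copyPrototype(descendant, parent) {
    var sConstructor = parent.toString();
    var aMatch = sConstructor.match(/\s*function (.*)\(/);
    if (aMatch != null) {
        // Trim stray whitespace around the captured name (the IE fix).
        descendant.prototype[aMatch[1].replace(/^\s*|\s*$/g, "")] = parent;
    }
    for (var m in parent.prototype) {
        descendant.prototype[m] = parent.prototype[m];
    }
}

function Animal() { this.species = "animal"; }
Animal.prototype.getSpecies = function () { return this.species; };

function Dog() {
    this.Animal();          // call the parent constructor
    this.species = "dog";
}
copyPrototype(Dog, Animal);

var d = new Dog();
console.log(d.getSpecies()); // "dog"
```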
Cheers!
Alex Le at
August 4th, 2006 at 2:00 am
I’ve blogged about an alternate approach which seems considerably more elegant to me - I’d be interested to hear what you think!
July 18th, 2007 at 7:10 pm
Hi,
The single inheritance example below doesn’t need copyPrototype.
It works fine.
And it is possible to call the base class method which has been overridden.
July 18th, 2007 at 7:25 pm
Hi,
Your code would look like this.
It’s simple and good enough for my needs.
As said, I’m happy that it doesn’t need copyPrototype and is possible to call the base class method which has been overridden.
April 16th, 2008 at 10:46 pm
function inherit(descendant, parent) {
    var sConstructor = parent.toString();
    var aMatch = sConstructor.match( /\s*function (.*)\(/ );
    if ( aMatch != null ) { descendant.prototype[aMatch[1]] = parent; }
    for (var m in parent.prototype) {
        // MY HUMBLE ADDITION
        if (typeof(parent.prototype[m]) == 'function') {
            descendant.prototype[aMatch[1] + '_' + m] = parent.prototype[m];
        }
        descendant.prototype[m] = parent.prototype[m];
    }
}
My humble addition allows parent methods to be called without using "apply".
If B extends A as in Mr. Fuecks' examples, and "obj" is an instance of B, and "category" a virtual function, class A's version can be invoked on obj as "obj.A_category()", similar to the C++ syntax "obj->A::category()".
May 12th, 2008 at 7:01 pm :). | http://www.sitepoint.com/blogs/2006/01/17/javascript-inheritance/ | crawl-001 | refinedweb | 2,778 | 62.78 |
Hey there everyone! I am quite familiar with FMOD studio, and I’ve recently been learning Unity to make an audio showcase. I downloaded the Fmod Studio Unity package and imported it into my Unity Project. Just from poking around, I was able to figure out some basic stuff. I create FMOD Assets by refreshing from the GUIDs list. I added the Listener script to my player. I added an Emitter playing a loop to test out 3D audio. All easy stuff once I figured it out.
But this only gets me a recreation of the functionality already built into Unity – listeners and positional audio – just from my actual Studio build instead of using Unity components.
I can’t figure out how to to take it to the next level with scripting. I imagine in scripts, there’s a way to set parameter values and all that cool fancy stuff. I am familiar with code in general, I just can’t figure out what the API calls are, how to reference the right Event, etc…
I can’t find actual documentation on it anywhere. Any help would be greatly appreciated!
- ArtizensOnline asked 5 years ago
It will work, but you cannot assign Events in FMOD Studio to anything except the MasterBank. It will not create the FMODAssets folder on initial import.
if you’re getting NullRefs in playing back the audio here’s the checklist i use:
Sometimes, initially, there's a huge raft of errors when you first start using the integration. Weirdly enough, going to FMOD > About this Integration can help to cure a lot of these errors where it won't even preview sounds in Unity Free. Not sure why.
Is the FMOD Listener on the Main Camera Object (or wherever your Unity Audio Listener is)? that’s a big and common one.
Are you using the latest integration clean installed? i had issues updating older integrations in Unity and had to remove the older integration completely before it would work.
Did you move your Unity project or FMOD project folder? It seems the integration has issues with losing the path to the events when one source or the other has moved. I have had varying success with nuking the FMODAssets and StreamingAssets folders, and re-importing the Build from the original project. It's tricky because the Bank file has the same GUID as the previous banks and it will tell you that you can't re-import the same bank.
i’ve tried a number of methods to shake this but it’s not easy. the sure fire method is just to start all over from scratch with a new project (you can copy your Automation curves from your old project but not Events), but you may be able to edit the GUID in the Bank build file itself (NOTE – THIS IS A HACK – not recommended).
Other fun things – if you accidentally open the FMOD Studio project in any version of FMOD Studio prior to 1.03.03, you risk permanently damaging your project and making it impossible to integrate without – you guessed it – starting from scratch. We had 1.02.12 installed on our school computers, and opening any project in that would instantly break the integration. So back up a ton if you’re integrating.
- metaphysician answered 4 years ago
For those of you having trouble with Unity auto-generating the FMODAssets folder after refreshing the event list, have you made sure to assign your events to a bank in FMOD? I didn’t realize at first that even for the MasterBank, you still have to manually assign your events to the MasterBank…
- MikePatterson answered 4 years ago
Hi there,
We’re still in the process of creating examples and documentation for the integration. For the short term I recommend downloading the API installer and looking at the documentation and examples for the C++ API. The C# Studio API exposes the same features as the C++ API.
There are some extra convenience classes built on top which can be handy as well. For those I will provide a brief description here for you.
FMOD_StudioSystem – this is a singleton which can be used to create event instances.
[b]PlayOneShot[/b] – play a finite event at a given position in the world and destroy the event when it completes. This is good for one-off sound effects that don’t require parameter control.
[b]getEvent[/b] – create an instance of an event. The FMOD.Studio.EventInstance returned can be controlled from script: start(), stop(), getParameter().
An example could be something like this:
[code]
FMOD.Studio.EventInstance engine;
FMOD.Studio.ParameterInstance engineRPM;
void Start()
{
engine = FMOD_StudioSystem.instance.getEvent("/Vehicles/Car Engine");
engine.start();
engine.getParameter("RPM", out engineRPM);
}
void Update()
{
// get a RPM value from the game’s car engine
engineRPM.setValue(rpm);
}
void OnDisable()
{
engine.stop();
}
[/code]
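For completeness, a fire-and-forget sound could use the PlayOneShot convenience instead; the exact signature here (event path plus world position) is assumed from the description above, so treat it as a sketch:

```csharp
void OnImpact()
{
    // Plays the event at this object's position and releases the event
    // instance automatically when it finishes (signature assumed from
    // the PlayOneShot description above).
    FMOD_StudioSystem.instance.PlayOneShot("event:/Effects/Impact", transform.position);
}
```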
- Guest answered 5 years ago
I have the exact same problem as benybody1. It creates the StreamingAssets folder with my banks inside, but doesn’t create a FMODAssets folder with my events. I have the current version of FMOD and the latest unity package (10306). I tried Pravusjif’s idea of copy and pasting manually, but this didn’t solve the issue.
If anyone could help, it would be greatly appreciated.
Thanks
Edit: So I’ve realised you have to have a Unity Pro license in order to use FMOD – I guess that’s why it’s not working.
- LyndonHolland answered 4 years ago
Thank you, this is super helpful, and good to hear that you’re also working on some official documentation and examples! Thanks so much for the continued support!
- ArtizensOnline answered 5 years ago
I just tested it with Unity Free and it appears to work and creates the FMODAssets folder as expected. You cannot ship a game for PC/Mac/desktop with the Unity Free license because FMOD requires native plugin support, which is only available in Pro. You should be able to use it inside the editor and for iOS and Android, though.
- Guest answered 4 years ago
I have this, no errors, the print(result) prints ‘OK’, and I still don’t have any sound playing. (Double-clicking the asset in the project folder plays the sound.)
using UnityEngine;
using System.Collections;

public class play_Footsteps : MonoBehaviour
{
    FMOD_StudioSystem soundSystem;
    FMOD.Studio.EventInstance evt;
    FMOD.RESULT result;

    void Start ()
    {
        soundSystem = FMOD_StudioSystem.instance;
        evt = FMOD_StudioSystem.instance.GetEvent("event:/Footsteps/FootSteps");
        result = evt.start ();
        print (result);
    }

    // Update is called once per frame
    void Update ()
    {
    }
}
- Sid answered 4 years ago
- last edited 4 years ago
- Hi Sid, it's probably worthwhile to create a question for this issue. It doesn't look like you have set the 3D attributes for the footstep event, so it is probably playing at position (0,0,0) in the world, which could be a long distance from the listener, making it play silently.
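Setting the 3D attributes on the instance would look something like this; the UnityUtil helper name is an assumption based on the legacy integration, so verify it against your installed version:

```csharp
// Keep the event positioned at this GameObject so 3D attenuation is
// computed relative to the listener (helper name assumed from the
// legacy FMOD Unity integration).
evt.set3DAttributes(FMOD.Studio.UnityUtil.to3DAttributes(transform.position));
```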
- benybody1 answered 4 years ago
I’m having some trouble with the new version of the package. I’ve never seen the FMOD/Import menu item; I only see FMOD/Refresh Banks. And I can’t continue my work…
- Dic answered 6 months ago
Hello benybody1, I’m having some trouble integrating FMOD with Unity too, and I just confirmed that to get the FMODAssets folder automatically created you have to manually copy the banks from your FMOD project folder (inside the /build/ folder) and paste them into your Assets/StreamingAssets/ folder in Unity. After this, if you go to FMOD -> Refresh Event List in the Unity Editor, the FMODAssets folder will be created with the corresponding files inside.
- Pravusjif answered 4 years ago
I’m having some trouble with the new version of the package. I switched my stuff over to have "event:/" in front of my event names, and it no longer complains about that. I added an FMOD_Listener script to my main camera, but I still get one NullReferenceException because I’m attempting to play something in Start(); as my game goes on I don’t get any more NullReferenceExceptions, but no sound plays either. How can I fix these issues?
Thanks
I fixed my NullReferenceException problem – something else had to change on my end. Now I’m getting a "These banks were built with an incompatible version of FMOD" error. Both FMOD Studio and the FMOD Studio Unity package say 1.03.05, and even the error says the current integration version is 1.03.05, but I’m still getting this error, even after rebuilding the banks.
It looks like it’s working in other scenes. All of the sounds in our game are 2D, so when we switch from scene to scene and the camera has a different listener, it changes the volume of that sound. During our game everything is far away from the listener, so that’s why I wasn’t hearing anything. How do I get it to do 2D?
Also, only in the scene where I can’t hear anything, it keeps saying "ERR_FILE_NOTFOUND", which might have to do with the way our cameras are set up? I think we have 8, but one main camera, and the others are children of the main camera; I put the listener on the main camera like in all the other scenes, and it works fine in those.
EDIT: Now it’s saying it can’t find any of the files in any of the scenes… and I didn’t even change anything. I can play the sounds from the FMODAssets folder just fine. Every time I refresh the event list it moves the same 11 events even when nothing has changed – I’m not sure if that’s normal or not.
Hi DanjelRicci,
I’m also getting the NullReferenceException error – what exactly did you do to get that working?
EDIT: I’m SUCH an idiot. I kept missing the significance of ‘add the listener to the camera’, meaning go to Component->Scripts->FMOD Listener and add it to the camera. I’d only added a Main script to the camera, and was confused because an ‘Audio Listener’ (not FMOD) was already attached to the camera.
- rukidnao answered 4 years ago
I am relatively new to FMOD Studio and programming in general. This is the first time I have tried integrating FMOD into Unity. I have successfully loaded FMOD into Unity and played a sound through an event emitter just as ambience. Now I am trying to play an event as my player object moves through my level. This is my current code:
[code]using UnityEngine;
using System.Collections;
using FMOD.Studio;
public class PlayerMovement : MonoBehaviour {
public float moveSpeed;
public GameObject deathParticles;
private float maxSpeed = 5f;
public float timer;
private Vector3 input;
private Vector3 spawn;
public bool isDead;
public float RotateSpeed = 30f;
FMOD.Studio.EventInstance playermove;
FMOD.Studio.ParameterInstance speed = 0f;
    // Use this for initialization
    void Start () {
        timer = 0.0f;
        isDead = false;
        spawn = transform.position;
        rigidbody.AddForce (0, 0.25f, 0, ForceMode.Impulse);
        playermove = FMOD_StudioSystem.instance.GetEvent("/Player/Player_Move");
        playermove.start ();
    }

    void Update () {
        speed = moveSpeed;
        if (timer < 2 && isDead) {
            timer += Time.deltaTime;
        }
        if (isDead && timer > 2) {
            isDead = false;
            transform.position = spawn;
            timer = 0.0f;
        }
        input = new Vector3(Input.GetAxisRaw ("Horizontal"), 0, Input.GetAxisRaw ("Vertical"));
        if (rigidbody.velocity.magnitude < maxSpeed) {
            rigidbody.AddRelativeForce(input * moveSpeed);
        }
        if (transform.position.y < -2) {
            Die ();
        }
        if (Input.GetKey ("q")) {
            transform.Rotate (-Vector3.up * RotateSpeed * Time.deltaTime);
        } else if (Input.GetKey ("e")) {
            transform.Rotate (Vector3.up * RotateSpeed * Time.deltaTime);
        }
    }

    void OnCollisionEnter(Collision other) {
        if (other.transform.tag == "Enemy") {
            isDead = true;
            Die ();
        }
    }

    void OnTriggerEnter(Collider other) {
        if (other.transform.tag == "Goal") {
            GameManager.CompleteLevel();
            isDead = false;
        }
    }

    void Die() {
        Instantiate(deathParticles, transform.position, Quaternion.identity);
        transform.position = new Vector3(transform.position.x, -1, transform.position.z);
        isDead = true;
    }
}
[/code]
I’m not sure if I have to load banks and whatnot, or even have an FMOD event emitter on my player object. The only error I’m getting right now is:
Assets/Scripts/PlayerMovement.cs(19,48): error CS0029: Cannot implicitly convert type 'float' to 'FMOD.Studio.ParameterInstance'
But it doesn’t seem that Unity is recognizing my FMOD.Studio input or anything associated with it.
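For reference, the CS0029 error above comes from assigning a float directly to a ParameterInstance. Following the pattern from the car-engine answer earlier in this thread, the parameter is fetched once with getParameter() and then driven with setValue(); the parameter name "Speed" here is an assumption:

```csharp
FMOD.Studio.EventInstance playermove;
FMOD.Studio.ParameterInstance speed;   // no initializer -- it is not a float

void Start () {
    playermove = FMOD_StudioSystem.instance.GetEvent("event:/Player/Player_Move");
    playermove.start();
    playermove.getParameter("Speed", out speed);   // parameter name assumed
}

void Update () {
    speed.setValue(moveSpeed);   // drive the event parameter each frame
}
```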
- inked91 answered 4 years ago
Hello and thanks so much for the help!
I started using FMOD with Unity today, but the script up there gives me some problems.
First I had to change [i]engine = FMOD_StudioSystem.instance.getEvent("/Vehicles/Car Engine");[/i] to [i]engine = FMOD_StudioSystem.instance.getEvent("event:/Vehicles/Car Engine");[/i] to avoid an error about FMOD wanting a "/" at the beginning of the path (which is weird, because one was already there).
But then, if I debug the "engine" variable, it returns Null; in fact I get a NullReferenceException error when trying to do [i]engine.start()[/i].
Thanks for the help!
EDIT: Oh gosh, never mind, now it’s working great. Only later did I realize I had to attach the listener to the camera… Makes sense. 😆
- DanjelRicci answered 4 years ago
Yes, best post in this topic for sure. Thanks so much!
- arckex answered 4 years ago | http://www.fmod.org/questions/question/forum-39908/ | CC-MAIN-2018-30 | refinedweb | 2,213 | 65.73 |
C# 3.0 has certainly introduced some really cool features. I have used the Automatic Properties extensively, as well as object and collection initializers. These are real time savers. However, the most exciting feature (IMHO) is Extension Methods. My last post shows one example of how powerful extension methods can be. Here is another example (inspired by Scott Gu).
using System.Linq;   // needed for the Contains() extension on arrays

public static class Extensions
{
    /// <summary>
    /// Do not use this extension for large sets because it iterates through
    /// the entire set (worst case), i.e. O(n).
    /// </summary>
    public static bool In<T>( this T test, params T[] set )
    {
        return set.Contains( test );
    }
}
Usage (excerpt from RowCommand event handler):
if( e.CommandName.In( "Open", "Close" ) )
{
    ...
}
Instead of:
if( e.CommandName == "Open" ||
    e.CommandName == "Close" )
{
    ...
}
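Because In&lt;T&gt; is generic, the same call shape works for value types too; a hypothetical enum check:

```csharp
// 'In' works for any type with value equality, not just strings:
DayOfWeek today = DateTime.Now.DayOfWeek;
if( today.In( DayOfWeek.Saturday, DayOfWeek.Sunday ) )
{
    // weekend-only logic
}
```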
Here is another example:
public delegate T CreateIfNullDelegate<T>();

public static T GetValue<T>( this System.Web.Caching.Cache cache, string key, CreateIfNullDelegate<T> createIfNullDelegate, bool updateCache )
{
    object value = cache[key];
    if( value == null && createIfNullDelegate != null )
    {
        value = createIfNullDelegate();
        if( updateCache )
            cache[key] = value;
    }
    return (T)value;
}

...

myData = Cache.GetValue<MyType>( myKey, myCreateDelegate, true );
myOtherData = Cache.GetValue<MyOtherType>( myOtherKey, myOtherCreateDelegate, false );
Instead of:
object value = cache[myKey];
if( value == null )
{
    value = myCreateDelegate();
    cache[myKey] = value;
}
myData = (MyType)value;

object value = cache[myOtherKey];
if( value == null )
{
    value = myOtherCreateDelegate();
}
myOtherData = (MyOtherType)value;
I've used this extension repeatedly. A nice side effect is that the extension is more testable than a code-behind page. | http://geekswithblogs.net/WillSmith/archive/2008/03/13/more-fun-with-extension-methods.aspx | CC-MAIN-2014-42 | refinedweb | 272 | 50.63 |
This article is based on:
Using SIP
Bindings are generated by the SIP code generator from a number of specification files, typically with a .sip extension. Specification files look very similar to C and C++ header files, but often with additional information (in the form of a directive or an annotation) and code so that the bindings generated can be finely tuned.
SIP generates Python bindings for C/C++ from one or more specification files, named *.sip. These resemble C/C++ .h header files and, in principle, correspond to C/C++ modules or header files.
A Simple C++ Example
We start with a simple example. Let’s say you have a (fictional) C++ library that implements a single class called Word. The class has one constructor that takes a \0 terminated character string as its single argument. The class has one method called reverse() which takes no arguments and returns a \0 terminated character string. The interface to the class is defined in a header file called word.h which might look something like this:
We start with a simple C++ class that reverses a string (it takes a character pointer; the string must be '\0'-terminated), as shown below:
// Define the interface to the word library.

class Word {
    const char *the_word;

public:
    Word(const char *w);

    char *reverse() const;
};
The corresponding SIP specification file would then look something like this:
The corresponding SIP file is as follows:
// Define the SIP wrapper to the word library.

%Module word

class Word {

%TypeHeaderCode
#include <word.h>
%End

public:
    Word(const char *w);

    char *reverse() const;
};
Obviously a SIP specification file looks very much like a C++ (or C) header file, but SIP does not include a full C++ parser. Let’s look at the differences between the two files.
A SIP specification file is very similar to a C/C++ header file, but SIP does not include full C++ parsing support, and there are some special directives, including:
- The %Module directive has been added [1]. This is used to name the Python module that is being created, word in this example.
- %Module specifies the Python module's name; this is the name used from Python as `import name`.
- The %TypeHeaderCode directive has been added. The text between this and the following %End directive is included literally in the code that SIP generates. Normally it is used, as in this case, to #include the corresponding C++ (or C) header file [2].
- The text between %TypeHeaderCode and %End supplies the #include directive inserted into the code SIP generates.
- The declaration of the private variable the_word has been removed. SIP does not support access to either private or protected instance variables.
- The Python interface definition for this C++ class omits the private and protected member declarations.
If we want to we can now generate the C++ code in the current directory by running the following command:
Now the interface files can be generated by running sip, as follows:
sip -c . word.sip
However, that still leaves us with the task of compiling the generated code and linking it against all the necessary libraries. It’s much easier to use theSIP build system to do the whole thing.
Next, the generated code still has to be compiled. SIP's build system can generate the interface files and build the library in one step.
Using the SIP build system is simply a matter of writing a small Python script. In this simple example we will assume that the word library we are wrapping and it’s header file are installed in standard system locations and will be found by the compiler and linker without having to specify any additional flags. In a more realistic example your Python script may take command line options, or search a set of directories to deal with different configurations and installations.
Building with the SIP build system mostly means writing a small Python script, which is very simple. Here we assume all libraries are installed in their default locations (handling different directories is covered in a more complex example later).
This is the simplest script (conventionally called configure.py):
import os
import sipconfig

# The name of the SIP build file generated by SIP and used by the build
# system.
build_file = "word.sbf"

# Get the SIP configuration information.
config = sipconfig.Configuration()

# Run SIP to generate the code.
os.system(" ".join([config.sip_bin, "-c", ".", "-b", build_file, "word.sip"]))

# Create the Makefile.
makefile = sipconfig.SIPModuleMakefile(config, build_file)

# Add the library we are wrapping.  The name doesn't include any platform
# specific prefixes or extensions (e.g. the "lib" prefix on UNIX, or the
# ".dll" extension on Windows).
makefile.extra_libs = ["word"]

# Generate the Makefile itself.
makefile.generate()
Hopefully this script is self-documenting. The key parts are the Configuration and SIPModuleMakefile classes. The build system contains other Makefile classes, for example to build programs or to call other Makefiles in sub-directories.
After running the script (using the Python interpreter the extension module is being created for) the generated C++ code and Makefile will be in the current directory.
To compile and install the extension module, just run the following commands [3]:
make make install
That’s all there is to it.
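Once installed, using the module from Python would look like this sketch (assuming the fictional word library's reverse() actually reverses the string; the real output depends on its implementation):

```python
import word          # the extension module built above

w = word.Word("hello")
print(w.reverse())   # "olleh" if reverse() does what its name suggests
```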
See Building Your Extension with distutils for an example of how to build this example using distutils.
A Simple C Example
Let’s now look at a very similar example of wrapping a fictional C library:
/* Define the interface to the word library. */

struct Word {
    const char *the_word;
};

struct Word *create_word(const char *w);
char *reverse(struct Word *word);
The corresponding SIP specification file would then look something like this:
/* Define the SIP wrapper to the word library. */

%Module(name=word, language="C")

struct Word {

%TypeHeaderCode
#include <word.h>
%End

    const char *the_word;
};

struct Word *create_word(const char *w) /Factory/;
char *reverse(struct Word *word);
Again, let’s look at the differences between the two files.
- The %Module directive specifies that the library being wrapped is implemented in C rather than C++. Because we are now supplying an optional argument to the directive we must also specify the module name as an argument.
- The %TypeHeaderCode directive has been added.
- The Factory annotation has been added to the create_word() function. This tells SIP that a newly created structure is being returned and it is owned by Python.
The configure.py build system script described in the previous example can be used for this example without change.
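Usage of the C variant is similar; note that because of the /Factory/ annotation Python owns the returned struct and will return it to the heap with free() when the object is garbage collected (a sketch, assuming the module builds as shown):

```python
import word                     # the C extension module built above

w = word.create_word("hello")   # Python owns this Word (/Factory/)
print(w.the_word)               # struct members are exposed as attributes
print(word.reverse(w))
```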
A More Complex C++ Example
In this last example we will wrap a fictional C++ library that contains a class that is derived from a Qt class. This will demonstrate how SIP allows a class hierarchy to be split across multiple Python extension modules, and will introduce SIP’s versioning system.
The library contains a single C++ class called Hello which is derived from Qt’s QLabel class. It behaves just like QLabel except that the text in the label is hard coded to be Hello World. To make the example more interesting we’ll also say that the library only supports Qt v4.2 and later, and also includes a function called setDefault() that is not implemented in the Windows version of the library.
The hello.h header file looks something like this:
// Define the interface to the hello library.

#include <qlabel.h>
#include <qwidget.h>
#include <qstring.h>

class Hello : public QLabel {
    // This is needed by the Qt Meta-Object Compiler.
    Q_OBJECT

public:
    Hello(QWidget *parent = 0);

private:
    // Prevent instances from being copied.
    Hello(const Hello &);
    Hello &operator=(const Hello &);
};

#if !defined(Q_OS_WIN)
void setDefault(const QString &def);
#endif
The corresponding SIP specification file would then look something like this:
// Define the SIP wrapper to the hello library.

%Module hello

%Import QtGui/QtGuimod.sip

%If (Qt_4_2_0 -)

class Hello : public QLabel {

%TypeHeaderCode
#include <hello.h>
%End

public:
    Hello(QWidget *parent /TransferThis/ = 0);

private:
    Hello(const Hello &);
};

%If (!WS_WIN)
void setDefault(const QString &def);
%End

%End
Again we look at the differences, but we’ll skip those that we’ve looked at in previous examples.
- The %Import directive has been added to specify that we are extending the class hierarchy defined in the file QtGui/QtGuimod.sip. This file is part of PyQt4. The build system will take care of finding the file's exact location.
- The %If directive has been added to specify that everything [4] up to the matching %End directive only applies to Qt v4.2 and later. Qt_4_2_0 is a tag defined in QtCoremod.sip [5] using the %Timeline directive. %Timeline is used to define a tag for each version of a library's API you are wrapping, allowing you to maintain all the different versions in a single SIP specification. The build system provides support to configure.py scripts for working out the correct tags to use according to which version of the library is actually installed.
- The TransferThis annotation has been added to the constructor's argument. It specifies that if the argument is not 0 (i.e. the Hello instance being constructed has a parent) then ownership of the instance is transferred from Python to C++. It is needed because Qt maintains objects (i.e. instances derived from the QObject class) in a hierarchy. When an object is destroyed all of its children are also automatically destroyed. It is important, therefore, that the Python garbage collector doesn't also try and destroy them. This is covered in more detail in Ownership of Objects. SIP provides many other annotations that can be applied to arguments, functions and classes. Multiple annotations are separated by commas. Annotations may have values.
- The = operator has been removed. This operator is not supported by SIP.
- The %If directive has been added to specify that everything up to the matching %End directive does not apply to Windows. WS_WIN is another tag defined by PyQt4, this time using the %Platforms directive. Tags defined by the %Platforms directive are mutually exclusive, i.e. only one may be valid at a time [6].
One question you might have at this point is why bother to define the private copy constructor when it can never be called from Python? The answer is to prevent the automatic generation of a public copy constructor.
We now look at the configure.py script. This is a little different to the script in the previous examples for two related reasons.
Firstly, PyQt4 includes a pure Python module called pyqtconfig that extends the SIP build system for modules, like our example, that build on top of PyQt4. It deals with the details of which version of Qt is being used (i.e. it determines what the correct tags are) and where it is installed. This is called a module’s configuration module.
Secondly, we generate a configuration module (called helloconfig) for our own hello module. There is no need to do this, but if there is a chance that somebody else might want to extend your C++ library then it would make life easier for them.
Now we have two scripts. First the configure.py script:
import os
import sipconfig
from PyQt4 import pyqtconfig

# The name of the SIP build file generated by SIP and used by the build
# system.
build_file = "hello.sbf"

# Get the PyQt4 configuration information.
config = pyqtconfig.Configuration()

# Get the extra SIP flags needed by the imported PyQt4 modules.  Note that
# this normally only includes those flags (-x and -t) that relate to SIP's
# versioning system.
pyqt_sip_flags = config.pyqt_sip_flags

# Run SIP to generate the code.  Note that we tell SIP where to find the qt
# module's specification files using the -I flag.
os.system(" ".join([config.sip_bin, "-c", ".", "-b", build_file, "-I", config.pyqt_sip_dir, pyqt_sip_flags, "hello.sip"]))

# We are going to install the SIP specification file for this module and
# its configuration module.
installs = []

installs.append(["hello.sip", os.path.join(config.default_sip_dir, "hello")])

installs.append(["helloconfig.py", config.default_mod_dir])

# Create the Makefile.  The QtGuiModuleMakefile class provided by the
# pyqtconfig module takes care of all the extra preprocessor, compiler and
# linker flags needed by the Qt library.
makefile = pyqtconfig.QtGuiModuleMakefile(
    configuration=config,
    build_file=build_file,
    installs=installs
)

# Add the library we are wrapping.  The name doesn't include any platform
# specific prefixes or extensions (e.g. the "lib" prefix on UNIX, or the
# ".dll" extension on Windows).
makefile.extra_libs = ["hello"]

# Generate the Makefile itself.
makefile.generate()

# Now we create the configuration module.  This is done by merging a Python
# dictionary (whose values are normally determined dynamically) with a
# (static) template.
content = {
    # Publish where the SIP specifications for this module will be
    # installed.
    "hello_sip_dir":    config.default_sip_dir,

    # Publish the set of SIP flags needed by this module.  As these are the
    # same flags needed by the qt module we could leave it out, but this
    # allows us to change the flags at a later date without breaking
    # scripts that import the configuration module.
    "hello_sip_flags":  pyqt_sip_flags
}

# This creates the helloconfig.py module from the helloconfig.py.in
# template and the dictionary.
sipconfig.create_config_module("helloconfig.py", "helloconfig.py.in", content)
Next we have the helloconfig.py.in template script:
from PyQt4 import pyqtconfig

# These are installation specific values created when Hello was configured.
# The following line will be replaced when this template is used to create
# the final configuration module.
# @SIP_CONFIGURATION@

class Configuration(pyqtconfig.Configuration):
    """The class that represents Hello configuration values.
    """
    def __init__(self, sub_cfg=None):
        """Initialise an instance of the class.

        sub_cfg is the list of sub-class configurations.  It should be None
        when called normally.
        """
        # This is all standard code to be copied verbatim except for the
        # name of the module containing the super-class.
        if sub_cfg:
            cfg = sub_cfg
        else:
            cfg = []

        cfg.append(_pkg_config)

        pyqtconfig.Configuration.__init__(self, cfg)

class HelloModuleMakefile(pyqtconfig.QtGuiModuleMakefile):
    """The Makefile class for modules that %Import hello.
    """
    def finalise(self):
        """Finalise the macros.
        """
        # Make sure our C++ library is linked.
        self.extra_libs.append("hello")

        # Let the super-class do what it needs to.
        pyqtconfig.QtGuiModuleMakefile.finalise(self)
Again, we hope that the scripts are self-documenting.
Ownership of Objects
When a C++ instance is wrapped a corresponding Python object is created. The Python object behaves as you would expect in regard to garbage collection - it is garbage collected when its reference count reaches zero. What then happens to the corresponding C++ instance? The obvious answer might be that the instance’s destructor is called. However the library API may say that when the instance is passed to a particular function, the library takes ownership of the instance, i.e. responsibility for calling the instance’s destructor is transferred from the SIP generated module to the library.
Ownership of an instance may also be associated with another instance. The implication being that the owned instance will automatically be destroyed if the owning instance is destroyed. SIP keeps track of these relationships to ensure that Python’s cyclic garbage collector can detect and break any reference cycles between the owning and owned instances. The association is implemented as the owning instance taking a reference to the owned instance.
The TransferThis, Transfer and TransferBack annotations are used to specify where, and it what direction, transfers of ownership happen. It is very important that these are specified correctly to avoid crashes (where both Python and C++ call the destructor) and memory leaks (where neither Python and C++ call the destructor).
This applies equally to C structures where the structure is returned to the heap using the free() function.
See also sipTransferTo(), sipTransferBack() andsipTransferBreak().
Types and Meta-types
Every Python object (with the exception of the object object itself) has a meta-type and at least one super-type. By default an object’s meta-type is the meta-type of its first super-type.
SIP implements two super-types, sip.simplewrapper andsip.wrapper, and a meta-type, sip.wrappertype.
sip.simplewrapper is the super-type of sip.wrapper. The super-type of sip.simplewrapper is object.
sip.wrappertype is the meta-type of both sip.simplewrapper and sip.wrapper. The super-type of sip.wrappertype is type.
sip.wrapper supports the concept of object ownership described in Ownership of Objects and, by default, is the super-type of all the types that SIP generates.
sip.simplewrapper does not support the concept of object ownership but SIP generated types that are sub-classed from it have Python objects that take less memory.
SIP allows a class’s meta-type and super-type to be explicitly specified using the Metatype and Supertype class annotations.
SIP also allows the default meta-type and super-type to be changed for a module using the %DefaultMetatype and %DefaultSupertype directives. Unlike the default super-type, the default meta-type is inherited by importing modules.
If you want to use your own meta-type or super-type then they must be sub-classed from one of the SIP provided types. Your types must be registered using sipRegisterPyType(). This is normally done in code specified using the %InitialisationCode directive.
As an example, PyQt4 uses %DefaultMetatype to specify a new meta-type that handles the interaction with Qt's own meta-type system. It also uses %DefaultSupertype to specify that the smaller sip.simplewrapper super-type is normally used. Finally it uses Supertype as an annotation of the QObject class to override the default and use sip.wrapper as the super-type so that the parent/child relationships of QObject instances are properly maintained.
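Expressed as a SIP specification fragment, that combination might look like this (a syntax sketch only; PyQt4's actual .sip files differ):

```
%DefaultMetatype PyQt4.QtCore.pyqtWrapperType
%DefaultSupertype sip.simplewrapper

class QObject /Supertype=sip.wrapper/
{
%TypeHeaderCode
#include <qobject.h>
%End
};
```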
Lazy Type Attributes
Instead of populating a wrapped type’s dictionary with its attributes (or descriptors for those attributes) SIP only creates objects for those attributes when they are actually needed. This is done to reduce the memory footprint and start up time when used to wrap large libraries with hundreds of classes and tens of thousands of attributes.
SIP allows you to extend the handling of lazy attributes to your own attribute types by allowing you to register an attribute getter handler (using sipRegisterAttributeGetter()). This will be called just before a type's dictionary is accessed for the first time.
Support for Python’s Buffer Interface
SIP supports Python's buffer interface in that whenever C/C++ requires a char or char * type then any Python type that supports the buffer interface (including ordinary Python strings) can be used.
If a buffer is made up of a number of segments then all but the first will be ignored.
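As a generic illustration (plain Python, not SIP-specific), these are typical objects that satisfy the buffer interface and could therefore be passed where a wrapped function expects a char or char *:

```python
import array

buf1 = b"raw bytes"                    # an ordinary byte string
buf2 = bytearray(b"mutable buffer")    # a mutable buffer
buf3 = array.array('b', [104, 105])    # an array of signed chars

# All three expose their contents through the buffer protocol:
print(bytes(buf3))                     # b'hi'
```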
Support for Wide Characters
SIP v4.6 introduced support for wide characters (i.e. the wchar_t type). Python’s C API includes support for converting between unicode objects and wide character strings and arrays. When converting from a unicode object to wide characters SIP creates the string or array on the heap (using memory allocated using sipMalloc()). This then raises the problem of how this memory is subsequently freed.
The following describes how SIP handles this memory in the different situations where this is an issue.
-
When a wide string or array is passed to a function or method then the memory is freed (using sipFree()) after than function or method returns.
-
When a wide string or array is returned from a virtual method then SIP does not free the memory until the next time the method is called.
-
When an assignment is made to a wide string or array instance variable then SIP does not first free the instance’s current string or array.
The Python Global Interpreter Lock
Python's Global Interpreter Lock (GIL) must be acquired before calls can be made to the Python API. It should also be released when a potentially blocking call to a C/C++ library is made, in order to allow other Python threads to execute. In addition, some C/C++ libraries may implement their own locking strategies that conflict with the GIL, causing application deadlocks. SIP provides ways of specifying when the GIL is released and acquired to ensure that locking problems can be avoided.
SIP always ensures that the GIL is acquired before making calls to the Python API. By default SIP does not release the GIL when making calls to the C/C++ library being wrapped. The ReleaseGIL annotation can be used to override this behaviour when required.
If SIP is given the -g command line option then the default behaviour is changed and SIP releases the GIL every time it makes calls to the C/C++ library being wrapped. The HoldGIL annotation can be used to override this behaviour when required.
Managing Incompatible APIs
New in version 4.9.
Sometimes it is necessary to change the way something is wrapped in a way that introduces an incompatibility. For example a new feature of Python may suggest that something may be wrapped in a different way to exploit that feature.
SIP’s %Feature directive could be used to provide two different implementations. However this would mean that the choice between the two implementations would have to be made when building the generated module potentially causing all sorts of deployment problems. It may also require applications to work out which implementation was available and to change their behaviour accordingly.
Instead SIP provides limited support for providing multiple implementations (of classes, mapped types and functions) that can be selected by an application at run-time. It is then up to the application developer how they want to manage the migration from the old API to the new, incompatible API.
This support is implemented in three parts.
Firstly the %API directive is used to define the name of an API and its default version number. The default version number is the one used if an application doesn’t explicitly set the version number to use.
Secondly the API class, mapped type or function annotation is applied accordingly to specify the API and range of version numbers that a particular class, mapped type or function implementation should be enabled for.
Finally the application calls sip.setapi() to specify the version number of the API that should be enabled. This call must be made before any module that has multiple implementations is imported for the first time.
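The selection rules described above can be illustrated with a small self-contained sketch. This is a hypothetical emulation, NOT the real sip module: it only mirrors the semantics that each API has a default version, that an application may fix the version explicitly once, and that this must happen before the API is first used (the ordering constraint is noted but not enforced here).

```python
# Hypothetical sketch of sip.setapi()-style run-time API selection.
_chosen = {}              # versions explicitly set by the application
_defaults = {"MyAPI": 1}  # defaults as registered by %API directives

def setapi(name, version):
    # Must be called before the wrapped module is first imported;
    # a second call for the same API is an error.
    if name in _chosen:
        raise ValueError("version of API %r already set" % name)
    _chosen[name] = version

def getapi(name):
    # The version actually in effect: explicit choice, else the default.
    return _chosen.get(name, _defaults[name])

setapi("MyAPI", 2)
print(getapi("MyAPI"))  # → 2
```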
Note this mechanism is not intended as a way of providing equally valid alternative APIs. For example:
%API(name=MyAPI, version=1)

class Foo
{
public:
    void bar();
};

class Baz : Foo
{
public:
    void bar() /API=MyAPI:2-/;
};
If the following Python code is executed then an exception will be raised:
b = Baz()
b.bar()
This is because when version 1 of the MyAPI API (the default) is enabled there is no Baz.bar() implementation and Foo.bar() will not be called instead as might be expected.
Building a Private Copy of the sip Module
New in version 4.12.
The sip module is intended to be used by all the SIP generated modules of a particular Python installation. For example PyQt3 and PyQt4 are completely independent of each other but will use the same sip module. However, this means that all the generated modules must be built against a compatible version of SIP. If you do not have complete control over the Python installation then this may be difficult or even impossible to achieve.
To get around this problem you can build a private copy of the sip module that has a different name and/or is placed in a different Python package. To do this you use the --sip-module option to specify the name (optionally including a package name) of your private copy.
As well as building the private copy of the module, the version of the sip.h header file will also be specific to the private copy. You will probably also want to use the --incdir option to specify the directory where the header file will be installed to avoid overwriting a copy of the default version that might already be installed.
When building your generated modules you must ensure that they #include the private copy of sip.h instead of any default version.
.
[edit] Ada
with Ada.Strings.Fixed, Ada.Text_IO;
use Ada.Strings, Ada.Text_IO;
procedure String_Replace is
Original : constant String := "Mary had a @__@ lamb.";
Tbr : constant String := "@__@";
New_Str : constant String := "little";
Index : Natural := Fixed.Index (Original, Tbr);
begin
Put_Line (Fixed.Replace_Slice (
Original, Index, Index + Tbr'Length - 1, New_Str));
end String_Replace;
Alternatively
Put_Line ("Mary had a " & New_Str & " lamb.");
[edit] Aikido
const little = "little"
printf ("Mary had a %s lamb\n", little)
// alternatively
println ("Mary had a " + little + " lamb")
[edit] ALGOL 68
[edit] AutoHotkey
; Using the = operator
LIT = little
string = Mary had a %LIT% lamb.
; Using the := operator
LIT := "little"
string := "Mary had a" LIT " lamb."
MsgBox %string%
Documentation: Variables (see Storing values in variables and Retrieving the contents of variables)
[edit] AWK
String interpolation is usually done with functions sub() and gsub(). gawk has also gensub().
#!/usr/bin/awk -f
BEGIN {
str="Mary had a # lamb."
gsub(/#/, "little", str)
print str
}
[edit] Batch File
[edit] Bracmat
Use pattern matching to find the part of the string up to and the part of the string following the magic X. Concatenate these parts with the string "little" in the middle.
@("Mary had a X lamb":?a X ?z) & str$(!a little !z)
[edit] C
Include the
<stdio.h> header to use the functions of the printf family:
#include <stdio.h>
int main() {
const char *extra = "little";
printf("Mary had a %s lamb.\n", extra);
return 0;
}
[edit] C++
#include <string>
#include <iostream>
int main( ) {
std::string original( "Mary had a X lamb." ) , toBeReplaced( "X" ) ,
replacement ( "little" ) ;
std::string newString = original.replace( original.find( "X" ) ,
toBeReplaced.length( ) , replacement ) ;
std::cout << "String after replacement: " << newString << " \n" ;
return 0 ;
}
[edit] C#
This is called "composite formatting" in MSDN.
class Program
{
static void Main()
{
string extra = "little";
string formatted = string.Format("Mary had a {0} lamb.", extra);
System.Console.WriteLine(formatted);
}
}
[edit] Clojure
(let [little "little"]
(println (format "Mary had a %s lamb." little)))
[edit] COBOL
IDENTIFICATION DIVISION.
PROGRAM-ID. interpolation-included.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 extra PIC X(6) VALUE "little".
PROCEDURE DIVISION.
DISPLAY FUNCTION SUBSTITUTE("Mary had a X lamb.", "X", extra)
GOBACK
.
[edit] Coco
As CoffeeScript, but the braces are optional if the expression to be interpolated is just a variable:
size = 'little'
console.log "Mary had a #size lamb."
[edit] CoffeeScript
[edit] Common Lisp
(let ((extra "little"))
(format t "Mary had a ~A lamb.~%" extra))
More documentation on the FORMAT function.
[edit] D
[edit] DWScript
PrintLn(Format('Mary had a %s lamb.', ['little']))
Output:
Mary had a little lamb.
[edit] E
[edit] ECL
IMPORT STD;
STD.Str.FindReplace('Mary had a X Lamb', 'X','little');
[edit] Elixir
Elixir borrows Ruby's #{...} interpolation syntax.
x = "little"
IO.puts "Mary had a #{x} lamb"
[edit] Erlang
- Output:
7> S1 = "Mary had a ~s lamb". 8> S2 = lists:flatten( io_lib:format(S1, ["big"]) ). 9> S2. "Mary had a big lamb"
[edit] Euphoria
constant lambType = "little"
sequence s
s = sprintf("Mary had a %s lamb.",{lambType})
puts(1,s)
[edit] F#
let lambType = "little"
printfn "Mary had a %s lamb." lambType
[edit] Factor
[edit] Fantom
Interpolating a variable value into a string is done by using a $ prefix on the variable name within a string. For example:
fansh> x := "little"
little
fansh> echo ("Mary had a $x lamb")
Mary had a little lamb
Documentation at: Fantom website
[edit] Fortran
[edit] Frink
x = "little"
println["Mary had a $x lamb."]
[edit] FunL
X = 'little'
println( "Mary had a $X lamb." )
[edit] Go
package main
import (
"fmt"
)
func main() {
str := "Mary had a %s lamb"
txt := "little"
out := fmt.Sprintf(str, txt)
fmt.Println(out)
}
[edit] Groovy
def adj = 'little'
assert 'Mary had a little lamb.' == "Mary had a ${adj} lamb."
[edit] Haskell
No such facilities are defined in Haskell 98, but the
base package distributed with GHC provides a
printf function.
import Text.Printf
main = printf "Mary had a %s lamb\n" "little"
[edit] HicEst
Further documentation on HicEst string interpolation function EDIT()
CHARACTER original="Mary had a X lamb", little = "little", output_string*100
output_string = original
EDIT(Text=output_string, Right='X', RePLaceby=little)
[edit] Icon and Unicon
[edit] J
[edit] Java
[edit] JavaScript
var original = "Mary had a X lamb";
var little = "little";
var replaced = original.replace("X", little); //does not change the original string
[edit] jq
[edit] Julia
X = "little"
"Mary had a $X lamb"
[edit] Lasso
Lasso doesn't really have built-in string interpolation, but you can use the built-in email mail-merge capability:
- Output:
Mary had a little lamb
[edit] Lua
[edit] Variable names
There is no default support for automatic interpolation of variable names being used as placeholders within a string. However, interpolation is easily emulated by using the string.gsub function:
str = string.gsub( "Mary had a X lamb.", "X", "little" )
print( str )
[edit] Literal characters
Interpolation of literal character escape sequences does occur within a string:
print "Mary had a \n lamb" -- The \n is interpreted as an escape sequence for a newline
[edit] Mathematica
Extra = "little";
StringReplace["Mary had a X lamb.", {"X" -> Extra}]
->"Mary had a little lamb."
[edit] Maxima
printf(true, "Mary had a ~a lamb", "little");
[edit] Nemerle
[edit] NetRexx
[edit] Nimrod
import strutils
var str = "little"
echo "Mary had a $# lamb" % [str]
# doesn't need an array for one substitution, but use an array for multiple substitutions
[edit] OCaml
The OCaml standard library provides the module Printf:
let extra = "little" in
Printf.printf "Mary had a %s lamb." extra
[edit] OOC
In a String all expressions between #{...} will be evaluated.
main: func {
X := "little"
"Mary had a #{X} lamb" println()
}
[edit] Oz
String interpolation is unidiomatic in Oz. Instead, "virtual strings" are used. Virtual strings are tuples of printable values and are supported by many library functions.
declare
X = "little"
in
{System.showInfo "Mary had a "#X#" lamb"}
[edit] PARI/GP
[edit] Perl
$extra = "little";
print "Mary had a $extra lamb.\n";
printf "Mary had a %s lamb.\n", $extra;
[edit] Perl 6
my $extra = "little";
say "Mary had a $extra lamb"; # variable interpolation
say "Mary had a { $extra } lamb"; # expression interpolation
printf "Mary had a %s lamb.\n", $extra; # standard printf
say $extra.fmt("Mary had a %s lamb"); # inside-out printf
[edit] PHP
<?php
$extra = 'little';
echo "Mary had a $extra lamb.\n";
printf("Mary had a %s lamb.\n", $extra);
?>
[edit] PicoLisp
(let Extra "little"
(prinl (text "Mary had a @1 lamb." Extra)) )
[edit] PL/I
[edit] PowerShell
[edit] Prolog
[edit] PureBasic
[edit] Python
[edit] Racket
See the documentation on fprintf for more information on string interpolation in Racket.
#lang racket
(format "Mary had a ~a lamb" "little")
[edit] REBOL
str: "Mary had a <%size%> lamb"
size: "little"
build-markup str
;REBOL3 also has the REWORD function
str: "Mary had a $size lamb"
reword str [size "little"]
[edit] REXX
[edit] Ruby
[edit] Run BASIC
a$ = "Mary had a X lamb."
a$ = word$(a$,1,"X")+"little"+word$(a$,2,"X")
[edit] Scala
[edit] Sed
#!/bin/bash
# Usage example: . interpolate "Mary has a X lamb" "quite annoying"
echo "$1" | sed "s/ X / $2 /g"
[edit] Seed7
[edit] SNOBOL4
[edit] Swift
let extra = "little"
println("Mary had a \(extra) lamb.")
[edit] Tcl
set str "Mary had a @SIZE@ lamb."
puts [string map {@SIZE@ "little"} $str]
[edit] TUSCRIPT
$$ MODE TUSCRIPT
sentence_old="Mary had a X lamb."
values=*
DATA little
DATA big
sentence_new=SUBSTITUTE (sentence_old,":X:",0,0,values)
PRINT sentence_old
PRINT sentence_new
Output:
Mary had a X lamb.
Mary had a little lamb.
[edit] UNIX Shell
[edit] C Shell
set extra='little'
echo Mary had a $extra lamb.
echo "Mary had a $extra lamb."
printf "Mary had a %s lamb.\n" $extra
C Shell has
$extra and
${extra}. There are also modifiers, like
$file:t; csh(1) manual explains those.
[edit] Ursala
Here is the output.
Mary had a little lamb.
[edit] VBA
Visual Basic for Applications has a built-in function Replace.
Example dialog (in the immediate window):
a="little" print replace("Mary had a X lamb","X",a) Mary had a little lamb
Function Replace has 3 mandatory and 3 optional arguments:
- the input string
- the substring that is to be replaced
- the string that is to replace the substring
- optional: the position in the input string where to start replacing (default 1)
- optional: the number of replacements to make (default -1, i.e. all)
- optional: the comparison method: vbBinaryCompare or vbTextCompare (text compare is not case sensitive: it will replace "%X" as well as "%x". vbBinaryCompare will only replace %X. Default value depends on the setting of the standard comparison method (set with Option Compare Binary or Option Compare Text).
[edit] zkl
"Mary had a X lamb.".replace("X","big")
Generates a new string. For more info, refer to the manual in the downloads section of the zenkinetic.com zkl page.
Details
- Type: Improvement
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 4.4
- Component/s: modules/spellchecker
- Lucene Fields: New
Description
I'm working on a custom suggester derived from the AnalyzingInfix. I require what is called a "blended score" (//TODO ln.399 in AnalyzingInfixSuggester) to transform the suggestion weights depending on the position of the searched term(s) in the text.
Right now, I'm using an easy solution :
If I want 10 suggestions, then I search against the current ordered index for the 100 first results and transform the weight :
a) by using the term position in the text (found with TermVector and DocsAndPositionsEnum)
or
b) by multiplying the weight by the score of a SpanQuery that I add when searching
and return the updated 10 most weighted suggestions.
Since we usually don't need to suggest so many things, the bigger search + rescoring overhead is not so significant but I agree that this is not the most elegant solution.
We could include this factor (here the position of the term) directly into the index.
So, I can contribute to this if you think it's worth adding it.
Do you think I should tweak AnalyzingInfixSuggester, subclass it or create a dedicated class ?
Activity
I attached a first patch which allows blended score based on the position of the search term. It only provides strategy (a) with two options :
- Linear (-10% for each position) : blended_weight = weight * (1-0.10*position)
- Reciprocal : blended_weight = weight/(1+position)
I would also like to add the second strategy (b) by directly using the score, but here is a first attempt.
Any advice/remarks welcome!
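For illustration, the two blending options described above translate to something like the following sketch (in Python rather than the Java of the actual patch; clamping the linear formula at zero for large positions is an assumption of this sketch, not necessarily what the patch does):

```python
def blended_weight(weight, position, blender="linear"):
    """Transform a suggestion's weight by the matched term's 0-based position."""
    if blender == "linear":
        # -10% for each position; clamped at 0 here (assumption)
        return weight * max(0.0, 1.0 - 0.10 * position)
    elif blender == "reciprocal":
        return weight / (1.0 + position)
    raise ValueError("unknown blender type: %r" % blender)

print(blended_weight(100, 0))                # → 100.0 (first word: no penalty)
print(blended_weight(100, 5))                # → 50.0
print(blended_weight(100, 3, "reciprocal"))  # → 25.0
```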
Thanks Remi, patch looks great!
Can you move that boolean finished inside the if (lastToken != null)? (If there was no lastToken then we should not be calling offsetEnd.endOffset).
Can we leave AnalyzingInfixSuggester with DOCS_ONLY? I.e., open up a method (maybe getTextFieldType?) that the subclass would override and set to DOCS_AND_FREQS_AND_POSITIONS.
In createCoefficient, instead of splitting the incoming key on space, I think you should ask the analyzer to do so? In fact, since the lookup (in super) already did that (break into tokens, figure out if last token is a "prefix" or not), maybe we can just pass that down to createResult?
If the query has more than one term, it looks like you only use the first? Maybe instead we should visit all the terms and record which one has the lowest position?
Have you done any performance testing?
key.toString() can be pulled out of the while loop and done once up front.
Why do you use key.toString().contains(docTerm) for the finished case? Won't that result in false positives, e.g. if key is "foobar" and docTerm is "oba"?
Can you rewrite the embedded ternary operator in the LookUpComparator to just use simple if statements? I think that's more readable...
Hey Michael, thanks for the in-depth code review!
I attached another patch which makes things simpler and fixes what you suggested.
The remaining things are :
Have you done any performance testing?
Not really, I've seen that you did some for the infix suggester, but I couldn't find the code. Is there something already or should I test the performance my way?
For now, the only way I know to access the DocsAndPositionsEnum is by getting it from the TermsEnum which implies iterating over the term vector (the doc says "Get DocsAndPositionsEnum for the current term").
Woops, sorry, this fell below the event horizon of my TODO list. I'll look at your new patch soon.
There is an existing performance test, LookupBenchmarkTest, but it's a bit tricky to run. See the comment on
LUCENE-5030:
New patch looks great, thanks Remi!
I'm worried about how costly iterating over term vectors is going to be ... are you planning to run the performance test? If not, I can.
It might be better to open up a protected method to convert the smallest position to the coefficient? The default impl can do the switch based on the BlenderType enum... but apps may want to control how the score is "boosted" by position.
Hi!
Here is new patch including your comment for the coefficient calculation (I guess a Lambda function would be perfect here!).
I ran the performance test on my laptop, here are the results compared to the AnalyzingInfixSuggester :
– construction time
AnalyzingInfixSuggester input: 50001, time[ms]: 1780 [+- 367.58]
BlendedInfixSuggester input: 50001, time[ms]: 6507 [+- 2106.52]
– prefixes: 2-4, num: 7, onlyMorePopular: false
AnalyzingInfixSuggester queries: 50001, time[ms]: 6804 [+- 1403.13], ~kQPS: 7
BlendedInfixSuggester queries: 50001, time[ms]: 26503 [+- 2624.41], ~kQPS: 2
– prefixes: 6-9, num: 7, onlyMorePopular: false
AnalyzingInfixSuggester queries: 50001, time[ms]: 3995 [+- 551.20], ~kQPS: 13
BlendedInfixSuggester queries: 50001, time[ms]: 5355 [+- 1295.41], ~kQPS: 9
– prefixes: 100-200, num: 7, onlyMorePopular: false
AnalyzingInfixSuggester queries: 50001, time[ms]: 2626 [+- 588.43], ~kQPS: 19
BlendedInfixSuggester queries: 50001, time[ms]: 1980 [+- 574.16], ~kQPS: 25
– RAM consumption
AnalyzingInfixSuggester size[B]: 1,430,920
BlendedInfixSuggester size[B]: 1,630,488
If you have any idea on how we could improve the performance, let me know (see above my comment for your previous suggestion to avoid visiting term vectors).
Thanks Remi, the performance seems fine? But I realized this is not the best benchmark, since all suggestions are just a single token.
New patch looks great; I think we should commit this approach, and performance improvements can come later if necessary.
see above my comment for your previous suggestion to avoid visiting term vectors
Oh, the idea I had was to not use term vectors at all: you can get a TermsEnum for the normal inverted index, and then visit each term from the query, and then .advance to each doc from the top N results. But we can do this later ... I'll commit this patch (I'll make some small code style improvements, e.g. adding { } around all ifs).
Thanks Remi!
I committed with the wrong issue
LUCENE-5345 by accident...
Great, glad to contribute!
In term of performance, I'm using it on my laptop with 30K terms and the mean time for lookup is 5ms for 5 results and 45ms for 50 results (with a factor 10, ie. I retrieve 50 / 500 items then reduce to 5 / 50). I'm not following a proper testing methodology so it's just roughly what I observed.
I will do more extensive testing performance-wise and yeah, we can tackle that later on.
Woops, I introduced a bug when refactoring the comparator.
I submitted another patch to fix this. I also updated the test case accordingly.
Commit 1558100 from Michael McCandless in branch 'dev/trunk'
LUCENE-5354: BlendedInfixSuggester: fix wrong return (0 instead of -1) from the LookupResult comparator
Commit 1558102 from Michael McCandless in branch 'dev/branches/branch_4x'
LUCENE-5354: BlendedInfixSuggester: fix wrong return (0 instead of -1) from the LookupResult comparator
This sounds very useful!
I think a subclass could work well, if we open up the necessary methods (which Query to run, how to do the search / resort the results)?
We could make the index-time sorting optional as well? This way you'd build an "ordinary" index, run an "ordinary" query, so you have full flexibility (but at more search-time cost).
There have been significant changes in the income tax return (ITR) filing rules in the last few years, especially pertaining to the rule for filing belated ITR for the previous years. For FY 2015-16, the rule allowed a person to file the ITR for that year till 31st March 2018, i.e. one year after the end of relevant assessment year.
Under the Finance Act 2017, the government made a big change to the old rule, allowing a person to file the return for FY 2016-17 till 31st March 2018, which otherwise was allowed till 31st March 2019 as per the previous rule. The move is widely seen as a measure to stop tax evasion.
So, the last date as per the new rule to file the tax for FY 2015-16 and FY 2016-17 was 31st March 2018 and if you have failed to adhere with the final deadline, you are left with limited options. Let’s find out what are the options for you now.
Options Available
The Income Tax Department (ITD) still has the authority to allow taxpayers to file the return even though the deadline of 31st March 2018 has lapsed. However, the taxpayer must provide a genuine reason for the delay to get the permission to file the return. A taxpayer is allowed to file an application for condonation up to 6 years from the end of the relevant assessment year, i.e. for F.Y. 2015-16, the last date would be 31st March 2023.
If you have defaulted in payment of taxes for the relevant year, you should deposit the tax amount u/s 234A (for default in furnishing the return of income), u/s 234B (for default in payment of advance tax) and u/s 234C (for deferment of advance tax).
If you have deposited the tax but missed the last date to file the ITR, then you are not allowed to apply for condonation of delay. However, the I-T department may send you a notice to pay the penalty for delay in filing, which can go up to Rs 5000. In case you are able to present a valid reason for such delay and the IT officer is satisfied with the reason, then penalty can be waived off at his/her discretion.
Things to keep in mind
If you get the permission to file the return after the 31st March 2018 deadline, then you can still get the tax deduction benefits such as Sec 80C benefits, but you won’t be allowed to carry forward the losses and adjust them.
You must not panic if you get a notice from the tax department for non-filing of the previous return for which the last date has passed. You should read the notice properly and reply to it within prescribed timelines or you can take the help of a tax expert to respond properly.
CubicWeb provides the somewhat usual form / field / widget / renderer abstraction to provide generic building blocks which will greatly help you in building forms properly integrated with CubicWeb (coherent display, error handling, etc...), while keeping things as flexible as possible.
A form basically only holds a set of fields, and has to be bound to a renderer which is responsible for laying them out. Each field is bound to a widget that will be used to fill in value(s) for that field (at form generation time) and ‘decode’ (fetch and give a proper Python type to) values sent back by the browser.
The field should be used according to the type of what you want to edit. E.g. if you want to edit some date, you’ll have to use the cubicweb.web.formfields.DateField. Then you can choose among multiple widgets to edit it, for instance cubicweb.web.formwidgets.TextInput (a bare text field), DateTimePicker (a simple calendar) or even JQueryDatePicker (the JQuery calendar). You can of course also write your own widget.
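To make that division of labour concrete, here is a hypothetical, framework-free sketch — this is NOT CubicWeb's actual API, all class and method names are invented — of a form holding fields, each bound to a widget that renders HTML and decodes posted values:

```python
import datetime

class TextInput:
    # A widget: knows how to render an input and decode what comes back.
    def render(self, name, value):
        return '<input type="text" name="%s" value="%s"/>' % (name, value)

    def decode(self, raw):
        return raw

class DateInput(TextInput):
    # Same rendering, but decodes the posted string into a date.
    def decode(self, raw):
        return datetime.datetime.strptime(raw, "%Y-%m-%d").date()

class Field:
    # A field owns a name and delegates presentation to its widget.
    def __init__(self, name, widget):
        self.name, self.widget = name, widget

    def render(self, value=""):
        return self.widget.render(self.name, value)

    def process(self, posted):
        # 'decode' the value sent back by the browser into a Python type
        return self.widget.decode(posted[self.name])

class Form:
    # A form is essentially a collection of fields.
    def __init__(self, *fields):
        self.fields = fields

    def process(self, posted):
        return dict((f.name, f.process(posted)) for f in self.fields)

form = Form(Field("title", TextInput()), Field("start", DateInput()))
data = form.process({"title": "hello", "start": "2024-01-31"})
print(data["start"])  # → 2024-01-31 (a datetime.date, not a string)
```

The point is the separation: swapping the widget changes how a value is entered and decoded without touching the field or the form.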
A small excursion into a CubicWeb shell is the quickest way to discover available forms (or application objects in general).
>>> from pprint import pprint
>>> pprint( session.vreg['forms'] )
{'base': [<class 'cubicweb.web.views.forms.FieldsForm'>,
          <class 'cubicweb.web.views.forms.EntityFieldsForm'>],
 'changestate': [<class 'cubicweb.web.views.workflow.ChangeStateForm'>,
                 <class 'cubes.tracker.views.forms.VersionChangeStateForm'>],
 'composite': [<class 'cubicweb.web.views.forms.CompositeForm'>,
               <class 'cubicweb.web.views.forms.CompositeEntityForm'>],
 'deleteconf': [<class 'cubicweb.web.views.editforms.DeleteConfForm'>],
 'edition': [<class 'cubicweb.web.views.autoform.AutomaticEntityForm'>,
             <class 'cubicweb.web.views.workflow.TransitionEditionForm'>,
             <class 'cubicweb.web.views.workflow.StateEditionForm'>],
 'logform': [<class 'cubicweb.web.views.basetemplates.LogForm'>],
 'massmailing': [<class 'cubicweb.web.views.massmailing.MassMailingForm'>],
 'muledit': [<class 'cubicweb.web.views.editforms.TableEditForm'>],
 'sparql': [<class 'cubicweb.web.views.sparql.SparqlForm'>]}
The two most important form families here (for all practical purposes) are base and edition. Most of the time one wants alterations of the AutomaticEntityForm to generate custom forms to handle edition of an entity.
AutomaticEntityForm is an automagic form to edit any entity. It is designed to be fully generated from schema but highly configurable through uicfg.
Of course, as for other forms, you can also customise it by specifying various standard form parameters on selection, overriding, or adding/removing fields in selected instances.
It is possible to manage which of an entity's attributes and relations will be edited, and how, in the various contexts where the automatic entity form is used, by using the proper uicfg tags.
The details of the uicfg syntax can be found in the The uicfg module chapter.
Possible relation tags that apply to entity forms are detailled below. They are all in the cubicweb.web.uicfg module.
autoform_section specifies where to display a relation in form for a given form type. tag_attribute(), tag_subject_of() and tag_object_of() methods for this relation tag expect two arguments additionally to the relation key: a formtype and a section.
formtype may be one of:
section may be one of:
By default, mandatory relations are displayed in the ‘attributes’ section, others in ‘relations’ section.
Use autoform_field to replace the default field class to use for a relation or attribute. You can put either a field class or instance as value (put a class whenether it’s possible).
Warning
autoform_field_kwargs should usually be used instead of autoform_field. If you put a field instance into autoform_field, autoform_field_kwargs values for this relation will be ignored.
In order to customize field options (see Field for a detailed list of options), use autoform_field_kwargs. This rtag takes a dictionary as argument, which will be given to the field’s constructor.
You can then put in that dictionary any arguments supported by the field class. For instance:
# Change the content of the combobox. Here `ticket_done_in_choices` is a
# function which returns a list of elements to populate the combobox
autoform_field_kwargs.tag_subject_of(('Ticket', 'done_in', '*'),
                                     {'sort': False,
                                      'choices': ticket_done_in_choices})
# Force usage of a TextInput widget for the expression attribute of
# RQLExpression entities
autoform_field_kwargs.tag_attribute(('RQLExpression', 'expression'),
                                    {'widget': fw.TextInput})
Note
the widget argument can be either a class or an instance (the later case being convenient to pass the Widget specific initialisation options)
Let’s have a look at the ticket_done_in_choices function given to the choices parameter of the relation tag that is applied to the (‘Ticket’, ‘done_in’, ‘*’) relation definition, as it is both typical and sophisticated enough. This is a code snippet from the tracker cube.
The Ticket entity type can be related to a Project and a Version, respectively through the concerns and done_in relations. When a user is about to edit a ticket, we want to fill the combo box for the done_in relation with values pertinent with respect to the context. The important context here is:
from cubicweb.web import formfields

def ticket_done_in_choices(form, field):
    entity = form.edited_entity
    # first see if its specified by __linkto form parameters
    linkedto = form.linked_to[('done_in', 'subject')]
    if linkedto:
        return linkedto
    # it isn't, get initial values
    vocab = field.relvoc_init(form)
    veid = None
    # try to fetch the (already or pending) related version and project
    if not entity.has_eid():
        peids = form.linked_to[('concerns', 'subject')]
        peid = peids and peids[0]
    else:
        peid = entity.project.eid
        veid = entity.done_in and entity.done_in[0].eid
    if peid:
        # we can complete the vocabulary with relevant values
        rset = ...  # RQL query elided in the source (called with {...}, 'p')
        vocab += [(v.view('combobox'), v.eid) for v in rset.entities()
                  if rschema.has_perm(form._cw, 'add', toeid=v.eid)
                  and v.eid != veid]
    return vocab
The first thing we have to do is fetch potential values from the __linkto url parameter that is often found in entity creation contexts (the creation action provides such a parameter with a predetermined value; for instance in this case, ticket creation could occur in the context of a Version entity). The RelationField field class provides a relvoc_linkedto() method that gets a list suitably filled with vocabulary values.
linkedto = field.relvoc_linkedto(form)
if linkedto:
    return linkedto
Then, if no __linkto argument was given, we must prepare the vocabulary with an initial empty value (because done_in is not mandatory, we must allow the user to not select a verson) and already linked values. This is done with the relvoc_init() method.
vocab = field.relvoc_init(form)
But then, we have to give more: if the ticket is related to a project, we should provide all the non published versions of this project (Version and Project can be related through the version_of relation). Conversely, if we do not know yet the project, it would not make sense to propose all existing versions as it could potentially lead to incoherences. Even if these will be caught by some RQLConstraint, it is wise not to tempt the user with error-inducing candidate values.
The “ticket is related to a project” part must be decomposed as:
Note
the last situation could happen in several ways, but of course in a polished application, the paths to ticket creation should be controlled so as to avoid a suboptimal end-user experience
Hence, we try to fetch the related project.
veid = None if not entity.has_eid(): peids = form.linked_to[('concerns', 'subject')] peid = peids and peids[0] else: peid = entity.project.eid veid = entity.done_in and entity.done_in[0].eid
We distinguish between entity creation and entity modification using the Entity.has_eid() method, which returns False on creation. At creation time the only way to get a project is through the __linkto parameter. Notice that we fetch the version in which the ticket is done_in if any, for later.
Note
the implementation above assumes that if there is a __linkto parameter, it is only about a project. While it makes sense most of the time, it is not an absolute. Depending on how an entity creation action action url is built, several outcomes could be possible there
If the ticket is already linked to a project, fetching it is trivial. Then we add the relevant version to the initial vocabulary.
if peid:}) vocab += [(v.view('combobox'), v.eid) for v in rset.entities() if rschema.has_perm(form._cw, 'add', toeid=v.eid) and v.eid != veid]
Warning
we have to defend ourselves against lack of a project eid. Given the cardinality of the concerns relation, there must be a project, but this rule can only be enforced at validation time, which will happen of course only after form subsmission
Here, given a project eid, we complete the vocabulary with all unpublished versions defined in the project (sorted by number) for which the current user is allowed to establish the relation.
Sometimes you want a form that is not related to entity edition. For those, you’ll have to handle form posting by yourself. Here is a complete example on how to achieve this (and more).
Imagine you want a form that selects a month period. There are no proper field/widget to handle this in CubicWeb, so let’s start by defining them:
# let's have the whole import list at the beginning, even those necessary for # subsequent snippets from logilab.common import date from logilab.mtconverter import xml_escape from cubicweb.view import View from cubicweb.predicates import match_kwargs from cubicweb.web import RequestError, ProcessFormError from cubicweb.web import formfields as fields, formwidgets as wdgs from cubicweb.web.views import forms, calendar class MonthSelect(wdgs.Select): """Custom widget to display month and year. Expect value to be given as a date instance. """ def format_value(self, form, field, value): return u'%s/%s' % (value.year, value.month) def process_field_data(self, form, field): val = super(MonthSelect, self).process_field_data(form, field) try: year, month = val.split('/') year = int(year) month = int(month) return date.date(year, month, 1) except ValueError: raise ProcessFormError( form._cw._('badly formated date string %s') % val) class MonthPeriodField(fields.CompoundField): """custom field composed of two subfields, 'begin_month' and 'end_month'. It expects to be used on form that has 'mindate' and 'maxdate' in its extra arguments, telling the range of month to display. 
""" def __init__(self, *args, **kwargs): kwargs.setdefault('widget', wdgs.IntervalWidget()) super(MonthPeriodField, self).__init__( [fields.StringField(name='begin_month', choices=self.get_range, sort=False, value=self.get_mindate, widget=MonthSelect()), fields.StringField(name='end_month', choices=self.get_range, sort=False, value=self.get_maxdate, widget=MonthSelect())], *args, **kwargs) @staticmethod def get_range(form, field): mindate = date.todate(form.cw_extra_kwargs['mindate']) maxdate = date.todate(form.cw_extra_kwargs['maxdate']) assert mindate <= maxdate _ = form._cw._ months = [] while mindate <= maxdate: label = '%s %s' % (_(calendar.MONTHNAMES[mindate.month - 1]), mindate.year) value = field.widget.format_value(form, field, mindate) months.append( (label, value) ) mindate = date.next_month(mindate) return months @staticmethod def get_mindate(form, field): return form.cw_extra_kwargs['mindate'] @staticmethod def get_maxdate(form, field): return form.cw_extra_kwargs['maxdate'] def process_posted(self, form): for field, value in super(MonthPeriodField, self).process_posted(form): if field.name == 'end_month': value = date.last_day(value) yield field, value
Here we first define a widget that will be used to select the beginning and the end of the period, displaying months like ‘<month> YYYY’ but using ‘YYYY/mm’ as actual value.
We then define a field that will actually hold two fields, one for the beginning and another for the end of the period. Each subfield uses the widget we defined earlier, and the outer field itself uses the standard IntervalWidget. The field adds some logic:
Now, we can define a very simple form:
class MonthPeriodSelectorForm(forms.FieldsForm): __regid__ = 'myform' __select__ = match_kwargs('mindate', 'maxdate') form_buttons = [wdgs.SubmitButton()] form_renderer_id = 'onerowtable' period = MonthPeriodField()
where we simply add our field, set a submit button and use a very simple renderer (try others!). Also we specify a selector that ensures form will have arguments necessary to our field.
Now, we need a view that will wrap the form and handle post when it occurs, simply displaying posted values in the page:
class SelfPostingForm(View): __regid__ = 'myformview' def call(self): mindate, maxdate = date.date(2010, 1, 1), date.date(2012, 1, 1) form = self._cw.vreg['forms'].select( 'myform', self._cw, mindate=mindate, maxdate=maxdate, action='') try: posted = form.process_posted() self.w(u'<p>posted values %s</p>' % xml_escape(repr(posted))) except RequestError: # no specified period asked pass form.render(w=self.w, formvalues=self._cw.form)
Notice usage of the process_posted() method, that will return a dictionary of typed values (because they have been processed by the field). In our case, when the form is posted you should see a dictionary with ‘begin_month’ and ‘end_month’ as keys with the selected dates as value (as a python date object).
Note
Fields are used to control what’s edited in forms. They makes the link between something to edit and its display in the form. Actual display is handled by a widget associated to the field.
Let first see the base class for fields:
This class is the abstract base class for all fields. It hold a bunch of attributes which may be used for fine control of the behaviour of a concrete field.
Attributes
All the attributes described below have sensible default value which may be overriden by named arguments given to field’s constructor.
Generic methods
Return the ‘qualified name’ for this field, e.g. something suitable to use as HTML input name. You can specify a suffix that will be included in the name when widget needs several inputs.
Return the HTML DOM identifier for this field, e.g. something suitable to use as HTML input id. You can specify a suffix that will be included in the name when widget needs several inputs.
Fields may be composed of other fields. For instance the RichTextField is containing a format field to define the text format. This method returns actual fields that should be considered for display / edition. It usually simply return self.
Form generation methods
Method called at form initialization to trigger potential field initialization requiring the form instance. Do nothing by default.
Return the correctly typed value for this field in the form context.
Post handling methods
Return an iterator on (field, value) that has been posted for field returned by actual_fields().
Return the correctly typed value posted for this field.
Now, you usually don’t use that class but one of the concrete field classes described below, according to what you want to edit.
Use this field to edit unicode string (String yams type). This field additionally support a max_length attribute that specify a maximum size for the string (None meaning no limit).
Unless explicitly specified, the widget for this field will be:
Use this field to edit password (Password yams type, encoded python string).
Unless explicitly specified, the widget for this field will be a PasswordInput.
Use this field to edit integers (Int yams type). Similar to BigIntField but set max length when text input widget is used (the default).
Use this field to edit big integers (BigInt yams type). This field additionally support min and max attributes that specify a minimum and/or maximum value for the integer (None meaning no boundary).
Unless explicitly specified, the widget for this field will be a TextInput.
Use this field to edit floats (Float yams type). This field additionally support min and max attributes as the IntField.
Unless explicitly specified, the widget for this field will be a TextInput.
Use this field to edit booleans (Boolean yams type).
Unless explicitly specified, the widget for this field will be a Radio with yes/no values. You can change that values by specifing choices.
Use this field to edit date (Date yams type).
Unless explicitly specified, the widget for this field will be a JQueryDatePicker.
Use this field to edit datetime (Datetime yams type).
Unless explicitly specified, the widget for this field will be a JQueryDateTimePicker.
Use this field to edit a timezone-aware datetime (TZDatetime yams type). Note the posted values are interpreted as UTC, so you may need to convert them client-side, using some javascript in the corresponding widget.
Use this field to edit time (Time yams type).
Unless explicitly specified, the widget for this field will be a JQueryTimePicker.
Use this field to edit time interval (Interval yams type).
Unless explicitly specified, the widget for this field will be a TextInput.
This compound field allow edition of text (unicode string) in a particular format. It has an inner field holding the text format, that can be specified using format_field argument. If not specified one will be automaticall generated.
Unless explicitly specified, the widget for this field will be a FCKEditor or a TextArea. according to the field’s format and to user’s preferences.
This compound field allow edition of binary stream (Bytes yams type). Three inner fields may be specified:
Unless explicitly specified, the widget for this field will be a FileInput. Inner fields, if any, will be added to a drop down menu at the right of the file input.
This field shouldn’t be used directly, it’s designed to hold inner fields that should be conceptually groupped together.
Use this field to edit a relation of an entity.
Unless explicitly specified, the widget for this field will be a Select.
This function return the most adapted field to edit the given relation (rschema) where the given entity type (eschema) is the subject or object (role).
The field is initialized according to information found in the schema, though any value can be explicitly specified using kwargs.
Note
A widget is responsible for the display of a field. It may use more than one HTML input tags. When the form is posted, a widget is also reponsible to give back to the field something it can understand.
Of course you can not use any widget with any field...
The abstract base class for widgets.
Attributes
Here are standard attributes of a widget, that may be set on concrete class to override default behaviours:
Also, widget instances takes as first argument a attrs dictionary which will be stored in the attribute of the same name. It contains HTML attributes that should be set in the widget’s input tag (though concrete classes may ignore it).
Form generation methods
Called to render the widget for the given field in the given form. Return a unicode string containing the HTML snippet.
You will usually prefer to override the _render() method so you don’t have to handle addition of needed javascript / css files.
This is the method you have to implement in concrete widget classes.
Return the current string values (i.e. for display in an HTML string) for the given field. This method returns a list of values since it’s suitable for all kind of widgets, some of them taking multiple values, but you’ll get a single value in the list in most cases.
Those values are searched in:
Values found in 1. and 2. are expected te be already some ‘display value’ (eg a string) while those found in 3. and 4. are expected to be correctly typed value.
3 and 4 are handle by the typed_value() method to ease reuse in concrete classes.
Return HTML attributes for the widget, automatically setting DOM identifier and tabindex when desired (see setdomid and settabindex attributes)
Post handling methods
Return process posted value(s) for widget and return something understandable by the associated field. That value may be correctly typed or a string that the field may parse.
Simple <input type=’hidden’> for hidden value, will return a unicode string.
Simple <input type=’text’>, will return a unicode string.
Simple <input type=’email’>, will return a unicode string.
Simple <input type=’password’>, will return a utf-8 encoded string.
You may prefer using the PasswordInput widget which handles password confirmation.
Simple <input type=’file’>, will return a tuple (name, stream) where name is the posted file name and stream a file like object containing the posted file data.
Simple <input type=’button’>, will return a unicode string.
If you want a global form button, look at the Button, SubmitButton, ResetButton and ImgButton below.
Simple <textarea>, will return a unicode string.
Simple <select>, for field having a specific vocabulary. Will return a unicode string, or a list of unicode strings.
Simple <input type=’checkbox’>, for field having a specific vocabulary. One input will be generated for each possible value.
You can specify separator using the separator constructor argument, by default <br/> is used.
Simle <input type=’radio’>, for field having a specific vocabulary. One input will be generated for each possible value.
You can specify separator using the separator constructor argument, by default <br/> is used.
<input type=’text’> + javascript date/time picker for date or datetime fields. Will return the date or datetime as a unicode string.
Compound widget using JQueryDatePicker and JQueryTimePicker widgets to define a date and time picker. Will return the date and time as python datetime instance.
Use jquery.ui.datepicker to define a date picker. Will return the date as a unicode string.
You can couple DatePickers by using the min_of and/or max_of parameters. The DatePicker identified by the value of min_of(/max_of) will force the user to choose a date anterior(/posterior) to this DatePicker.
example: start and end are two JQueryDatePicker and start must always be before end
affk.set_field_kwargs(etype, ‘start_date’, widget=JQueryDatePicker(min_of=’end_date’)) affk.set_field_kwargs(etype, ‘end_date’, widget=JQueryDatePicker(max_of=’start_date’))
That way, on change of end(/start) value a new max(/min) will be set for start(/end) The invalid dates will be gray colored in the datepicker
Use jquery.timePicker to define a time picker. Will return the time as a unicode string.
FCKEditor enabled <textarea>, will return a unicode string containing HTML formated text.
Simple <div> based ajax widget, requiring a wdgtype argument telling which javascript widget should be used.
<input type=’text’> based ajax widget, taking a autocomplete_initfunc argument which should specify the name of a method of the json controller. This method is expected to return allowed values for the input, that the widget will use to propose matching values as you type.
<input type=’password’> and a confirmation input. Form processing will fail if password and confirmation differs, else it will return the password as a utf-8 encoded string.
Custom widget to display an interval composed by 2 fields. This widget is expected to be used with a CompoundField containing the two actual fields.
Exemple usage:
class MyForm(FieldsForm): price = CompoundField(fields=(IntField(name='minprice'), IntField(name='maxprice')), label=_('price'), widget=IntervalWidget())
Select widget for IntField using a vocabulary with bit masks as values.
See also BitFieldFacet.
Custom widget to display a set of fields grouped together horizontally in a form. See IntervalWidget for example usage.
Custom widget to edit separatly a URL path / query string (used by default for the path attribute of Bookmark entities).
It deals with url quoting nicely so that the user edit the unquoted value.
Those classes are not proper widget (they are not associated to field) but are used as form controls. Their API is similar to widgets except that field argument given to render() will be None.
Simple <input type=’button’>, base class for global form buttons.
Note that label is a msgid which will be translated at form generation time, you should not give an already translated string.
Simple <input type=’submit’>, main button to submit a form
Simple <input type=’reset’>, main button to reset a form. You usually don’t want to use this.
Simple <img> wrapped into a <a> tag with href triggering something (usually a javascript call).
Besides the automagic form we’ll see later, there are roughly two main form classes in CubicWeb:
This is the base class for fields based forms.
Attributes
The following attributes may be either set on subclasses or given on form selection to customize the generated form:
Generic methods
Return field with the given name and role.
Raise FieldNotFound if the field can’t be found.
Return a list of fields with the given name and role.
Form construction methods
Remove the given field.
Append the given field.
Insert the given field before the field of given name and role.
Insert the given field after the field of given name and role.
Append an hidden field to the form. name, value and extra keyword arguments will be given to the field constructor. The inserted field is returned.
Form rendering methods
Render this form, using the renderer given as argument or the default according to form_renderer_id. The rendered form is returned as a unicode string.
formvalues is an optional dictionary containing values that will be considered as field’s value.
Extra keyword arguments will be given to renderer’s render() method.
Form posting methods
Once a form is posted, you can retrieve the form on the controller side and use the following methods to ease processing. For “simple” forms, this should looks like :
form = self._cw.vreg['forms'].select('myformid', self._cw) posted = form.process_posted() # do something with the returned dictionary
Notice that form related to entity edition should usually use the edit controller which will handle all the logic for you.
use this method to process the content posted by a simple form. it will return a dictionary with field names as key and typed value as associated value.
return a generator on field that has been modified by the posted form.
This class is designed for forms used to edit some entities. It should handle for you all the underlying stuff necessary to properly work with the generic EditController.
As you have probably guessed, choosing between them is easy. Simply ask you the question ‘I am editing an entity or not?’. If the answer is yes, use EntityFieldsForm, else use FieldsForm.
Actually there exists a third form class:
Form composed of sub-forms. Typical usage is edition of multiple entities at once.
but you’ll use this one rarely.
Note
Form renderers are responsible to layout a form to HTML.
Here are the base renderers available:
This is the ‘default’ renderer, displaying fields in a two columns table:
The ‘htable’ form renderer display fields horizontally in a table:
This is a specific renderer for the multiple entities edition form (‘muledit’).
Each entity form will be displayed in row off a table, with a check box for each entities to indicate which ones are edited. Those checkboxes should be automatically updated when something is edited.
This is the ‘default’ renderer for entity’s form.
You can still use form_renderer_id = ‘base’ if you want base FormRenderer layout even when selected for an entity.
This is a specific renderer for entity’s form inlined into another entity’s form. | https://docs.cubicweb.org/book/devweb/edition/form.html | CC-MAIN-2017-13 | refinedweb | 4,401 | 50.02 |
* Al Viro <viro@ZenIV.linux.org.uk> wrote:

> On Fri, Apr 24, 2009 at 08:13:12AM +0100, Al Viro wrote:
>
> > AFAICS, both CIFS_SB(sb) and ->tcon are assign-once, so lock_kernel() should
> > really go here (if it can't be removed completely, of course, but that's up
> > to CIFS folks). Applied with such modification.
>
> commit 208f6be8f9244f4a3e8de7b4c6ca97069698303a in
> git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6.git/
>
> if you want to see the version after that change (or wait for
> linux-next to pick it).

You've not replied to my request (attached below) to put these trivial
BKL-pushdown bits into a separate branch/tree and not into the VFS tree.
You've now mixed that commit with other VFS changes.

Had it been in a separate branch, and had we tested it, Linus could have
pulled the trivial BKL pushdown bits out of normal merge order as well.
That is not possible now.

You've also not explained why you have done it this way. It would cost
you almost nothing to apply these bits into a separate branch and merge
that branch into your main tree. Lots of other maintainers are doing that.

So if you've done this by mistake, i'd like to ask you to reconsider and
put these bits into a separate, stable-commit-ID branch. If you've done
this intentionally, i'd like you to explain the reasons for it, instead
of just doing it silently without explanation.

Anyway, if there's no resolution, i'll apply Alessio's fixes with a
different commit ID, to not hold up the rather useful work that is going
on in the kill-the-BKL tree. Later on i'll have to rebase that portion
of the tree to avoid duplicate commit IDs.

I just wanted to put it on the record why i have to do that rebase.

Thanks,

	Ingo

----- Forwarded message from Ingo Molnar <mingo@elte.hu> -----

Date: Thu, 23 Apr 2009 23:32:49 +0200
From: Ingo Molnar <mingo@elte.hu>
To: Al Viro <viro@ZenIV.linux.org.uk>
Subject: Re: [PATCH 0/5 -tip] umount_begin BKL pushdown
Cc: Alessio Igor Bogani <abogani@texware.it>, Jonathan Corbet <corbet@lwn.net>,
    Frédéric Weisbecker <fweisbec@gmail.com>, Peter Zijlstra <a.p.zijlstra@chello.nl>,
    LKML <linux-kernel@vger.kernel.org>, LFSDEV <linux-fsdevel@vger.kernel.org>

* Al Viro <viro@ZenIV.linux.org.uk> wrote:

> On Thu, Apr 23, 2009 at 09:12:00PM +0200, Alessio Igor Bogani wrote:
> > Push the BKL acquisition from vfs to every specific filesystems
> > with hope that it can be eliminated in a second moment.
> >
> > The first 4 patches add BKL lock into umount_begin() functions
> > (for the filesystems that have this handler). The last one
> > remove lock_kernel()/unlock_kernel() from fs/namespace.c (the
> > only point that invoke umount_begin() funtcions).
>
> I'd rather collapse all these patches together; no point doing
> that per-fs (for all 4 of them). And CIFS side is bogus.
>
> Another thing: -tip is no place for that. I can put that into VFS
> tree, provided that comments above are dealt with.

When that happens, could you please put it into a separate, append-only
branch i could pull (after some initial test-time) into tip:kill-the-BKL?

Thanks,

	Ingo
Scala backend engineer Sinisa Louc has many years of experience in the industry, and with Scala as his main programming language he has a keen interest in staying up to date. This is his interesting article on what’s new in Scala 3, which is not far off now.
"What’s new in Scala 3
Here’s a short digest for all those who are still unfamiliar with the changes and improvements that are coming in Dotty / Scala 3. I’m not presenting anything that’s not already been presented elsewhere; I’m merely taking a bunch of sources — Dotty documentation, conference talks, blogposts, Google groups, SIPs, pull requests, etc. — and combining them into a text that’s (hopefully) easy to read, not diving too deep into any of the upcoming changes, but still providing enough detail to get you all excited about them.
Intersections and unions
Type system is going through some major face-lifting.
First of all, we’re getting true union and intersection types. Now you might be thinking “what’s the big deal, intersection types are nothing but A with B, and we modeled union types without problems with constructs such as Either or subtyping hierarchies (e.g. Apple and Banana extend Fruit)”. That is true, we had those mechanisms and we will still have them, but intersection and union types are slightly different.
Let’s start with union types. A union type, expressed as A | B, represent a “true” union, as opposed to a disjoint union (also known as tagged union) that we have with Either, scalaz disjunction etc. Disjoint union clearly separates between the “left” and “right” case, but standard non-disjoint union doesn’t. This means that there’s no Left and Right; you can literally do this:
val a: A | B = new A()
val b: A | B = new B()
They are commutative, so A | B is the same as B | A.
To be perfectly honest, there are probably not going to be extremely many places where you will actually want to use a union type instead of Either, Option etc., but that’s because the latter are monads which come equipped with handy functions such as map, flatMap, filter etc. But unions will definitely come in handy in domain modeling, where we often resorted to the aforementioned subtyping hierarchies. This will be super-handy if you have a slightly complicated case of intersections on middle levels of the hierarchy; for example, let’s say we have traits Dog, Bird and Bat. Then there are traits CanSee (only extended by Dog and Bird), CanFly (only Bird and Bat) and Mammal (only Dog and Bat). Or we don’t want them to sound so general because those two types are all that will ever extend them, so we name them more specifically DogOrBird, DogOrBat, BirdOrBat. Super clumsy. With Scala 3 you’ll be able to simply state your type as Dog | Bird without those intermediate types in the hierarchy.
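A minimal sketch of the idea, using illustrative names (Labrador, Sparrow, describe are not from the article): a single parameter of union type covers both cases without any intermediate DogOrBird trait.

```scala
// Union types as described above; the animal traits are illustrative.
trait Dog { def name: String }
trait Bird { def name: String }

case class Labrador(name: String) extends Dog
case class Sparrow(name: String) extends Bird

// No intermediate DogOrBird trait is needed: the union covers both cases,
// and a runtime type test recovers the concrete side.
def describe(animal: Dog | Bird): String = animal match {
  case d: Dog  => s"${d.name} is a dog"
  case b: Bird => s"${b.name} is a bird"
}
```

Unlike Either, there is no Left/Right wrapping at the call site; you pass a Labrador or a Sparrow directly.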
Intersection types (expressed as A & B) represent a dual concept to union types, and just like with union types, there are already similar constructs present in Scala. We can already mix in traits B and C into some class/trait/object A, resulting in intersection type “A with B with C”. Main difference between such mixins and the intersection types we’re getting in Scala 3 is the commutativity: A with B is not the same as B with A, at least from the type system perspective, while A & B and B & A are the same thing and can be used interchangeably. Unlike disjoint unions and subclass hierarchies which still have advantages in certain use cases over Scala 3 unions, intersection types are basically commutative versions of standard mixins types. This means that “A with B with C” syntax will be deprecated in the future.
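The commutativity is easy to demonstrate with a sketch (trait and class names here are made up for illustration): both orders of the intersection accept the same value.

```scala
// Intersection types: A & B and B & A are the same type.
trait Resettable { def reset(): Unit }
trait Growable  { def grow(): Unit }

class Counter extends Resettable with Growable {
  var n = 0
  def reset(): Unit = n = 0
  def grow(): Unit = n += 1
}

// Either order of the intersection accepts a Counter.
def growTwice(x: Resettable & Growable): Unit = { x.grow(); x.grow() }
def resetIt(x: Growable & Resettable): Unit = x.reset()
```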
Trait parameters
As you know, traits cannot have parameters; we have abstract classes for that. However, abstract classes cannot be mixed into different parts of the class hierarchy.
Well, now the traits are getting parameters too, which means syntax like this:
trait Foo(val s: String) { ... }
This could lead us to the “diamond problem” in class hierarchies, e.g. both Foo(“a”) and Foo(“b”) have been mixed in at some point. That would actually not compile. Rules are as follows:
- Only classes can pass arguments to their parent traits; traits cannot pass arguments to traits.
- When a class C extends a parameterized trait T, it must provide the arguments to T, except if C has a superclass that also extends T; in that case, the superclass must provide the arguments, and C cannot.
This has been described quite nicely in the associated SIP. I’m pretty sure people will come up with various edge cases when using trait parameters, but let’s deal with those when the time comes. Birds-eye view of the feature looks pretty promising.
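The two rules above can be sketched as follows (Greeting and its subclasses are illustrative names, not from the SIP):

```scala
// A parameterized trait: the extending *class* supplies the argument.
trait Greeting(val name: String) {
  def msg: String = s"Hello, $name"
}

// EnglishGreeting extends Greeting directly, so it must pass the argument.
class EnglishGreeting extends Greeting("world")

// FormalGreeting has a superclass that already extends Greeting,
// so it must NOT (and cannot) pass the argument again.
class FormalGreeting extends EnglishGreeting {
  override def msg: String = s"Good day, $name"
}
```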
Function types
There are two main big things coming up regarding function types.
First, we are getting dependent function types. This means having types such as (a: A) => a.Foo. We already have dependent methods, but now we’re getting the possibility of turning such a method into a function, which is commonplace in Scala, but was impossible for methods whose return type was path dependent on the input type. Now we can do it:
def fooMethod(a: A): a.Foo = a.key
val fooFunction: (a: A) => a.Foo = fooMethod
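A self-contained sketch of the same idea (Container, Elem and the helper names are illustrative): the function value's result type depends on its argument.

```scala
// A type member whose concrete type varies per value.
trait Container {
  type Elem
  def value: Elem
}

// A helper producing a Container whose Elem is known to be Int.
def intContainer(i: Int): Container { type Elem = Int } = new Container {
  type Elem = Int
  def value: Elem = i
}

// A dependent *method*: the result type mentions the parameter.
def extract(c: Container): c.Elem = c.value

// In Scala 3 it can also be a *function value* with a dependent function type.
val extractFn: (c: Container) => c.Elem = extract
```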
Second most important function types feature, and I find this one personally very exciting, are implicit function types. There has been some confusion over this, so let me try to clear it up. Let’s take the following implicit function type:
type MyFunction[B] = implicit A => B
This *does not* have the semantics of “implicit conversion from A to B”. Think of it like a “function of type A => B, where A is implicit”. Like in other scenarios with implicits, this means that if implicit value of type A is found in the scope, it will be passed; if no implicit A is in scope, it needs to be passed explicitly or the compilation will fail.
So given some method
def foo(f: Foo)(implicit a: A): B = ???
we can rewrite that as:
type MyFunction = implicit A => B
def foo(f: Foo): MyFunction
This allows us to define foo as a *function value*. Without implicit function types, we can only define it as a method; right now in Scala 2, having implicit parameters in a method automatically means that it cannot be expressed as a function.
Implicit function types allow us some very handy refactoring. Unfortunately, I don’t have nearly enough space here to do full justice to implicit function types in terms of explaining their full usefulness. It would require a blogpost on its own, which I might even write one of these days. Several interesting use cases (e.g. removing boilerplate in the tagless final pattern, refactoring Dotty compiler context passing, some implementations of SQL queries etc.) have been presented in this very interesting talk by Olivier Blanvillain at ScalaDays in Berlin, so if at the time of reading this a video of it is on YT, I really recommend you to watch it.
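One small caveat before a sketch: in the Scala 3 that eventually shipped, the pre-release `implicit A => B` syntax became the context function type `A ?=> B` (with `summon` to access the context value). The semantics are as described above; the Config/Configured names below are illustrative.

```scala
case class Config(indent: Int)

// The article's `implicit Config => A` became `Config ?=> A` in released Scala 3.
type Configured[A] = Config ?=> A

// Inside the body, the implicit Config is available and can be summoned.
def render(s: String): Configured[String] =
  " " * summon[Config].indent + s

// At the call site, the Config is supplied implicitly from the scope.
given Config = Config(2)

val out: String = render("hello")
```

Because `render` is now a value of a function type (not just a method with an implicit parameter list), it composes like any other function, which is exactly what makes patterns like tagless final and context passing so much lighter.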
Generic tuples
This one is still open at the time of writing this, but it’s happening for sure in Scala 3. Tuples are no longer going to be implemented via those awkward TupleN traits that (quite arbitrarily) end at Tuple22. Instead, they are going to be implemented similarly to how Lists are implemented, with their recursive-friendly head :: tail structure, which means that they are basically becoming shapeless HList.
Even though this means that the 22 limit is gone, that’s not the main advantage here (if you’re using a tuple with 22+ values, you should probably reconsider your domain design). Main improvement comes in the way we treat the tuples themselves, because nested tuples can now be treated as flattened; (a, b, c) will be exactly the same as (a, (b, (c, ()))). This allows for some nice generic programming similarly to what we can do with HLists, such as mapping over them with monomorphic or even polymorphic functions (the latter requires some shapeless trickery; check out these excellent blogposts by Miles Sabin).
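In the Scala 3 that shipped, the head :: tail structure is spelled with `*:` (the tuple cons) and `EmptyTuple`; a quick sketch:

```scala
// Generic tuples: built from *: (cons) and EmptyTuple, analogous to :: and Nil.
val t: (Int, String, Boolean) = (1, "a", true)

// The flat form and the head/tail form denote the same type.
val same: Int *: String *: Boolean *: EmptyTuple = t

// List-like generic operations, with no 22-element limit:
val bigger: (Double, Int, String, Boolean) = 0.5 *: t
```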
Opaque types
Here’s a short explanation of a newtype for those who aren’t familiar with it: it’s similar to a type alias, but rather than being an alias just for the programmer, it’s also an alias for the compiler, meaning that the compiler actually differentiates the two. Let’s take the following type alias as an example:
type FirstName = String
This gives us the possibility of using “FirstName” as a type,
which can make the code more understandable. However, nothing prevents us from actually passing a “normal” String wherever FirstName is required. Even worse, if we have another type alias, e.g. LastName = String, nothing prevents us from accidentally passing a LastName where FirstName is expected and vice versa, since from the compiler’s point of view they are all just Strings.
If we want to achieve the behaviour where such usage would be prevented by the compiler, we need to resort to value classes. This means writing something like this:
class FirstName(val underlying: String) extends AnyVal class LastName(val underlying: String) extends AnyVal
This is a bit boilerplate-y and has a slight performance hit due to boxing/unboxing. Adding the extends AnyVal part takes care of some of that performance penalty, but not in all use cases. Plus, our code looks awful!
There’s a library that allows defining them in a slightly more elegant way:
@newtype case class FirstName(underlying: String)
but this still feels like the same old boilerplate with some new syntax. Also, these newtypes must be defined in an object or package object.
Scala 3 is introducing opaque types.
opaque type FirstName = String
People sometimes use the terms newtype and opaque type interchangeably. Others say that newtype is the one that wraps around the underlying type as we’ve seen earlier, while opaque types are defined “directly”. Others yet say that opaque types are a special kind of type alias that is created using a newtype. Don’t get confused by the terminology; concept is pretty much the same.
So yeah, opaque types give us exactly what we need. FirstName is a type on its own, and passing a value of type LastName where FirstName is expected won’t compile. There might be some resistance to introducing a new keyword, in which case the proposal is to use: new type FirstName = String. Either way, this will be a nice feature.
Type lambdas
We’re getting full language support for type lambdas without having to resort to ugly boilerplate or external libraries.
Let’s take a popular use case where F[_] is needed, but we want to pass a type constructor that takes two types (such as Map or Either), or in other words, we want to pass (﹡ → ﹡) → ﹡where ﹡→ ﹡is needed.
Instead of having to define a type lambda like this:
we will be able to simply saywe will be able to simply say
({ type T[A] = Map[Int, A] })#T
[A] => Map[Int, A]
Type parameters of lambdas will support variances and bounds, for example
[+A, B <: C] => Whatever[A, B, C]
Erased parameters
There are situations where we only need some parameter(s) in the signature, e.g. for evidence in generalized type constraints, and they are never used in the body itself. Unnecessary code will still be generated for such parameters, which can be prevented by using the keyword “erased”.
For example,
could and should be used ascould and should be used as
def foo[S, T](s: S, t: T)(implicit ev: S =:= T)
def foo[S, T](s: S, t: T)(implicit erased ev: S =:= T)
So whenever you have one or more parameters that are used only for type checking, using “erased” keyword will make your code more performant.
Enums
Yep, one of the clunkiest constructs in Scala gets a full redesign, which means replacing code like this:
object WeekDay extends Enumeration { type WeekDay = Value val Mon, Tue, Wed, Thu, Fri, Sat, Sun = Value }
with code like this:
enum WeekDay { case Mon, Tue, Wed, Thu, Fri, Sat, Sun }
They can be parameterized and can contain custom members. They will also have some handy methods already pre-defined.
Example:
enum Weekday(val index: Int) { private def next(i: Int) = (i + 1) % 7 private def prev(i: Int) = (i + 7) - 1 % 7 def nextDay = Weekday.enumValue(next(index)) def prevDay = Weekday.enumValue(prev(index)) case Mon extends Weekday(0) case Tue extends Weekday(1) ... }
Other
Here are some additional features and improvements that I find important as well, but they don’t really require separate paragraphs to be explained.
Multiversal equality:
Compiler will now match the types when comparing values and fail the compilation if there’s a mismatch, e.g. “foo” == 123. We can already model this via type classes (it’s actually done in several libraries), but with Scala 3 we’re getting the native language support.
Restricting implicit conversions:
Compiler will now require a language feature import not only when an implicit conversion is defined but also when it is applied.
Null safety:
This is based on the union types feature, which allows us to define the result type of scary operations that can throw a null pointer exception (useful for interoperability with Java) as Foo | null.
Extension clauses:
These will basically be a replacement for implicit classes, e.g.
extension StringOps for String { … }, but I didn’t dig too deep into this yet. Take a look at this PR if you want to find out more.
Removing unneeded constructs:
…such as early initializers, auto-tupling, existential types using forSome, automatic insertion of (), infix operators with multiple parameters, etc.
Conclusion
Scala 3 is going to be a big change, no doubt about that. I just hope that the upside of getting all these new exciting goodies will not be met with the downside of difficult switch from 2 to 3, which might result in a split community (such as when Python moved to the same major release). Tools are being developed in order to make the transitions and cross builds as painless as possible. One interesting thing that’s coming up is TASTY (Typed Abstract Syntax Trees; I guess “Y” is just making the acronym catchy). These trees contain all the information present in a Scala program, such as syntactic structure, info about types, positions etc. Once serialised into a TAST, it will be possible to port the Scala program to a different version, which will allow easy cross builds and solving the problems of binary (in)compatibility.
But let’s not ruin the moment talking about real-life problems :) as far as the language itself is concerned, Scala 3 is bringing a lot of improvements and interesting new features to the table, and I’m looking forward to it.
Thanks for reading."
This article was written by Sinisa Louc and posted to Medium.com. | https://www.signifytechnology.com/blog/2018/05/whats-new-in-scala-3-by-sinisa-louc | CC-MAIN-2019-35 | refinedweb | 2,521 | 58.01 |
SPI using Python3
There was that email from Onion to the user community where users were encouraged to switch to Python 3, because Python 2.7 will reach EOL soon.
So I went ahead and installed it (instructions at)
Next, I wanted to install the SPI module, to do some experiments with an APA-102 RGB LED strip. I found this page:
But installing the pyOnionSpi package resulted in pulling in also the python-base package, so I figure this is a Python 2.7 module. I can't find any trace of a corresponding package for Python 3. Does it exist? Because if it does not, and maybe other specialized modules for the Omega also aren't migrated, then there is good incentive to stick with Python 2.7.
- Lazar Demin (administrator):
@Pavils-Jurjans again, I would recommend using python-spidev instead of pyOnionSpi.
However, we haven't yet released a Python3 compatible spidev module. But it'll be coming soon!
Ok, thanks, I will use this python module for now, and stick with Python 2.7 until you release the Python3 version.
Hmm, I just tested python-spidev and it just gives bad output. The clock is uneven, and the data has only one instance of "1", followed by a pause, and is then left HIGH for a long time:
Here's the code I use for testing:
```python
import spidev

spi = spidev.SpiDev()
spi.open(32766, 1)
to_send = [0x01, 0x02, 0x03]
spi.xfer(to_send)
spi.close()
```
Meanwhile, pyOnionSpi, which appears to be based on the buggy spi-tool, at least gives me correct output:
```python
import onionSpi

# Due to the spi-tool bug, bus and device param values are swapped
spi = onionSpi.OnionSpi(1, 32766)
spi.write([0x01, 0x02, 0x03])
```
So I am not sure why you are suggesting to switch to something that doesn't appear to work at all.
- Lazar Demin (administrator):
Tested it many times internally, works well with displays and other devices.
It's important to remember that spi-tool and python-spidev are just userspace programs that make requests to the kernel to use the SPI hardware.
There isn't anything in python-spidev that can confuse the hardware enough that it will produce bad output.
What is the clock speed you're seeing with python-spidev? It's possible that python-spidev's default transfer rate (clock speed) is greater than the sampling rate of your logic analyzer, resulting in what looks like an uneven clock and bad data.
That actually happened to me the first time I used python-spidev
See the writebytes.py example for a pointer on changing the clock speed.
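The sampling-rate explanation above can be sanity-checked with a little arithmetic before blaming the driver: the analyzer needs to sample several times faster than the SPI clock to show a clean waveform. A rough sketch (the 4x oversampling factor is my own rule-of-thumb assumption, not anything from spidev or a particular analyzer):

```python
# Rough sanity check: can a logic analyzer resolve a given SPI clock?
# The 4x oversampling factor is a conservative assumption, not a spec.
def analyzer_can_resolve(sample_rate_hz, spi_clock_hz, oversample=4):
    return sample_rate_hz >= oversample * spi_clock_hz

# A 1 MHz analyzer cannot resolve a 2 MHz SPI clock,
# but it can resolve a 100 kHz one.
print(analyzer_can_resolve(1_000_000, 2_000_000))  # False
print(analyzer_can_resolve(1_000_000, 100_000))    # True
```

In practice the fix on the spidev side is to lower `spi.max_speed_hz` (as in the writebytes.py example) until the clock falls within the analyzer's range.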
Dammit, you're right! I should've checked the time scale on the crippled diagram.
But now I stumble on the next problem. I connected my APA-102 RGB LED strip to the GPIO pins 7 (CLK) and 8 (MOSI), and it appears that this connection causes some bad interference with Omega:
- At boot, about 20 of LEDs are lighting up and twinkling, showing that there is some data being sent out of SPI. Since I am not using the CS1 pin, my interpretation is that the data that is meant for SPI device 0, now is captured by the LED strip (it says here, that "..device 0 (CS0) is connected to the flash memory used by the Omega").
- Omega is working with random glitches, perhaps the communication with device 0 is being messed up.
Perhaps I am missing some simple fix here? Perhaps I should let the LED strip receive signal from MOSI only when CS1 is high?
- Lazar Demin (administrator):
@Pavils-Jurjans said in SPI using Pyton3:
Perhaps I should let the LED strip receive signal from MOSI only when CS1 is high?
Yeah, a small circuit that leaves MOSI floating unless CS1 is asserted will resolve your issue. Just be aware that CS is active-low in the default SPI mode.
Pardon my ignorance, but could you, please, point me to an example of such circuit? I am not sure I can come up with the best solution. Being a programmer rather than electronics engineer, all I can think of is throwing in an AND gate where CLK and (active-high mode) CS are inputs and the output goes to the LED strip.
- Lazar Demin (administrator):
@Pavils-Jurjans It would likely be a non-inverting CMOS circuit, where CS1 controls whether the Omega's MOSI is connected to your LED strip
@Lazar-Demin Thanks, I ordered a bunch of 74HC125D chips, and it seems to do the trick.
I set up only the CLK line to go through the buffer, though. But as long as the APA102 LED strip works, and the communication with flash memory (SPI device 0) is not disturbed, there's no need to make the similar setup for MOSI, too.
There's another problem surfacing now, though. The
spi.xfer(byte_array)command sends only 28 bytes correctly, and then the rest is just a bunch of zeros or 0xff. It is as though there's some internal buffer with size of 28 bytes.
```python
import spidev

spi = spidev.SpiDev()
spi.open(32766, 1)
spi.max_speed_hz = 100000  # 100 kHz

size = 50
spi_data = [0x00] * size
for offset in range(size):
    spi_data[offset] = offset

print spi_data
spi.xfer(spi_data)
spi.close()
```
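If the transfer length really is capped near 28 bytes (an assumption taken purely from the symptom above — I haven't confirmed it in the driver source), one workaround sketch is to split the payload into chunks. The helper below is pure Python; `send_in_chunks` only touches the hardware when actually called on an Omega with python-spidev installed:

```python
def chunked(data, chunk_size=28):
    """Split a list of bytes into chunks of at most chunk_size."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def send_in_chunks(data, bus=32766, device=1, chunk_size=28):
    # Hardware part: only runs on the Omega with python-spidev installed.
    import spidev
    spi = spidev.SpiDev()
    spi.open(bus, device)
    spi.max_speed_hz = 100000
    try:
        for chunk in chunked(data, chunk_size):
            spi.xfer(chunk)
    finally:
        spi.close()

# 50 bytes split as 28 + 22
print([len(c) for c in chunked(list(range(50)))])  # [28, 22]
```

One caveat: spidev's `xfer` releases chip-select between blocks, while `xfer2` holds it asserted. The APA-102 has no chip-select line, so chunking shouldn't bother it, but for CS-latched devices you'd want `xfer2` instead.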
Fixing the Exploration Phase in D&D 5E with the Journey System
The Forgotten Pillar: Exploration
It’s often said that Dungeons & Dragons can be divided into three ‘pillars’: combat, social/roleplay, and exploration.
The toolbox for combat is obviously the largest and most comprehensive, with entire systems built to ensure dynamic, exciting combats.
While roleplay is less well-defined, it also benefits from being a part of the game that is largely improvised. You can make Persuasion or Deception checks to get what you want, but it is often more fun to act these conversations out.
Exploration in D&D 5E has always felt like the red-headed stepchild. There are pages discussing environmental effects and providing random encounters, but it has never felt especially well-defined.
DMs often hand-wave exploration with a couple of random encounter rolls and a "…after several days/weeks/months, you reach your destination."
This hobbles an exploration-focused class like the ranger. With so many of its skills built around the exploration phase, they miss out on the combat or social buffs other classes get.
It also trivializes the perils of travel. The world feels smaller when you can just skip between cities. That terrifying wilderness you’ve described loses its teeth if it is just three random encounters and a long rest.
I didn’t want that to happen in my current game. I wanted the world to feel large and wild and terrifying.
With the current D&D exploration rules being a little uninspiring, I went further afield to find solutions.
Even something as mundane as a forest can be terrifying if you don’t skip the exploration phase. Image courtesy of Free Photos by Pixabay.
The Sorry State of Exploration in D&D 5E
At its core, the exploration phase tends to be a Survival check to find food and/or the route. This is followed by a few rolls on random encounter tables.
Combat or roleplay may ensue. You'll likely play out a single long rest between combats, pretending any others happened 'off screen'.
When Wizards of the Coast released Tomb of Annihilation, a big selling point for the module was that it featured a sizable hex-crawl that harkened back to old-school D&D.
To survive this hex crawl, players needed to manage both water and food. They had to contend with diseases, swollen rivers, impenetrable jungle, and fearsome foes.
It was all very promising, except for the fact that it just wasn’t fun to run. The resource management element proved more clunky than challenging and the distances players needed to travel meant countless, often repetitive encounters.
Worse, for those DMs who did put the effort into ensuring the hex-crawl was fun, there are multiple in-game elements that invalidated it:
- The Outlander background’s ability to always find enough food and water for a party rendered the resource management element moot;
- The Keen Mind feat’s ability to always find true north meant getting lost was no longer an issue;
- Spells like Tiny Hut and Create Food & Water removed the need to forage.
Now, none of these are insurmountable obstacles for a savvy DM, but it’s this kind of hand waving that makes an exploration based class like the ranger feel so useless.
While Xanathar's Guide to Everything did a good job of at least making rangers competitive (and Unearthed Arcana has some useful suggestions), Tomb of Annihilation's hex crawl still felt like a mixed message.
On one hand, we’re presented with a challenging, fleshed out setting to explore, and on the other, we’re given a bunch of features that immediately invalidate it.
Can you imagine if there was a feat that just allowed players to skip combats? Or a background that promised automatic success in a roleplay situation?
The Old d100 System of Exploration
I usually hate it when an article says it is fixing something in D&D when it really means improving it. You aren’t fixing True Strike by making it a bonus action – you’re improving upon something that already works.
I’ll make an exception for Exploration. It is broken and it does need fixing. You only need to read this post on reddit to see what a joyless, crunchy system it is RAW.
Now, a good DM will make the effort to pre-roll all of this and weave it into some kind of coherent series of encounters and obstacles. My Tomb of Annihilation games improved immeasurably when I learned from the Tomb of Annihilation Companion and prepared 30-45 days of travel ahead of time.
No more frantic rolling as the party discusses their plans. No more hectic pasting of tokens onto hastily uploaded battlemaps.
If you’re looking for the quickest possible fix for exploration – this is it. Figure out how long your party is likely to be traveling, pre-roll their travel days, tweak for flavour, and voila!
This doesn’t address the underlying issue, however. No matter how meticulously planned your series of encounters may be, it still boils down to random rolls, combats, and a lot of hand-waving of the details in between.
No matter how well-planned these sessions are, you’ll eventually have players (or yourself) wanting to hand wave it and just ‘get to the good stuff’.
What mysterious ruins lie along the way? Exploration lets you stumble upon sites not keyed to your adventure. Image courtesy of DarkWorkX.
Getting to the Good Stuff in Style
Looking for a solution to my exploration woes, I turned to Adventures in Middle Earth.
Even if you aren’t familiar with the now defunct game, you’ve likely read J.R.R Tolkien’s seminal work of fantasy or, failing that, seen the critically acclaimed Lord of the Rings movies.
You’ll remember there was quite a lot of walking. Like, a lot of walking.
The Adventures in Middle Earth system understands that the journey is very much a part of the action. It is not some tiresome bore that happens between the best parts – it is a crucial part of the story you’re telling. To this end, its Journeys system is a core part of the game.
A Journey is broken down into three parts:
- Embarkation: The mood of the party as it sets out;
- Journey: The events that occur along the way;
- Arrival: The state in which the party reaches their destination.
During the course of these three phases, four party members have specialized roles to help their party reach their destination safely. Those without an assigned role can assist:
- Guide: The one responsible for getting the party safely to their destination (Survival);
- Scout: The one ranging ahead checking for perils in their path (Stealth);
- Hunter: The one gathering additional food & water (Survival);
- Look Out: The one keeping an eye peeled for wandering monsters (Perception).
As written, these four roles rely almost entirely upon Wisdom-based skills. While this is most realistic, it's a rare party that has four PCs with decent stats in these particular skills. While your ranger might be in their element, being adept at all of the above, they're only able to fill one of them.
With that in mind, I’d suggest a minor tweak to the above.
My suggestion would be to also make Persuasion a requirement for your guide, as so much of their role affects the spirits of the company. For your Scout, why not make use of the oft-overlooked Investigation check? Hell, if your hunter doesn’t have the best Survival, I’d go so far as to suggest letting them instead use Nature.
The point is – you want the four roles to differ, but you also want the four roles to loosely match up with members of your party.
Across the course of the journey, the various tables involved (one for Embarkation, one for Journey, and one for Arrival) come into play, with each result requiring a roll from one or more of the above. Several even require the entire party to make checks to overcome obstacles.
Failures can lend points of Shadow (an Adventures in Middle Earth specific trait not unlike D&D’s optional Sanity score) or levels of Exhaustion, while successes remove one or both of the above. Success can also grant Inspiration to be used on future skill checks.
But Chris – these are all still just random tables!
You’re not wrong. This system does still boil down to rolling on a series of tables. However, these tables are not just arbitrary monster encounters or natural perils. The ability to gain levels of Exhaustion coupled with the fact there are no long or short rests during the Journey phase means that a series of bad results can have a party reaching their destination in an absolutely sorry state.
The Journey system turns a system that is essentially random rolls that lead into one of the other two pillars of Dungeons & Dragons into a challenging mini-game that has an impact on the other pillars, but is not just some lame duck delivery system.
Adapting the Journey System for D&D 5E
So, you want to fix your exploration phase by implementing the Journey system from Adventures in Middle Earth?
You could take the lazy route and transplant it wholesale into your games, ignoring the rules that don’t apply and adapting on the fly.
Or, you could use the modified system as presented below. I’m nice like that.
Step 1: Embarkation
The first step in the Journey phase has a huge impact on the phases that follow. A party leaving in high spirits will be better equipped to handle the rigors of travel, while one leaving on an empty-stomach is in for a bad time.
Determining Difficulty
The first step to designing a journey is determining its difficulty. This impacts on rolls made on each table, with a perilous journey more likely to generate an unfavourable result.
- Easy (1): Familiar terrain that is well-mapped. Travel on established roads.
- Moderate (2): The 'standard' for wilderness travel in relatively well-known environments.
- Hard (3): Unfamiliar areas of wilderness such as deep forests. I use this for Chult's mapped areas.
- Severe (4): Mountainous regions or trackless swamps. I use this for Chult's unmapped areas.
- Daunting (5): Areas held by dangerous foes or filled with peril. I use this for journeys in Chult's more far-flung corners.
Take a note of the number next to each of the above options as well, as this will influence rolls on future tables.
Assigning Roles
As a group, the party needs to decide on who will fill the key roles required by the journey. I’ve tweaked them as below:
- Guide: The Aragorn of the party. Key abilities include Survival and Persuasion.
- Scout: Roves ahead looking for potential pitfalls in the path. Key abilities include Stealth and Investigation.
- Hunter: Hunts and catches food for the party. Key abilities include Survival and Nature.
- Look-Out: Watches the party’s path for ambushes. Key ability is Perception.
It’s worth noting that the journey system does not require players to keep track of rations and water consumption. It is assumed they’re bringing enough to cover the bare essentials, with the Hunter’s role instead landing additional food.
Time to Depart!
With the above decided, it is time to hit the road!
The Guide will roll a d12 and add their Persuasion bonus to the result. The DM will then deduct the Difficulty rating from the result and check the Embarkation table for the result.
As an example, our ranger, Mitsu rolls a 4 on the d12. He adds his Persuasion bonus of +3 for a total of 7, and the DM then deducts 4 from the result, as this is a journey through dangerous territory. The end result is a 3.
Consulting the Embarkation table, he sees that the party has chosen a path that is more likely to be observed by their enemies. Unfortunately for them, they’ll have a harder time avoiding encounters on the road ahead.
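The Embarkation arithmetic is simple enough to express directly. This is just a sketch of the procedure described above (the function name is mine, not part of any published rules):

```python
import random

def embarkation_result(persuasion_bonus, difficulty_rating, d12=None):
    """d12 + the guide's Persuasion bonus - the journey's difficulty rating."""
    if d12 is None:
        d12 = random.randint(1, 12)  # roll the die if no value is supplied
    return d12 + persuasion_bonus - difficulty_rating

# Mitsu's roll from the example: d12 of 4, +3 Persuasion, difficulty 4
print(embarkation_result(3, 4, d12=4))  # 3
```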
Sample Embarkation Table
I’ve adapted the below Embarkation table from the one presented in Adventures in Middle Earth, which is obviously specific to that particular game.
1. An Ill Feeling. The party departs under a cloud of doubt. When rolling on the Journey table, add an additional +2 to all results rolled. All checks made to determine the initial outcome of encounters are made at disadvantage.
2. Dampened Spirits. The party's departure is marred by foul moods and restlessness. During the journey, each player makes ability checks at disadvantage until they succeed, at which point their spirits lift and the gloom departs.
3. A Perilous Path. The party's path takes them through territory where they are more likely to encounter enemies. When rolling on the Journey table, add an additional +1 to all results rolled. The first check made during encounters on the journey is made at disadvantage.
4. Inaccurate Maps. The party's maps or information are out of date, forcing them to travel through more difficult terrain than they had anticipated. For the course of this journey, consider the terrain one point more difficult than it is.
5. Foul Weather. The party leaves in less than ideal conditions, drenched by sheets of icy rain or sweltering in intense heat. Each player must make a Constitution saving throw against a DC of 10 + the journey's difficulty rating or begin the journey with a level of exhaustion.
6. Poorly Provisioned. The party departs without adequate provisions (or their provisions spoil). During the journey, they are constantly battling hunger and illness. When rolling ability checks during the journey, each player must deduct 1d4 from the result rolled.
7. Well Provisioned. The party departs with full bellies and superb provisions for the road ahead. For the duration of the journey, each player may add 1d4 to any ability check they are required to make.
8. Fine Weather. The party departs under auspicious skies, with fine weather and ideal traveling conditions ahead of them. Each member of the party may ignore the first point of exhaustion gained during the journey.
9. Paths Swift and True. The guide has selected the best possible path for the road ahead, selecting terrain that is as easy to travel as possible. For the course of the journey, consider terrain one point less difficult than it is.
10. A Cautious Departure. The party departs keenly aware of the dangers that lie ahead of them. While you will need to add +1 to results rolled on the Journey table, the characters' extra preparedness translates into their having advantage on their first roll in each encounter.
11. High Spirits. The party departs with a clear sense of purpose and camaraderie. During the journey, each player makes ability checks at advantage until they fail, at which point self-doubt reins in their enthusiasm.
12. An Auspicious Start. All signs point to a safe journey for the party, who departs in ideal conditions. When rolling on the Journey table, add an additional +2 to all results rolled. All checks made to determine the initial outcome of encounters are made at advantage.
There you have it! Your party is on the road!
A path stretches out before you, leading into lands unknown. Image courtesy of Free Photos by Pixabay.
Step 2: Journey
The meat of the journey system is (unsurprisingly) the journey table.
Depending on the length of the journey ahead, you’ll roll on the below table as shown here:
- Short Journey: 1d2 times;
- Medium Journey: 1d2+1 times;
- Long Journey: 1d3+2 times.
What constitutes a short, medium, or long journey is at your discretion. For Tomb of Annihilation, I’ve said 1-6 days is a short journey, 6-15 is a medium journey, and anything else is a long journey.
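Those dice expressions can be sketched like this (the day cutoffs are the Tomb of Annihilation ones I mentioned above; I've resolved the overlapping "6 days" boundary as short):

```python
import random

def journey_length(days):
    """Map travel days to a journey length category (my ToA cutoffs)."""
    if days <= 6:
        return "short"
    elif days <= 15:
        return "medium"
    return "long"

def journey_event_count(days, rng=random):
    """1d2 / 1d2+1 / 1d3+2 rolls on the Journey table, by length."""
    length = journey_length(days)
    if length == "short":
        return rng.randint(1, 2)
    elif length == "medium":
        return rng.randint(1, 2) + 1
    return rng.randint(1, 3) + 2
```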
Rolling on the Journey Table
When it comes time to roll for events on the journey table, you’ll need to factor in the difficulty you decided at the journey’s outset. Remember to factor in the result rolled during Embarkation, as this will impact the result.
You'll also need to refer back to the difficulty you assigned, as this will impact the roll as follows:
- Easy: -1 to the result rolled;
- Hard/Severe: +1 to the result rolled;
- Daunting: +2 to the result rolled.
You’ll then roll 1d12 + the difficulty modifier + any modifiers stipulated by the Embarkation result.
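Putting it together, one Journey-table roll can be sketched as follows (the function and table names are mine; passing the d12 in explicitly makes the arithmetic easy to verify):

```python
import random

# Journey-table roll modifiers by difficulty (Moderate gets no modifier).
DIFFICULTY_MOD = {"easy": -1, "moderate": 0, "hard": 1, "severe": 1, "daunting": 2}

def journey_roll(difficulty, embark_mod=0, d12=None):
    """1d12 + difficulty modifier + any modifier from the Embarkation result."""
    if d12 is None:
        d12 = random.randint(1, 12)
    return d12 + DIFFICULTY_MOD[difficulty] + embark_mod

# A 7 on the die, on a daunting journey, after "An Ill Feeling" (+2): 11
print(journey_roll("daunting", embark_mod=2, d12=7))  # 11
```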
Determining DC
When it comes time for the party to make an ability check, the DC for this check is always 12 + the difficulty rating selected. For ease of use, this is shown below:
- Easy: DC 13;
- Moderate: DC 14;
- Hard: DC 15;
- Severe: DC 16;
- Daunting: DC 17.
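The DC formula reduces to a one-liner; the numeric ratings (1 through 5) are inferred from the DC list above:

```python
# Difficulty ratings inferred from DC = 12 + rating (DC 13 through 17).
DIFFICULTY_RATING = {"easy": 1, "moderate": 2, "hard": 3, "severe": 4, "daunting": 5}

def journey_dc(difficulty):
    """Ability-check DC for a journey: 12 + difficulty rating."""
    return 12 + DIFFICULTY_RATING[difficulty]

print([journey_dc(d) for d in ("easy", "moderate", "hard", "severe", "daunting")])
# [13, 14, 15, 16, 17]
```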
Spice up your exploration with strange monuments and otherworldly vistas. Image courtesy of Stefan Keller.
Sample Journey Table
The below table is kept intentionally generic, but you are encouraged to tailor the descriptions and content to suit your game’s setting.
- A Chance Encounter. The party encounters a traveler or group of travelers. These may be merchants, fellow adventurers, pilgrims, or whatever else you decide. The Scout may attempt a Stealth check to lead the party around this encounter, or any member of your party may instead freely approach to interact with them, making a Persuasion check to establish their initial mood. Depending on how roleplay works out, the party may gain an important snippet of information about the road ahead (granting them advantage on the first roll of their next encounter) or bad information (granting disadvantage).
- Good Hunting. Conditions for hunting and foraging are especially good today. The Hunter must make a Survival check in order to capitalize on this. If they are successful, they are able to prepare a meal that lifts the spirits of the party and restores some of their vitality, removing a level of Exhaustion. If they fail, they have wasted valuable time for the party, who must deduct 1 from their eventual Arrival roll as a result.
- An Obstacle. Something blocks the party's path. It may be a fallen tree, a fast-flowing river, or a ravine. The Guide must make a Survival check and all other party members must make an Athletics or Acrobatics check to successfully negotiate their way around this blockage. If the party is traveling with mounts, one party member must also make an Animal Handling check. If all of the checks are successful, the group is buoyed by their teamwork and will add +1 to their eventual Arrival roll. If half or more of the checks are successful, the group manages to negotiate the obstacle with no loss of time.
If half or more fail, the company expends vital energy and all members gain a level of Exhaustion.
If all of the party members fail, the party is bone-tired. They not only gain a level of exhaustion, but will have to deduct 1 from their eventual Arrival roll.
- In Need of Help. The party comes across travelers in need of their aid. This may be an injured explorer, a village beset by foes, or a group of mercenaries under attack. The party may choose to ignore their pleas and gain +1 to their eventual Arrival roll, but doing so casts a pall over the party as they are left to contemplate the sorry fates of those they left behind. Each player must succeed at a Wisdom saving throw or gain disadvantage on ability checks and attack rolls until they succeed at one, at which point the pall lifts. If the players choose to remain and help, the exact content of the encounter is left up to you. As written, it is recommended to run something akin to a Skill Challenge, but you could also play this out through roleplay, combat, or a combination thereof.
If the party succeeds, they should gain +1 to their Arrival roll, with Inspiration for those who acquit themselves particularly well. If they fail in their task, they will instead deduct 1 from their eventual Arrival roll, as their spirits are dimmed by their failure.
- Inspiring Sight. The party has come across a site, scene, or location of particular beauty. Each member of the party must attempt either a Perception or Investigation check to fully comprehend and appreciate the sight they have encountered. If they are successful, they may remove a level of Exhaustion. In addition, if all members of the party succeed, their collective spirit urges them on with greater speed, granting +1 to their eventual Arrival roll. If they fail, they instead see the ugliness in the vista, gaining a level of exhaustion as they grumble about the long journey and difficult conditions.
If all members fail, however, they drag their feet, penalizing them with a -1 to their Arrival roll.
- A Hunt. The hunter has come across particularly promising game, if only he can run it down. The Hunter must make a Survival check to attempt to bring down this elusive game (the exact nature of which is up to you). On a success by 5 or more, they are able to prepare a great feast for the party, removing a level of exhaustion and granting the party a +1 to their Arrival roll. On a success, the party enjoys a hearty meal and may remove one level of exhaustion. On a failure, the hunter's hubris results in the party going to bed with empty stomachs. They gain a level of Exhaustion.
On a failure by 5 or more, the hunter fails spectacularly. The party is pulled well off course, several of them are injured in the process, and no food is found. Not only does everybody gain a level of Exhaustion, but the party must also deduct 1 from their Arrival roll.
- A Comfortable Camp. The party's Scout has spotted a particularly promising location to set up camp. The exact nature of this is up to you, although it may be an abandoned home, a well-sheltered cave, or a comfortable spot by a stream. The Scout must make an Investigation check to approach the camp and judge its quality. On a success by 5 or more, the camp is perfect! Everybody may remove a level of Exhaustion and the party will add +1 to their Arrival roll. On a success, the camp proves a comfortable place to sleep, allowing the party to remove a level of exhaustion.
On a failure, the camp is not as it seems. Perhaps biting insects harass the players or the distant howling of wolves keeps them up. Regardless of the reason, it makes for a poor night’s sleep, and everybody will gain a level of exhaustion.
On a failure by 5 or more, the camp is not as empty as it first appeared! The group have inadvertently stumbled upon a dangerous foe! Combat is likely to ensue, although the exact nature of the combat is left up to you.
- A Remnant of the Past. The party stumbles across an item or site from a bygone era, reminding them of the age of the world and the transient nature of their own lives. Each member of the party must make a Perception check. On a success, their spirits are lifted by this relic of old, granting them Inspiration. On a success by 5 or more, they are also able to remove a level of Exhaustion. Additionally, if the entire party succeeds at their roll, their collective good spirits translate into a +1 to the Arrival roll. On a failed roll, the player is instead made keenly aware of their own mortality and the fleeting nature of life. They must succeed at a Wisdom saving throw or be afflicted with a malaise that imposes disadvantage on ability checks. This effect persists until the character succeeds at an ability check. On a failure by 5 or more, the character is despondent. They not only must make a Wisdom saving throw to avoid the aforementioned disadvantage, but they also gain a level of Exhaustion as they wallow in their ill mood. If the entire party fails their Perception checks, their collective foul mood translates into a -1 to the Arrival roll.
- A Dangerous Place. The party has come across a place most foul. This might be an evil temple, a cursed location, or the site of a particularly brutal battle. The Scout must make an Investigation check to spot the danger before the party blunders into it. If they succeed by 5 or more, they have successfully pulled the party up short, and the party can observe the scene from afar, gaining resilience from their outrage or the comforting presence of their companions. Everybody in the party gains Inspiration. In addition, the party may add +1 to their Arrival roll. On a success, the party is still able to avoid the perilous situation, and doing so grants them a +1 to their Arrival roll. On a failure, the Scout has inadvertently led their party into a place both grim and dangerous. Each party member must make a Wisdom saving throw or be afflicted with dread, translating into either disadvantage on all ability checks (until a success) or a Long-Term Madness. On a failure by 5 or more, the company has blundered into a site that is very much still inhabited. They not only suffer the penalties indicated above, but something dangerous lurks nearby and the party has walked right into its trap…
- Auspicious Meeting. The party has encountered a particularly powerful or important traveler on the road. This may be Elminster or Artus Cimber, or it may be a powerful foe. If the Embarkation roll was a 1, this is automatically a foe, while an Embarkation roll of 12 means that the party has met a powerful potential ally. If neither a 1 nor a 12 was rolled for Embarkation, the Look-Out must instead make a Perception check. On a success by 5 or more, the party has met somebody of great importance. They may immediately remove a level of Exhaustion and, should they interact with the person in a successful roleplay, gain +1 to their Arrival roll. If the Look-Out succeeds, they have instead met somebody of importance without realizing it, and the encounter should play out as if they had rolled a 1 on this table. If the Look-Out’s roll fails, they have instead come upon enemies. Treat this as if they had rolled a 5 or 11 on this table, with combat the likely outcome. If the Look-Out fails by 5 or more, they have come upon a singularly powerful foe. If they do not wish to tangle with this foe, they must flee, resulting in a level of Exhaustion and a -1 to their Arrival roll. The exact nature of these encounters is left entirely up to you. In my Tomb of Annihilation game I might treat a success by 5 or more as Artus Cimber, a success as a Flaming Fist patrol, a failure as a band of Yuan-Ti or undead, and a failure by 5 or more as a band of Red Wizards.
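Every event above shares the same four degrees of success (failure by 5 or more, failure, success, success by 5 or more). If you want to automate the bookkeeping, that tier logic can be sketched in a few lines of Python — the DC and modifier values below are illustrative assumptions, not numbers from this article:

```python
import random

def skill_check(modifier, dc):
    """Roll d20 + modifier against a DC and return a degree of success.

    Tiers mirror the journey table: beating the DC by 5 or more is the
    best outcome, missing it by 5 or more is the worst.
    """
    total = random.randint(1, 20) + modifier
    margin = total - dc
    if margin >= 5:
        return "success by 5 or more"
    if margin >= 0:
        return "success"
    if margin > -5:
        return "failure"
    return "failure by 5 or more"

# Example: the Hunter attempts a Survival check (+6) against a DC 15 hunt.
print(skill_check(6, 15))
```

The same helper then covers the Scout, Hunter, and Look-Out checks alike; only the skill, modifier, and DC change per event.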
Make sure your destination is a fitting reward for your players. Image courtesy of Donna Kirby.
Step 3: Arrival
With the journey behind them, all that remains is to figure out the manner of the party’s arrival.
There are a few factors to take into account when rolling for Arrival. Firstly, the terrain traveled through impacts the result as follows:
- Easy: +1 to Arrival roll;
- Moderate: No modifier;
- Hard/Severe: -1 to Arrival roll;
- Daunting: -2 to Arrival roll.
You’ll also want to factor in any modifiers gained during the journey before rolling a d8 and consulting the below table.
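Putting the arithmetic together, the Arrival roll is a d8 plus the terrain modifier plus whatever +1/-1 results accumulated during the journey. A minimal Python sketch — clamping the total to the table's 1–8 range is my assumption about how to handle results that fall off either end:

```python
import random

# Terrain modifiers from the list above.
TERRAIN_MODIFIERS = {
    "easy": 1,
    "moderate": 0,
    "hard": -1,
    "severe": -1,
    "daunting": -2,
}

def arrival_roll(terrain, journey_modifiers=()):
    """Roll d8 + terrain modifier + accumulated journey modifiers.

    `journey_modifiers` holds the +1/-1 results collected in Step 2.
    The total is clamped to the 1-8 range of the Arrival table.
    """
    total = (random.randint(1, 8)
             + TERRAIN_MODIFIERS[terrain]
             + sum(journey_modifiers))
    return max(1, min(8, total))

# Example: hard terrain, with a +1 from a successful hunt
# and a -1 from a botched Inspiring Sight.
result = arrival_roll("hard", journey_modifiers=(+1, -1))
```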
Sample Arrival Table
- Weary to the Bones. The party has arrived both physically and emotionally exhausted. Each player must succeed at a Wisdom saving throw or gain a Long-Term Madness. If the party’s journey took them through a particularly disturbing place that they were unable to avoid (a 10 on the journey table), the DC is increased by 2.
- Empty Bellies. The party limps into town having exhausted their supplies a few days earlier. They are starving, tired, and dehydrated. Each party member gains a level of Exhaustion.
- Poor Spirits. The arduous journey has left the party in poor spirits and an ill mood. Each player has disadvantage on Charisma-based ability checks until they are able to succeed at one. The good side to being in a foul mood is that the party is likely spoiling for a fight, which translates into advantage on the first initiative roll each is required to make.
- Uncertainty. The party arrives in ill weather, after dark, or just unsure if they’ve arrived where they intended to be. How exactly this plays out is at the DM’s discretion. Perhaps the group has stumbled upon a den of bandits close to their destination, perhaps the gates are barred and they must negotiate to be allowed in, or perhaps it is as simple as requiring your Guide to make a Persuasion or Survival check to get them the last distance to their destination. The penalty for failure should be Exhaustion, with success simply being an end to their arduous journey.
- Weary but Glad. The trip may have been long and exhausting, but the sight of their destination instills the party with much needed energy. Each player may remove a level of Exhaustion.
- Determined. The events of their journey have instilled the group with greater zeal for their future travels. Perhaps they are motivated by a desire to get to grips with their foes, or perhaps this is simply a desire to be better prepared. Regardless of the reason, the party will have +1 to their next Embarkation roll.
- Tall Tales. The party arrives at their destination with plenty of stories to tell. They have bonded through their shared toil and their gregarious spirit is infectious. Each party member has advantage on Charisma ability checks until such a time as they fail one.
- Full of Hope. The journey may have been hard, but the party has emerged from it with renewed hope for the road ahead. Each player gains Inspiration and may remove a level of Exhaustion.
The Journey system has revitalized the Exploration pillar at my table.
After my players begged me to do away with the hex-crawl in Tomb of Annihilation, the transition to the Journey system has meant players actually looking forward to one of the three core pillars of gameplay. It’s no longer combat and roleplay standing out on their own, but the third pillar getting some much-needed love as well.
There are still some tweaks to be made (I’ve suggested Outlander and Keen Mind granting advantage to their respective skills), but your ranger ought to love their abilities feeling more necessary to the party.
While I’ll leave the exact mechanics of favored terrains and the like up to you, I’ve found simply granting advantage on relevant rolls has meant the ranger finally has an area to excel in that isn’t brooding.
Your Say
How do you run exploration at your table?
If you are still running it as written, how have you managed the abundance of random tables?
Or are you a hand-waver like I used to be?
SWC
Download compiled library (SWC) version 0.8.8.1 - Flex 3
Download compiled library (SWC) version 0.8.3 - Flex 2
Mate is distributed under the Apache 2.0 license.
SVN access
To compile, you'll also need to add this namespace:
and point it to the manifest.xml file included in the project (see screenshot).
You can also browse the source.
Maven POM
Current documentation
Download PDF (always refer to the online docs for latest documentation)
API Docs (same as online reference)
The URL indicates the location of the repository. Perhaps that should be made more clear. If you use Subclipse plugin for Eclipse, that is the only thing you need. Adding the svn command to the url would only confuse those users.
Any idea when mate will be available in the maven repository?
I'm trying to build my app with maven2 and it would be nice
to be able to refer to mate as dependency...
Thanks in advance,
Rgds,
Les.
First off, congratulations on creating such a fantastic framework, and thank you all for releasing this as an open-source framework for us to use.
I have worked with Cairngorm and PureMVC, and I can see Mate addresses many of the concerns that I had with the other two frameworks.
As part of my learning Mate, I was comparing the size of the SWC files of all the frameworks and here is how they match:
PureMVC (ver 2.0.1) => 12 KB
Cairngorm(ver 2.x) = > 11KB
Mate (ver 0.8.7) => 737 KB
Can you help me understand why this is so large? I looked through the API docs, which have about 10 packages and fewer than 75 classes, so what is adding to the bulk?
I understand this may not impact the final size of the apps written using Mate, but I am just curious as to why the framework SWC is so large.
Thank you once again.
Regards,
GT
The last SWC being so large was an oversight: we forgot to compile it without the Flex framework. See the previous ones, which are much smaller:
Also see this thread:
By the way, I have updated the Cairngorm example to use CallBack and ListenerInjector instead of the mxml tags that were added to the view. I know you guys must be busy, so I can send it to you for a quick review and upload, so all the users will get the latest updated example. Kindly let me know where to send the updated example.
Regards,
GT
We had started working on the example updates, but I think the only ones posted were the hello world and flickrwidget ( ), so I guess it would be helpful if you send us the updated Cafe Townsend.
Thanks!
We've released a new version with the correct settings (removing the Flex framework). But let me note that the size of the SWC file is unrelated to the size of your AIR app. See the forum thread I posted above.
Actually I use Cairngorm for its ServiceLocator, and Mate as the framework.
And when will the change log be available? It would be interesting to see what's new.
Aussie Dave knew I panned
Haigh's on my first go-around with the
Dark Bar with
Cardamom. As he grew up in the state of South Australia,
Haigh's home state, and was intravenously dripped with
Haigh's cacao since he was a child, he couldn't accept that
tarring without giving Haigh's a few more shots to redeem
itself in the Republic's eyes. I have entertained
the thought he's a Haigh's sales rep as his alternate
identity. Another parcel from
Oz was sent to me and included a Haigh's Premium Milk
Chocolate bar and a Haigh's
Premium Dark Chocolate bar.
Haigh's has one thing going for it. It's an Australian institution,
and Aussies have been programmed by the Aussie powers-that-be to buy Australian-made stuff from Australian-owned
businesses. Dissect that further and you'll comprehend it's all a crock.
Plenty of "Australian" businesses are actually owned by foreigners,
primarily Americans and British. And Australian-made usually means the factory is
in Oz and the product assembled in Australia, but from "local and imported
ingredients." Nearly everything in Oz seems to have some component of
foreign included, Haigh's chocolate included. The cacao,
machinery, expertise, and probably even the fillings come from outside Oz.
What Haigh's doesn't have going for it is its price. The company
sells its chocolates in 12 high-end stores across three Australian states,
not in supermarkets or premium health food outlets. Without a
free-market of competing brands on the
Haigh's shelves besides its own and with its near one hundred year reputation
of Aussie-owned, Aussie made, Haigh's can charge more than most of its
competitors and get away with it.
On the back of the Premium Milk Aussie Dave sent me, there is a little blurb
written about John Haigh, the only grandson of the founder Alfred Haigh.
It says: "John obtained his skills from a leading Swiss chocolate
maker and returned with the knowledge and machinery required to make premium
chocolate." According to The Haigh's Book Of Chocolate, that
Swiss company was Lindt and Sprungli. John wrote to Switzerland's top
ten chocolate manufacturers in the late 1940's. Seven blew him off, two rejected him. Lindt
was the only one to say, "Get your ass on over here." The company wanted an
Anglophone to be a companion for their director's friend's son. Jesus,
I'm an Anglophone. It could have been me
who applied to intern at Lindt and been the director's friend's son had I actually been alive back then!
We can infer from the statement written on the back of the bar wrapper that before John
took the helm and did his apprenticeship with Lindt, Haigh's was making
s--t chocolates, but Australians didn't know better. The Haigh's Book
agrees: "When he came into Haigh's, John soon found that the company
was making poor quality products under great difficulties." Hence,
between Haigh's founding in 1915 and, say, around 1959 when John became
managing director, Haigh's chocolates sucked.
John Haigh freely admits to this day that Lindt sets some high standards.
So it's more than fair to ask how Haigh's Premium Milk measures up to
his hero's.
With 32% cocoa solids, Haigh's milk clocks in at more or less the same
as everyone else's cocoa solid content for a milk chocolate bar. Lindt uses 31%.
Haigh's milk solid content mimics his Swiss gods, at 26%. The more I ate, the more addictive it became. It was a
more pleasant eating experience than the internationally renowned Green &
Black's milk version.
If G & B's and Haigh's were priced at similar levels, Haigh's milk would
trump it. Haigh's should actually be cheaper than G & B's. G & B
is organic chocolate and enjoys a premium for that reason alone, and it's an
import which needs to be shipped the far distances to Australia. Yet Haigh's retails for more than double G & B's and almost triple the
delicious and superior Lindt
Swiss Gold Hazelnut. This contradicts Haigh's very own marketing schlock in their book, which claims "new
equipment and methods helped keep the retail prices of Haigh's products
reasonable, as did the company's practice of selling chocolates exclusively
through its own outlets." I guess this all depends on the definition
of 'reasonable.' Former U.S. President Bill Clinton, during impeachment proceedings, redefined what the words 'the' and 'sex' meant. Haigh's chocolate
prices could be seen as reasonable, if you're George Soros, Jamie Packer, or
Donald Trump.
Hats off to Haigh's for this one but respect for their tasty milk still
doesn't earn it my endorsement. At these prices, there are better
investments for your choco-dollars. John Haigh looked towards
Switzerland for inspiration and maybe you should, too. | http://www.dougsrepublic.com/chocolate/20101017-haigh-milk.php | CC-MAIN-2018-09 | refinedweb | 836 | 69.82 |
New projects are created by choosing New Project from the File menu. Visual Studio .NET displays the New Project dialog box, as shown in Figure 1-8. This dialog box gives you access to all available project types, including Database, Setup, and Deployment projects. If you have Visual Basic .NET or other languages installed, you can create new projects in those languages from this dialog box.
All projects are created as part of a solution, which can contain multiple projects of different types. A solution ties together all the projects and files that you’re working with in an instance of the Visual Studio .NET IDE. By default, when you create a new project, the current solution is closed and a new solution is created to house your project. To override this behavior and add a new project to the current solution, select the Add To Solution radio button instead of the Close Solution radio button in the New Project dialog box.
A number of templates are available for creating new Visual C# projects. Each template will create a project skeleton, usually with an initial set of source files added to the project. Each project includes an AssemblyInfo.cs source file, which is used to configure properties for your compiled assembly.
Each Visual C# project template will provide a default name for the project. In most cases, you should override the suggested project name unless you’re happy with names like ConsoleApplication1. The New Project dialog box also allows you to specify a location for your project.
The available templates for Visual C# projects are listed here:
Windows Application Creates a Windows Forms application that initially consists of a single form. The class that supports the initial form also provides the main entry point to the application.
Class Library Creates a project containing a single Visual C# class, with no specified inheritance. You can use this class as a starting point to create new class libraries of any type.
Windows Control Library Creates a project that you can use to develop reusable user interface controls for Windows applications. The project includes one class, which is derived from System.Windows.Forms.UserControl.
ASP.NET Web Application Creates an ASP.NET project, including all the necessary files for a simple Web application. The project includes a number of files specific to Web applications that will be discussed in greater detail in Chapter 20.
ASP.NET Web Service Creates a special type of Web application that uses the Simple Object Access Protocol (SOAP) to expose services that can be invoked by SOAP clients.
Web Control Library Creates a project that you can use to develop reusable user interface controls for Web applications. This type of project is discussed in more detail in Chapter 20.
Console Application Creates a command-line application for Windows. This type of project includes one Visual C# .NET class that includes the main entry point for the application.
Windows Service Creates a Windows service application, a special type of long-running process that runs on Microsoft Windows NT, Microsoft Windows 2000, and Microsoft Windows XP. This type of application runs in its own Windows session and will continue to run even if the user logs off.
Empty Project Creates a Windows project with no source files. This type of project is useful if you’re planning to add existing files to the project.
Empty Web Project Creates a Web application project with no source files. This type of project is useful if you’ll be using files that already exist.
New Project In Existing Folder Creates a new, empty project in an existing folder, instead of creating a new project folder.
When you create a project using the ASP.NET Web Application, ASP.NET Web Service, or Empty Web Project template, the location of a machine that’s running Microsoft Internet Information Services (IIS) 5.1 or later is specified as the project location. If your development machine has IIS installed, you can specify localhost as the project location.
Each project template creates its own unique set of files used to compile your project. A subdirectory with the same name as the project is created to contain the project’s files; by default, its location is under My Documents\Visual Studio Projects. All project files are located in the project directory except for Web applications and Web services. For these projects, most files are uploaded into a virtual directory accessible to your Web server, with only a solution file remaining in the project directory.
The first project we’ll look at is the classic Hello World program written in Visual C#.
To create a basic HelloWorld project, open the New Project dialog box, and select the Visual C# Projects folder in the Project Types list. Next select Console Application from the Templates list, and change the project name to HelloWorld. It’s not necessary to change the default location of the project; however, you might want to make note of the location so that you can access the project later. When you’re happy with the name and location of the project, click OK, and Visual Studio .NET will create the project for you.
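As a rough sketch, the generated class for a Console Application project looks something like the following (the class name Class1 and the [STAThread] attribute follow the template defaults of that era, and the greeting line is something you add yourself, since the template's Main body is initially empty):

```csharp
using System;

namespace HelloWorld
{
    /// <summary>
    /// Entry-point class generated by the Console Application template.
    /// </summary>
    class Class1
    {
        [STAThread]
        static void Main(string[] args)
        {
            Console.WriteLine("Hello, World!");
        }
    }
}
```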
The files generated for the HelloWorld solution fall into two categories: files that define the solution and the project, and source files that are compiled to generate the project assembly.
When a new project such as HelloWorld is created, it’s usually created as part of a new solution. The solution will have the same name as the project, although each can be renamed after the project is created. The project and solution files for the HelloWorld project include the following:
HelloWorld.suo A hidden binary file that contains current user options for the solution
HelloWorld.sln A text file that contains information about the solution
HelloWorld.csproj An XML file that defines the Visual C# project
HelloWorld.csproj.user An XML file that contains user-specific project information
The source files generated for a project vary for each type of project template. The specific files that are generated will be discussed throughout this book as we examine each project type. All the project types share a few similarities, however. For example, all Visual C# projects include an AssemblyInfo.cs file, which is used to define assembly characteristics. Each project’s source files also define a default namespace that encloses all classes and other types in the project. This default namespace is initially set to the name of the project but should be changed to a more unique name if your project is to be distributed to others. | https://etutorials.org/Programming/visual-c-sharp/Part+I+Introducing+Microsoft+Visual+C+.NET/Chapter+1+A+Tour+of+Visual+Studio+.NET+and+Visual+C+.NET/Creating+Visual+C+Solutions/ | CC-MAIN-2022-21 | refinedweb | 1,103 | 63.7 |
ETag magic with Django
An ETag is a feature of HTTP that allows a web server to know if content has changed since the last time the browser visited the page. The client sends the ETag from the cached page in a header. If the ETag in the header matches the current ETag, then the server lets the browser know that the cached copy is up-to-date by sending back a
304 Not Modified response.
The most natural way to build an ETag is to generate it from the HTML returned by the view, which I believe is how the default view caching works in Django. The downside of this is that the page is generated even if the client has a cached copy, and all that is saved is the cost of sending the page to the client.
Bigger wins can be had by using Django's conditional view processing to calculate an ETag outside of the view. I haven't seen the requirements documented, but as far as I can tell there is only a single property needed in an ETag:
- The ETag should vary with the page, i.e. when the page content changes, the ETag changes.
A simple alternative to generating an ETag from the page content is to store a version number on the model, or models, that are used to generate the page. If this version number is incremented each time the page content changes, then the version number itself can be used as the ETag.
If you want to keep a version number for a Django model, store it as an IntegerField and increment it in the model's save method, or hook up a post_save signal handler and do it there.
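The version-number idea can be sketched in plain Python (these are stand-in classes, not real Django model code; in an actual project the version would be an IntegerField and the increment would live in save() or a post_save handler):

```python
class Desktop:
    """Stand-in for a Django model with an integer `version` field."""

    def __init__(self):
        self.version = 0

    def save(self):
        # Bump the version on every save; the current value doubles as
        # the page's ETag, so any change invalidates cached copies.
        self.version += 1


desktop = Desktop()
desktop.save()
desktop.save()
etag = str(desktop.version)
```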
Keeping a version number is easy to implement, but it has the disadvantage that you have to access the database each time the view is accessed. It would be nicer still, if the ETag could be generated without even that single DB query.
I have experimented with a simple method of doing this with Django's caching system. My ETags are random strings stored in the cache with a key that is created from the parameters to the view. When the DB object changes, the existing ETag is replaced with a freshly generated one.
Using a random string, unconnected with the model, may seem counter-intuitive, but the contents of the ETag are unimportant as long as it varies with the page.
Here's some example code taken from my current project.
from django.core.cache import cache
from django.views.decorators.http import etag

def get_etag_key(username, desktop_slug):
    etag_key = "desktopetag.%s.%s" % (username, desktop_slug)
    return etag_key

def get_etag(request, username, desktop_slug):
    """
    Create an etag for a given desktop. The etag itself is stored in the
    cache and is a random identifier. The cached etag is changed when the
    desktop changes, so it is always unique.
    """
    etag_key = get_etag_key(username, desktop_slug)
    etag = cache.get(etag_key, None)
    return etag

@etag(get_etag)
def desktop_view(request, username, desktop_slug):
    ...  # An expensive view
The view desktop_view is a typical Django view, decorated with the etag decorator, which simply calls the get_etag function to pluck the ETag (if it exists) from the cache: a very fast operation, particularly if memcached is deployed.
The other part of the system is the code that is called when the DB object is changed:
etag_key = get_etag_key(username, desktop_slug)
cache.set(etag_key, str(random.random()))
The above code simply generates a random float and converts it to a string, which serves as a perfectly good ETag. If you are paranoid (and every good engineer is), you could also append the current time in milliseconds to avoid the possibility of re-generating the same random number.
etag_key = get_etag_key(username, desktop_slug)
cache.set(etag_key, str(random.random())+str(time.time()))
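The two halves fit together as in this self-contained sketch, which uses a plain dict in place of Django's cache backend (the username and slug values are illustrative):

```python
import random
import time

cache = {}  # stand-in for Django's cache API (cache.get / cache.set)

def get_etag_key(username, desktop_slug):
    return "desktopetag.%s.%s" % (username, desktop_slug)

def refresh_etag(username, desktop_slug):
    # Called whenever the underlying DB object changes: store a fresh
    # random ETag so clients holding the old one re-fetch the page.
    cache[get_etag_key(username, desktop_slug)] = str(random.random()) + str(time.time())

def get_etag(username, desktop_slug):
    # The view decorator calls this; a cache miss returns None, which
    # means the page is regenerated and sent in full.
    return cache.get(get_etag_key(username, desktop_slug), None)

refresh_etag("will", "home")
first = get_etag("will", "home")
refresh_etag("will", "home")
second = get_etag("will", "home")
```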
So far, it seems like a pretty good system with negligible overhead, although I haven't yet used it in a production environment. There is a downside of course; if you are using an in-memory cache, like memcached, then your ETags will be lost when the server is power cycled – and subsequent pages will regenerated even if there is a cached copy. As always YMMV.
Nice post. But Etag is not a “feature of HTML” instead is a feature of the mighty HTTP protocol ;-)
I'm a green hand.
I built a proxy server with nginx; whenever a request reaches it, it will search memcached to find the file, e.g. "", but there isn't an ETag in the header.
Looking forward to your reply, and thank you very much!
reesun
Nobody cares, sadly. :'( | https://www.willmcgugan.com/blog/tech/post/etag-magic-with-django/ | CC-MAIN-2022-27 | refinedweb | 755 | 68.2 |
ENGR 112 TEST 2 (answers)
1. On a graphical window 800 wide by 600 high, the lower right-hand corner is at coordinates: c. (799, 599)
2. The code in Graph.h: d. all of the above
3. void f(int x); tells us: d. f does not return a value
4. User-defined types: a. are indicated by class or struct
5. Arguments to a function may be passed: d. any of the above
6. Classes Line, Axis, Polygon, etc. derive from class Shape: d. all of the above
7. Having the first argument almost always be a Point is an example of: b. regularity
8. Public members of a class or struct: b. are visible anywhere in the class, even above the declaration
9. An expression which changes the value of a variable: c. may not be on the left-hand side of an =
10. A constructor: d. all of the above
11. Attaching graphical objects to a Simple_window: d. all of the above
12. using namespace std; a. tells the compiler to look for std::foo if foo is not defined
13. An invariant: a. is a rule for what constitutes a valid value
14. A function activation record: d. is created when a function is called that is not inlined
15. Given the code enum Month{jan=1,feb,mar,apr,may,jun,jul,aug,sep,oct,nov,dec};
- Spring '11
- zachry
26 September 2011 07:31 [Source: ICIS news]
By Nurluqman Suratman
SINGAPORE (ICIS)--Higher prices allowed South Korea’s petrochemical exports in August to increase 36.5% year on year to $5.32bn (€3.94bn), bolstering hopes that the value of monthly shipments will remain strong through end-2011 despite a weakening of global demand, analysts said on Monday.
Against the gloomy global economic backdrop, demand prospects for
“Macro-economic data from August 2011 indicated a slowdown in US GDP growth and consumer spending, while
Meanwhile, export volumes of most of South Korea's major petrochemical products grew on a year-on-year basis in August, data from the Korea International Trade Association (KITA) showed. (Please see table below)
Overseas shipments of ethylene surged 76% year on year to 86,187 tonnes in August, while exports of propylene surged by 89.9% to 84,140 tonnes, the data showed.
Exports of polypropylene (PP) grew by 46.6% to 93,698 tonnes, while overseas shipments of polyethylene fell by 5.28% to 101,190 tonnes, according to KITA.
Among other aromatics products, shipments of benzene rose 8.54% year on year to 112,487 tonnes, while exports of paraxylene (PX) were up by 40.1% to 173,777 tonnes, KITA data showed.
Shipments of methyl methacrylate (MMA), on the other hand, slumped by 53.9% year on year to 569 tonnes in August, while shipments of polymethyl methacrylate (PMMA) fell by 15.3% to 7,612 tonnes, the data showed.
For the first 20 days in August,
Overseas shipments of textiles fell by 15.7% to $1.22bn over the same period, while exports of automobiles rose by 32.5% to $3.23bn, the data showed.
For the whole month of August,
While overall trade have been robust this year, risks are to the downside, said Rabobank’s Matabadal.
But higher prices can mask a decline in volumes of shipments.
“Spot prices of petrochemical products are much higher now than last year, and this is pushing up export values. We see US dollar-based exports increasing in the fourth quarter on a year-on-year basis,” said Hong Chanyang, a petrochemicals industry analyst at Seoul-based Shinhan Investment Corp.
Prices of petrochemical products usually track the movement of crude prices in the international market, analysts said.
“Sales of petrochemical and petroleum products, especially those sold on a US dollar basis are sensitive to crude prices,” Hong said, adding that buoyant global oil futures will continue drive petrochemical product prices higher in the medium-term.
Source: KITA
Troubleshooting BizTalk Server SOAP Adapter
The SOAP adapter is a powerful integration mechanism for Microsoft® BizTalk® Server. It enables BizTalk Server to connect to Web services and the Windows Communication Foundation (WCF) framework by receiving and sending Web service requests. On the receive side, the SOAP receive adapter creates a BizTalk message object, and promotes the associated properties to the message context. On the send side, the SOAP send adapter calls into a Web service. The SOAP send adapter then reads the message context on the BizTalk message object to get the proxy name and then calls the associated external Web service proxy. The following illustration shows how the SOAP Receive and Send adapters work with BizTalk Server 2006 and BizTalk 2006 R2.
By default, the receive SOAP adapter loads into the IIS process space (also called an isolated host) to use Hypertext Transfer Protocol (HTTP). This tightly integrates the performance and configuration of IIS to the SOAP adapter. The following topics are covered in this section:
- Application pool cycling
- Orchestration application domain unloading
- BizTalk Server upgrades
- IIS security issues
- SOAP message time outs
- ASP.NET execution time outs
- HTTP pre-authentication
- HTTP redirection
Application Pool Recycling
The receive SOAP adapter uses IIS to establish an HTTP session with a remote Web client. When the HTTP session is closed or broken, the failure is handled appropriately in the SOAP adapter. This type of failure can occur for several reasons, including the following:
- IIS was recycled. This closes the HTTP session
- The client application HTTP request has timed out
- A network failure closed the HTTP receive session
The SOAP adapter's application pool configuration can cause problems if it is set to recycle. The application pool for the SOAP adapter should not be configured to recycle during processing, and the Idle Timeout setting should be configured to "off." For more information about how to configure the IIS application pool, see "How to modify Application Pool Recycling events in IIS 6.0" at. For more information about how to configure the Idle Timeout setting, see "Recycling Application Pool Settings" at and "Recycling Worker Processes" at.
Orchestration Application Domain Unloading
The SOAP adapter is often used with an orchestration. The performance of the orchestration is part of the overall performance of the solution. When the application domain is not loaded into memory, this can cause timeouts to occur. The default behavior of the application domain for an orchestration is to shut down if idle more than 30 minutes (SecondsIdleBeforeShutdown) or if the application domain is empty more than 20 minutes (SecondsEmptyBeforeShutdown). In some situations it might be beneficial to keep an orchestration application domain in memory. These settings apply only to orchestrations. By default, BizTalk Server has the following two shutdown settings:
SecondsIdleBeforeShutdown
SecondsIdleBeforeShutdown is the number of seconds that an application domain is idle (that is, it contains only dehydratable orchestrations) before being unloaded. Specify a Max value of “-1” to signal that an application domain should never unload when idle but not empty. When an idle but non-empty domain is shut down, all of the contained instances are dehydrated first.
- Location: BTSNTSvc.exe.config
- Enumeration: seconds
- Default: 1800
- Max: -1 (infinite)
- Min: 0
- Example: SecondsIdleBeforeShutdown = 1800
SecondsEmptyBeforeShutdown
SecondsEmptyBeforeShutdown is the number of seconds that an application domain is empty (that is, it contains no orchestrations) before being unloaded.
- Location: BTSNTSvc.exe.config
- Enumeration: seconds
- Default: 1200
- Max: -1 (infinite)
- Min: 0
- Example: SecondsEmptyBeforeShutdown = 1200
To keep the application domain loaded in memory, configure SecondsIdleBeforeShutdown and SecondsEmptyBeforeShutdown to -1 (this value is equivalent to 9,223,372,036,854,775,807 ticks). However, the trade off is overall memory usage will increase. For more information about orchestration application domain unloading, see “Orchestration Engine Configuration” at.
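As a hedged sketch, the relevant BTSNTSvc.exe.config fragment looks something like the following; verify the exact section-handler type and element names against the Orchestration Engine Configuration documentation for your installation:

```xml
<configuration>
  <configSections>
    <section name="xlangs"
             type="Microsoft.XLANGs.BizTalk.CrossProcess.XmlSerializationConfigurationSectionHandler, Microsoft.XLANGs.BizTalk.CrossProcess" />
  </configSections>
  <xlangs>
    <Configuration>
      <AppDomains>
        <!-- -1 keeps orchestration application domains loaded indefinitely -->
        <DefaultSpec SecondsIdleBeforeShutdown="-1"
                     SecondsEmptyBeforeShutdown="-1" />
      </AppDomains>
    </Configuration>
  </xlangs>
</configuration>
```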
BizTalk Server Upgrades and .NET Versioning
If you upgrade from BizTalk Server 2004 to BizTalk Server 2006, you may encounter issues with .NET versions. Because BizTalk Server 2004 used .NET Framework version 1.1 and BizTalk Server 2006 uses a newer version, a mismatch in the application pool's loaded runtime version can occur. For example, an orchestration exposed as a Web service encounters a failure due to the shutdown of the SOAP adapter application pool. This occurs when the IIS virtual directory is configured to use ASP.NET 1.1. BizTalk Server 2006 requires ASP.NET 2.0. To fix this problem, change the default ASP.NET version settings. For more information, see "Locations Tab, ASP.NET Configuration Settings Dialog Box" at.
IIS Security Issues
Security settings in IIS are another area where you may experience configuration issues. When you set up the SOAP adapter, follow the BizTalk Server 2006 documentation guidelines to minimize these issues. For more information, see SOAP Adapter Security Recommendations at. Specifically, follow the security recommendations for securing IIS. If IIS 6.0 is installed, follow the recommendations for configuring application isolation. For more information, see "Isolating Web Sites and Applications" at. If IIS 5.X is installed, see "From Blueprint to Fortress: A Guide to Securing IIS 5.0" at.
SOAP.ClientConnectionTimeout
- Location: message context property
- Enumeration: milliseconds
- Default: 900000
- Max:
- Min:
- Example: MessageResponse(SOAP.ClientConnectionTimeout) = 900000;
The event log error from a SOAP Request Response failure is as follows:
Event Type: Warning
Event Source: BizTalk Server 2006
Event Category: BizTalk Server 2006
Event ID: 5743
Date: 03/30/2008
Time: 8:03:34 PM
User: N/A
Computer: MICROSOFT1
Description: The client connection was closed before response could be delivered back. A request-response for the "SOAP" adapter at receive location "/WebserviceName.asmx." has timed out before a response could be delivered.
This error occurs because the client connection was closed before the response could be delivered. To fix this issue, extend the time that the Web service waits for a response before timing out. On an IIS server, make this change in the IIS web.config file. You can also increase the execution timeout value to the number of seconds that ASP.NET will wait before closing the connection. For more information, see "httpRuntime Element (ASP.NET Settings Schema)" at.
Request Message Timeout
When calling a Web service with a slow response, the timeout can be adjusted by using the context property, as shown in the following example:
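A hedged sketch of such an expression, consistent with the SOAP.ClientConnectionTimeout property shown earlier (the message name and the 30-minute value are illustrative):

```
// In an orchestration Message Assignment or Expression shape:
OutboundMessage(SOAP.ClientConnectionTimeout) = 1800000;  // 30 minutes, in milliseconds
```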
ASP.NET Execution Timeout
The third timeout value depends on the ASP.NET executionTimeout property settings. When you increase the overall transaction timeout, adjust the executionTimeout value so that all the expected responses can be received. For more information about configuring the timeout value, see "executionTimeout" at.
executionTimeout
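A hedged web.config sketch of the executionTimeout setting (the 300-second value is illustrative; the ASP.NET 2.0 default is 110 seconds):

```xml
<configuration>
  <system.web>
    <!-- Seconds ASP.NET waits for a request before shutting it down -->
    <httpRuntime executionTimeout="300" />
  </system.web>
</configuration>
```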
HTTP Preauthentication
Communication between the SOAP adapter and an HTTP client can fail if HTTP preauthentication is configured incorrectly. IIS supports many authentication options; of these, Basic and Kerberos support the concept of preauthentication.
The basic concept of authentication is that when a client requests access to a protected resource on the server, the server challenges the client to provide valid credentials. Once the client provides the credentials, the server grants the client access to the resource. For more information, see "HttpWebRequest.PreAuthenticate Property" at.
Web services and Web servers may not always support preauthentication. For example, it may be disabled on the Web server. When using the SOAP adapter with a Web server configured to support preauthentication, the first request must contain a valid set of user credentials. On occasion, some Web services will return a status code of 500 Internal Server Error instead of the expected response of 401.
To prevent issues with the SOAP adapter and unexpected responses from Web services, in an orchestration, create a custom component and call it from an Expression shape. For more information, see the sample code by Feroze Daud that forces credentials to be sent on the first request at.
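One way such a component can force credentials onto the first request is to set the Authorization header directly, as in this hedged C# sketch (the URL and credentials are illustrative; the referenced sample by Feroze Daud may take a different approach):

```csharp
using System;
using System.Net;
using System.Text;

class PreAuthExample
{
    static void Main()
    {
        // URL and credentials are illustrative assumptions.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/Service.asmx");

        // Attach Basic credentials to the very first request by setting the
        // Authorization header ourselves, instead of waiting for a 401
        // challenge that some services never return correctly.
        string credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("username:password"));
        request.Headers["Authorization"] = "Basic " + credentials;
        request.PreAuthenticate = true;
    }
}
```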
Redirection
When the SOAP adapter issues a request to a Web service, it is possible for the HTTP request to be redirected to another server. The initial HTTP request results in an HTTP redirection-class response with a status code of 3xx. In IIS, these response status codes can result in a status code of 401, unauthorized access, being returned to the SOAP adapter. When this error occurs, BizTalk Server will not attempt to navigate to the redirected URI in the HTTP 3xx status code response. To verify whether the request was redirected, reconfigure the SOAP adapter to post directly to the URI indicated in the redirect response and see if the request succeeds.
There are two possible solutions:
- Use a dynamic send port in an orchestration and consume the Web service. When the Web service returns the redirect request with a 302/307 status, it arrives as an HTTP exception in the SOAP adapter. The orchestration can catch the general exception and extract the new URI, and then use the dynamic send port to resend the request.
- Create a .NET component to consume the Web service in an orchestration. Design the component to handle the exception and to resend the request to the new URI.
Performance
The topics below describe some factors that may impact the performance of SOAP, HTTP, or WSE Adapters.
Threading Issues
A thread is the basic unit to which an operating system allocates processing time. A process, such as a BizTalk host instance, represents an executing program and contains one or more threads in its context.
Thread starvation can cause slow performance with the SOAP, HTTP, or WSE adapters. A large volume of messages (floodgate) can trigger performance degradation with these adapters. This is especially true if a large number of messages arrive immediately after an adapter was initialized. The root cause is often that the system needs to be tuned to handle the load, rather than a hardware limitation.
Within an instance of a BizTalk host where SOAP, HTTP, or WSE adapters are running, worker threads handle queued work items, and I/O threads are dedicated callback threads associated with I/O completion ports that service completed asynchronous I/O requests. Performance problems occur when there are not enough free threads in the thread pools to handle the number of messages (SOAP, HTTP, or WSE requests). When a BizTalk host starts, the default thread pool sizes are small and cannot fulfill the requests in a floodgate scenario. As a result, the adapter must add more worker or I/O threads to the thread pool, a process that can be time consuming. Worker and I/O threads are added until the requests can be fulfilled or the maximum thread limit is reached. If the maximum thread pool limit is not large enough to handle the sustained work load, messages must wait until a thread becomes available before they can be processed.
Avoid Thread Starvation
Set the minimum worker and I/O thread pool sizes to a level appropriate to the initial and sustained work load. Set the maximum worker thread pool size to accommodate possible peak burst load. Sustained and burst load should be defined by the business requirements and validated during your stress testing. We recommend that you do not define excessive thread pool sizes. While large thread pools may resolve thread starvation, they can increase context switching, a situation where Windows switches from one running thread to another. Excessive context switching can offset the performance gained. Add the CLR Hosting key to the registry path listed below if the key does not already exist.
MaxIOThreads
- Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc$Hostname
- Enumeration: count
- Default: 20
- Max:
- Min:
- Key: DWORD
MaxWorkerThreads
- Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc$Hostname
- Enumeration: count
- Default: 25
- Max:
- Min:
- Key: DWORD
MinIOThreads
- Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc$Hostname
- Enumeration: count
- Default: 1
- Max:
- Min:
- Key: DWORD
MinWorkerThreads
- Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc$Hostname
- Enumeration: count
- Default: 1
- Max:
- Min:
- Key: DWORD
Calculate the value of the MinWorkerThreads DWORD entry, using the following formula:
(Max number of messages adapter receives at initialization) + (10 percent)
For example, if 50 messages are delivered to the SOAP adapter as soon as the adapter is initialized, set the value of MinWorkerThreads to 55.
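As a quick sanity check, the formula can be computed directly; a minimal sketch (the burst of 50 messages is the example value above):

```python
import math

def min_worker_threads(initial_burst: int) -> int:
    """Recommended MinWorkerThreads: initial burst plus 10 percent, rounded up."""
    return initial_burst + math.ceil(initial_burst / 10)

print(min_worker_threads(50))  # 55
```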
Note the following:
- MaxIOThreads and MinIOThreads should have the same values as MaxWorkerThreads and MinWorkerThreads.
- If MaxWorkerThreads or MinWorkerThreads are configured, but not MaxIOThreads or MinIOThreads, BizTalk Server sets MaxIOThreads and MinIOThreads to the values of MaxWorkerThreads or MinWorkerThreads.
- The values specified for the thread pools are per CPU. For example, setting MaxWorkerThreads to 100 has an effective value of 400 on a 4 CPU computer.
These numbers are recommended starting points. Perform appropriate testing to determine the optimal setting for your business requirements.
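As a sketch, the four values might be captured in a .reg file like the following. The host name (BizTalkServerApplication) and the values shown (minimum 55, maximum 100) are assumptions for illustration; the DWORD values belong under the CLR Hosting subkey described earlier:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc$BizTalkServerApplication\CLR Hosting]
"MinWorkerThreads"=dword:00000037
"MinIOThreads"=dword:00000037
"MaxWorkerThreads"=dword:00000064
"MaxIOThreads"=dword:00000064
```

(Hexadecimal 0x37 is 55 and 0x64 is 100.)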
Also note that the .NET thread pools of a BizTalk host instance can be shared by the orchestration engine in addition to the managed adapters. To apply the thread pool changes only to a specific adapter, move the adapter’s handlers to a separate host.
A larger thread pool can put more pressure on your BizTalk Server CPUs and on the SQL Server hosting the BizTalk databases. Increasing the MaxWorkerThreads value beyond 100 can have an adverse effect on the performance of SQL Server. Make sure the computer(s) running SQL Server can handle the additional stress before increasing the thread pool size beyond 100.
For more information about threading issues, see “FIX: Slow performance on startup when you process a high volume of messages through the SOAP adapter in BizTalk Server 2006 or in BizTalk Server 2004.”
ASP .NET Settings Related to HTTP, SOAP, or WSE Adapter Performance
In addition to thread starvation, there are several ASP.NET settings affecting SOAP, HTTP, or WSE adapter performance. These settings are made in the web.config or machine.config files. Because these adapters communicate with external Web sites or Web services, consider tuning these external components as part of overall performance review. The settings described in this section are specific to BizTalk Server.
The tunable attributes include maxconnection, maxWorkerThreads, minWorkerThreads, maxIOThreads, and minIOThreads. The properties that can be tuned include minFreeThreads and minLocalRequestFreeThreads.
Maxconnection Attribute
The maxconnection attribute in the machine.config file defines the maximum number of connections to a server or group of servers. The default value is 2. While this may be adequate for a desktop application that makes a few concurrent requests, it can be a bottleneck if BizTalk Server makes a large number of HTTP, SOAP, or WSE requests.
maxconnection
- Location: machine.config
- Enumeration: count
- Default: 2
- Max:
- Min:
- Sample:
In most situations, the recommended value for the maxconnection attribute is (12 * the number of CPUs). Test results have shown this setting works well in a variety of scenarios. Like any performance-related setting, however, validate it for your specific scenario.
Increasing the maxconnection attribute increases thread pool and CPU utilization, and the thread pool settings described in the previous section may no longer be optimal. Test larger thread pool sizes to determine whether adjustment is necessary.
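As a sketch, on a hypothetical 4-CPU computer the 12-per-CPU guideline yields 48 connections; in machine.config this might look like the following (address="*" applies the limit to all remote servers):

```xml
<configuration>
  <system.net>
    <connectionManagement>
      <!-- 12 connections per CPU on an assumed 4-CPU computer -->
      <add address="*" maxconnection="48" />
    </connectionManagement>
  </system.net>
</configuration>
```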
maxWorkerThreads, minWorkerThreads, maxIoThreads, minIoThreads
These settings affect a Web application or Web service in much the same way the host thread pool settings affect a BizTalk host instance. Bottlenecks can occur when ASP.NET runs out of worker and I/O threads to process incoming requests or perform I/O work.
To identify this bottleneck, monitor the following performance counters:
- ASP.NET\Requests Queued
- Process\%Processor Time (select aspnet_wp.exe and/or w3wp.exe).
If requests are queued while processor utilization is low, there is thread pool starvation.
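One way to sample these counters from a command prompt is the built-in typeperf utility (the process instance name w3wp is an assumption; use aspnet_wp on IIS 5.x):

```
typeperf "\ASP.NET\Requests Queued" "\Process(w3wp)\% Processor Time" -si 5
```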
maxWorkerThreads
- Location: machine.config
- Enumeration: count
- Default: 20
- Max:
- Min:
- Sample:
minWorkerThreads
- Location: machine.config
- Enumeration: count
- Default: 1
- Max:
- Min:
- Sample:
maxIoThreads
- Location: machine.config
- Enumeration: count
- Default: 20
- Max:
- Min:
- Sample:
minFreeThreads and minLocalRequestFreeThreads Properties
The minFreeThreads property specifies the number of free threads to keep in the worker thread pool. This property ensures free threads are available if an existing request has work that requires additional threads to process. Incoming requests are queued if the number of free threads in the worker thread pool falls below this value.
minFreeThreads
- Location: machine.config
- Enumeration: count
- Default: 8
- Max:
- Min:
- Sample:
The recommended value is 88 * number of CPUs. This allows for 12 concurrent requests per CPU with maxWorkerThreads set to 100. Make sure the maxWorkerThreads and maxIOThreads settings from the previous section are greater than or equal to your minFreeThreads setting.
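A minimal machine.config sketch for a single-CPU computer, assuming the values discussed above (the minLocalRequestFreeThreads value of 76 is an assumption, and attribute sets vary by ASP.NET version; validate before use):

```xml
<configuration>
  <system.web>
    <!-- Assumed 1-CPU values: 88 free threads, 100 worker/IO threads -->
    <httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
    <processModel maxWorkerThreads="100" maxIoThreads="100" />
  </system.web>
</configuration>
```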
minLocalRequestFreeThreads
- Location: machine.config
- Enumeration: count
- Default: 4
- Max:
- Min:
- Sample:
The minLocalRequestFreeThreads property is similar to minFreeThreads, except that it applies specifically to local requests (requests from the local host). The default value for minLocalRequestFreeThreads is 4.
For more information about prescriptive architecture, see “Chapter 10 — Improving Web Services Performance.”
For more information about tuning ASP.NET, see “How To: Tune ASP.NET.”
For more information about contention, performance, and deadlocks, see “Contention, poor performance, and deadlocks when you make Web service requests from ASP.NET applications.”
Antivirus Software
Virus scanning on a BizTalk Server is an important operational consideration. While the value of scanning for known viruses is well documented, the effect on BizTalk Server is less well known. Third-party programs that run as standalone programs or as filter drivers (a driver/program/module inserted into the existing driver stack to perform a specific function) may interfere with the responsiveness of the BizTalk Server service. Two issues caused by antivirus software are file locks and the unloading of the application domain during virus scans.
File Locks
File lock errors may occur if an antivirus program gains a file lock on a file BizTalk Server is attempting to use for processing. This error may occur if the antivirus software scans the virtual Web service folders. The results are not always predictable and can vary from slow performance to access denied on certain files. To solve this issue, identify the improperly locked file and, if the antivirus software allows, exclude the file from the scan.
Application Domain Restarting
XML Web service files are saved with the .asmx extension. Like .aspx files, these are compiled by the ASP.NET runtime when a request is made to the SOAP adapter URI. The ASP.NET application can restart frequently when antivirus software scans the files of the Web service endpoint on IIS. More specifically, restarts occur because antivirus software scans the Web.config file in the root of the application, the Machine.config file, the Bin folder, or the Global.asax file. For more information, see “Why is my ASP.NET application restarting?”
Latency
By default, BizTalk Server is configured to optimize throughput rather than response time. The time BizTalk Server takes to respond is referred to as “latency.” In many cases it is important to minimize latency, and BizTalk Server can be optimized for this. Keep in mind that optimizing for latency can negatively impact throughput and memory utilization, and can also impact other BizTalk processing. There are several good sources of information regarding latency. Professional BizTalk Server 2006 (Jefford, Smith, & Fairweather) devotes an entire chapter to the topic. For more information, see “BizTalk Server 2004: Performance Tuning for Low Latency Messaging.”
Often overlooked in discussions of latency is the time it takes BizTalk Server to load orchestrations. Additionally, if virus scanning includes the virtual folder of the low-latency Web services, each startup is delayed while the just-in-time (JIT) compilation process completes. For more information, see the sections titled “Orchestration Application Domain Unloading” and “Orchestration Engine Configuration.”
Web Services
BizTalk Server provides powerful support for exposing orchestrations as Web services and consuming existing Web services. The following information explains how to take advantage of these features and some of the fine points of dealing with the topic of Web services.
Exposing an Orchestration as a Web Service
The Web Service Publishing Wizard is used to publish an orchestration. It creates a Web service proxy hosted in IIS to handle the interface between the Web service interface and the BizTalk Messaging Engine. This proxy uses the operation name specified for the orchestration port as the Web method name. During receipt by the proxy, the Web method name is promoted into the MethodName property and used for routing the message to the orchestration.
By default, any orchestration exposed as a Web service has a subscription expression similar to the following:
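An illustrative form of such a subscription (the GUID, message type, and method name are placeholders, not a literal expression):

```
(BTS.ReceivePortID == {guid} AND BTS.MessageType == http://WSOrchExample#Root AND BTS.InboundTransportType != SOAP)
OR
(BTS.ReceivePortID == {guid} AND BTS.MethodName == WSPort)
```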
Think of this as two subscriptions. The first subscription matches any document of this message type that is not received over the SOAP transport. The second subscription accepts any incoming message as long as it is stamped with the matching MethodName. Both are limited to documents received by the specified receive port.
What this provides is a subscription for the Web service proxy. If the method name exposed by the proxy matches the operation name in the orchestration, a non-SOAP receive location can be added to the receive port for testing.
The method name is promoted automatically from the receive handler, not the pipeline, so even with a “passthrough” pipeline the message is correctly routed. This is why it is important to keep the method name in sync with the operation name when creating the Web service separately from the orchestration.
There are several methods of publishing an orchestration as a Web service, and several issues may be encountered during and after this process. This section covers the steps for using the Web Services Publishing Wizard, solutions to errors, and describes how receiving the document and routing it to the orchestration process works.
Publishing an Orchestration as a Web Service: An Example
The following steps outline how to create and publish an orchestration as a Web service that accepts incoming requests and returns a response document.
Open Visual Studio®.
In Visual Studio, on the File menu, point to New, and then click Project.
In the New Project dialog box, under Project types, select BizTalk Projects and then in Templates, click Empty BizTalk Server Project. In the Name of this project box, type WSOrchExample, and then click OK.
In Solution Explorer, right-click the project name, point to Add, and then click New Item.
In the Add New Item dialog box, in Categories, select Schema Files, and in Templates, select Schema. Name the schema InputSchema.xsd, and then click Add.
In the design view for InputSchema, in the left pane, right-click the Root element and select Insert Schema Node, Child Field Element. Name the new field InputData, and then click the Save icon.
In Solution Explorer, right-click the project name, point to Add, and then click New Item.
In the Add New Item dialog box, in Categories, select Schema Files, and in Templates select Schema. Name the schema OutputSchema.xsd and then click Add.
In the design view for OutputSchema, in the left pane, right-click the Root element and select Insert Schema Node, Child Field Element. Name the new field OutputData and then click the Save icon.
In Solution Explorer, right-click the project name, point to Add, and then click New Item.
In the Add New Item dialog box, in Categories, select Orchestration Files, and in Templates, select BizTalk Orchestration. Name the orchestration WSOrchestration and then click Add.
In Orchestration View, right-click Messages and select New Message. In properties for the new message, set the Identifier as Message_Input and the Message Type to WSOrchExample.InputSchema.
Repeat the previous two steps, substituting the Identifier of Message_Output and the schema name of WSOrchExample.OutputSchema.
In Visual Studio, open the toolbox window and drag the following shapes to the Orchestration Design Surface:
- Receive
- Transform
- Send
Select the Receive shape. In the Properties box, set the Activate property to True. Set the Message to Message_Input.
Select the ConstructMessage_1 shape generated around the Transform shape. In the Properties dialog box, set Messages Constructed to Message_Output.
Double-click the Transform shape. In the Transform Configuration dialog box, set the Fully Qualified Map name to WSOrchExample.Mapping.
Under Transform, select Source. In the Source Transform list, add a new row with Message_Input as the Variable Name.
Under Transform, select Destination. In the Destination Transform list, add a new row with Message_Output as the Variable Name, and then click OK.
In the Map Editor, expand the Root element for the Source and Destination schemas. Drag the InputData element to OutputData. This generates a link between these two elements. Click Save.
Switch back to the Orchestration Designer for WSOrchestration. Select the Send shape. In the Properties box, set the Message to Message_Output.
Right-click the Port Surface and select New Configured Port. Click Next until you get to the Select a Port Type page. Set the Communication Pattern to Request-Response and the Access Restrictions to Public. Click Next.
Set the Port Direction of Communication to I’ll be receiving a request and sending a response. Click Next, and then click Finish.
In the Port shape Port surface, click Operation_1. In the Properties box, change the Identifier to WSPort. The Operation Identifier will be used as the Web Method when the orchestration is published as a Web service.
From the Port entry, drag the Request arrow to the Receive shape, and then drag the Response arrow to the Send shape. This associates the Receive and Send shapes with this port. Build and deploy this project.
Click Start, point to Microsoft BizTalk Server and click BizTalk Web Services Publishing Wizard.
On the Welcome page, click Next.
On the Create Web Service page, select Publish BizTalk orchestrations as Web services. Click Next.
Click Browse and select the assembly containing the BizTalk project you created in the previous procedure, then click Next.
In the Orchestration and Ports dialog box, note the ports selection tree. If the orchestration contains multiple ports, select only the ones to be exposed. Because you created only one port, click Next to accept the defaults.
On the Web Service Properties page, click Next to accept the defaults.
On the Web Service Project page, enter the location where the Web service will be created. For this example, use the default. Select Allow anonymous access to the Web service and Create BizTalk receive locations in the following application. In the drop-down list, select BizTalk Application 1, and then click Next.
On the Summary page, review the settings selected for the Web service, and then click Create.
Click Start, point to Administrative tools, and then click Internet Information Services Manager.
Expand the Internet Information Services (IIS) Manager. Right-click Application Pools and click New Application Pool. In the Add New Application Pool dialog box, set the Application Pool ID to WSOrchPool, and then click OK.
Expand the Application Pools folder, then right-click WSOrchPool and select Properties. On the Identity tab, set the Application Pool Identity to the BizTalk service account. Click OK.
In Internet Information Services (IIS) Manager, expand the Web Sites folder, then the Default Web Site entry. Right-click the location you just created and click Properties. On the Directory tab, set the Application pool to WSOrchPool. This sets the Web service to run as the BizTalk service account so incoming requests are written to the appropriate BizTalk database.
Click Start, point to Microsoft BizTalk Server 2006, and then click BizTalk Administration.
Expand BizTalk Server 2006 Administration, BizTalk Group, Applications, and BizTalk Application 1. Select the Orchestrations folder.
Right-click WSOrchExample.WSOrchestration and click Properties.
In the left pane, select Bindings. In the Bindings pane, set the Host to BizTalkServerApplication, and in Bindings set the Receive Port for Port_1 to WebPort_WSOrchExample_Proxy/WSOrchExample_WSOrchestration_Port_1. This associates the physical receive port created by the Web Services Publishing Wizard to the logical port in the orchestration. Click OK.
Right-click WSOrchExample.WSOrchestration and click Start.
Select Receive Locations and right-click the WebService_WSOrchExample_Proxy/WSOrchExample_WSOrchestration_Port_1 receive location and click Enable.
The orchestration starts and will process incoming requests from the Web service generated by the Web Service Publishing Wizard.
The configuration steps described previously are suitable for a simple solution. More complex scenarios may require you to create a Web service from schemas and then bind an orchestration to the receive port at a later date. Another option is to use the wizard to create the virtual directory and Web service proxy, and then create the receive ports by using the Specify now option.
Using Client Certificates with BizTalk Server
Two issues you might encounter when using client certificates with BizTalk Server are ensuring the certificates are installed correctly and organizational security restrictions.
Client Certificate Installation
When using client certificates to consume a Web service over HTTPS (a secure Web channel), ensure the certificates are correctly installed. Use the following guidelines:
- The client must have the private key for the certificate used.
- When opening the certificate in the Certificate dialog box, on the General tab, ensure the certificate includes the following text: You have a private key that corresponds to this certificate.
- The certificate should be stored in the Personal store for the BizTalk service host account.
- Use the following procedure to verify that the certificate is stored in the appropriate location.
Log on to the BizTalk Server using credentials for the service host account.
Open the Certificates Administration Console for the current user account, and expand the Personal store.
Expand the Certificates – Current User and Personal nodes.
Make sure the client certificate is listed in the Certificates node. The following illustration shows an example.
Organization Security Restrictions
Each organization may have restrictions on using client certificates for security reasons. One such restriction is that when a user requests a client certificate, a password prompt is displayed, and the certificate can be used only if the correct password is provided. Because BizTalk Server runs as a service, and services cannot interact with dialog boxes, do not use client certificates that require password validation.
To prevent this issue, configure the policy so that no password prompt appears when a certificate is used. This behavior is controlled by the Group Policy Object (GPO) System Cryptography: Force Strong Key protection for user keys stored on the computer. If you set this policy, set the value to “User input is not required when new keys are stored and used,” as indicated in the following illustration:
Consuming Web Service Using a Send Port in Messaging–Only Scenarios
BizTalk Server 2006 introduced a feature that lets users consume a Web service directly from a send port, without using an orchestration.
Simple messages
Consuming a Web service in a send port is relatively simple when the Web service exposes complex types for its parameters. The task becomes more complex when the consumed Web service uses primitive .NET data types such as strings or integers.
For example, assume the client is a .NET application. This application calls a Web service (published from schema using BizTalk Server) using a request-response SOAP receive port.
A solicit-response send port subscribes to the request-response SOAP port. The send port consumes the Web service and returns a response to SOAP receive port. In turn, a response will be sent back to the client application.
If complex in/out type parameters are used for a back-end Web service, adding a reference to that Web service to the BizTalk project creates Reference.xsd under the Web References section in the project explorer.
To address this issue, use the following guidelines:
- Create a simple BizTalk project.
- Add reference to the Web service.
- Create an input message type schema.
- Create an output message type schema.
- Create a map (for example, map1) which maps the input request to the Web service request schema from Reference.xsd.
- Create another map (for example, map2) which maps the Web service response schema from Reference.xsd to output schema. (You may not require a map for the output message).
- Deploy the BizTalk project.
- Publish schema as Web service for the BizTalk project and publish as follows:
- Input message schema - for request
- Output message schema - for response
- For this receive port, apply the first map (for example, map1) for inbound maps and apply the second map (for example, map2) for outbound maps.
- Create a solicit-response send port configured as follows:
- Use SOAP transport with the following configuration:
- Specify the Web service URL.
- Specify the BizTalk assembly previously created and the appropriate method.
- In the Filters section for the send port, add a filter on the receive port name and specify the receive port name created previously.
- Create a .NET client application. Add a Web reference for BizTalk schemas published as Web service and test the solution.
If using standard .NET types for in/out parameters for Web services, when adding a reference to this Web service to the BizTalk project, Reference.xsd is not created. (Reference.xsd contains message schema formats used to map the message being sent to the web message format.)
Multipart messages
Multiple Web parameters are mapped as a multipart message through the SOAP adapter. If receiving messages from adapters that do not directly support receiving/sending multipart messages, use a pipeline component to create a multipart message before sending the request to the target Web service. If receiving messages using adapters that can handle multipart messages, such as the SOAP receive adapter, a custom pipeline component is not necessary.
The following steps outline how to consume a Web service taking multiple parameters without using an orchestration.
- Create a simple orchestration that receives a multipart message with two parts (Number1 and Number2), both of type integer (int), and returns an integer (int).
- Publish the orchestration as a Web service.
- Create a two-way SOAP receive port that subscribes to this Web service just created.
- Create a two-way SOAP send port subscribing to the SOAP receive port created in step 3 by name.
- Configure the SOAP send port to call the method of the back end Web service accepting two integer parameters.
A sample project is available for this scenario.
Consuming Web Services with complex Web Service Definition Language (WSDL)
When consuming a Web service, issues with WSDLs may surface. In general, BizTalk Server imposes limits on the absolute size of schemas and on the number of import levels. If errors occur when using complex or large messages, try a less complex or smaller message. Importing other WSDLs or schemas into the Web service WSDL is not supported by the SOAP adapter. This does not mean it will not work in all cases; testing is the only way to know for a particular case. WSDL importing is supported with the WSE 2.0 add-on, which requires a hotfix; see “FIX: Error message when you use the Add Generated Items wizard to generate a schema for BizTalk Adapter for Web Services Enhancements 2.0: "Specified argument was out of the range of valid values".”
The following is a suggested workaround for schema imports using typed DataSets: in the BizTalk Server Administration console, on the Web Service tab in the SOAP Transport Properties dialog box of the SOAP send port, specify the proxy to use along with the assembly, type, and method. For more information, see the SOAP Transport Properties Dialog Box, Web Service Tab topic in BizTalk Server Help.
Accessing SOAP Headers from an Orchestration
SOAP header information is available inside the orchestration. To add SOAP headers, use a special variation of the property schema. For more information, see Andy Babiec’s blog.
- Add/update the Web reference.
- Open the Reference.xsd and note the root node for the SOAP header and close.
- Add a property schema to the project and change the target namespace to “”.
- Rename the root node to the value from step 2.
- Locate the Property Schema Base property drop-down list. Set the value to MessageContextPropertyBase.
- Save the file as SoapHeader.xsd and close.
- Add a Construct Message shape to build the message to call the Web service.
- Add a Message Assignment shape if not present after the Transform shape in the orchestration.
- Edit the Message Assignment shape using the following example:
- SampleWS_Request_Msg is the message variable name in the orchestration for the external Web service.
- MyBizTalkProject is the project name.
- AuthenticationHeader is the name of the property schema.
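A sketch of the assignment expression (illustrative only; the exact XLANG syntax depends on your schemas, and the header XML shown is a placeholder):

```
// Inside the Message Assignment shape (XLANG expression):
SampleWS_Request_Msg(MyBizTalkProject.AuthenticationHeader) =
    "<AuthenticationHeader><Username>user</Username><Password>pass</Password></AuthenticationHeader>";
```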
Unknown SOAP Headers
Normally, inbound message header values are context variables of the inbound message. However, unexpected header values can be included. The Web service can be configured to make these available to an orchestration. This is sometimes done to include information not originally part of the Web service. Newer versions of an application may know how to use the data while older versions will continue to work.
No error occurs if an unexpected SOAP header is received by the Web service. By default, the header is ignored. These values are available as the “unknown” header context variable if configured during the deployment wizard. For more information, see the BizTalk Web Services Publishing Wizard topic in the BizTalk Server Help.
RPC–Style Web Services
Microsoft tools are designed by default to work with the “document literal” style of Web services. BizTalk Server does not support remote procedure call (RPC)-style SOAP messages. However, some WSDLs can be made to work. The difference is the creation of multipart messages to populate the parameters instead of the usual nodes for document-style WSDLs. This can lead to a lot of extra coding if mapping is not an option. Here is an extract from Tomas Restrepo’s blog, Commonality:
“This is important because now you have to manipulate the messages manually, using Expression Shapes in your orchestrations. Using tools like maps is out of the question because you can't directly manipulate the entire message as a single unit (since it is multipart).
Another common scenario with RPC/Encoded services is that at least one of the parameters to an operation is a string type used to carry an encoded XML fragment. To manipulate those, you'll want to manually define schemas for the fragments, and then load them at runtime into separate messages using an intermediate XmlDocument instance (there are many examples of this out there, so I won't cover it in more detail here). If you need to assign a value to one of these parameters, you'll probably want to use a simple component that can extract a string from a regular BizTalk message you can assign to it (you can certainly manipulate them directly as strings, but that might be harder).”
Tools
When working with Web services, it is useful to see the traffic between BizTalk Server and the Web server. Network tracing tools provide this functionality. During development and testing, use HTTP until everything works; using SSL (HTTPS) encrypts the data, which limits the usefulness of tracing.
Microsoft’s NetMon is available at no cost from the Microsoft Download Center. A lesser known utility is Fiddler. Most tracing tools do not capture calls to a Web service on the BizTalk Server computer. Determining how the Web service is called, its methods and parameters, sometimes requires extra effort.
In most cases simply consuming the Web service provides the required information. BizTalk Server takes care of these details behind the scenes, and message types are created for communication using an orchestration. When dealing with complex WSDLs or RPC-style Web services (see the previous section), more work is required. In this case, the first step is to get the WSDL and see what can be determined from a review. Open the WSDL in Internet Explorer or another editor. Look for imports of schemas or other WSDLs, and check the various types it defines. The WSDL should be provided by the partner or available for review from the Web service.
If the previous recommendations are not enough, create a Visual Studio project to consume the WSDL. Either point the Visual Studio “Add Service Reference” wizard at the Web service or use the WSDL file directly (WSDL.exe) to create the proxy class. Use the project to call the Web Service using the proxy objects. Visual Studio provides more extensive debugging tools for reviewing the behavior of the code than inside BizTalk Server.
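For example, a proxy class can be generated from a command prompt with the WSDL tool (the URL and output file name are placeholders):

```
wsdl.exe /language:CS /out:ServiceProxy.cs http://example.com/Service.asmx?WSDL
```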
Another option is to use a third-party tool, such as soapUI. The soapUI tool consumes the WSDL and graphically shows the methods and parameters. It enables calls to the Web service by typing values into the parameters and clicking a button. Web service results can be examined without writing a single line of code.
The Full Stack, Part 2: Setting up the UI for our Windows Phone client
In this episode, Jon and Jesse set up unit testing for their MVC 3 application, discuss unit testing and test-driven development, and build out a repository.
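The interface-plus-fake repository pattern the episode builds is language-agnostic. As a rough sketch only (in Python rather than the show's actual C#/ASP.NET MVC code, with invented names), the idea of a repository contract and an in-memory fake that the unit tests drive looks like this:

```python
import abc

class PersonRepository(abc.ABC):
    """The contract that both the test fake and the real store implement."""

    @abc.abstractmethod
    def add(self, person): ...

    @abc.abstractmethod
    def get(self, person_id): ...

    @abc.abstractmethod
    def exists(self, person_id): ...


class FakePersonRepository(PersonRepository):
    """In-memory stand-in that the unit tests use instead of a database."""

    def __init__(self):
        self._people = {}

    def add(self, person):
        self._people[person["id"]] = person

    def get(self, person_id):
        return self._people[person_id]

    def exists(self, person_id):
        return person_id in self._people


def test_add_then_exists():
    # Red/green style: state the behavior you want before the real store exists.
    repo = FakePersonRepository()
    repo.add({"id": 1, "name": "Ada"})
    assert repo.exists(1)
    assert repo.get(1)["name"] == "Ada"
```

Swapping in a database-backed implementation later only requires honoring the same interface, which is the design pressure TDD is meant to apply.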
That was quick, you just seemed to be getting started. I like the explanation on how to get past the mental block of where to start with the tests.
One thing I'm curious about is the purpose of the show. Is it a how-to of selected "pancakes" in the stack, or your approach to creating an application from scratch?
Either way, more please, still hungry.
Much better episode, short and to the point. It would be good to see you guys lay down the basic tenets of the story for the structure of the site, which I guess you're not too sure of yet: what parts of the stack you want to use, how you plan to get there, and seeing how they are implemented as well.
Cheers
Very interesting episode; I've been trying to get off the ground with TDD and this was quite helpful. I am a little confused though. First, is Exists() really a method for the repository? Meaning, it seemed like it was created for the sake of testing. Forgive me if that is naive, but I'm still new to this.
Also, I understand that creating the interface for your repository is necessary for testing, but it seems to contradict something that Jesse said at the beginning. If TDD is meant to drive your design, and you only implement what you need and not what you *think* you'll need, isn't the creation of an interface, at this point, a contradiction? You've written something only so you could implement your tests, without knowing if you'll have an actual need for that interface in your code later. So far, we only know that the app will need a person repository, so you've got an interface implemented by only one class outside of your tests. It seems like the tests have driven you to create something you don't actually need... except for testing.
Oh. It's too slow.
Really enjoying seeing so many different aspects of a start->finish project, already including pair programming, TDD, Repository pattern. I hope you plan to continue introducing a new pattern/practice/etc each episode. I have some understanding of each, but it's nice to see the glue or process of putting them all to use in a single project.
Are you planning to release any code snippets or source? There are times when I am not fully aware of your chosen implementation details, such as where you are putting your Repository Interface in relation to namespace and project - Main vs. Test project in the solution.
I would really like to be able to subscribe to this series in iTunes; I hope you have the time to make that possible. I've been really enjoying them, but would like to have them on the go with me.
First of all I would like to say that this is good stuff. You're keeping it simple and still showing best practices.
I think you might have one error in your 3rd episode. The test method for the update of the fake repository doesn't really test the update method: since the get returns the same instance the repository stores, the change takes effect when you set the new name on the person instance, not when you call the update method. But I guess you'll find that when you're testing against the real repository, since its get will hit the db and not the dictionary.
Keep up the good work!
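The pitfall described in the comment above is easy to demonstrate in any language. In this hedged sketch (Python, with made-up names, not the show's actual code), the fake repository returns a copy from `get`, so a test that mutates the returned instance cannot accidentally "update" the store before `update` is ever called:

```python
import copy

class FakeRepo:
    def __init__(self):
        self._store = {}

    def add(self, key, obj):
        self._store[key] = obj

    def get(self, key):
        # Returning a copy keeps test-side mutations from leaking into the store.
        return copy.deepcopy(self._store[key])

    def update(self, key, obj):
        self._store[key] = obj


def test_update_really_updates():
    repo = FakeRepo()
    repo.add(1, {"name": "old"})

    person = repo.get(1)
    person["name"] = "new"          # mutating a copy: the store still says "old"
    assert repo.get(1)["name"] == "old"

    repo.update(1, person)          # only now does the change reach the store
    assert repo.get(1)["name"] == "new"
```

If `get` handed back the stored instance directly, the first assertion would fail and the test would pass even with `update` deleted, which is exactly the bug the comment points out.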
Starting TDD always reminds me of that Mitch Hedberg joke: "People who smoke cigarettes, they say 'Man, you don't know how hard it is to quit smoking.' Yes, I do -- it's as hard as it is to start flossing."
debify 0.0.1
pack a set of files into a .deb file with minimal fuss.
debify: pack a set of files into a .deb file with minimal fuss.

* usage:

    $ find /usr/lib/python2.6/site-packages/foo | python debify.py pack paths py-foo_0.1 'foo for python'

  py-foo_0.1.deb is created.

* Why would I want to use this?

  Because you just want to package these files, with one command invocation, without having to go through a tutorial first.

* examples:

  * round up everything under a directory

    Package everything under /usr/lib/foo to be installed to /alt/lib/foo and save it as foo_0.1.deb:

    $ debify.py pack dir foo_0.1 '<desc>' /usr/lib/foo --dest=/alt/lib

  * path stream

    $ find /usr/lib/foo | debify.py pack paths foo_0.1 '<desc>'

  * cpio

    $ (cd /usr/lib; find foo | cpio -o) | debify.py pack cpio foo_1.0 '<desc>' --dest=/alt/lib

* Motivation

  Keeping track of a set of related files as a package in a single namespace gets you 80% of the benefit of packaging with minimal effort. This is true even if you leave out facilities such as dependency management. Consider the alternative: without a convenient way to package files, one often ends up resorting to unmanaged installation options.

  The goal is to reduce packaging friction so that it is practical to manage apps and dependencies with the OS-native package management system. This gives:

  - a single namespace to manage applications and dependencies
  - the ability to deinstall them
  - an archive of dependencies as .deb files for efficient and reproducible re-creation of a configuration

  These goals are not achieved by installation and deployment methods such as:

  - rsync
  - ./configure; make install
  - language-specific installers: cpan, setuptools
  - fabric

  These methods install, copy, and automate, but they do not manage packages.

* The approach

  The approach is to work with application-specific installation methods to pack the bits into .deb packages. Right now, the user has to build the list of installed files.
  The plan is to support automated capture and packaging of common installation methods such as:

  - make install
  - easy_install
  - cpan

* How do I capture installed files?

  To capture installed files, you can do something like:

    # take a snapshot. most things install somewhere under /usr/..
    $ find /usr/ | sort > x.pre
    $ sudo make install   # or easy_install or cpan...
    $ find /usr/ | sort > x.post
    $ comm -23 x.post x.pre > x.installed-files

    # inspect the list to see if it makes sense.
    $ less x.installed-files

    # debify
    $ cat x.installed-files | debify.py pack paths foo_0.1 '<desc>'

    # Install the package over the current image (installed files).
    # This has the effect of taking the unmanaged app under the control of the Debian package system.
    $ sudo dpkg -i foo_0.1.deb

    # You can clean it up like this. The .deb file can be stashed away for later deployment.
    $ sudo dpkg -r foo_0.1

  Having a jail/chroot sandbox environment would make this much faster and more flexible, but that would be another project.
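The comm -23 step above is just a set difference of two sorted file listings. Purely as an illustration of that step (not part of debify; the paths are made up):

```python
def newly_installed(pre, post):
    """Paths present after the install but not before (the comm -23 step)."""
    return sorted(set(post) - set(pre))

pre = ["/usr/lib/foo", "/usr/lib/foo/old.py"]
post = ["/usr/lib/foo", "/usr/lib/foo/old.py",
        "/usr/lib/foo/new.py", "/usr/bin/foo"]

print(newly_installed(pre, post))
# ['/usr/bin/foo', '/usr/lib/foo/new.py']
```

In the shell workflow, `pre` and `post` would come from the two `find /usr/ | sort` snapshots, and the resulting list is what gets piped to `debify.py pack paths`.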
- Author: karasuyamatengu
- Keywords: debian,package
- License: LGPL
- Platform: POSIX,Windows
- Categories
- Package Index Owner: karasuyamatengu
- DOAP record: debify-0.0.1.xml
16 June 2010 23:00 [Source: ICIS news]
HOUSTON (ICIS news)--Rhodia plans to acquire China-based Feixiang Chemicals.
Rhodia said the acquisition price was based on an enterprise value of $489m (€396m) for 100% of the company. Feixiang’s current majority owner would retain 12.5% of the capital over the next two years, Rhodia said.
Rhodia chairman and chief executive Jean-Pierre Clamadieu commented on the acquisition of Zhangjiagang city-based Feixiang in China.
“We aim to double the size of the acquired business within the next five years,” he said. “Following the completion of this acquisition, Rhodia will generate around one-third of its net sales in Asia.”
Feixiang Chemicals employed about 650 people, with top line growth averaging 20%/year, Rhodia said.
Rhodia said it expected to finalise the acquisition in the second half of this year.