In this section, you'll create a simple web method (a method that can be invoked over the web) in the web service. This operation is designed to return customer data. In a real world application, this method would probably perform some database lookups. In this simple example, your service will simply return the name "John Smith" to all customer enquiries.
Before you start, be sure that Workshop for WebLogic has MailingListService.java open for editing in the Design View. To ensure that the file is open for editing, double-click on MailingListService.java in the Package Explorer view.
At this point, the method will be marked with red-underlining, indicating a compile error. In the next step, you will correct that error.
return "John Smith";
The final method should appear as follows:
@WebMethod
public String getCustomers() {
    return "John Smith";
}
In Source View, the class should now look like this:
package services;

import javax.jws.*;

@WebService
public class MailingListService {

    @WebMethod()
    public String getCustomers() {
        return "John Smith";
    }
}
In Design View, the class should look like this:
The craft command tool is a powerful developer tool that lets you quickly scaffold your project with models, controllers, views, commands, providers, and more, condensing nearly everything down to its simplest form via the craft namespace. No more redundancy in your development time creating boilerplate code: Masonite condenses all common development tasks into a single namespace.
For example, in Django you may need to do something like:
$ django-admin startproject
$ python manage.py runserver
$ python manage.py migrate
The craft tool condenses all commonly used commands into its own namespace
$ craft new
$ craft serve
$ craft migrate
All scaffolding of Masonite can be done manually (manually creating a controller and importing the view function, for example) but the craft command tool is used for speeding up development and cutting down on mundane development time.
When craft is used outside of a Masonite directory, it will only show a few commands such as the `new` and `install` commands. Other commands, such as commands for creating controllers or models, are loaded in from the Masonite project itself.
Many commands are loaded into the framework itself and fetched when craft is run in a Masonite project directory. This allows version-specific Masonite commands to be efficiently handled on each subsequent version, as well as third-party commands to be loaded in, which expands craft itself.
The possible commands for craft include:
You can "tinker" around with Masonite by running:
$ craft tinker
This command will start a Python shell but also imports the container by default. So we can call:
Type `exit()` to exit.
>>> app
<masonite.app.App object at 0x10cfb8d30>
>>> app.make('Request')
<masonite.request.Request object at 0x10d03c8d0>
>>> app.collect("Mail*Driver")
{'MailSmtpDriver': <class 'masonite.drivers.MailSmtpDriver.MailSmtpDriver'>, 'MailMailgunDriver': <class 'masonite.drivers.MailMailgunDriver.MailMailgunDriver'>}
>>> exit()
And play around with the container. This is a useful debugging tool to verify that objects are loaded into the container if there are any issues.
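The container idea that tinker exposes can be sketched in plain Python. The class below is purely illustrative (it is not Masonite's actual implementation); the method names simply mirror the session above:

```python
import fnmatch

class App:
    """Toy sketch of a service container: a key -> object registry
    with glob-style collection, mirroring app.make / app.collect."""

    def __init__(self):
        self._bindings = {}

    def bind(self, name, obj):
        # Register an object under a string key.
        self._bindings[name] = obj
        return self

    def make(self, name):
        # Resolve a previously bound object by key.
        return self._bindings[name]

    def collect(self, pattern):
        # Return all bindings whose key matches a glob pattern,
        # e.g. "Mail*Driver".
        return {name: obj for name, obj in self._bindings.items()
                if fnmatch.fnmatch(name, pattern)}
```

Binding a `Request` object and then calling `make('Request')` or `collect("Mail*Driver")` behaves analogously to the tinker session shown above.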
Another useful command is the show:routes command which will display a table of available routes that can be hit:
$ craft show:routes
This will display a table that looks like:
======== ========= ======== ======== ============
Method   Path      Name     Domain   Middleware
======== ========= ======== ======== ============
GET      /         welcome
GET      /home     home
GET      /user
POST     /create   user
======== ========= ======== ======== ============
If you are trying to debug your application or need help in the Slack channel, it might be beneficial to see some useful information about your system and environment. In this case we have a simple command:
$ craft info
This will give some small details about the current system which could be useful to someone trying to help you. Running the command will give you something like this:
Environment Information
------------------------- ------------------
System Information        MacOS x86_64 64bit
System Memory             8 GB
Python Version            CPython 3.6.5
Virtual Environment       ✓
Masonite Version          2.0.6
Craft Version             2.0.7
APP_ENV                   local
APP_DEBUG                 True
Feel free to contribute any additional information you think is necessary to the command in the core repository.
To create an authentication system with a login, register and a dashboard system, just run:
$ craft auth
This command will create several new templates, controllers and routes so you don’t need to create an authentication system from scratch, although you can. If you need a custom authentication system, this command will scaffold the project for you so you can go into these new controllers and change them how you see fit.
These new controllers are not a part of the framework itself but are now a part of your project. Do not treat editing these controllers as editing the framework source code.
Validators are classes for validating form or request input. We can create validators by running:
$ craft validator LoginValidator
Be sure to read the Validation documentation to learn more about validators.
Masonite uses Orator, which is an active-record-style ORM. If you are coming from other Python frameworks you may be more familiar with data-mapper ORMs like the Django ORM or SQLAlchemy. These styles of ORM are useful since the names of the columns in your table are typically the names of class attributes. If you forget what you named your column you can typically just look at the model, but if your model looks something like:
class User(Model):
    pass
Then it is not apparent what the columns are. We can run a simple command, though, to generate a docstring that we can throw onto our model:
$ craft model:docstring table_name
Which will generate something like this in the terminal:
"""""
We can now copy and paste that onto your model and change whatever we need to:
class User(Model):"""""pass
You can also specify the connection to use with the `--connection` option.
$ craft model:docstring table_name --connection amazon_db
If you wish to scaffold a controller, just run:
$ craft controller Dashboard
This command will create a new controller under `app/http/controllers/DashboardController.py`. By convention, all controllers should have "Controller" appended, and therefore Masonite will append "Controller" to the controller created.
You can create a controller without appending "Controller" to the end by running:
$ craft controller Dashboard -e
This will create an `app/http/controllers/Dashboard.py` file with a `Dashboard` controller. Notice that "Controller" is not appended.
`-e` is short for `--exact`. Either flag will work.
You may also create resource controllers which include standard resource actions such as show, create, update, etc:
$ craft controller Dashboard -r
`-r` is short for `--resource`. Either flag will work.
You can also obviously combine them:
$ craft controller Dashboard -r -e
If you’d like to start a new project, you can run:
$ craft new project_name
This will download a zip file of the `MasoniteFramework/masonite` repository and unzip it into your current working directory. This command will default to the latest release of the repo.
You may also specify some options. The `--version` option will create a new project depending on the releases from the `MasoniteFramework/masonite` repository.
$ craft new project_name --version 1.3.0
Or you can specify the branch you would like to create a new project with:
$ craft new project_name --branch develop
After you have created a new project, you will have a `requirements.txt` file with all of the project's dependencies. In addition to this file, you will also have a `.env-example` file which contains a boilerplate of a `.env` file. In order to install the dependencies, as well as copy the example environment file to a `.env` file, just run:
$ craft install
The `craft install` command will also run `craft key --store`, which generates a secret key and places it in the `.env` file.
All frameworks have a way to create migrations in order to manipulate database tables. Masonite uses a little bit of a different approach to migrations than other Python frameworks and makes the developer edit the migration file. This is the command to make a migration for an existing table:
$ craft migration name_of_migration --table users
If you are creating a migration for a table that does not exist yet, which the migration will create, you can pass the `--create` flag like so:
$ craft migration name_of_migration --create users
These two flags will create slightly different types of migrations.
After your migrations have been created, edited, and are ready for migrating, we can now migrate them into the database. To migrate all of your unmigrated migrations, just run:
$ craft migrate
You can also refresh and roll back all of your migrations and remigrate them.
This will essentially rebuild your entire database.
$ craft migrate:refresh
You can also roll back all migrations without remigrating:
$ craft migrate:reset
Lastly, you can roll back just the last set of migrations you tried migrating:
$ craft migrate:rollback
If you'd like to create a model, you can run:
$ craft model ModelName
This will scaffold a model under `app/ModelName` and import everything needed.
If you need to create a model in a specific folder starting from the `app` folder, then just run:
$ craft model Models/ModelName
This will create a model in `app/Models/ModelName.py`.
You can also use the `-s` and `-m` flags to create a seed or a migration at the same time.
$ craft model ModelName -s -m
This is a shortcut for these 3 commands:
$ craft model ModelName
$ craft seed ModelName
$ craft migration create_tablename_table --create tablename
Service Providers are a really powerful feature of Masonite. If you'd like to create your own service provider, just run:
$ craft provider DashboardProvider
This will create a file at `app/providers/DashboardProvider.py`.
Read more about Service Providers under the Service Provider documentation.
Views are simply HTML files located in `resources/templates` and can be created easily by running the command:
$ craft view blog
This command will create a template at `resources/templates/blog.html`.
You can also create a view deeper inside the `resources/templates` directory.
$ craft view auth/home
This will create a view under `resources/templates/auth/home.html`, but keep in mind that it will not create the directory for you. If the `auth` directory does not exist, this command will fail.
Jobs are designed to be loaded into queues. We can take time consuming tasks and throw them inside of a Job. We can then use this Job to push to a queue to speed up the performance of our application and prevent bottlenecks and slowdowns.
$ craft job SendWelcomeEmail
Jobs will be put inside the `app/jobs` directory. See the Queues and Jobs documentation for more information.
Masonite has a queue feature that allows you to load the jobs you create in the section above and either run them asynchronously or send them off to a message broker like RabbitMQ. You may start the worker by running:
$ craft queue:work
You'll need to read the Queues and Jobs documentation for further info on how to set up this feature.
You may create a PyPi package with an added `integrations.py` file which is specific to Masonite. You can learn more about packages by reading the Creating Packages documentation. To create a package boilerplate, just run:
$ craft package name_of_package
You can scaffold out basic command boilerplate:
$ craft command Hello
This will create an `app/commands/HelloCommand.py` file with the `HelloCommand` class.
You can run the WSGI server by simply running:
$ craft serve
This will set the server to auto-reload when you make file changes.
You can also set it to not auto-reload on file changes:
$ craft serve --dont-reload
or the shorthand
$ craft serve -d
If you have unmigrated migrations, Masonite will recommend running `craft migrate` when running the server.
The serve command also has a live reloading option which will refresh any connected browsers so you can more quickly prototype your jinja templates.
This is not a great tool for changing Python code and any Python code changes may still require a browser refresh to see the changes.
$ craft serve --live-reload
You can bind to a host using `-b` and/or a port using `-p`:
$ craft serve -b 127.0.0.1 -p 8080
Masonite comes with a pretty quick auto-reloading development server. By default there will be a 1 second delay between file change detection and the server reloading. This is a fairly slow reloading interval, and most systems can handle a much faster interval while still properly managing the starting and killing of process PIDs.
You can change the interval to something less than 1 using the `-i` option:
$ craft serve -r -i .1
You will notice a considerably faster reloading time on your machine. If your machine can handle this interval speed (meaning your machine is properly starting and killing the processes) then you can safely proceed using a lower interval during development.
Masonite comes with a way to encrypt data and, by default, encrypts all cookies set by the framework. Masonite uses a key to encrypt and decrypt data. Read the Encryption documentation for more information on encryption.
To generate a secret key, we can run:
$ craft key
This will generate a 32 bit string which you can paste into your `.env` file under the `KEY` setting.
You may also pass the `--store` flag, which will automatically add the key to your `.env` file:
$ craft key --store
This command is run whenever you run `craft install`.
Great! You are now a master at the craft command line tool.
You can also scaffold out basic test cases. You can do this by running:
$ craft test NameOfTest
This will scaffold the test in the base `tests/` directory and you can move it to wherever you like from there.
Masonite has the concept of publishing, where you can publish specific assets to a developer's application. This will allow them to make tweaks to better fit your package to their needs.
You can publish a specific provider by running:
$ craft publish ProviderName
You can also optionally specify a tag:
$ craft publish ProviderName --tag name
You should read more about publishing at the publishing documentation page. | https://docs.masoniteproject.com/v/v2.2/the-craft-command/introduction | CC-MAIN-2020-16 | refinedweb | 2,139 | 60.85 |
puts, fputs - Writes a string to a stream
Standard C Library (libc.a)
#include <stdio.h>
int puts(
const char *string);
int fputs(
const char *string,
FILE *stream);
string
    Points to a string to be written to output.
stream
    Points to the FILE structure of an open file.
The puts() function writes the null-terminated string pointed to by the string parameter, followed by a newline character, to the standard output stream, stdout.
The fputs() function writes the null-terminated string pointed to by the string parameter to the output stream specified by the stream parameter. The fputs() function does not append a newline character.
Neither function writes the terminating null byte.
The st_ctime and st_mtime fields of the file are marked for update between the successful execution of the puts() or fputs() function and the next successful completion of a call to one of the following: an fflush() or fclose() function on the same stream, or the exit() or abort() function.
Full use.
Upon successful completion, the puts() and fputs() functions return the number of characters written. Both subroutines return EOF on an error.
The puts() and fputs() functions fail under either of the following conditions: the stream is unbuffered; or the stream's buffer needed to be flushed, the function call caused an underlying write() or lseek() to be invoked, and this underlying operation failed with incomplete output.
In addition, on failure the puts() and fputs() functions set errno to a value indicating the error.

See also: fgetws(3), fputws(3), gets(3), printf(3), putc(3), putwc(3).
Working With Design Patterns: State
Conditional logic is essential to building any application, yet too much can make an application incomprehensible. Many of the applications I build require that an object exist in many different states, with behavior differing from state to state. A straightforward implementation involves lots of if statements and complex conditionals, producing overly convoluted solutions in short order. As a remedy, I use the state design pattern to keep my code from getting out of hand.
Holdings in a library provide a good example. A holding is a copy of a book (see Listing 1). (In my implementation, the book is simply the ISBN classification information. Thus, each holding object references a copy number and a book object.) Holdings can be checked out, checked in, they can be moved from branch to branch, they can be held by a patron, they can be warehoused, and so on. Each of these events puts the holding into a state where different rules apply. For example, a book that's checked out obviously can't be warehoused.
Listing 1: The Book class.
// BookTest.java
import static org.junit.Assert.*;

import org.junit.*;

public class BookTest {
   public static final Book CATCH22 =
      new Book("0-671-12805-1", "Catch-22", "Heller, Joseph", "1961");

   @Test
   public void create() {
      assertEquals("0-671-12805-1", CATCH22.getIsbn());
      assertEquals("Catch-22", CATCH22.getTitle());
      assertEquals("Heller, Joseph", CATCH22.getAuthor());
      assertEquals("1961", CATCH22.getYear());
   }
}

// Book.java
public class Book {
   private final String isbn;
   private final String title;
   private final String author;
   private final String year;

   public Book(String isbn, String title, String author, String year) {
      this.isbn = isbn;
      this.title = title;
      this.author = author;
      this.year = year;
   }

   public String getIsbn() { return isbn; }
   public String getTitle() { return title; }
   public String getAuthor() { return author; }
   public String getYear() { return year; }
}
Listing 2 shows a starter implementation for Holding. (Note that I'm not yet concerned with the relevancy of the patron ID.)
Listing 2: An initial Holding implementation.
// HoldingTest.java
import static org.junit.Assert.*;

import java.util.Date;

import org.junit.*;

public class HoldingTest {
   private Holding holding;
   private static final Date NOW = new Date();
   private static final String PATRON_ID = "123";

   @Before
   public void setUp() {
      holding = new Holding(BookTest.CATCH22, 1);
   }

   @Test
   public void checkout() {
      holding.checkout(NOW, PATRON_ID);
      assertTrue(holding.isOnLoan());
      assertEquals(NOW, holding.getLoanDate());
   }

   @Test
   public void checkin() {
      Date later = new Date(NOW.getTime() + 1);
      holding.checkout(NOW, PATRON_ID);
      holding.checkin(later);
      assertFalse(holding.isOnLoan());
   }
}

// Holding.java
import java.util.Date;

public class Holding {
   private final Book book;
   private final int copyNumber;
   private Date checkoutDate;

   public Holding(Book book, int copyNumber) {
      this.book = book;
      this.copyNumber = copyNumber;
   }

   public boolean isOnLoan() {
      return checkoutDate != null;
   }

   public Date getLoanDate() {
      return checkoutDate;
   }

   public void checkout(Date date, String patronId) {
      checkoutDate = date;
   }

   public void checkin(Date date) {
      checkoutDate = null;
   }
}
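To give a sense of where the state pattern takes this design, here is a hypothetical sketch (my own illustration, not code from the article; the class and method names are assumptions): each state becomes an object that answers events itself and returns the next state, so the holding needs no conditional chains over a status flag.

```java
import java.util.Date;

// Each state decides what every event means in that state.
interface HoldingState {
    HoldingState checkout(Date date);   // returns the next state
    HoldingState checkin(Date date);
    boolean isOnLoan();
}

class Available implements HoldingState {
    public HoldingState checkout(Date date) {
        return new OnLoan(date);        // checking out moves to OnLoan
    }
    public HoldingState checkin(Date date) {
        throw new IllegalStateException("holding is not on loan");
    }
    public boolean isOnLoan() { return false; }
}

class OnLoan implements HoldingState {
    private final Date loanDate;

    OnLoan(Date loanDate) { this.loanDate = loanDate; }

    public HoldingState checkout(Date date) {
        throw new IllegalStateException("holding is already on loan");
    }
    public HoldingState checkin(Date date) {
        return new Available();         // checking in moves back
    }
    public boolean isOnLoan() { return true; }
}
```

A holding would then hold a single HoldingState field and delegate checkout and checkin to it, replacing its current state with whatever the event returns.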
| http://www.developer.com/design/article.php/3753906/Working-With-Design-Patterns-State.htm | CC-MAIN-2017-26 | refinedweb | 419 | 51.65 |
No. 7, 2017

Cerenem Journal
is the graduate journal of the Centre for Research in New Music, University of Huddersfield
Contents

Editorial: Music and Persons 7
Matteo Fargion + Jonathan Burrows: On Making Portraits 21
Christian Wolff: In Conversation 37
Mira Benjamin + Luke Nickel: Correspondence on Tuning 49
Michael Finnissy + Cassandra Miller: Transcription, Photography, Portraiture 58
Mark So + Joseph Kudirka: The Name Pieces 71
Contributors 222
Frontispiece: from Mark So, Millay (2010)
Edit, typesetting: Lawrence Dunn
Picture credits p. 223
editorial
lawrence dunn: music and persons
A Rose is a Rose is a Round, James Tenney, part of Postal Pieces, 1965-71.
What has music to do with the people it’s ‘for’? Music can seem to be an almost inevitably ‘dedicated’ activity— in that music can actively resist not being ‘for’ something or somebody. (The designation ‘music’ does after all imply dedication to the muses.) Dedication is not the same as ‘tailoring’ or ‘design’—it is more abstract; indeed, more devotional. As in consecration, it has the character of setting-apart, of surrendering, of gifting. Dedication has two potentially directed aspects: an internal and
Cerenem Journal no. 6
an external; but they comingle. The persons for whom the music is ‘for’ can be an internal circle, a private kin; or they can be some variety of external public or ceremonial (as originally, symbolic figures: the muse, prince, deity, etc.) The boundary between one’s kin and the public is necessarily porous, in the sense that the kin invents a way for the public to articulate itself. The boundaries of the public are shaped by its being made up of potential kin; and kinship (which, if it is not familial kinship, is here artistic or ‘in-law’ kinship) can always be dismantled, divorced, disowned. Kin is brought from the public realm and in theory can be divorced back to it—though a kinship relation cannot be made raw again once it has been cooked. The implicit thing, though, that underlies dedication (‘forness’) is a kind of contradistinction. If there is a for then there must be a not for. There are those for whom this music is not for. There is music that is ‘not for me’. If one isn’t careful, dedication can become a kind of kernel, from which spiral out answers to most other potential questions about music. ‘Why make this music?’ ‘Because it is for my kin.’ Indeed, the act of making is identified with dedication to the kin—if it is undedicated, it isn’t yet ‘made’. The artistic kin could be taken to be analogous to the ‘artworld’ a la Danto1, only that the kin are more articulated: they make up those with which an individual affiliates and aligns, has been adopted and looked-after. Danto’s artworld is a more mercenary and capitalist arena; the art kin is precapitalist, based on relations of tutelage and apprenticeship and collectivity and friendship, and can be explicitly anti-capitalist. Nevertheless, the modern capitalist artworld has at its core an elaborate cascade of multigenerational art kinship. It is not so much ‘study’ that grants adoption by this family so much as it is studentship. Affiliation is not something one can obtain merely by parody or imitation.
Affiliation within the art kin is done by dedication—that the art is held to obtain a genuine (rather than feigned) relation to its progenitors and is accepted to have such. This is the situation after the transformations of the twentieth century: it is the kin, and not the merchants or patrons, who are 1 See Arthur C. Danto, The Transfiguration of the Commonplace, Harvard University Press, 1981.
those to actually have the power to grant actions as aesthetic in the first place (as opposed to anaesthetic, or utilitarian, or mercenary, or commoditised). Or rather, if, as Boris Groys would have it, ‘everything is aestheticised’,2 the kin are those who can grant aesthetic ‘worth’, as opposed to (mere) aesthetic ‘content’. If artists today continue to be involved in making craft-objects or completing ‘renderings’ or ‘likenesses’, then the status of these labour-products is not determined by the ‘artistry’ of their rendering or likeness; they are determined by the way these objects are deemed and dedicated. As Andrea Fraser puts it: ‘An artist is a myth. Artists internalize the myth in the process of their development and then strive to embody and perform it.’3 This is, one supposes, the difference between the outsider and insider artist. The outsider artist is deemed to be essentially incapable of performing their art—their art remains undedicated.4 They cannot get detached enough from their autonomism. Outsider art essentially isn’t art (it remains, from the bland point of view of habitually attendant professionals, occupational therapy) until it is brought into the artworld by its representatives and rededicated as such. Of course, the art of outsider artists is art—and it is from the point of view even from those with severe mental disability.5 But one feels it must only be art in the same way that other autonomisms are art; that the doodle is art; or that prehistoric art is ‘art’—in that they have been held to be, or ‘deemed’ art. Certainly in the case of prehistoric art, this ‘deeming’ is such an anachronism that scholars 2 Boris Groys, In the Flow, chap. 3, Verso, 2016. 3 Quoted in Sarah Thornton, 33 Artists in 3 Acts, ‘Scene 6’, W. W. Norton, 2014. 
4 ‘[T]he work of art is only such—that is, both “work” and “of art”—and only has meaning for us because “it is only present through a relation with the other”, because it “calls for the other”, because it “requires the other”. But the art brut work has no need of the other . . . The maker of art brut neither invites nor addresses us.’ Alain Bouillet quoted in David Maclagan, Outsider Art: From the Margins to the Marketplace, Reaktion Books, 2009, p. 143. 5 ‘Even where we seem to be beyond any ordinary form of intentionality, let alone that usually associated with the creation of “works of art”, there is arguably something inherent in the human gesture of making something that still has its own significance. As one writer, Madeleine Lommel, puts it: “How can we not take account of the fundamental impulses, that is, the confrontation with matter, that innate process on which man has to depend in order to impose an answerable presence.”’ David Maclagan, ibid. pp. 145-7.
considering it in the twenty-first century feel it is necessary from the get-go to make a disclaimer.6 Such is the overwhelming capacity for the art kin to necessitate a landscape of dedication and rededication that one is tempted to regard the doodle only as ‘proto-art’. (Doodling is to treading-water as the dedicated mark is to freestyle or breaststroke: swimming that is not only in a particular direction but has also a generic or preordained quality—i.e. ‘stroke’. Does water treading count as a stroke?) * I should make an apology. This issue is many months late. Most of the material was assembled through 2016: it should have appeared in January at the latest. However, in December, my step-father was diagnosed with cancer—and everything (at least from my perspective) went on hold. After he died, in February, I found myself emotionally overwhelmed and immersed in compositional commitments. I also wanted to contribute something to this issue, but struggled with exactly what. The present issue was always going to be about how music gets to be about people—initially, what music might have to do with portraiting, identity. But since that time it’s taken me a while (and a writing-through of a number of compositions for other people in the intervening period) to end up a little more constructed about how music becomes to be ‘about’ and ‘with’ other people. It occurs to me now that grief is a rather useless sort of emotion when it comes to musicmaking. At least, that particular grief one comes by after loss, rather than the ‘pre-emptive’ grief borne of the knowledge that loss will come. I suppose one could call it ‘melancholy’. Oddly enough, this sort of ‘pre-emptive’ grief was something I’d had in mind making pieces throughout 2016; it remains so. 
Death is, after all, fundamentally just an idea; life, on the other hand, is actually lived, in the way that music is lived and is made of 6 ‘I have never felt comfortable with the use of the term ‘art’ to describe so many different [prehistoric] phenomena and have become increasingly aware of the difficulties involved in their study. . . . Although there have been many accounts of prehistoric art, nearly all of them begin by making the assumption that the concept is a useful one.’ Richard Bradley, Image and Audience: Rethinking Prehistoric Art, Oxford, 2009, p. viii.
Three Pears, Walnuts, Glass of Wine and Knife Jean-Baptiste-Simeon Chardin, Musée de Louvre, 1768
bits of living. Loss is a huge part of musicmaking—though one might argue it’s become less foregrounded since the advent of recording. But it remains the case that musical activity can and only happens once. Recordings are at best likenesses. Sometimes, they are elaborately worked fictions—and fictions, like likenesses, can be more real than reality. Reality disappears. Likeness has the habit of articulating life; life becomes a ‘version’ of previously assimilated likenesses. Still, it takes an unusual level of acceptance (and a certain skill and togetherness) to be truly acclimatised to the genuine levels of loss involved in making music. Ephemerality is these days a kind of fetish, and has an allure exactly because documentation is so straightforward and hence so ubiquitous. In Montreal in April, and later in Glasgow, I was working with Linda Catlin Smith—initially a mentor, she became a good friend. She had also experienced loss fairly recently, and I asked her about how she came to understand it from the point of view of composition: I think the complex things that I feel and think are always with me,
in the background of my composing, like an atmosphere. And it’s the act of composing that helps me the most when I am struggling with large or overwhelming emotions. I more or less use the work as a place to go, a kind of solace or retreat from myself. That may seem selfish, I don’t know. I don’t make sad or grieving pieces—I don’t make elegies or memorials—but loss is a big part of life and is in the background of my thoughts. It is like the aura of a still life painting —those beautiful Chardin paintings of fruit for instance—where the objects are surrounded by a solemn yet sensuous solitude.7
I sympathise with a lack of interest in the elegiac. Still, what do I know. It occurs to me that as a twentysomething I have simply not lived long enough—and have enjoyed the privilege of security as a fortunate westerner—to have much experience of loss, in all its awful contradictions. As much as it is a mystery what art and music have to do with other persons, it’s a mystery what life itself has to do with other persons—who drift in and out of varying and maddening conditions of unknowability. As Linda said, ‘I find that grief is a very strange creature and it moves in unpredictable ways.’ * How does music, as a ‘dedicated’ activity, differ from the other arts? The nature of the visual or installation artist today has become remarkably similar to that of the composer. Their status is confirmed as such as it were ‘behind the scenes’, in rehearsal, in planning and programming (‘curation’)—they arrive into the public realm in fully-formed mythic attire, the gallery/concert-hall doing everything it can to establish status whilst assuming the condition of transparency. The onlooker is a ‘guest’, or a ‘friend of the family’. This differs markedly from the older traditions of visual art, where the artist was employed to produce renderings and paintings-as-status-objects for patron-owners. As the content of artworks became less ‘retinal’, artists became much more concerned with their position as performed in front of a public. Still, music functions differently in the sense that quite often the piece is a ‘gift’ granted to the performer, without whom it would not 7 Email, June 11, 2017.
A melpa woman from the Mt. Hagen region, New Guinea. She is adorned with moka shells.
exist in the world. Installation works are not ‘given’ to their gallerists or commissioners in quite the same way. Music exists typically as a relationship between specific persons; a visual artist’s studio, or installation hanging team are not specific in this sense. An installation is not a gift given to the workers and assistants who have to fabricate and transport and install it. Gift-giving and exchange are fundamental to human interaction such that this foregrounding by musical relationships ought to be given a second look. Anthropologist Marilyn Strathern put it with a scintillating bluntness: ‘for “person”, one can write “gift”.’ She elaborates: It is arguable that all Melanesian gift exchanges are ‘reproductive’ . . . Melanesians borrow origin stories, wealth and—as in the area I know best (Mount Hagen)—the expertise by which to organise their religion and their future. One clan takes from another its means of life. Indeed, exchanges surrounding the transfer of reproductive potential are intrinsic to the constitution of identity. . . . Pigs create
Cerenem Journal no. 6
pigs and money creates money, as shell valuables also reproduce themselves[.] . . . Persons are not conceptualised . . . as free-standing. A Hagen clan is composed of its agnates and those foreigners detached from other clans who will give birth to its children; a woman contains the child that grows through the acts of a man; shells are mounted on the breast. One person may ‘carry’ another, as the origin or cause of its existence and acts. An implicate field of persons is thus imagined in the division or dispersal of bodies or body parts.8
Agnates are descendants of male lines: the child in Mt. Hagen and the Trobriands has typically ‘two fathers’—the first being the man ‘whose semen moulds somatic identity’ and the other (the mother’s brother) ‘who defines the kin group to which the child belongs.’ Moka shells are given by men and adorn women (see above), retaining a sedimented record of their previous exchange, signalling status. The men who give them obtain status through their gifting, just as they do with their fathering. Their fathering is gifting. For Strathern, the shells are not mere ‘symbols’ of bodily reproduction: they are its equivalent. For Strathern, the gift is more basic than we would typically hold it to be, with its ‘altruistic’ connotations9 in modern, post-industrial western culture. Gifts are not ‘presents’, where a present is a kind of generalised consumerism. Gifts are rather part of the process of ‘personification’: ‘the entire system of production, distribution and consumption . . . that converts food and objects and people into other people.’ Gifts are ‘given’ in the sense that the world is ‘given’: in that the world is not only inherited, but also shared, contested, delimited, reproduced. Where there are gifts, so there are persons.

8 Marilyn Strathern, Reproducing the Future: Essays on anthropology, kinship and the new reproductive technologies, Manchester University Press, 1992, pp. 120-5.

9 Strathern is clear on this: in situations of exchange, ‘[t]here is no axiomatic evaluation of intimacy or closeness. . . . On the contrary, people work to create divisions between themselves. For in the activation of relations people make explicit what differentiates them. One may put it that it is the relationship between them that separates donor from recipient or mother from child. Persons are detached, not as individuals from the background of society or environment, but from other persons. However, detachment is never final, and the process is constantly recreated in people’s dealings with one another. To thus be in a state of division with respect to others renders the Melanesian person dividual.’ (emphasis added) Ibid., p. 125.
Musical compositions are gifts of a much greater level of abstraction—and which often accrue the habitual altruistic character of gifts in a society ordered by commodity, capital and contracts. But baked into them is this much more fundamental set of relations. The musical artist is a person whose personification is determined by their gifting and granting.10 Gifts in the form of compositions are articulated by exchange with other (specified) persons—performers—whose request and receptivity is mutually reinforcing and reproductive. Altruism and exchange go together with labour, with ‘carrying’. This line of reasoning can lead to some slightly alarming places. Are the composer and performer the ‘parents’ of a piece? Is the piece a kind of child, whose existence is presaged by parental efforts—indeed, labour, to bring into the world? A child which has an independent and subsequent existence? Especially given the gender roles so often found in classical music—that of the male composer and female virtuosic soloist—this kind of picture is worryingly apt. Of course, heterosexuality is hardly the limit of sexual relations! Musical parentage comes in as many myriad kinds as every other kind of parentage and sexuality. Still, parentage is a seductive way to see musical relationships, inasmuch as, so often, the piece (as opposed to the score or notation) is an entity which seems to accrue its own life and rights. A hybrid life, made out of an imaginary part, a set-down part, performed and remembered parts. It would also seem often to have its own ‘requirements’, at least during its making—there are things the piece ‘needs’, ‘can do without’. Such requirements exist during reproduction too, during recording, subsequent rendition, etc. The greatest requirement, though, is perhaps ‘subsistence’. Pieces, like persons, only exist in the world inasmuch as persons are at hand to enable their existence to continue.

10 In a related sense, think of how often a sounding of a work is identified with the person who authored it: ‘What are they playing on the radio?’ ‘I think it’s Berlioz.’ The person of Berlioz becomes identified and personified by performances of works granted to and given by others. This sort of ‘distributed’ personification—personification by way of residual authorship—contrasts with more usual kinds of personhood (which could be personification by action amongst relations), such that even seeing composers doing ordinary, ‘related’ things can appear weird for subsequent onlookers. This is how disembodied their personhood has become—the continuity of their identity has been achieved not by reproduction through relation, but through dedicated ‘residuals’. See

Pieces
do not have wills; they cannot elect such-and-such. But neither the absence of a discernible will nor the condition of ownership leads, in other situations, to a renunciation of personhood. The person with extreme autism, with no ability to make choices for themselves, does not stop being a person. A slave owned by another does not cease to be a person. Are pieces persons? Given the interest in the points-of-view of objects and systems (things usually held to be non-persons) in Actor Network Theory and recent philosophy, positing such a thing isn’t as outlandish as it might first appear.11 What makes pieces and other artworks more like persons is to do with their ‘dedicatedness’, the fact that they have been put into the world and set-apart—consecrated, as it were—so as to be themselves and not some other thing. A piece, whatever its makeup, and however multiple its composition, is deemed to be singular. Persons are, after all, ensembles of matter and thought, composed of parts, which are taken to have singular identities, whose continuity is retained only by interaction with and reliance on others. As Strathern says: ‘Persons are not . . . free-standing.’ Acquired personhood through dedication—maybe this is what ‘naming’ is.12 In an early text (1916), unpublished during his lifetime, Walter Benjamin wrote:

Things have no proper names except in God. For in his creative word, God called them into being, calling them by their proper names. In the language of men, however, they are overnamed. There is, in the relation of human languages to that of things, something that can be approximately described as ‘over-naming’: over-naming as the deepest linguistic reason for all melancholy and (from the point of view of the thing) of all deliberate muteness.13

11 Of the ‘actor’ in Actor Network Theory, Bruno Latour writes: ‘It is not by accident that this expression, [the ‘actor’] like that of ‘person’, comes from the stage. Far from indicating a pure and unproblematic source of action, they both lead to puzzles as old as the institution of theater itself—[that of] the difference between [the actor’s] “authentic self” and his “social role”. To use the word ‘actor’ means that it’s never clear who and what is acting when we act since an actor on stage is never alone in acting. . . . By definition, action is dislocated. Action is borrowed, distributed, suggested, influenced, dominated, betrayed, translated.’ He goes on: ‘This “drawing” actors as there are debates about figuration in modern and contemporary art.’ Bruno Latour, Reassembling the Social: An Introduction to Actor Network Theory, Oxford University Press, 2005, pp. 46, 54.

12 Of the Fayum portraits (Greco-Roman paintings made in Egypt from the 1st to 3rd centuries), John Berger writes: ‘the two of them, [sitter and painter,] living at that moment, collaborated in a preparation for death, a preparation which would ensure survival. To paint was to name, and to be named was a guarantee of this continuity.’ John Berger, Portraits, Verso, 2015, chap. 2.
Nature is mute, and ‘where there is only a rustling of plants, there is always a lament. Because she is mute, nature mourns.’ But it is not as if, in a godless universe, humans are endowed with any greater capacity for naming. That which is named comes into the panoply of thinghood, but it is not made into a being merely by being called such-and-such. Benjamin is right to think of the human capacity for naming and dedication to be born of hubris—but then, nothing is born of hubris more than the God of Abraham himself, who is not a person (because he has no ‘beginning’, he ‘is that he is’), but is nonetheless the very entity whose single charge is to make covenant with and single out those persons who are his against those who are not his. Who he is for and who he is not for. The God of Abraham is the entity (the device) the Israelites use to dedicate themselves; he names them. Benjamin suggests that, because human, names are ‘already withered.’ But then so is Yahweh’s name: because there is no God—his is a ‘mere’ name, a reminder that his naming and dedication is required for his continued existence. Yahweh’s having-to-be-named is an embarrassment; his is a name akin to every other name, including those which are held to be false: Astarte, Asherah, Ba’al, Hadad, Aten. Yahweh’s name is just as ‘thrown upwards’, just as crudely ‘linguistic’, a naming made by having tongues wrap around and smother the soft palate; syllables pushed into the world to be remembered and transmitted and rendered mutable and forgettable. Why else would its utterance be taboo? His name is as withered as any other.

13 Walter Benjamin, ‘On Language as Such and on the Language of Man’, Selected Writings, Vol. 1, ed. Marcus Bullock, Michael W. Jennings, Harvard University Press, 1996, p. 73.
Benjamin inverts the melancholy of mute nature, suggesting that ‘in all mourning there is the deepest inclination to speechlessness’. This speechlessness is ‘more than the inability or disinclination to communicate. That which mourns feels itself thoroughly known by the unknowable.’ It is almost as if, in a condition of lament, one becomes aware of one’s primordial muteness, the deep humanness of one’s ability to name, which is also a characteristic hubris; one’s inability to provide a continual ‘naming’, so as to retain those who did live and now do no longer. That which is, is as such, whether it is named or unnamed; its naming is what grants us access and relation. But in mourning, the ambiguities of naming and personification collide with the basic conditions of being—the unknowabilities of being, such that there can be in the world ensembles of matter and thoughts, that are singular despite their singularities being unlocatable, that are multiple despite their multiplicities being denied; that are not ‘free-standing’, that ‘carry’ and ‘are carried’. There is no good way to know what a person is; we form them for ourselves. Our forms dwindle. They can be refigured.
*

I must thank Liza Lim for inviting me to edit the following issue, as well as the individual contributors, and Sam Gillies for assistance. The present issue consists of a series of exchanges and conversations, conducted in 2016. The first is a conversation between myself and Jonathan Burrows and Matteo Fargion, about their series of dances 52 Portraits. These are remarkable pieces, for Burrows and Fargion the culmination of almost 25 years of working together. Each is an individual entity—yet they all are ‘akin’, being made with and for, and also ‘using’, the dancers portraited. This is followed by an interview with Christian Wolff, conducted by Joseph Kudirka and Nick Williams. Wolff mostly discusses the Exercises, their genesis and their recording, as well as issues of composing with and for others. There follows an exchange between Luke Nickel and Mira Benjamin, concerning Benjamin’s recent research into tuning and Just Intonation and her personal experiences of navigating it—of ‘identifying’, ‘instantiating’, ‘inhabiting’, ‘sailing’. Michael Finnissy’s interview by Cassandra Miller follows. Finnissy discusses his personal approach to transcription, ‘writing-through’, and also touches on portraiture, photography, painting, the influence of David Hockney and Walter Benjamin, museums, culture, melancholy, and everydayness. Finally, there is Joseph Kudirka and Mark So’s exchange on So’s ‘name pieces’—a large series of pieces made through dedication and transcription. Their exchange forms something of a vicarious catalogue, stretching over ten years of work, showing the extraordinary range and extent of So’s method. Thanks must again go out to all the contributors, who put up with delays and were so generous with their contributions.

LD, July 2017
Photo by LD (2010)
Photo by LD (2006)
jonathan burrows + matteo fargion: on making portraits
interview with lawrence dunn
LD: It had been Tim Parkinson who had introduced me to Jonathan and Matteo’s work: both Tim and Matteo had been students of Kevin Volans in the ’80s and ’90s. Jonathan and Matteo have been working together since 1989, with dance pieces made for live performance and for television. Around 2000 they began producing a series of two-handers, Jonathan and Matteo both appearing together on stage. Both Sitting Duet (2001) was the first of a series of such duets, recently profiled by William Forsythe's Motionbank project. Their work together has had quite some influence, particularly in the dance world—the publication of Jonathan’s A Choreographer’s Handbook in 2010 did something to further solidify this influence. Matteo is somewhat shyer, a little more elusive and probably not quite as well-known in musical circles as Jonathan is in choreographic. But his subtly informal, sardonic, and (particularly in the case of the 52 Portraits) moving music is a crucial part of their collective project. Music of his has been programmed particularly by Tim Parkinson at Music we’d like to hear, and also by Parkinson Saunders, a duo of Tim’s with James Saunders on which Burrows and Fargion have had an impact. This conversation relates to their large-scale web-project 52 Portraits, published in video format online throughout 2016. The interview was conducted by email in the middle of 2016, halfway through the cycle of dances, which are all danced (apart from the final dance) by other dancers, friends and associates.

LD: Clocking in at maybe 200 minutes overall, with an enormous cast, this has got to be the most ambitious project you have both worked together on. Given that it’s all split up into little bits maybe it doesn’t feel like that, but considered as a single entity, one would have to look at least toward mid-century ballet to find something even slightly on this sort of scale. And yet, all these dances are being
quietly released week-by-week onto the web. How did this project come about?

JB: I had been thinking for a long time about other ways dance might occupy the internet, other than music videos and short clips of spectacular dancing that you might see on Facebook. The model for me was the year I spent on-and-off following Tim Etchells’ daily political playbill series called Vacuum Days, which ran for the entirety of 2011. Matteo and I had had a two year experience of working with exploratory digital software, motion capture and so forth, as part of William Forsythe’s Motionbank1 project in Frankfurt, so we had some idea of that place where art meets the digital, but what I liked about Tim’s project was that it wasn’t about things looking digital but rather about the obvious ways we all use software. So we decided to make a project which would take the short form of Facebook postings, but give it this accumulating quality, so it might transcend the usual instant and forgettable nature of dance clips on social media. And the choice to stage each portrait at a table was made with the understanding that many people would watch them while sat at a table with their laptop, so the watcher sits opposite the performer, sharing a familiar technological situation. The project has given Matteo and I a way to engage with making a much bigger kind of piece, with a large number of collaborators, but at the same time working in the way we always work: from the start, one step at a time, paying attention to detail and focussing everything on the gap between one thing and the next.

LD: I wonder if the dances made for this project might represent a return to an earlier way of working—as the majority of your pieces over the years have been performed not by third-party dancers but by yourselves. One early collaboration was a 1994 dance made for television, called Hands. Was that the last dance made with the camera in mind? How would you compare your approach then to now, twenty years later?
JB: Recently Matteo and I seem to have found more satisfactory ways to invite other people to join us in our work, encouraged perhaps by a moment in dance where collective practice has become important again. And we’re very glad about that. And in terms of making something specifically for camera, yes, I guess for many years we’ve been more interested in making rougher representations of actual live performance, but 52 Portraits goes back to what we were trying to achieve for camera in 1994 with Hands. In fact for me I would say that I think 52 Portraits is finally a way of continuing the Hands project, which we wanted to do for a long time, and it seems sometimes these things take decades and you just have to wait.

1 see

Still from Hands (1994), directed by Adam Roberts, choreography and performance Jonathan Burrows, music Matteo Fargion, design Teresa McCann, camera Jack Hazan. watch?v=vqJ-kQwxfFI

LD: These dances are all called portraits, but it wouldn’t be unreasonable to look at all of the two-hander pieces (going back to Both Sitting Duet) as being self-portraits of one kind or another. Would it be wrong to think of your work, generally speaking, as being more interested in portraiture than tableaux?

JB: I think your question touches upon something very interesting about dance, which is the way that no matter how abstract or distanced it seems, there is always a sense of the person revealed. Having said that though, the job of the dancer or performer is usually to resist the autobiographical impulse at all costs, because to embrace it is to reduce other rich and contradictory elements, like more abstract or formal things, and then you risk losing some of the peculiarities and uncertainties which make performance resonate.

LD: If I had to pinpoint the difference between Hands and these pieces, it could be the collective effect-by-osmosis of the duets. We as viewers of your pieces have gotten used to seeing Burrows and Fargion: the effect of this is that, when watching these dances for other
Still from Counting to One Hundred (2014)
people, the style of movement echoes Jonathan’s own characteristic movements; and of course pretty much all of the music is played or sung by Matteo. It brought to mind Gilbert and George, whom, because they have for so long appeared in every one of their paintings, one now ‘expects’ to see—even if they are hidden or don’t appear at all. With their paintings there emerges (at least with me) a certain ‘Where’s Wally’ looking-around-for-them. In other words, the difference between Hands and 52 Portraits is that, in the portraits, Burrows and Fargion are consistently ‘in the background’—is this reasonable?

JB: I think you’re right that many of the people who’ve worked on the portraits know our work, and are in some sense in negotiation with it already when they enter the room, regardless of what they propose. This might be dangerous in terms of trapping what happens in a certain too familiar place, but at the same time the more familiar aspects and performance tone of Matteo and I’s work creates a common ground where we might meet and move things forwards without too much instruction. And these 52 meetings with different artists are anyway feeding and disrupting and interrogating what Matteo and I do and think and assume and doubt and wish for, so the exchange is mutual, and that’s the point of doing it.

LD: There’s also something in the duets that feels therapeutic—I fairly
Still from Both Sitting Duet (2001; 2012 performance)
often get the feeling that Jonathan is Matteo’s therapist. Do you think there’s something therapeutic about these portrait dances too?—especially given that the song texts regularly dig into the dancing individual’s backstory and childhood, desires, etc.

JB: Well you’re right that the therapeutic nature of the dancing body is never far from the surface when we practice or watch dance, but for me the process of 52 Portraits is perhaps more sociological and political. The intention of the lyrics is to throw out the usual idea of the perfect, blessed, angelic dancer figure, and focus on more interesting, conflicting and contradicting information and ideas about what a dancer might be and why we might dance, and to expose the hidden politics of dance practice. Matteo and I are interested in counterpoint, both as a love between the parts but also as a friction which causes something else to happen. So for us the lyrics and music of each portrait are about sustaining and at the same time questioning the thing done. One of the pleasures of the project has been to experience the skill that dancers have, to be precise and at the same time spontaneous, and to pitch their performance with self-conscious awareness in relation to the camera and the viewer. And all of them come with a different methodology. And for me this is another aspect of the project, that as well as giving equal space to known and unknown
From 52 Portraits, Betsy Gregory. All photography by Hugo Glendinning.
Robert Cohan
(l-r) Francesca Fargion, Jonathan Burrows, Matteo Fargion
Siobhan Davies
William Forsythe
Flora Wellesley Wesley
Deborah Hay
Namron
artists, it also gives space to different approaches to working, in a way that challenges the way the dominant discourse wants always to simplify and to reject what doesn’t conform or no longer conforms.

LD: In this project there is a striking mix of persons doing the dancing. Some are quite well known (Betsy Gregory, William Forsythe, Robert Cohan, Siobhan Davies), while many others are young and relatively unknown. To put it bluntly—what’s your relationship with factual biography? In that, with a young dancer whose background is unknown, one could essentially tell any story one would like?

JB: The portraits work like this: I make some exchange with the artist about what they might do, inviting that they start from what is overused, worn out, dug up, archaeological and somehow burned into their motor memory; and I suggest that they might trace or map those remnants into the space in whatever way, not to show the moves but just be in the act of engaging with them. And I offer the musical form of La Folia, which Matteo and I worked with extensively throughout 2014, and some use it and others don’t and for the ones who don’t I suggest other ways of mapping the thing, like a song sung privately in the head, which perhaps contains some sort of questioning. And every person arrives and says the same thing, ‘I’ve had no time, I’ve got nothing really.’ And then they sit down and the work comes out. This all takes no more than an hour or so, and Hugo Glendinning is lighting as he goes and shooting from the start. And when we’re done I ask them some stock questions and some questions provoked by the conversation in the room, and I ask for a piece of music that matters in whatever way. And I write the text from the interview on the train home, using what they say verbatim, and I send the lyric and chosen music to Matteo, and Hugo sends him the video, and the music is written very fast. And the performers never hear the music until they see the final portrait.
The process is a kind of benign ineptness, built upon a lifetime of working together. The actual skills we use are hardly evident, and the same goes for the dancers. The human body changes too rapidly, and experiences what’s happening on a somatic level too intensely to grasp half of what is happening at any given moment, so we learn to deal with the superficial. And to answer your question, it’s not about truth or not, it’s much messier than that, because that’s
Rembrandt van Rijn, Woman Bathing in a Stream, 1654, London, National Gallery
Henry Raeburn (attr.), The Reverend Robert Walker Skating on Duddingston Loch (The Skating Minister), c.1790, Edinburgh, National Gallery of Scotland
how the body is.

LD: Was there anything as regards historical models of portraiture that affected the way these dances were composed?—the lighting, for example, is very much chiaroscuro. For me, two different sorts of historical portraiture seemed to be relevant—one being the private, ‘at a distance’ picture (Vermeer’s interiors, for example, or Rembrandt’s Woman Bathing in a Stream, whose model was probably his lover Hendrickje). The sort of pictures which are full of desire, psychology, maybe even a little erotic voyeurism—but very much ‘observed’ by the artist. The other sort of picture would be the ‘fantasy’ or ‘character’ portrait, where either a stock figure or a real person is bent into a shape or a pose by the artist. These pictures are more rhetorical, and are not seen from ‘afar’ but are instead very much flatter, with the subject foregrounded almost to a point of disembodiment. Raeburn’s The Skating Minister springs to mind. It seemed to me that these dances had these two sorts of portraiture present as models and
flitted between them.

JB: I like very much your picture of these two kinds of portraiture, which sound like what Robert Lowell described in poetry as ‘the raw and the cooked’. I think both are present in 52 Portraits, and it’s usually an accident of who we’re working with and what happens on the day. I’ve been thinking more about this issue since I read your comments and looked at the images of Rembrandt’s mistress and Raeburn’s skating man, and I’ve come to the conclusion that sometimes the difference in the gaze that’s invited in 52 Portraits is to do with the colour of the clothing which the person chooses to wear. I’ve noticed that dark clothing leaves the person floating in a space which softens and contains them without distracting us with surface; whereas light coloured clothing places the performer very much within the room, in a more two-dimensional and plastic way. Finally there is within performance, as within the visual arts, an ongoing tension between more objective and more subjective approaches and gazes, and I’m aware that portraiture is a dangerous thing to attempt in a climate resonant with this discussion. However my reason for doing it is not so much to represent or defend a subjective stance, or get into that argument at all really, but rather to use the portrait form as a way to challenge the hierarchies of currency within dance practice which constantly want to place one approach or style above another. And I do this because as an audience member I continue to find extraordinary experiences in the most unlikely and least acceptable of places, regardless of style or conceptual viewpoint. It seems to me that the only criterion really as to what resonates is that the person is more or less consistent and more or less evidently sentient.

LD: The music for these dances is enigmatic too—it feels both throwaway and carefully laboured.
Each is a song, with lyrics referring to the dancer, and each uses a model tune (from Tina Turner, or MIA, or Iggy Pop, or The Roots, or Stravinsky’s Les Noces) though I’m not sure they’re all that recognisable as they are mostly reconfigured and recomposed (I certainly didn’t recognise any). What was the thinking on this?

JB: Matteo and I just liked the idea that someone’s ordinary life or ideas might be sung as though what we are hearing is crucially important. And the act of singing has a way of universalising what
has been said.

LD: Do you have any thoughts on informality and formality? There feels something deeply informal about these dances—domestic, at turns—but also something about masks and formality and outward presentation. The music, too, is often very informal, but this can sometimes feel jarring somehow, though I don’t know ‘with what’.

JB: Matteo and I have had a policy for many years of saying yes to any invitations to perform, and then figuring out how to do it afterwards, whatever the space and conditions of working. Hence the title of the new piece we’re making, which is called Any Table Any Room. So we might be performing one week in a large proscenium arch theatre, and the next week in a hall without technical equipment. And each of those two extremes requires a different approach to the balance between what is formal and what is informal in the performance, and both qualities must be there in order that the audience members are invited and engaged, and at the same time free. The whole purpose of our performances is to be under the same roof, which is a term we borrowed from the director and performer Jan Ritsema, and the same philosophy applies to the portraits.

LD: There does seem to be a dialogue, both overall, and in the song lyrics themselves, between a certain ‘behind-the-scenes-ish-ness’—things to do with funding and the Arts Council, careers, education, boring practicalities—and something deeply lyrical. But then I guess this is a preoccupation of much of your work? (A Choreographer’s Handbook swings quite a bit between these two places.)

JB: In dance now there is a slow recognition that artistic practice includes many different elements, including how we deal with the practicalities and with the public face of what we do, and I wanted 52 Portraits to reflect this in a respectful way.
We are living through a period when there is a vast infrastructure of arts professionals, waged and protected by holiday pay and pensions and so forth, in ways artists can only dream of. And this class of arts professionals does good work but is also busy creating gateways for artists to pass through or not, and is constantly having to collude with government to create ever increasing bureaucratic mechanisms that we must negotiate. I wanted 52 Portraits to highlight the voices of artists while quietening these voices of bureaucracy, and one way to do that was to let artists speak directly about the daily job of
surviving.

LD: As we are now more than halfway through these dances being released, is there any long-term structure or development across them—even something emerging by accident?

JB: The only principle for curating the project was that it must be people whose work we love. But as the project has developed so it has become clear that it can never give room for everyone who should be there, and so we are looking at ways to make clear at the end that the list of 52 is in no way comprehensive and that it could go on. And we have already been asked would we do it again in another context, and our preferred model would be that the idea is put into the commons and anyone who wanted to make or subvert or do whatever they want with their own portraiture, would be welcome. And the list of 52 is in a sense deliberately random, shifting from known to quite unknown people, through obvious choices but with occasionally surprising ones. And the important thing for me is that everyone is equal under the roof of the project, so when I was asked could someone show just the portraits of older performers as part of another event, I said no, because to single out the older performers would be to make a judgment on their age, and for me there is a politics in the fact of ignoring all the usual hierarchies which stereotype or marginalise artists for whatever reason.

LD: Jennifer Walshe recently fingered you (rather cutely, in a footnote) in MusikTexte, where she was introducing the term ‘New Discipline’—meaning a recent tendency toward incorporation of movement and the body and sociality and theatricality in an outwardly musical context.2 I’m not sure there’s anything ‘new’ or indeed, ‘disciplined’ about the trend she’s noticed (that might have been the point of her term), but anyway, did you have any thoughts on this? Do you think of what you’re doing as expressly new or experimental or revisionist?
JB: Well this is a nice article by Jennifer Walshe and it's very flattering to be mentioned in it, and I think she explains very clearly that her use of the term 'New Discipline' is pragmatic, so as to provoke a recognition of what's happening in terms of this current interest which composers have in performance. And of course this rekindled interest in the performing body is strongly present also in the visual arts. But for me it's what I've always done because I'm a dancer, and I guess the more necessary question I have is why there's been this sudden turn back towards the body, and I think we're all still trying to work that out. Meanwhile Matteo and I tend to be moving in the opposite direction, where we talk of what we do more and more as being music, in order to clarify our position within the continuing conceptual moment in dance. Because we somehow fit in with this conceptual moment, but in other ways we make decisions which disappoint, so we're at pains always to make clear we never promised to entirely fulfill the conceptual obligation, and the reason is that we're busy making music. It's rhetorical but it helps. It keeps our options open. ▧

2 Jennifer Walshe, 'Ein Körper ist kein Klavier', MusikTexte 149, May 2016, p. 3
Photo by LD (2010)
christian wolff: in conversation with joseph kudirka + nick williams
jk: This interview was conducted in the autumn of 2009, when Christian Wolff was in Huddersfield to receive an honorary doctorate. I was directing Edges ensemble at the time, and we had been working on a retrospective concert of his work, which was presented that week. Earlier that year, I'd seen Christian in Ostrava, where his piece Rhapsody, for three orchestras, had been premiered, so it was still fresh in my mind.

jk: I was wondering, specifically, about things like the Exercises—did you write those with players in mind? Or were they more of a musical idea you thought people might take an interest in?

cw: Actually, no; though it was fairly soon that a kind of floating band for the Exercises emerged. But initially, no, I just plunged in. Along with a few other pieces around that time they were a sort of response to Philip Glass and Steve Reich and that sort of music—which initially I liked a lot, it was great.

jk: You mean insofar as it's limited material? You had been using even less material earlier . . .

cw: No—it is that too, but it's more diatonic. I was never into that 'steady beat music', [but preferred] just having that occasional possibility of pulse.

nw: That's interesting because when I put on some of the Exercises and Songs when I was an undergraduate, we actually did them in the same concert as things like Piano Phase, and the first of Riley's Keyboard Studies. We felt that there was some sort of connection between them even though they sounded very different on the surface.

cw: Yeah, it's nice that people are surprised when I say that! The first time we did them in New York, and then the 'band' emerged which were a very good group, all of whom happened to be in New York—Frederic Rzewski, John Gibson (the saxophone player who played in the Philip Glass Ensemble), and er . . .

jk: . . . Arthur Russell?

cw: Yeah, how'd you know that!
jk: You played a recording of that group that included Arthur Russell.

cw: Oh—Arthur was a character. And—the later the day gets the worse my name recall becomes . . . Garrett List, trombone. I mentioned Frederic. And then it would depend—David Behrman sometimes came and played, he played viola in those days . . . sort of. And I played keyboard—and actually with Frederic, who would want to play piano, I would probably play guitar. And I used to play flute, so I might play a little flute. And organ and percussion—David and I mostly went for the percussion as we couldn't keep up with the other guys! [laughs] So that was the group.

jk: Between then and now you've changed the way you play them, in some ways.

cw: Have we changed the way we played them?

jk: You have, I think.

cw: Well I'd be surprised if we didn't.

jk: There are things even in the score—it seems like you've become more liberal, about how they can be interpreted. Like on the Ten Exercises disc, saying that there can be octave displacements.

cw: Yeah—that comes straight from the players. But, I mean Larry [Polansky] was going nuts trying to—it is hard.

jk: I guess if you're used to reading it in one way . . .

cw: If you're used to it in one way and if you have a transposing instrument, it's a real pain in the neck. I suppose someday somebody might prepare material in B-flat, as nowadays you can do that so easily once you get it into the computer. So I stretched it. But I think I mention in the note it's also for the piccolo but that doesn't occur on the record. She [Natacha Diels] really wanted to play her piccolo and I was like, 'okay . . .' [laughs] Almost everything is out of piccolo range so . . . It was an interesting recording process [for the Exercises], as you can imagine, because you just can't do it! Which is nice in a way—mostly I really don't like recording sessions, they're such a drag. Play two minutes, or half a minute, stop, 'oh let's fix this'—it's awful. Whereas these pieces, if you stop, you have to start again, you can't pick it up in the middle. And you can't edit that way either! It just has to be one take. Fortunately we had a lot of takes, and it took a while to sort it all out. And some things that we thought were really great—typically those pieces, which are so fluid, fixing them for a recording already feels a little wrong. And then you discover that what you can get away with, or what would be perfectly fine in a concert situation just won't fly on a recording. It's a really different medium entirely.

jk: Yeah.

cw: So if there's a little glitch, sometimes it's quite beautiful—and just move on to the next thing. But if it's fixed there on the record, you've got to hear it every time, it won't work! As a result we lost a lot of material that I thought, at the time, felt really good; just that little moment here and there.

jk: Do you think that's part of a reason that a lot of your stuff isn't recorded? We talked yesterday about how there's not a version of Changing the System, except for this one you'd planned.

cw: Yeah, that'd be a hard one. I like to think of them as just documentation of a performance. If you could do it in a live context and then put that on a CD I'd be perfectly happy.

nw: How would you feel about a hypothetical recording process of something like Changing the System where you would record each of the elements separately, and then they're edited, layered, together?

cw: Yeah, you could do that—my first impulse is to say no, but I think I have enough experience now to realise that that actually sometimes works very well. I mean even Robyn [Schulkowsky], who's a fabulous virtuoso, [did this] when she was recording some solo pieces of mine—parts of which are really outrageously difficult! For instance there's one piece which is mostly a transcription into rhythmic notation of a Josquin five-part motet.

nw: For solo percussion?

cw: Yeah, solo percussion—she has to keep five voices going. She can do it, but on the recording she said 'to hell with this, I'm doing two tracks!' [laughs] And it was fine, perfectly okay. Because all of that tension you get from a difficult performance, you don't see it [on the recording], you don't feel it, it's just not there. So, why not.
And then another one which is on the record—actually, the percussion duo, it's almost my favourite thing on the record (it's the one with the noises in the background)—it was meant to be the accompaniment to another piece. And there we decided to just record separately—fortunately in this case, to a certain extent—the percussion part that runs simultaneously with the other stuff. It's no. 14. And that's the one that really broke my heart as we had a couple of 14s that I thought were really cooking, but they turned out not quite [right]. But we really liked our percussion duet! So we kept that.

Christian Wolff in Huddersfield

jk: When you recorded the percussion for that, were you listening back to a version in headphones?

cw: No, just doing it. Again, that's something you wouldn't do with live performance, but it's idiomatic to recording to work that way, so you do it. Actually I was thinking of something else: she did a solo CD of percussion music (another recording of the same piece that requires the five-part playing) and there's again one section where you have to play at least two parts, but they work the way the percussion does in that Exercise where the durations are all determined by the length of time it takes for the sound to die out. And I think this thing has up to three lines doing that and she has to do that simultaneously.

nw: So you have to be aware of which sounds finish first . . .

cw: You can imagine! We do that as a duo, and it simplifies it considerably.

jk: But it was written as a solo?

cw: It was written as a solo—I really pushed it in that one I think. Now there's a case of writing for a person. It's like writing for David Tudor—you know the guy's gonna like what you write, even if it's totally impossible, he's going to find it interesting. And Robyn's the same way. Solo music, there I definitely will write for people. Whereas ensemble music tends to be more [various], depending . . . If it's for a group of people like this one here where I know everybody, or if it's for some New Music ensemble that I've never worked with, it could make a difference, it does make a difference.

jk: I'm trying to think of the right way to ask it. When you're listening to another version of one of these pieces you've maybe written for a specific person or occasion, years later, is it always pleasing, is it sometimes upsetting? Do people sometimes not get what you thought you were able to convey in that one instance?

cw: It runs the whole gamut of possibilities here. I mean it could be just a bad performance, it comes with the territory. Otherwise, no, I usually find the differences interesting, even if I don't really like them. As long as I'm convinced that this person has done it seriously and is actually using the music that I've provided (which is something, at least in the old days, you couldn't count on at all).
I’ve decided that people are who they are—including musically, and if that’s what they are musically and that’s how they do it, and they’re doing it seriously, and they’re doing it according to what I’d written, that’s fine. I may not be totally delighted with that kind of person or that kind of music making, but it’s alright. nw: But if they do it in good faith . . . cw: Exactly.
Excerpt of Christian Wolff, Exercise 14b
jk: Have you ever changed something or done something for a new piece because someone is doing something in this committed way, but they're interpreting it in a different way than you'd planned? You could, say, look at the wording, and realise how they got there?

cw: Oh okay, that there might be some loophole that I'd overlooked.

jk: Can that be nice?

cw: Yeah, often I'm delighted, people will think of things to do that I had not thought of.

jk: Could that change the way you write? Say you wanted to word something to get an effect . . .
Excerpt of Christian Wolff, Exercise 13
cw: You learn from it, yeah, it makes one more aware. With the earlier pieces I often forget what the hell I was thinking—'what the hell was this instruction, for God's sake?'

jk: Some of these notations you've come up with have stayed consistent. Like the wedge, the indeterminate pause. I guess that's a case of you having done something once—is the Exercises the first time that's used?

cw: That's a good question. Probably.

jk: And somehow it just worked, you've kept it up until . . .
cw: It’s probably the single most important notation I use. jk: I mean, I think it’s that, or things similar, which are now adopted by other composers, because there’s really nothing like it. cw: Well, it’s on the analogy—there are various little signs for breaks, fermata would be one, comma would be another one, and sometimes there are variations . . . nw: But most of the others . . . cw: But they’re specified. nw: . . . have some sort of specification. We would think of the comma as shorter than the fermata, whereas the wedge . . . cw: The wedge is completely open. jk: And also noteheads without stems. Did you get that from anywhere? The only things I can think about are some things from Cage and Brown and Feldman, but they usually operate differently from the way you handle them. cw: I don’t know. But earlier notations have things like that too, if you look at gregorian chant for instance. They have other things too, dots . . . nw: Neumes. * jk: . . . I don’t think I could be a composer who sits somewhere. Working with people is so important. And some composers can just work on a score, mail it off and turn up to the premiere. cw: I couldn’t do that. jk: You’ve lived in Hanover [Massachusetts] . . . cw: That’s very isolating. Actually there is a modest and small music department with some interesting people in it. And the trick of living in Hanover is to get out as often as possible. [laughs] Boston’s only two and a quarter hours away, and New York isn’t too bad either. jk: That’s why you and Larry [Polansky] have worked together so much, isn’t it? cw: Yeah, on the spot. jk: One thing that I thought was interesting in the new three-orchestra piece, Rhapsody, is that you seem to—in some ways it really reminded me of your much older music. 1950s kind of stuff. Very different from your other orchestra music, this new piece, do you
45
think? cw: It’s changed, yeah. jk: It seemed to me a bit like Ives working with orchestra, that you have these groups, assembled ‘as instruments’. I mean, any orchestrator does this, but maybe not to the extreme where you’ve separated to them into the three groups. It made me think of sonorities from the early string quartets and piano pieces, where you have something—a chord—that’s treated as ‘a thing’, not as separate pitches; and that there’s some parallels operating with the orchestras. Is that completely nuts? cw: No, I mean the sound might come out like that. The writing, I’m trying to remember; I would say the largest proportion of it is just old-fashioned counterpoint. But it’s not about the counterpoint, it’s just a way to get the sound. And then you have the sounds coming from, actually, more than three places, but more or less three rough locations. Again, it’s orchestration, but in a slightly different way because you get to do things like—you can have everybody, the whole group, playing exactly the same thing. Why not? And then you can have only two instruments from over there, and two from over here, playing the same thing. The number of combinations is mind-boggling. But it’s actually also in my early music. When I started with two melody instruments with three notes, okay, that would make what I’d regard as twelve sounds, that was pretty clear. But then if you’d just up the ante by one instrument, you suddenly have thirty five possibilities. And then with say three instruments and four pitches, there’s no end to what you can do! [laughs] Thinking about the orchestra made me think like that again. And I would actually draw up little charts of what combinations [were possible]—but then I quit, it’s hopeless. nw: Too many . . . cw: But it does focus your mind, thinking that way. jk: Also, it seemed like there were a smaller set of pitches being used than there has been in some more recent stuff? cw: It’s possible. 
I must say, I kept the range in the middle. There are areas where the counterpoint is fairly close, and I keep the ambitus . . .

jk: Lots of beautiful fourths and fifths and very consonant chords.

cw: Right.
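An editorial aside: Wolff's combination-counting a few exchanges back can be given one possible accounting (our assumption, not his): each instrument is either silent or plays one of p shared pitches, sounding instruments take distinct pitches, and at least one instrument must sound. This toy model reproduces his 'twelve sounds' for two instruments and three notes; his 'thirty-five' evidently follows his own charts, but the underlying point, that adding one instrument or pitch makes the count balloon, holds either way. A sketch:

```python
from math import comb, perm

def count_sonorities(n_instruments: int, n_pitches: int) -> int:
    """Count sonorities where a nonempty subset of instruments each plays
    a distinct pitch drawn from a shared set of n_pitches (one model only)."""
    return sum(comb(n_instruments, k) * perm(n_pitches, k)
               for k in range(1, min(n_instruments, n_pitches) + 1))

print(count_sonorities(2, 3))   # 12 -- matches Wolff's duo-with-three-notes count
print(count_sonorities(3, 4))   # 72 -- 'there's no end to what you can do'
```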
jk: How did you decide what the make-up would be, of each orchestra? I know you had strings in each one. But then the extra players . . .

cw: I don't know quite how that happened. This is my second piece for three orchestras! [laughs] And the first one was a really wild, dishevelled operation, sort of like Burdocks but worked out a little further. And I didn't want to do that again—and I resisted even doing it, but Petr [Kotik] wanted me to do that. So, first of all, I was trying to not do what I'd done before. And then—I'd previously heard some string orchestra music and liked the sound quite a lot, and thought it'd be fun to do that, never done that. And then that seemed a little thin for three orchestras. And then I'd been listening to a lot of Haydn, and I thought 'he does very well', with a handful of strings and maybe two wind instruments, maybe brass or whatever. So I think of these as three little Haydn orchestras. [laughs] But minimal, and weird because I think one of them has a harp and trombone, the straight one has a flute and horn, that's perfect, and then the other one has trumpet and bassoon which isn't that far off either. But then it was mixed up with something else—music that I hadn't really listened to at all with any attention—which was Bruckner. [laughs] I thought Haydn and Bruckner, what a great combination! But I really like Bruckner, it's so good, I hadn't really noticed what great music that was. That was more for scope, or scale; and also repetition. You know, he really does repetition very well, and he's not shy about doing it. It's almost Feldman territory, where he's got something, it's very distinctive, do it once, twice, three, four times, he doesn't care. He just keeps it running.

jk: I don't remember which symphony it was anymore but when I was in school and played in orchestras, I had a director cut out a whole hunk of these repetitions. It was ridiculous, twenty-five, thirty bars, exactly the same thing.
cw: And it’s beautiful, he knows exactly what he’s doing. jk: His justification was because that’s how Strauss had done it. And I thought ‘well that’s ridiculous’. Bruckner’s better than Strauss. cw: Absolutely. By quite a bit. ▧
Photo by LD (2006)
Photo by LD (2006)
mira benjamin + luke nickel: correspondence on tuning

LN: The following is an exchange of email correspondence between myself and Mira Benjamin on the subject of microtonality, tuning and portraiture. With emphasis on tuning in particular, we discussed a variety of practical considerations and metaphorical models.

LN: We began this exchange a few weeks ago with a phone conversation, in which you pointed me to Bob Gilmore's Keynote at the 2005 edition of Microfest (UK). In his lecture, entitled 'Microtonality: My Part in its Downfall',1 Gilmore explores the difficult—even paradoxical—task of narrating a history of microtonality. While one might discuss the history of microtonal notation, it is only possible to discuss microtonality as a phenomenon in opposition to a perceived 'non-microtonality', of which there is arguably none. Microtonality is relational. Gilmore ends his lecture with a quote from Kyle Gann: 'music is a footnote to the history of tuning'. My first impulse was to draw a link between tuning and the idea of portraiture. However, if I am to follow that line of thinking, I would like first to better understand this above quote. Can you elaborate on your understanding of Gann's statement, and Gilmore's reference to it in the context of his lecture?

MB: Gilmore gave this quote as a summing up of his lecture, in which he narrates a potential history of microtonality. In this context I think Bob was acknowledging the formative influence of tuning practice upon the development of composition and performance. He suggests that due to this influence musical works could be regarded as instantiations of specific tuning practices. For example, the violinist and composer Giuseppe Tartini, having discovered the difference tone in 1754,2 composed harmonic material that not only exhibited but also furthered the practice of Just Intonation. Perhaps in Tartini's practice music and tuning enabled each other. Gilmore offers a lovely and succinct description of this interactivity in another of his articles, 'Changing the Metaphor: Ratio Models of Musical Pitch in the Work of Harry Partch, Ben Johnston, and James Tenney':

Most models [of musical pitch] are designed not merely to provide a description of pitch 'space' but to suggest or embody an explanation of it. All such models are attempts to circumscribe and make manifest the processes by which we form cognitive representations of musical materials. Clearly the model and the observations that arise from it are linked: observation is done in the ambience of the model; the model is created in the context of an observation strategy. This interaction helps evolve the adequacy of the model and the sensitivity of the observation.3

1 Bob Gilmore, keynote address at Microfest 1, October 15 2005,
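As an editorial aside (not part of the correspondence): the difference tone Tartini noticed is easy to compute, since two sustained tones produce a faint third tone at the difference of their frequencies. For a justly tuned major third, frequency ratio 5:4, that difference tone lies two octaves below the lower note. A minimal sketch, with A4 = 440 Hz chosen only for illustration:

```python
# Hedged illustration: first-order (Tartini) difference tone of two pure tones.
def difference_tone(f1: float, f2: float) -> float:
    """Return the first-order difference tone, |f2 - f1|, in Hz."""
    return abs(f2 - f1)

lower = 440.0            # A4, an arbitrary reference
upper = lower * 5 / 4    # a just major third above: 550.0 Hz
print(difference_tone(lower, upper))  # 110.0 Hz: two octaves below A4
```

Because a 5:4 third sounds as 4 and 5 times some common fundamental, the difference tone is always that fundamental itself, which is one reason Just Intonation players can use it as an audible tuning check.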
Discussing microtonality from a non-comparative stance is a pervading challenge. I have found it helpful in my own research to try to define clearly at what point in the actualisation of music we choose to locate this thing called 'microtonality'. If microtonality refers to a musical outcome—if it is a way of describing or analysing what we hear or have heard in a musical performance—then it is indeed unlikely that a non-comparative discussion will be possible. However, we might instead consider microtonality as a process—one which occurs pre- and in-performance. Such a process can provide useful strategies through which one can navigate the complex, iterative worlds of pitch and tuning.

The violinist and composer Marc Sabat has said that musical pitch occupies a 'glissando-continuum'.4 If we are to think of pitch as a flexible continuum without fixed points, then we must also appreciate the constant state of intonational decision-making that is inhabited by the performer. I would suggest that microtonality might be a way of characterising these decisions: it is a collection of vocabularies or models through which we can organise our understanding of pitch, and make decisions with relative consistency in different musical contexts. The concept of microtonality allows us to bypass any perceived normativity that might be assigned to any one tuning system or musical practice, and engage with the tuning itself as a process and a practice.

LN: This idea of the glissando-continuum reminds me of raw paints—or perhaps even a canvas—which must be manipulated by performers who pattern it: manipulated, consciously and unconsciously, into paintings (or performances) that are both representational and symbolic. As a performer, is the negotiation of the endless pitch-glissando continuum, via microtonal models, a series of conscious decisions? How do these decisions reflect the player's performative agency, and do they create a sense of individuality? Could this be likened to an act of (self-)portraiture?

MB: Sabat's use of the phrase 'glissando-continuum' reflects a concept of tuning that understands pitches in terms of their relationships with other pitches. In an article for New Music Box (2005) he writes:

I would describe intonation as the art of selecting pitches, or (more accurately) pitch-'regions' along the glissando-continuum of pitch-height (following James Tenney's description in the 1983 article 'John Cage and the Theory of Harmony'). The 'tolerance' or exactitude of such regions varies based on the instrument and musical style. In this context, microtonality is an approach to pitch which acknowledges the musical possibility of this entire glissando-continuum and is not limited to the conventional twelve equal tempered pitch-classes.5

2 Robin Stowell, Violin Technique and Performance Practice in the Late Eighteenth and Early Nineteenth Centuries, (Cambridge, Cambridge University Press, 1985), p. 147.
3 Bob Gilmore, 'Changing the Metaphor: Ratio Models of Musical Pitch in the Work of Harry Partch, Ben Johnston, and James Tenney', Perspectives of New Music, Vol. 33, No. 1/2 (1995), p. 458.
4 Marc Sabat, 'Intonation and Microtonality', New Music Box (1 April 2005). Accessed 21 October 2016. Available from:
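To put the equal-tempered/just distinction Sabat describes into numbers (our gloss, not his): interval sizes along the continuum are conveniently compared in cents, 1200 per octave. A justly tuned major third (5:4) and the familiar twelve-tone equal-tempered third differ by roughly a seventh of a semitone:

```python
import math

def ratio_to_cents(ratio: float) -> float:
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200.0 * math.log2(ratio)

ji_major_third = ratio_to_cents(5 / 4)   # ~386.31 cents
et_major_third = 400.0                   # four equal-tempered semitones
print(round(et_major_third - ji_major_third, 2))  # ~13.69 cents wide
```

Differences of this size sit well inside a string player's reach, which is partly why the 'glissando-continuum' view treats such choices as live decisions rather than deviations from a norm.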
Sabat here refers to James Tenney's discussion of pitch space, following Cage's 1959 writings ('each aspect of sound . . . is to be seen as a continuum, not as a series of discrete steps favored by conventions . . .'6). I'm not sure I'd be drawn to see this glissando-continuum as something raw or unfinished, nor as a material which is available to be manipulated or patterned—although I might be more inclined to draw these analogies if I were a composer. As a violinist, I'd say the idea of a glissando-continuum encourages me to think about pitch space as a whole, fluid environment where locations (coordinates, even?) can be understood in various ways. I believe Sabat's approach not only describes but also defines a pitch in terms of relationship (proportion, proximity, triangulation) with other pitches. A single pitch, even when quantitatively measured (i.e. in Hz), is abstract—only through its relationship with another pitch can we begin to form a model that can be communicated and mutually understood.

I do think the negotiation of the glissando-continuum requires a series of constant decisions—although these may be intuitive (or even unconscious), or they may be based in a more explicit model of pitch space. I would suggest that the degree to which these decisions might be 'likened to an act of (self-)portraiture' depends on the player's underlying motivations and priorities. A performance could easily be seen as an act of self-portraiture—especially in circumstances where the music or musician prioritises dramaturgy or expressive narrative. In this kind of situation, intonation might function as a medium through which the player can project their intentions (perceptions), and is likely to result in tuning choices that aim to accentuate or exaggerate this intended (perceived) impact. For example, the cellist Pablo Casals defined a manner of intoning melodic passages, which he termed 'expressive intonation'.

5 Ibid.
Casals advocated a subtle exaggeration of the relative sizes of diatonic intervals, relating to their melodic function: semitones were to be narrowed when approaching functionally important pitches, major thirds widened in the context of diatonic tetrachords, and sevenths intoned lower to encourage the perceived inevitability of their downward cadential resolution. The overall effect was a slight exaggeration of the difference between major and minor intervals, which Casals implied could reinforce a listener's impression of melodic expressivity.7

However, if the player instead chooses to prioritise the process of tuning, or if the music requires them to do so, then the tuning choices themselves become the subject of the performance, and the performer is likely to make tuning choices that are more akin to mapping. Rather than asking, 'how do I think this should sound?' the player may instead ask, 'how does this choice fit into my understanding of the whole?' To me, mapping is not about (self-)portraiture, but about representing, describing, and inhabiting a space that includes and exceeds all its instantiations. It has been my experience that what we might call a 'microtonal' approach to pitch and tuning seems to suit the music and musicians for whom this process of tuning is itself the primary motivation.

LN: I read your answer as a rejection of interpretive ego, and an embracing of the mechanics and physical realities of an instrument and the way it makes sound. Am I correct in understanding then that this process-based approach to microtonality affords the willing performer an infinite multitude of options to be navigated during performance? And that because there are so many options, any one process employed during performance might not form a portrait because it is only a small map of how a certain performer has negotiated a certain landscape?

I'm taken with your metaphor of maps and space: perhaps turning to this metaphor again will help me better understand your answers. The process you describe seems akin to re-envisaging a whole terrain as negotiable, rather than only using the main roads well-worn by time and practice. In this process-based application of microtonality, all terrain becomes open, with a performer striking out in whichever direction makes sense to them at the time.

6 James Tenney, 'John Cage and the Theory of Harmony', in Peter Garland (ed.), Soundings 13: The Music of James Tenney, (Santa Fe, NM, 1984), p. 55. Reprinted in From Scratch: Writings in Music Theory (Urbana: University of Illinois Press, 2015), p. 280.
Sometimes they might encounter main roads with many people passing, and other times ancient, deadened paths that still remain fruitful for the courageous traveler.

7 David Blum, Casals and the Art of Interpretation (Berkeley, University of California Press, 1980) pp. 102-10.
This metaphor reminds me very much of Philip Thomas' proposal of an experimental performance practice. In it, he describes experimental performance as focusing on doing the job required with little preoccupation with narrative and continuity.8 When you're exploring terrain, each moment must be spent navigating the particularity of where you are and what is in front of you. How do you personally negotiate this landscape? Once you're off the beaten path, what strategies do you employ to understand the landscape around you and continue to move in any direction? What goes into your thought process when navigating the pitch space of the endless-glissando continuum?

MB: I really enjoy your description, and wholeheartedly concur on many points—particularly when it comes to Thomas' comments about 'doing the job'. However, I would hesitate to characterise my practice of tuning as 'going off-piste'. To use this analogy seems to me to re-subscribe to the comparative discourse surrounding tuning, which we have been trying to avoid in this discussion. Maybe, if we're sticking with metaphor for the moment, it might be more appropriate to think about tuning as sailing—negotiating the dynamics, behaviours, 'currents' if you like, of a fluid surface.

In more practical terms, it appeals to me to think about pitch using models that accommodate both the familiar and the unfamiliar on equal grounds. Convention encourages us to think about tuning by comparing what is less familiar to what is more familiar. We recognise the diatonic/chromatic pitch set from our experience of music, and call these pitches 'twelve-tone equal temperament' (a coarse generalisation). But in string practice, generalisations like this represent category errors. The practice of string intonation is necessarily relational, and involves a spectrum of microtonal nuance, which we negotiate according to context.
Tenney describes the 'tolerance range' of each pitch—'a range of relative frequencies within which some slight mistuning is possible without altering the harmonic identity of an interval.'9

8 Philip Thomas, 'The Music of Laurence Crane and a Post-Experimental Performance Practice', Tempo 70, no. 275 (2016): p. 11.
9 Tenney, 'John Cage and the Theory of Harmony', pp. 55-83.
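Tenney's 'tolerance range' can be given a toy operational form (our sketch, with the ±5-cent band chosen arbitrarily, not drawn from Tenney): a sounded interval counts as an instance of a target ratio if its deviation falls inside the band.

```python
import math

def cents(ratio: float) -> float:
    """Frequency ratio expressed in cents (1200 per octave)."""
    return 1200.0 * math.log2(ratio)

def within_tolerance(f_lower: float, f_upper: float,
                     target_ratio: float, tolerance_cents: float = 5.0) -> bool:
    """True if the sounded interval lies within +/- tolerance_cents of target_ratio."""
    deviation = cents(f_upper / f_lower) - cents(target_ratio)
    return abs(deviation) <= tolerance_cents

# A slightly wide 'just third' above 440 Hz (exact would be 550 Hz):
print(within_tolerance(440.0, 551.0, 5 / 4))   # True: about 3 cents sharp
print(within_tolerance(440.0, 555.0, 5 / 4))   # False: about 16 cents sharp
```

In practice the width of the band would itself vary with register, timbre, and context, which is exactly Tenney's point about harmonic identity being a region rather than a point.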
I think you are spot on when you talk about embracing the physical realities of the instrument—I'd go further to say the physical imperatives of the instrument. A violin has only four 'default' pitches (open strings), and when these are both adjustable and unpredictable, which pitches may we call 'normal', and which are therefore 'extended'? Evidently, the notion of normative and non-normative pitch is somewhat abstract in the practice of string playing. So yes, I think it is constructive, as you say, to approach the whole terrain as negotiable—which is not to say that players are faced with infinite choice. As appealing as it may be to think of musical pitch as a great expanse of infinite possibility, in practice musical context goes a long way to defining constraints or preferences that guide our tuning choices. However, within a given musical context, there are still vital decisions to be made, and in order to filter or streamline these decisions, I have often found it revealing to project an established tuning system or model of decision-making. For example, I often experiment with Just Intonation-based tunings as a way of navigating scores that use common practice pitch notation. In some instances, I have found it helpful to define preference rules10 that guide my tuning decisions. These exercises let me filter my decisions through extra levels of constraint, which help me to focus my choices, de-prioritise the familiar, and work past my learned responses and received practices.

10 Preference Rules are proposed by Fred Lerdahl & Ray Jackendoff in their book A Generative Theory of Tonal Music as a means of identifying which of a possible number of interpretations of a musical event is the most appropriate in a given context. Defining their theory as 'a description of the musical intuitions of a listener who is experienced in a musical idiom,' Lerdahl & Jackendoff acknowledge that any musical example is subject to a number of possible analyses. A preference rule defines, in the light of contextual factors and other preference rules, a likely constraint or parameter which can be applied to an analysis. (See: Fred Lerdahl and Ray Jackendoff, A Generative Theory of Tonal Music, (Cambridge, MA: MIT, 1983), 3-9.)

LN: Thanks for this answer. I would like to examine more thoroughly your practical approach to the process of tuning. Specifically, I was wondering if you could walk me through how you would approach tuning in two different scenarios. In the first scenario, you are met with a traditionally notated 5-line-stave western score (what you have called common practice pitch notation). In the second, you are met with a more prescriptive score that explicitly indicates its tuning practice using either graphic symbols, words, or other methods. How do you approach performing (or teaching) the tuning within both scores? I am particularly fascinated by the metaphor of sailing you gave in an earlier answer. Perhaps it might be fruitful to draw this into your analogy?

MB: Both of these examples represent major thirds:
In the first example (a.) the intonation is understood implicitly by the player; a reading will depend on context and in many cases rely on an understanding of harmonic function. In the second example (b.) the intonation is more explicitly prescribed for the player, and a reading will rely on a familiarity with the harmonic series. (Interestingly, it is completely possible that the two above examples of notation could result in identical realisations: the first third [C-E], performed on a violin in the key of A major, may very well contain the same pitch content as the second third [C-E], played below the violin E-string in 5-limit Just Intonation.)

A player’s sounding of each of these examples must not be conflated with the notation itself. Another discussion could explore the various functions notation might serve, but in the context of the practice of string intonation, it is fair to say that notation functions as a set of cues that prompt the player to audiate certain sounds or sounding proportions, and to sonify these according to preference, experience, context and choice. So with respect to the above two examples, the process by which I (and I think most string players) would perform each of these forms of notation would be one and the same: I would observe the notation within the surrounding context, make decisions based on my observations, audiate my intended sounding result, and carry out a practiced set of physical and technical movements which would bring me as close as possible to that intention. I would then listen to what had sounded, respond to it, and add that impression to the overall context going forward. This is the practice of intonation, no matter what the initial prompt.

Like sailing, this process is neither entirely one of planning, nor of spontaneous reaction. It is an exercise in maintaining the tension between the two—exploring contingency while remaining mindful of one’s position within the whole.

Fittingly, this discussion brings us back to Bob Gilmore’s Microfest lecture, and another question he posed that day: ‘is the designation “microtonality” still useful today?’ I think through this conversation we have offered a response to that question. And I propose that we continue these conversations, present and future, in appreciation of the generosity and curiosity with which Bob sailed the seas of musical pitch. ▧
Cerenem Journal no. 6
michael finnissy + cassandra miller: transcription, photography, portraiture

Michael Finnissy at home

Transcription1

mf: A transcription is something that you’re writing through; and as you’re writing through, you’re thinking along the way as you go. Whereas an arrangement (to me) would presuppose that you haven’t done very much to the shape to alter it. Of course, they reflect the context—the actual topic of the piece—because a country tune is less posh than a mélodie would be. And a Gershwin arrangement . . . Arrangement is the term that most arrangers use when they’re doing stuff for singers who are going to sing a Gershwin song in a show or cabaret. [Whereas] transcription is a more high-falutin term, that you might use about operatic work; traditionally it is ‘transcription’ and ‘paraphrase’ which are the usages of Liszt, Busoni and various others who made operatic transcriptions for the piano.

I was already dealing and wrestling with these issues, and my teacher Bernard Stevens, who was an extraordinarily sensitive, erudite man, suggested that I read the essay by Busoni about transcription. He suggested this because Busoni’s view of transcription is a very global, very holistic one. Busoni basically says all musical activities are in some form transcription. [. . .]

There were two decisions I made: the first one was that I wasn’t going to transcribe things literally, so that the aim wasn’t to produce a ‘piano arrangement’, a straightforward transcription, inasmuch as such a thing is possible (you do sometimes find those ‘easy arrangements’, of say the Czardas from Die Fledermaus or something). I wasn’t going to do that; and [secondly] I wasn’t going to just do a kind of ‘decoration’ of the original. I set my mind to actually composing with the material, pretty much as if it were my own, because by the time I’d made the choice of what to do, it becomes like the subject of a (well, here we go with our analogies, we’re treading on thin ice here) portrait. Let’s say I decide to make a portrait of you—as a photographer or as a painter—then, of course, you are the subject, and I’m not simply reproducing you if I take a photograph of you: I choose lighting, I choose an angle to photograph you from, because my view of you is not your view. Famous quote from Picasso about Gertrude Stein: he painted Gertrude Stein, she said ‘it doesn’t look anything like me.’ He said: ‘It will.’ You can read any number of quotations from Hockney about this aspect of portraiture, and how the amount of work that you have to do, as a portraitist—the hours of sitting, the hours of looking at somebody’s face—is very different from photography, not that it’s any less or any more, but it’s just different.

So I was thinking: what am I doing, if I’m transcribing? In our discussion earlier, before the camera and microphone were switched on, I said that actually I think all music is [potentially] there, all musical ideas are ‘ready gain’ for transcription. In a sense, is it even possible to take anything—any collection of sounds—and for it not to be a transcription? Because all those pitches are ‘just there’. All those dynamics are ‘just there’. And you can pretend that you’re abstracting them out of thin air, but of course you’re not, and as soon as you start joining pitches together they resemble something else.

I decided I was not going to play this game of ‘originality’ either; it’s not the issue. Of course I’d read books about how the process is what you’re doing, not the material—so I just focussed on that and got on with it. In fact I don’t think I ever really had a problem because, to me, with a much more mature visual arts training, the visual arts had already dealt with [the issue of originality], with figures like Marcel Duchamp. If collage and montage in cinema, and objets trouvés were part of the visual arts vocabulary, it didn’t seem at all controversial to use them in music, it seemed entirely natural. I frequently—if I’m asked to do this—I frequently refer to my models from that world: Rauschenberg, David Hockney, Warhol, Stan Brakhage are as important (in some ways maybe even more important) than the whole chain of musical influences one goes through (Satie, Debussy, Bartók, Xenakis, Schoenberg). And also, can you not filter? Of course you can’t not filter, because this [filtration] is working all the time, and you’re not only consciously making choices, but you’re unconsciously making choices. So, it depends on whether you had a happy childhood or not, what your knowledge is, what your experiences are. All of these are filters. Of course, who you are, where you are, what you do, who you’re friends with, who you fuck and who you eat dinner with, makes an incredible amount of difference to what kind of work you do, what kind of music you write. Would we really think it couldn’t be like that? So it’s all filtering. [. . .]

Photography

I wanted to think about whether I was a photographer or not, because I was brought up with photography, my father was a photographer.

1 A video version of this interview is online here, watch?v=3ZMCOw4hAZA. It was edited by Cassandra Miller.
And he was a qualified surveyor, but his job, when I remember what he was doing when I was maybe 3 or 4 years old, I used to go around with him, and what he was doing at that stage was documenting the rebuilding of London, photographically. He was preparing an archive of the rebuilding of London after the war. It was quite interesting work, though I don’t suppose I recognised that then; I do now. I wondered—I started writing the piece [The History of Photography in Sound] 2004–5 maybe2, so it’s not an early piece—so I was wondering, have I been a photographer all these years? Am I just continuing his work? Because I do sometimes casually refer to what I do as a kind of documentation, and I do put myself into my work to an extent that some composers don’t. And I acknowledge my sources to an extent that most composers don’t. (I actually do that in the score, I list the pieces: as we go along there are little arrows saying this is a quote from ‘blah’.)

I’m not a photographer [but] I read a lot about photography. We were talking earlier about Roland Barthes and Camera Lucida, there’s Susan Sontag’s book on photography, and there are quite a few other tomes, about the meaning of photographs, the background, the ‘reading’ of a photo. Which I find very fascinating if slightly offensive because photos are a visual medium, and visualising something is not a narrative, it’s not a literary process. The way in which a photograph evolves—and sometimes the information [the photograph] is designed to impart—are not the same as writing a story, or describing a thing. You’re sometimes revealing a thing in the same way Paul Klee writes about revealing an object by painting it, or by drawing it. You’re actually extracting something which is not . . . I’m not literary either, so it’s not something I find in literature. But one has to be careful in music, because what are you doing? Are you simply supplementing, are you adding sounds to the world that haven’t been there before? (There’s the question of taking responsibility for doing that, needless to say.) What are you seeking to do? Are you bringing sounds out of nothing? Are we simply making a kind of refuse dump of sound, or creating some pompous edifice? People refer to works as ‘monumental’: I really hate that word.
It’s like those dreadful buildings in Paris, where everything is pseudo-Romanesque memorial to something. I don’t design monuments. I have adventures, I go on journeys. If I have to talk about pieces, the process is one of discovery. I set myself a task thinking ‘well I might have a good time if I do this.’ If I find I’m not by day 3 then I scrap it all and start something else, because I’d be a fool not to. It’s both uncovering and disclosing and investigating; it’s performing an autopsy; it’s instinctive, it’s very technical, it sometimes gets very abstract. [. . .]

2 [Ed. The actual dates of composition of The History of Photography in Sound are 1995-2001—MF may just have got the wrong decade.]

David Hockney, Hawthorn Blossom, Woldgate (2009)

The thing about the Hockney print up there [on the wall] is the extent to which you’re not seeing what is depicted, because if you say ‘it’s a road with trees alongside it, end of story’, that is about 0.001 of the content of that painting. It’s actually [a picture] about painting. If I tell you that something is called Alkan-Paganini, it’s about 0.0001 per cent about Alkan and Paganini. Even though the amount to which I’m quoting Alkan and Paganini in the same way that Hockney is ‘quoting’ a tree—that’s certainly there. But any sense that it was actually ‘by’ Paganini or Alkan [is] long gone. That’s not the point; that’s not what I’m doing. I’m not transcribing in order to reveal what’s there already. Which is the odd thing about—why would you write down a tune that’s in your head? This is the commonplace question: ‘Do you write music that’s in your head, or do you write it down first?’ Well if it was in my head first there wouldn’t be any point in writing it down, it would be so boring. [. . .]
Walter Benjamin

cm: Walter Benjamin, Susan Sontag and Roland Barthes—today I’m particularly interested in the engagement with Barthes, and I’m curious: has he influenced your thinking? And if so how?

mf: Not as much as Walter Benjamin. The reason I didn’t mention Benjamin when I said the piece [The History of Photography in Sound] was dealing with references to Barthes and Sontag is that Benjamin is so much more than a philosopher of photography. Benjamin’s project, and the whole way that he looks at the world, is fundamental to the way I think about producing stuff.

cm: Can you tell me more about that?

mf: Not in any detail I can’t really, but I read an awful lot of Walter Benjamin’s writing (certainly not all of it but a great deal of it). The interface between reality and meditation, between recollecting and tranquility (as Coleridge and company in the nineteenth century put it), and what it is that the artwork actually is, when you’ve made it. It’s all there in Benjamin. If I could quote reams I could, but I won’t; I’ll just say go away and read it because it’s essential. And it’s the modern world—it’s like reading Wittgenstein. It’s not the nineteenth century any more. In a funny kind of way I think Barthes is more nineteenth-century. It’s very nostalgic, it’s very much about memory—which is an important facet of what I do too, how we remember stuff and so on. But the narratives that hang off memory are different for Barthes than they would be for me. I’m not investing that kind of sentimental attitude. And not in a bad way, but it uses sentiment as the key element. I don’t want to do it because that doesn’t bring in as much technique, and I’m interested in the technique too. I’m interested in what my pen can do, what my eye can do, what my ear can do, what I can hear, what I can analyse. None of that has anything to do with sentiment. That’s all very objective. (Or at least it seems so to me.)
So it’s interesting—I haven’t [previously] declared much interest in Benjamin because it’s almost too important. [. . .]
Composing

mf: It is a transcendence of time and place when you’re composing.

cm: What do you mean by that?

mf: It means that you’re not aware of where you are, who you are, what you’re doing. The best composing happens when you lose awareness. Of course you’re in a kind of control, but it’s Feldman that says (or was it Rauschenberg? Feldman quotes Rauschenberg quite a bit; I think it’s Rauschenberg) ‘let the brush do the work’. There’s something about holding the brush, and what you see is what you’re doing, but you’re actually escaping from that at the same time. You’re not ‘manipulating’ it; you’re allowing the itness to manipulate what happens next, it’s a transcendence of self. Out of body experience. And probably I could have either been a photographer and been quite happy doing that, [or] I could have been an anthropologist and been quite happy doing that too. Maybe one day I shall write a piece called The History of Anthropology in Sound.

cm: [laughs]

mf: But I’m doing that all the time. [. . .]

mf: I enjoy changing stuff. Why do I enjoy doing that? Because it makes it feel more alive. I just set some words that have been very important to me: they’re from a short novella (I think that’s what you would call it; it’s unfinished) by Georg Büchner, who wrote the play [Woyzeck] that Berg based Wozzeck on. And in it, Büchner pretends to be [Jakob] Lenz, who’s another quite agitational—I don’t know how to describe it—‘alternative’ German writer from the early nineteenth century. He makes Lenz say ‘All I demand of art is that it has life.’ Isn’t that fantastic? And that’s all I demand too. When I find that my pieces, to me, don’t have life, I either burn them, or I change them until they do. And what is life? Unpredictability, spontaneity, love, hate, everything. Nothing excluded. Because otherwise you haven’t told the truth.

cm: Haven’t told the truth—what do you mean by that? Haven’t told the truth of what it means to be alive. . . ?
mf: Yes, you haven’t told the truth of what it means to be alive now.
You haven’t said to the audience ‘this is what life is, this is what life can be.’ Not sitting in rows in some fucking concert hall, listening to some guy play the piano for five-and-a-half hours, that’s not life. It’s part of life, but not the whole of life. Don’t get confused about what it is. This piece is about life in all its diversity. It’s a sort of exemplar that you take away with you and think about afterwards, and you say ‘Oh I see, okay, my life’s a bit like that too.’ I’d be really happy with that kind of response. Doesn’t need anything more intelligent. Actually I think that response is very intelligent.

cm: When we were walking here you told me, when I asked how you were doing, you said that you liked what you were working on, and that it was sort of new in the way that you related to found material. And I said ‘Stop! We’ll talk about that on tape.’ So what did you mean by that?

mf: I found some different things to do. Different ways in which I can make it clear to the audience what the relationship is between the found object (which is alluded to or quoted, with alterations) and how one moves away from that to something else, and comes back to it. It’s really the degree of focus, or where things are; can I describe this cinematically without confusing the issue? Probably not. But it’s like if you work in close-up or in medium shot or long shot. It’s quite interesting: there’s a wonderful film by Carl Theodor Dreyer called The Passion of Joan of Arc, and it’s nearly all in close-up. You hardly notice it until you think about it and then you think ‘Oh my god, it’s all in close-up.’ Which is really weird because we’re used to seeing things change in perspective. So it’s that kind of thing, it’s a very small thing. It’s like Hitchcock’s Rope, which is all done in very long takes; but do you notice? Unless you’re looking for it, no, you don’t. That’s not the point—except that he [Hitchcock] is having a hell of a time, showing you can do it.
And working out how you can do it. Because of course if you want any kind of camera movement, you’ve got to arrange the choreography in a particular way, and it creates a nightmare technically. Relating to Schubert in the way that I’m doing it in this piece has created a nightmare of things I’ve got to be really ingenious about. And I’m having fun doing that of course, because is it Stravinsky who says: ‘The more limitations you set yourself, the better it is.’ In a way these things are limitations, and you’ve got to get past them to do a good job. I don’t want anybody to see what I’m doing and say ‘Oh my god, he’s done that, oh how fantastic!’ because that would be to destroy the whole point. I don’t want it to be clever. It’s just that I’m having fun. It amuses me anyway.

Museums

cm: Would you say that your relationship to transcription—how to transcribe or what to transcribe, or any of these issues around identity or portraiture that we’ve been talking about—has this changed throughout your life? Was it much different when you were younger? Is it much different now than it was ten years ago?

mf: When I was younger, I think I believed that the culture would change. I believed that people would listen less to classical music. I didn’t know anything about the industry, so I was probably very naïve to [think] that. But the story. . . What happened was: I was very scared when I went to the Royal College [of Music] because I’d never had any proper composition training before. By midday most days I had a headache. I couldn’t see the wood for the trees, so I had to get out of there. The Victoria & Albert museum is very close to the Royal College, so I used to seek sanctuary in the V&A. It’s a museum, and it rejoices in the fact that it’s a museum. It doesn’t try and hide the fact (although it does more now, it didn’t then). It was the archetypal Victorian museum; it was dusty, it was full of weird objects and odd juxtapositions. And I thought, this is kinda cool. Why would anybody get this place together? Then, I heard people describing the culture that we lived in (this was the mid 1960s) as a ‘museum’ culture. And I thought, ‘So? What’s the problem with that? I like museums. I like being in the Victoria & Albert museum.’ I thought ‘I’m going to turn this around, I’m going to make a museum.’ Some aspect of my work is going to be the creation of a ‘Victoria & Albert museum’ all of my own.

cm: Do you mean this in a way that relates to your relationship to all these references that you use?
mf: Yeah, because I’ve been fairly systematic in working through the history of Western European music, and also the way in which ‘exotic’ musics from the orient and the near east, and remoter corners of places like Transylvania, Azerbaijan, have impacted on (generally speaking) central European, not to say Austro-German tradition. In my work you can find pieces which reference Hildegard of Bingen (actually fairly extensively, I keep returning to that, because it’s so fine, and it’s monodic, and it has spiritual radiance that thrills me). And it goes all the way from Hildegard up to parodies of the present day. All of these things are in my museum, in their places with the labels on. Sometimes they’re little jokes, that can be only appreciated if you know these composers really. [. . .]

cm: I like how you talk about the ‘Western’ tradition or ‘central European’ tradition, and how these ‘exotic’ musics have informed it, and you say that’s your ‘topic’. This brings up the question of: are you the ‘outsider’ to these musics—?

mf: Always.

cm: Always outside? Even with Brahms?

mf: Yeah, sure. I can go to Hamburg, but Brahms isn’t there. Brahms’ way of writing music is wonderful, but it’s not possible any more, it’s not tenable any more, as a choice. One can stand back from it and see it. I mean, I don’t want to get too much into this because I don’t really think that’s the reason I do it [i.e. transcription]. Mostly it’s curiosity. I’m curious about these things, and I like them and possibly feel guilty about them. Like Saint-Saëns’ music for example, which I adore. But I feel guilty about liking it so much. It seems strange to me that French composers have always done more about ‘exotic’ music than most other nations have. Africa and Spain and so on, a lot of French composers have written music about those places. It interests me too that when English folk music was finally being properly collected and documented, it was by clergymen, and the ‘moneyed classes’, and they usually collected at competitions, at which folk singers used to sing their best numbers. Now no folk singer will ever give you his best number, actually.
But they got some pretty sensational material nonetheless. But were they the best people to be collecting it? I don’t think they really were, because their class was completely different, and of course they were patronising these poor people, these people who were their servants, and often vastly senior in age. When Grainger (and Grainger was the best of them probably) was collecting folk music with wax cylinder recordings, he was collecting from folk singers that were probably fifty years his senior, and they could have been his servants. I wonder what sort of social attitude that was (I’ve never experienced it), and what effect it had on something like the desperation to do the job in a particular kind of way. But it’s all that we have; and that’s the purpose of having a museum isn’t it. It’s putting those objects there so we can interrogate them. It’s a very confrontational museum, mine.

cm: There’s this nice quote from Sontag about this relationship perhaps. ‘To photograph is to appropriate the thing photographed. It means putting oneself in a certain relation to the world, that feels like knowledge. And therefore like power.’

mf: That’s it in a nutshell. It gives me a spurious kind of power. And it gives me a spurious kind of satisfaction, which I try and move on from constantly, which is why I’m setting myself challenges all the time, because I’m never satisfied with what I do. But I want it to be as good as it can be, and I want people to know these things intimately, and to interrogate them. It’s not going to happen in my lifetime probably, sadly. Because youngsters who are now coming to my music are very respectful, and of course I like that. But it’ll be interesting to see what they really can find there. That’s the only real thing I hope to leave behind me, is something that’s worth looking at and investigating—as I have looked at those topics and interrogated them. I think that is our responsibility, to interrogate and puzzle over the world that we have, because we can. [. . .]

Melancholy, everydayness

cm: When Barthes talks about photographs, one of the things that pricks him is this ‘time’ business, that there is something that was present that can no longer be touched. He finds this painful.

mf: And poignant.

cm: Yeah.
It seems to me that there are composers where this is their relationship to music of history, that they’re mourning that it’s gone. But I don’t sense that in what you’re saying, and I don’t sense that in your music.

mf: Hm.
cm: What is your relationship with this ‘time’ distance?

mf: I don’t think I’m melancholy about time—I am melancholy. I think anybody that’s aware of mortality is going to be melancholy, because you’re painfully aware; and the nearer I get to that moment . . . You’re aware that you’re not going to be here for ever, and these things are not going to last maybe more than a few years. It’s impossible not to be slightly melancholic. But I’m not morbid about it. I think we should celebrate that, and we should look it in the face and stare it down, and say this is the way it is and has always been. Basically I want to go in mid-sentence. The one thing I grew to dislike very much about Brahms is the long farewell codas. That kind of melancholy I don’t like very much, although I’m quite happy to parody it every now and again. I like to cut off at the end and say ‘where did that go?’ [. . .]

This is what I do, this is who I am; and as Hockney said once, in an interview about painting, ‘this is what I was put here to do’. It’s no big deal, one just gets on with it. He paints every day—he’s eighty, plus—and he goes out there with his easel, it’s just everyday, it’s like cleaning one’s teeth. I think that’s how it should be—I don’t have to go into some super meditative state to compose, I mean look at the mess! I’ve tidied it up a bit, but before I went to meet you at the station [the table] was just covered in paper. It’s a chaos. I know where it all is—but breakfast gets mixed up in it, the dog treats now, pots of tea. It’s just that ordinary. It’s no big deal. ▧
from Mark So, Samuel Vriezen (2016)
joseph kudirka + mark so: the name pieces

joseph kudirka: When thinking about what to write for this journal, I’d been working on an article about dedications that composers put in scores and how those dedications relate to, foster, and help create community and culture. The composer who has dedicated more works to other people than anyone else I know is Mark So. So, whereas that paper focused on the larger community, I thought to reverse the focus here and look in detail at the works of Mark So that are dedicated to others, and more specifically at his ‘name’ pieces—pieces Mark has written over the last ten or so years titled after people’s names. These pieces interest me not only in this community/cultural aspect, but also—and perhaps more so—in how they create a real body of work unlike what the vast majority of composers are doing now. While we often see now that composers work very much on their technique, and quite a bit on what might be termed ‘style,’ it is far more rare to see a composer who can truly be said to have a practice; to have a true method of working that’s observable across a large body of work.

Mark So has done a few of these works in series, such as his writing through the poetry of John Ashbery: making a score which corresponds to each poem. While I like those works and they certainly foster a very unique personal relationship between So and Ashbery, it is these ‘name’ pieces of his that I’m especially drawn to, perhaps because of my interest in scoring and notation. These pieces are systematic. In many ways, they are completely impersonal; there is a system of transcription of the person’s name that’s going to be followed. In this post-serialist, post-minimalist world, it’s interesting to have a composer in the 21st century following such a rigid system across so many pieces for so long. However, like with serialist work, it’s not the system itself that’s of interest, but how the composer deals with it differently in specific instances.
Of special interest to me with these works is So’s notation. In some ways, it’s very traditional, but in others it’s completely radical. It’s radical not so much in its newness or any particular innovation, but in the way that it simply comes to insist upon itself. The notation in these works has the beautiful duality of being both incredibly deliberate as well as seeming to be completely natural. I get the feeling in looking at these works that I could be looking at a practice that’s existed for hundreds of years, and in a way I am—these works are as indebted to the history of western notated music as Chopin preludes or Beethoven sonatas, but are also—and unapologetically so—incredibly unique and personal.

I just wanted to write a nice little article about these works and perhaps ask the composer a few questions, but this is Mark So we’re dealing with; one of the most prolific, intense composers working today. He’s also one of my best friends (though I don’t think I’ve seen him for nearly a decade). His music, like him as a person, is incredibly rich and rewarding if you give it the patience it deserves. There’s a lot to be had from any piece right off the bat, but if you give it more time, let it ruminate, and take it together along with other pieces as a complete body of work, each individual piece starts to make all the more sense.

When I started to ask Mark about these pieces, he gave forth more information than I was frankly prepared for, letting me in on all of the details of the development of these works over the past ten years. What follows are largely his words guiding me through these pieces, with brief commentaries of my own on individual scores or larger points to be made which occur to me when looking at this progression. The more I came to look at these works and learn about them, the more I felt like I was just scratching the surface of what’s going on here; what I’d thought was simple and beautiful still is, but—like many things that we adore for their simplicity and beauty—is also worthy of a detailed study that could become entirely consuming.
While this is edited down from the exchange Mark and I had about these works, I have kept the text largely as he sent it and in the order that he sent it, going year-by-year from 2006 to 2016. Though I could have further condensed it, perhaps trying to bring out one aspect or another in particular, I feel that this is a rare opportunity to share a real insight into a composer’s working method. Without further ado, Mark So:
Mark So: Before I get into these, I should clarify the nature of the scale [used to transcribe letters into pitches]—it’s:

A–G: going up in naturals from A on the lowest space in bass clef up to G
H: B at the top of bass clef
I–N: going up in flats from B♭ below middle C up to G♭
O–U: going up in sharps from A♯ above middle C up to G♯
V–Z: going up in naturals again, from the second A above middle C up to E

Ed. This article is broken up into sections, as follows:

(1) proto, 2006-7 (p. 74)
(2) early series on mini paper, 2007 (p. 76)
(3) divergent strategies, 2007 (p. 81)
(4) 2008 (p. 84)
(5) 2009 (p. 103)
(6) 2010 (p. 121)
(7) 2011 (p. 136)
(8) 2012 (p. 154)
(9) 2013 (p. 178)
(10) 2014-5 (p. 192)
(11) 2016 (p. 207)
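Read as data, So’s scale amounts to a 26-entry lookup table from letters to pitches. The sketch below (my own rendering in Python, not anything of the composer’s) makes the mapping explicit; the octave numbers use scientific pitch notation (middle C = C4) and, along with the exact flat/sharp spellings within each run, are my reading of the staff positions described above, so treat them as assumptions:

```python
# A sketch of the letter-to-pitch scale described above. Octave numbers
# (scientific pitch notation, middle C = C4) and the precise flat/sharp
# spellings are my own reading of the staff positions, not the composer's.

SCALE = {
    # A-G: naturals from A on the lowest space of the bass clef up to G
    **dict(zip("ABCDEFG", ["A2", "B2", "C3", "D3", "E3", "F3", "G3"])),
    # H: B at the top of the bass clef
    "H": "B3",
    # I-N: flats from B-flat below middle C up to G-flat
    **dict(zip("IJKLMN", ["Bb3", "Cb4", "Db4", "Eb4", "Fb4", "Gb4"])),
    # O-U: sharps from A-sharp above middle C up to G-sharp
    **dict(zip("OPQRSTU", ["A#4", "B#4", "C#5", "D#5", "E#5", "F#5", "G#5"])),
    # V-Z: naturals again, from the second A above middle C up to E
    **dict(zip("VWXYZ", ["A5", "B5", "C6", "D6", "E6"])),
}

def transcribe(name):
    """Map a name to its pitch sequence, skipping non-letter characters."""
    return [SCALE[c] for c in name.upper() if c in SCALE]

print(transcribe("VICKI!"))  # ['A5', 'Bb3', 'C3', 'Db4', 'Bb3']
```

The semitone drop from H (B) down to I (B♭) is the one point where the otherwise ascending series reverses—the "reversal from H to I" that So mentions below in connection with VICKI!.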
(1) proto, 2006-7

These are basically name pieces that happened before they became a series, or before I knew what they were. for ZACKARY DRUCKER was something I very deliberately made using the letters of their name, between early and late 2006, although I don’t at all remember how I mapped the letters to pitches.

VICKI! I — I THOUGHT I HEARD YOUR VOICE! for Vicki (2007) is really the first piece I made using the semblance of the alphabet I’ve kept going forward, basically a very weird ascending, mainly whole-tone series (with one reversal from H to I) covering the entire alphabet. There are minor differences from the one I’d ultimately settle on, but it’s basically there in evidence. It’s also an octave higher in this piece. It’s also on a postcard—something which comes to predominate the set much later. It’s for Vicki Ray, who was my piano teacher at CalArts.

[Ed. Clicking on each image will link to the relevant page of Mark’s website, where the scores can be downloaded.]
(2) early series on mini paper, 2007

These were done in early 2007 while I was in Taos. The first is TASHI, done a day after VICKI!, on a scrap of notation paper—again, this odd found backing; possibly still a ‘proto.’ The alphabet is the one going forward, but again, an octave higher. No last name (a number of these early ones are first name only).
LILACS isn’t a name piece but was written just after (within 1 or 2 days) VICKI! and TASHI. It’s significant because here you have a very comprehensive use of the alphabet in transcribing an entire poem (‘Syringa,’ by John Ashbery) into sort of a piano+ piece, and it’s set in the octaves that I use going forward. [opposite, p. 1 of 7]
A couple weeks later, still in Taos, the names get rolling: GENEVIEVE (this nice bohemian woman I met at the coffee shop who was I think lovers with an older leatherworker I met there through a friend, named Kevin Cannon, who gets a piece much later), ZACKARY (a proper go?), BILLY (Mintz, jazz drummer/composer and a great guy who was there at the residency with me)—you’ll notice these are kinda ‘fussier’ in terms of the use of traditional notation (certainly this is true of all the earlier ones, but it’s almost fetishistic here), I think because I thought they were cuter that way, as little chiseled mini Chopins, or something. . .
After that (like another 2 weeks, still in Taos), I think I consciously started letting the work of the named person come in and influence the notation/score, or maybe it’s because now it starts being composers whose work I admire: CHRISTIAN WOLFF, MICHAEL (Pisaro, no last name), JOSEPH KUDIRKA (Michael remarked when we played it at dogstar that summer that it was as though we were in the presence of the man himself), JERRY GOLDSMITH.
(3) divergent strategies, 2007

In July 2007 I was in Düsseldorf, and rather ad hoc, I scribbled down JOHN McALPINE on a beer coaster, which I gave to him because maybe it was his birthday (I don’t have any documentation of the piece; he probably left it on the table)—I remember a forest scene of some kind, but I think the letters in his name were sort of strung from graphic points on the coaster on little staff lines I drew in here and there for context. It’s a piano+ type piece, but the separation of the notes from a clear linear sequence is a precursor to the listing form that many later name pieces use.

Then I also scribbled down JAMES ORSHER in a notebook, I think to have something to play that night before a piece of his. It’s a little cartoonish looking, but looking back on it now, it sets the form for one strand of lots of the name pieces going forward.
URSULA KWASNICKA (my mom, who is a harpist) is the first piece that’s really a list, or lists, of notes. [over page] Flying home from mom’s, I got drunk on the night plane and got excited when the pilot told us we were over Lubbock (they still used to do that sort of thing), so I pulled out some paper and wrote MADISON BROOKSHIRE. Yep, it looks like an aeroplane, but it’s also an effort to unconsciously/consciously bring together what were already these two divergent strands; the line and the list. The paper for this and the one for my mom is this kind of fancy pants lavender letter writing stuff Tashi gave me. [over, opposite]
From this point I’m simply going to list the pieces for each year, descending in the order of completion, linking notable examples/variations as they come up. Obviously, this is going to take longer and longer as I go, since I have to go through them all one by one, and there are more and more each year in proportion to my overall output. One thing I should add from earlier: going back to BILLY (2007), I sometimes exercise the option of writing a piece for (a) specific instrument(s), usually reflecting the instrument played by the named musician (Billy Mintz is a drummer, but was working on a really long piano piece while I knew him). . .

jk: So, the pieces presented here are simply examples, often outliers from Mark’s normal practice. Most of these were selected by Mark, but in a few cases I’ve used my discretion to edit things down to what I found to be either the most convenient or interesting.
(4) 2008

EILEEN MYLES The last one on the tiny paper.

VINELAND TURNING for Christa & Christine is actually the first of the pieces that’s not explicitly a name piece, even though it is; the pitches are derived from CHRISTA and CHRISTINE. This one was rattling around for a couple years, and the work on the name pieces to that point convinced me that that was a good strategy for finishing the piece—it falls near the beginning of 2008, right after EILEEN MYLES. [over page]

jk: Perhaps I’m projecting from my own working methods here, without doing an actual survey, but I think this example provides an interesting insight into the working method of many composers. That is, the idea for this piece occurred to Mark before other, similar pieces, and this idea then had an influence on making those earlier works. Once those had been made, they recursively influenced the creation of this duet. I think this working method happens across the arts at large; one work doesn’t simply follow another in a quasi-narrative structure, but rather the body of work as a whole must be taken into account when trying to find context for a single work, not simply those works which came before it.

DOROTHY STONE (in memoriam) The first memorial-type piece; in the 2-list format, but with an additional indication, thus making it not just a name piece (something which some of the later pieces start to pick up on).
LEONARD ROSENMAN (in memoriam) It’s lists, but also a reflection of Rosenman’s style (I was working as an assistant to his widow when he died).
RICHARD SERRA Another postcard; linear writing scored to reflect a sculptural idea.

RON ATHEY gave me an amazing massage; this one is stylistically related to the very first (proto) one for ZACKARY DRUCKER, who refers to herself as Ron’s biological daughter; this one introduces a new grid paper stock that only recurs once or twice, I believe.

JOHNNY CHANG Back to the fancy stationery from Tashi; settling into the format perhaps already established before this, as the variations from piece to piece get subtler. [over page]

HARRIS WULFSON (in memoriam) [over page, opposite]

for PAIK A more elaborate hybrid that’s considerably more than a name piece; the Wolff ^ begins to appear as a basic part of the notation. [below]
jk: One of the aspects I find most interesting in Mark So’s work, but with these pieces in particular, is how the composer deals with notation in the sense of what is expected from the performer in terms of what might be termed ‘literacy.’ No explanation is given on the score for the meaning of the wedge (which is a notation for an open-length pause, first introduced by Christian Wolff), nor for the unstemmed open or closed noteheads (which are used by many composers, including Wolff, but also Morton Feldman, Antoine Beuger, and many others). As he says, the wedge becomes a basic part of the notation in this score, though it was used earlier in CHRISTIAN WOLFF. Generally, with the notation in these works, details are given about how to read the notation only when it is specific to that single work, whereas elements of notation that are consistent throughout the body of work are left unexplained, with the expectation that the performer will know how to interpret them.

TERRY JENNINGS A more basic, more ambiguous approach to harmony and melody, coming out of the previously established idea of independent linear voices; the first of a long line of pieces written on a xeroxed single sheet of staff paper.
SIMEN JOHAN The first one with an explicitly chordal implication drawn out of the two independent linear parts idea. [see over next few pages]

CAT LAMB A more elaborate example of ensemble scoring (which has been a tendency in several of the foregoing pieces), deriving primarily from the emergent formal implications of the name transcription itself.

G. DOUGLAS BARRETT has assigned rhythmic values.

LEWIS KELLER A variant on the division of parts, finding implicit voicings in the total lay of notes, not just from the first and last names; this becomes a predominant tendency much later, particularly the idea of establishing 3 voices.

ORIN HILDESTAD A more detailed approach to notation/scoring.

LUKE THOMAS TAYLOR (3 constellations) An approach to deriving a prismatic multiplicity of parts, using layered implications of the tripartite name and the notational as well as graphic implications of groupings.

JASON THOMAS Another approach to establishing 3 voices.

TAYLAN SUSAM A basic idea of rhythmic cells contextualizing the relationships between discrete parts begins to emerge—I think probably something first suggested in TERRY JENNINGS, and likely a hangover from my obsession with his Piano Piece 1960, that summer. (In Düsseldorf, in the summer of 2007, Manfred Werder—I think it was Manfred Werder . . . maybe it was John McAlpine?—gave a performance of the Jennings piece.)

GEORGE BRECHT, in memoriam I think for the first time, first and last name used to create two successive blocks of harmonic activity (rather than independent/concurrent voices); also, another early example deriving three voices.
(5) 2009

MANFRED WERDER A different sequence, spelling each name down through chordal pairs, first and last divided by a wedge. [over next few pages]

ANTOINE A horizontal list (the list reimagined as a line); the first of several pieces laid out horizontally on a sheet of white letter-size paper.

EVA-MARIA HOUBEN A reimagining of the interspersed two-voice concept as a single melody.

OSWALD EGGER Using syllabic division in each name to imply subphrases within a broken melody.

CHRISTIAN KESTEN (5 blancs) A 5-voice derivation of consequent note spellings of first and last name, as two blocks of shifting harmony.

SORIANO UY SO A 4-voice derivation.

another piano piece for JULIE SIMON A piano piece for the dedicatee, essentially a name piece without being explicitly so.
jk: As I noted before, it is cases like this that make looking at a composer’s work as a whole so interesting. While Mark is conscious of having this working practice of writing these ‘name’ pieces, it’s not cut-and-dried. Some pieces are explicitly so, some are explicitly apart, and others are somewhere in between.

CASSIA STREB A 4-voice derivation for viola, combining the resultant notes derived from the names with the 4-stringed aspect of the instrument. [over next pages]

DOUGLAS WADLE for 1 or 2 trombones; allows for the possibility of unpitched/otherwise produced sounds in the highest register.

RADU MALFATTI trombone solo; allows for the possibility of unpitched/otherwise produced sounds in the highest register; has a strict time structure.

jk: Of interest to me with these two pieces for trombone (and for trombonists) is how the composer does not waver from the methodical practice of transcribing the name into specific pitches, but rather writes the pitches while also making allowances for reading them in variant ways, fully knowing they are not otherwise playable on the trombone.

KERSTIN FUCHS The second piece using the narrow horizontal graph paper, after RON ATHEY (2008); the wedge is here qualified as a place where something may be written or read; another instance where a noise may replace a tone.

THE BELLES OF BASIN A sort of name collection, spelling the first names of several women who live in Basin, MT (I’d met them all during a residency in 2005, and wrote the piece when I ran into one of them at a grocery store in Helena while passing through in 2009).
GAYLE BLANKENBURG My piano teacher in college; perhaps the first piece using a 4-stave (not necessarily 4-part) voicing that basically assigns a bass and treble clef to the notes in each name (an expansion upon the layout with two blocks of activity on two staves, first and last name before and after a wedge or other break); also, some variant notation is introduced to help differentiate qualities of density—the part with fewer notes (first name) has sustains while the part with more notes (last name) has short tones. [over next pages]

SAM SFIRRI Another approach to strict time structure, more ambiguous this time (I sense the slightest influence of the two earlier pieces written on graph paper).

LETITIA QUESENBERRY 4-stave layout, assigned rhythmic values.

MERCE CUNNINGHAM Written on Chinese burning paper; a noise if pitch unavailable—this obviously could take over the whole piece and imply a very different sort of scenario from the playing of notes, and this implication is deliberately not suppressed, somewhat in the way the impression on the other (shiny) side of the score presents a compelling face.

AGNES MARTIN Another take on the 3-voice idea.
(6) 2010

These start going for a greater immediacy—I use this term loosely, but for instance, you no longer see me going back over my pencil draft in pen so much, and you also start getting kind of quick impressions of the person or their work, almost like a sketch.

ZAK LAWRENCE Kind of a spreading-out of the field of resultant notes (there’s also a wrong note in this one. . . ); you have kind of a 3-voice idea, but more all-over attention to implicit groupings. [over next pages]

JOHN WIENERS Starting to really become a deliberate attempt at portraiture by this point—probably evident in quite a few earlier pieces, but this one is almost narrativizing Wieners’s personality/voice.

AMA BIRCH A very measured approach to the 3-voice idea.

TWO VIOLINS for Andrew Tholl & Andrew McIntosh This starts a run of instrumental miniatures for specific musicians in which the parts are entirely derived from the spelling of their names, without explicitly being name pieces; several of these were for a concert series in a coatroom at the hammer museum, dubbed the little William theater, which I believe was put on by machine project.

SIMONE FORTI This one sort of pushes the implications of MERCE CUNNINGHAM, in that I very consciously devised this as a piece that Simone (who’s a movement performer and not a musician) could very much perform herself, almost treating the score like a choreography.

MILLAY for the Millay Colony This one expands the list idea by having each letter in the sequence notated in the top corner of 6 successive pages in the colony register; each letter is also associated with a word. [2nd page is journal frontispiece]
DICKY BAHTO Probably the most beautiful one of its sub-type. [over next pages]

untitled (MS) A unique piece—it’s how I signed the doorjamb when I left my studio at Millay, i.e. a little name piece with my initials; simple notation implies the field/room as other surrounding source. . .

MIKE RICHARD This one has accompanying chords.

jk: Since Mike Richard isn’t as well known as composers such as Michael Pisaro and Christian Wolff, who have pieces named for them in this series, I feel it’s worth noting the commonalities between this piece and some of Richard’s work. While a student at CalArts (around 2002 and 2003), Mike was setting large parts of Spinoza’s writings to music, which many musicians, including myself, performed. There was a vocal soloist who followed the text, and a set of chords which an ensemble played from without individual parts. The ensemble moved through these chords, following the soloist and text.

COREY FOGEL This one treats the notes as a secondary accompaniment to drums.

MARI Another go at dispersing the list; this time each letter is notated in the top corner of its own notecard.

JULIE TOLENTINO Kind of a mysterious one, since it implies an axis where a note might go ‘in/down’ perhaps perpendicular to the continuity of the sequence of notes as much as go ‘along’ with an emergent phrase; it implies a kind of potential choreography that engages a dimension that’s beyond the notation.

JAMES BENNING (5 frames) Another piece predicated on the implication of not only fragmented materials, but fields.
(7) 2011

JOHN CAGE Like MERCE CUNNINGHAM, written on burning paper, this time treating the two lists as two grids, each qualified by the backing material as a different metallic tone. [opposite]

MORTON FELDMAN An uncannily Feldmanesque approach to counterpoint, for pianos; also coincidentally, probably the first implication of the emergent 3-voice treatment in the form it takes later on (I know I’ve been telegraphing this development for a while now. . . )

The next few get a little more free-form, also a little more ‘sketchy’—as per the previous two, an attempt to really push the form, but now combining their deliberate craftsmanship with a more spontaneous head/hand—they’re almost expressionistic and hard to pin down in their variety:

AARON SPAFFORD [over next pages]
ROBBIE HANSEN JR
LUCIE JANE BLEDSOE
VOLKER STRAEBEL
STOSH FILA/PIG PEN (one and the same person)
NICOLAS MILLER
BLINKY PALERMO
DAVID HUGHES
ANDREW MILLER
JONATHAN JACKSON
MARTIN BACK [over next pages]

LIZ KOTZ This one introduces a layered visual experience into the writing/scoring itself (sharpie & pencil), integral to the composition yet indeterminate in terms of the score (a consequence of certain other pieces which have involved a strong visual/material aspect).

MILTON BABBITT
JOHN BARRY

CHARITY COLEMAN Layered writing again, this time with a corrected-draft aspect.

ANNE PORTER
(8) 2012

Some more rather spontaneous & idiosyncratic ones, letting forms emerge in the sketching of the name, but swinging between characterizing the subject in some way and doing a very even and elegant distribution study of the field of notes, almost as though mapped on a grid:

JONATHAN MARMOR [over next pages]
LUTHER PRICE
GEORGES DELERUE 4-stave form emerges
ROALD AMUNDSEN
CHRISTOPH GIRARD 4 staves again

VIOLA AND (for Natalie Fender Brejcha) Another instrumental piece that’s de facto a name piece, this time dispensing with clefs (since it’s not a name piece, per se) but holding to a very clear grid; also, using triads; this one was commissioned by the dedicatee for viola with percussionist and dancer, so the idea was that it could potentially but not necessarily score all three activities.
The 4-stave format really gets established here, and the variations get reduced to a matter of open or closed noteheads, kinds of separations between events, and implicit variations in quality of simultaneity; all pencil/rather quickly drafted in one pass, with some very light text indication to give a slight characterization, occasionally with some minimal extra notational feature—because these are getting so self-similar, it’s probably all the more worthwhile to examine the scores individually:

jk: Here, Mark makes a point similar to the larger point I want to make about his larger body of work, which I think may go against conventional wisdom; that is, with these scores being so similar—through having this clear, methodical practice—it actually becomes possible to understand the important, distinctive details within each piece. If there were only a few of them, this wouldn’t really be possible, and what is notable/important/interesting in any one piece could well go ignored.

CAROLYN CHEN [over next pages]
ANASTASSIS PHILIPPAKOPOULOS
SHANNON EBNER
ERIKA VOGT
ADAM FITZGERALD
RAY BRADBURY
These start to get crafty again, keeping the grid-like multi-stave format but more along the lines of exploring material/visual variations, both in the sense of found aspects (using smaller pieces of scrap music paper) and in different layered qualities of writing:

NEIL ARMSTRONG A collage; torn staff paper against black construction paper to depict the mountains of the moon; this tearing of the paper in part presages the 1/2 sheet run of name pieces. [opposite and over]

HANS W. KOCH A tiny miniature, like TASHI (2007). Rather than being a tautology, in this case ‘tiny miniature’ makes sense, when a work is particularly small within a world of pieces that is already a series of miniatures.

TIM JOHNSON Perhaps the first 1/2 sheet piece, though in this case that was just the size of the scrap I found to make it on.

EZRA BUCHLA Perhaps a minor thing, but coming back to the full sheet for this one, I deliberately don’t consider the whole page but put the datestamp ‘footer’ near the middle of the sheet; thus, the piece is now conceptually less than coextensive with the sheet it’s written on; this both changes the status of the score slightly, and also I think causes me to eventually start doing two per page. . .

BEN OWEN
All of these are pretty standard 1/2 sheet pieces—many of them are paired for a reason, but sometimes, for no reason; in general, I started waiting until I made both before separating the scores, and there’s a slight tension between their independence and the implication that they used to make up a single whole sheet of paper (I’m attaching a few examples of what many of these looked like, pre-separation); generally all in the grid-like 4-stave format, sketched very quickly and with only the slightest individuation based mainly on the way the notes map out:

JEREMY MIKUSH The first true 1/2 sheet name piece [over]

ALBERT ORTEGA Made on the remainder of the page that NEIL ARMSTRONG was torn from.
DAVID KENDALL Using xerox to slightly alter the layout; this one has a correction that involves the application of a white label with the title written on it; this creates a unique texture, and the pdf gives you a feel for both sides of the page. [below]

KRAIG GRADY [over next pages]
DAVID KALHOUS (piano)
JAMES SAUNDERS
GABOR KALMAN & NORMAL LLOYD I left this pair together because they’re a couple

CARMEN CAMERON-WOLFE A return to full sheet, with a strip of text glued into the score, which is otherwise in the standard 4-stave grid-like format.
(9) 2013

ANDREA LAMBERT & KATIE JACOBSON This one is for another couple (one of whom had just committed suicide)—instead of separates, I integrated their two names into kind of an 8-stave grid; the piece also exhibits probably the most intense exploration of various layered features: pencil, ink, erasure, whiteout, applied correction blocks. I think I’ve also used the two kinds of vertical ligature before (solid line, dotted line) to indicate different though somewhat ambiguous qualities of togetherness, whether connecting vertically adjacent notes in adjacent staves, or skipping one or more staves (this is somewhat consistently applied in many of the name pieces of this general period). [opposite]
The next several return to the half-sheet format (with two name pieces executed on a single sheet, again, often articulating some conjunction the two people form in my mind), and a fairly regular application of the 4-stave grid-like note layout (2 staves for each melodic sequence, representing each name) and 2 types of vertical ligatures; there’s quite a run of first-draft-and-next pieces, done quite rapidly and mechanically, yet intuitively judging mostly the vertical order of the staves (whether treble/treble/bass/bass or treble/bass/treble/bass) and the application of the vertical ligatures, all in one pencil layer (I’ll just attach a few of the full-sheet name pairs for your reference—remember, though, that the sheet is ultimately cut and the pieces exist independently):

DAVID RATTRAY
DAVID WOJNAROWICZ
MONICA MAJOLI
ANTONIN ARTAUD
KATE BROWN
STUART KRIMKO
TRULEE GRACE HALL This one and TARA JANE O’NEIL explore adding a third voice into the 4-stave grid.
LEOPOLDINE CORE
DAVID KERMANI

[Opposite and over next pages. As of writing, MS has not published these scores online.]
The rest start getting a little divergent, breaking more or less with the mould established above:

TAYLOR MEAD A cute little horizontal phrasing ligature added. [over next pages]

ELAINE BARKIN Filled noteheads, diagonal ligatures, guide marks.

TOM LEVINE ‘Painterly’ layers return (pencil, charcoal, correction label); 3 staves.

ARTHUR RIMBAUD Ambiguous noteheads, guide marks.

MARCUS RUBIO Correction labels, vertical and diagonal ligatures.

THOMAS FLAHERTY Somehow, in this and CYNTHIA FOGG, a more or less free form has reemerged, but completely informed by/coming out of the latest standard format.

DIANA NYAD Back to the mould.

TAI KIM A total anomaly—the return of the vertical list as two chords; made on a square scrap of staff paper.

KATHERINE HAGEDORN The standard format, but with a counterpoint of shorter tones (filled notes) against longer tones (open notes).
(10) 2014-15

Here we have a stretch generally marked by a tension between mechanical adherence, more or less, to the half-page 4-stave single-pencil-layer format established previously, and more unique pieces, either in terms of quirks of notation that arise within the format, or more fundamental anomalies:

ERIN KIMMEL Quite to format, yet done on a unique scrap of staff paper.

DEAN ROSENTHAL Some phrasing indications added to imply voicing emergent within the grid.

LUKAS KENDALL A complicated study, with connections implied between the same note appearing in different staves/voices, plus various other simple indications of connection/separation recently developed within this format.

BRIGID MCCAFFREY To format, but on hand-drawn staves, giving it the quality/grain of a consistent textured surface.
A run of pretty standard ones here; a few pre-cut pairs:

STEPHANIE SMITH
MARK TRAYLE
WANDA COLEMAN
LESLIE SCALAPINO

[Ed. Over the page. These above are also unpublished.]

ANDREW YOUNG Slightly anomalous and elaborated indication of voices. [below]
Things break and get different again here, as the pieces really become objects, specifically mail art, owing to the shift to using postcards as the supporting/framing medium—often, various elements of characterization are in play, between the parties involved in the exchange (myself and the titular subject), any image(s) depicted, other text, places (depicted, or of publication, composition, destination. . . ), applied materials (stamps, other labels. . . ), and of course the notated name piece itself, also by now a layered entity (even when only drafted in one layer):

MATH BASS The 4-stave grid format is broken in favor of treating each name as a quasi-melodic, quasi-harmonic block, minimally delineated.

BEN BORETZ Typed labels with text from one of Ben’s pieces, each name treated syllabically.
The 3-stave format really emerges here, with its individuated study of implicit sub-voicings; using hand-drawn pencil staves on blank postcards:

ULRICH KRIEGER
CRAIG SHEPARD
NARIN DICKERSON
SEAN BATTON (2nd piece) I had forgotten I already made him one, when making one for him and his partner, KELSEY BRAIN
Three somewhat anomalous elaborations of the 3-stave format, and back to more complex objects:

LAUREN DAVIS FISHER
MPA

KATHLEEN JOHNSON We’d just been in Utah to do Brainchild, part 3, kind of a sci-fi opera we collaborated on; the color painted on the back is the nail polish we all wore for the 2nd Brainchild performance in L.A.—I was glad to get Mari to be in it, among others.
This is the last installment: (Oh, by the way, in case you were wondering, there are a couple pieces that are not name pieces in the sense of this series of transcribed names, but which are named after composers: MAHLER and Kong transcriptions (Steiner)—each of which is a kind of radical transcription of my favorite piece by that composer—‘Ich bin der Welt abhanden gekommen’ and the moody overture from King Kong, respectively. Soon, I’ll start working on a Monteverdi piece, transcribing Arianna’s lament, and I’ve long wanted to do BRUCKNER, based entirely on a brief passage for Wagner tubas in the adagio of the 8th symphony.)
(11) 2016 so far . . .

The 3-stave format continues—taking what are essentially two melodic voices tracking each other, and drawing from that format the most basic implications of three (or more) voices—postcards/layered mail art objects (for the most part):

SEAN GRIFFIN [over page]

honey (Eileen’s dog) An anomaly: a variation on the list, each letter notated in one ply of a bar coaster, placed in a different room in Eileen’s house in Marfa. (Of course not a complete anomaly, as John McAlpine—one of the first of these name pieces—was also written on a coaster.) [over page, opposite; each image shows H O N E Y]

HOLLY WOODLAWN (/Marlene Dietrich/Zackary Drucker) [following pages]

SAMUEL VRIEZEN Another anomaly: once again, the list, taking after MILLAY in listing the name as notated marginalia, one letter/note on every other page of, in this case, a blank notebook, with each name crossing on inverse trajectories, meeting in the middle.

ARIANA REINES A postcard she handed me in New York with her address on it; a bit of a departure; two staves, red pencil, kind of a return to an older idea of just a sequence of dyads.

Not really mail art, but gifts given personally—the red Marfa pencil remaining in evidence:

LYNN XU A single staff, continuing with red pencil, more of a free exploration.
JOSHUA EDWARDS two staves, complicated voicings [opposite] CAITLIN MURRAY back to basics—a simple contrapuntal idea [below] NINA PURO syllabic breakdown of the two names [over next pages] Back to mail art, and basic contrapuntal ideas: ROBERT BLATT JORGE GOMEZ
I’d still be interested in whatever you’d have to say or speculate about notation. I feel like all I basically did was just look at all of them again for you, and state the obvious. Although revisiting everything has yielded a few details surrounding certain developments in the series that I hadn’t really thought about because after all, they just happened that way, which may not be all that obvious . . . But I’m excited that you’re doing it, the dignity of small things . . . jk: I’m excited that I did it; a bit overwhelmed, though inspired. I’d like to think this look at this body of work—in some ways in-depth, and in others incredibly casual—will also inspire others. In spending so much time with these pieces lately, I’m left with more questions than I had when I’d started looking at them, but I don’t think that’s a bad thing. I think if I felt everything was cleanly sorted away, I’d be a bit disappointed. Now, I want to go out and play these pieces, and I hope others do as well. ▧
Co n t r i b u to rs Mira Benjamin is a Canadian violinist and member of ensemble Apartment House, completing a PhD at Huddersfield. Her research centres on Just Intonation and its practical and pedagogical applications. Jonathan Burrows is a choreographer and dancer. He danced with the Royal Ballet for 13 years before leaving to pursue his own choreography in 1992. Lawrence Dunn is a composer and improviser, completing a PhD at Huddersfield. His orchestra piece Ambling, waking was recently presented at Tectonics 2017. In October Quatuor Bozzini will perform a new string quartet. Matteo Fargion is a composer and performer. Studying with Kevin Volans, he has written much music for theatre and dance, working with Jonathan Burrows since 1989. He is a visiting member of faculty at Anne Teresa De Keersmaeker’s performance school PARTS. Michael Finnissy is a composer and pianist. Prolific and influential, and the subject of two books, he was formerly president of the ISCM. Joseph Kudirka is a composer and performer, born in Grand Rapids (Mi.). A graduate of Huddersfield, a release of his music performed by Apartment House has recently been issued on the label Another Timbre. Cassandra Miller is a composer from Canada, currently living in England, completing a PhD at Huddersfield. She studied with Christopher Butterfield in Victoria. Widely performed, she was also director of Montreal’s Innovations en Concert.
Luke Nickel is a composer and artist from Canada, currently based in England. Recently graduated with a doctorate from Bath Spa University, he is director of Cluster Festival, Winnipeg. Mark So is a composer and performer living in Los Angeles, a graduate of Calarts. According to Madison Brookshire, in his music ‘there are often moments of great beauty . . . but there are never moments of transcendence. As a listener, you are ineluctably in the present, wrestling with it.’ Nick Williams is a composer and educator. A graduate of Huddersfield, his music draws on modernism and minimalism. In 1982 he founded Soundpool Ensemble, giving many first performances. Christian Wolff is a composer. He studied music under Grete Sultan and John Cage, to whom he introduced the I Ching. He also taught Classics at Harvard and Dartmouth College.
The copyright of the articles included in this volume belongs to the respective authors, who have kindly made them available through a Creative Commons Licence (CC BY-NC-ND). Mark So’s scores are freely available to download from his website. Photography from 52 Portraits (pp. 26-9) is © Hugo Glendinning. Excerpts of Christian Wolff, Exercises (pp. 42-3) are © C.F. Peters, 1974. James Tenney, Postal Pieces (p. 7) are © Smith Publications. Hawthorn Blossom, Woldgate (p. 62) is © David Hockney / Tyler Graphics Ltd.
Overleaf: from Mark So, Millay (2010)
Edited by Lawrence Dunn, Issue 6 contains articles about and from composers and performers across the UK, Europe and USA engaged with practi... | https://issuu.com/cerenem/docs/cj6-31jul_final | CC-MAIN-2020-24 | refinedweb | 26,338 | 59.43 |
> I am going to start an embedded C project under Debian Linux in the near future.
Sounds like a regular project to me - if you mean the target runs Debian.
Or are you developing using Debian,...
> What are the benefits of learning DSA?
The ability to write programs that take up more code and time than the usual 1-page student homework, finished in a single edit.
For example, if your...
> Most of the time the processor i am working with has very limited time
Which one?
Are you working with other more capable micro controllers the rest of the time?
> a. Analog data reading of...
Basically, you need to design one of these first.
Finite-state machine - Wikipedia
Did you read the rest of the console API?
PeekConsoleInput function - Windows Console | Microsoft Docs
There's no need to block on console input with peek.
What's a QR request?
Oh, for reference saying @member will just get your posts moderated.
This isn't twitter/discord/reddit or any other new-fangled social media things.
Specifically, it doesn't...
Unless you have a good reason for separating create from init, then "create is init" and "cleanup is destroy" is a much simpler interface.
For a start, you don't have to worry about garbage...
First off, a console program is already a windows program.
Just making it a GUI program would not solve your focus problem for example.
You should be able to ameliorate the focus problem with...
Are all elements of c_t the same type?
If they are, you can probably cook something up with offsetof - cppreference.com
Maybe review your other threads on the same subject:
MSB and LSB of the signed value
Signed and Unsigned Variables
Missing basics of signed and unsigned numbers (this was 3 years ago)
Mostly,...
How many 'member_of_interest' are there?
Anytime you start putting numeric suffixes on symbols is a sign you should be thinking about an array.
Then your 'member_of_interestx' parameter simply...
Looks OK, does it work?
Do you get anything at the receiver?
> sprintf(buffer,"%d",data);
You might want to add a space at the end.
sprintf(buffer,"%d ",data);
Otherwise your receiver will...
The UART is just a transport between two end points.
How you send "My array is of size of 100 of bit16." depends entirely on what the other end of the UART (the receiver) expects to see.
> void...
You can right-click on the watched variable, choose properties and change the watch expression to whatever you want.
You may as well promote the input directly to int32_t
Start with
int32_t l_temp_rawcount = (int32_t)rawcount;
int32_t l_temp_offset = l_temp_rawcount - 512;
> l_temp_square_u32 = (uint32_t) (l_temp_offset_s16 * l_temp_offset_s16);
Here, the...
Having setup a watch point on a pointer variable, use the context menu to then say "dereference".
Depends what you want really.
If you're looking to make a GUI of some sort by the path of least resistance, then I can recommend WxWidgets.
It comes with an extensive set of sample applications...
On Linux, I use code::blocks and GCC
On Windows I use VS
Maybe, but there isn't a lot of choice TBH
List of widget toolkits - Wikipedia
The low level Win32 API is C, but you have to do a lot of work yourself if you're wanting to avoid all the creature...
Well it might be an idea to say
- which allegro you installed Allegro - Download - Latest version
- what the actual error message(s) were
- what your example test code looks like
- what you typed...
> I don't want to debug in terminal. Is it possible to debug in IDE
Yeah, CodeBlocks does a pretty reasonable job of driving the basics of gdb using the GUI.
But what you can do in the GUI is...
> Is this valid code?
Yeah, maybe, if you travel back to the 1970's.
It was rendered obsolete towards the end of the 1980's when the language got standardised.
Whether you can find a modern...
It works for me.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
void printBits64(uint64_t v, int firstBit, int lastBit)
{
    /* plausible completion of the truncated snippet: print the bits
       of v from firstBit down to lastBit, most significant first */
    for (int i = firstBit; i >= lastBit; i--)
        putchar((v >> i) & 1 ? '1' : '0');
    putchar('\n');
}
You would define the start of your function like this
void paFunc(const float* in, float* out, long frames, void* data){
auto start = high_resolution_clock::now();
paConfig *config... | https://cboard.cprogramming.com/search.php?s=46bf56c24d95e542bcab9c68866e8f76&searchid=8278950 | CC-MAIN-2022-21 | refinedweb | 751 | 75.61 |
CodePlex - Project Hosting for Open Source Software
I find this very interesting.
When the Orchard solution is opened in Visual Studio 2010, there are many red underlines in its theme project's view code, with the warning "The type or namespace name 'Orchard' could not be found (are you missing a using directive or an assembly reference?)".
So I added references to Orchard.Framework and Orchard.Core. The warnings and errors disappeared after rebuilding the project.
But what confuses me is why the project can still be built successfully even when it does NOT include the two references above?
You should probably be using the full source code, not the package that has only the web site project.
table of contents
NAME¶
ttyname, ttyname_r - return name of a terminal
SYNOPSIS¶
#include <unistd.h>
char *ttyname(int fd);
int ttyname_r(int fd, char *buf, size_t buflen);
DESCRIPTION¶

The function ttyname() returns a pointer to the null-terminated pathname of the terminal device that is open on the file descriptor fd. The function ttyname_r() stores this pathname in the buffer buf of length buflen.

RETURN VALUE¶

The function ttyname() returns a pointer to a pathname on success. On error, NULL is returned, and errno is set appropriately. The function ttyname_r() returns 0 on success, and an error number upon error.
ERRORS¶

EBADF Bad file descriptor.

ENODEV fd refers to a slave pseudoterminal device but the corresponding pathname could not be found (see NOTES).

ENOTTY fd does not refer to a terminal device.

ERANGE (ttyname_r()) buflen was too small to allow storing the pathname.
ATTRIBUTES¶
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, 4.2BSD.
NOTES¶'t be used to access the device that the file descriptor refers to. Calling ttyname() or ttyname_r() on the file descriptor in the new mount namespace will cause these functions to return NULL and set errno to ENODEV.
SEE ALSO¶
tty(1), fstat(2), ctermid(3), isatty(3), pts(4)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/bullseye/manpages-dev/ttyname.3.en.html | CC-MAIN-2022-05 | refinedweb | 173 | 65.52 |
I am continuously running into performance issues with Navmesh Cut on recast graph. Each time it takes about 290ms.
Cell Size 0.25 (400x400)
Tile size 24
I use Recast Mesh Obj for one plane with all include options off.
Thread Count: Automatic High Load
WorkItemProcessor.ProcessWorkItems() 99.1% total
My game is top down and when a building is constructed I create Navmesh Cut for each wall. I cannot find a better way to do this. I was thinking about using point graph with a recast, but I still need to do navmesh cutting.
Any help deeply appreciated.
Hi
How many navmesh cuts are you using?
How large are they? (i.e. how many tiles are they overlapping, roughly)
Do you think you could send me a screenshot?
Which version are you using?
290ms does seem very high. In the example scene "Example3_Recast_Navmesh1" cutting the graph using 35 navmesh cuts takes only about 4.5 ms on my computer.
Note that deep profiling slows things down a lot. It is only useful for checking what percentage is spent on some task, not for checking the absolute time taken.
I turned off deep profiling and yes, the time decreased to about 50 ms. But that does not resolve my problem (normally fps stays at about 180, while cutting it drops down to 4).
I use four navmesh cuts per building. The building is modular, so I tested different sizes. A building (four cuts) entirely in one tile takes 53 ms in the profiler. A building across 17 tiles took 127 ms.
My version is 4.0.10 (2017-05-01).
Buildings will not move so except for deleting it, there is no change to their position.
EDIT: I created a new scene with a camera, a plane, and a gameobject with a simple script that creates a new gameobject with a navmesh cut. I still get about 50 ms.
EDIT2: I opened Example3_Recast_Navmesh1, created an empty gameobject and added my test script. It had the same bad performance (about 80 ms).
here is my test script:
using UnityEngine;
using Pathfinding;

public class Test : MonoBehaviour {

    public bool create;

    // Update is called once per frame
    void Update() {
        if (create)
        {
            var obj = new GameObject();
            obj.transform.localEulerAngles = new Vector3(0, 0, 0);
            var col = obj.AddComponent<NavmeshCut>();
            obj.transform.position = new Vector3(10, 0, 10);
            obj.transform.parent = transform;
            col.useRotationAndScale = true;
            col.isDual = true;
            create = false;
        }
    }
}
what am I doing wrong?
Screenshot of pressing P and then object hitting ground in Example3_Recast_Navmesh1
Make sure you do not measure just the first time this happens. The first time any cut is done, the JIT-compiler will have to compile all the cutting code, which will take some extra time. The first time a cut happens on my computer it takes 20 ms, but subsequent times it only takes around 2.5 ms.
Awesome! I tried and it takes much less time to do it the second time. Thank you very much! | http://forum.arongranberg.com/t/navmesh-cut-performance/4217 | CC-MAIN-2018-26 | refinedweb | 484 | 77.33 |
SwappyVkFunctionProvider
#include <swappyVk.h>
A structure enabling you to provide your own Vulkan function wrappers by calling SwappyVk_setFunctionProvider.
Summary
Usage of this functionality is optional.
Public attributes
close
void(* SwappyVkFunctionProvider::close)()
Callback to close any resources owned by the function provider.
This function is called by Swappy when no more functions will be requested, e.g. so you can call dlclose on the Vulkan library.
getProcAddr
void *(* SwappyVkFunctionProvider::getProcAddr)(const char *name)
Callback to get the address of a function.
This function is called by Swappy to get the address of a Vulkan function.
init
bool(* SwappyVkFunctionProvider::init)()
Callback to initialize the function provider.
This function is called by Swappy before any functions are requested. E.g. so you can call dlopen on the Vulkan library. | https://developer.android.com/games/sdk/reference/frame-pacing/struct/swappy-vk-function-provider?authuser=2 | CC-MAIN-2022-05 | refinedweb | 125 | 52.15 |
On Jan 7, 2007, at 10:14 AM, minh thu wrote: > Hi, > > I'd like to write a small library along the lines of the mailbox egg. > The goal is to have a queue where the elements are ordered by some > kind of value attached to them (in my case, it will be the moment when > the element has to be extracted from the queue). So essentially a priority queue.
Exactly, just forgot the name...
> > Reading the code, and being fairly new to scheme and chicken, I have > some questions: > > - why is there a queue.scm file ? Is it because usage of the extras > unit can be disabled ? No. Because the extras unit doesn't disable interrupts. Mailbox doesn't synchronize access to its globals explicitly but disables interrupts, so the entire file is a critical-section.
Does that mean that when a file disables interrupts, when calling code from another unit, it could be interrupted ? So using a (disable-interrupts) declaration is a bit like making all methods of a java class synchronised ?
> - what are those ##sys#xxx and ##core#xxx functions ? > is it permitted (good style) to use them ? where are they > documented ? Permitted, yes. Good style? Well, I wouldn't use them unless necessary, or unless hacking in someone else's code that does use them. You may notice that 'queue.scm' defines wrappers around the '##sys' access routines so the main body of the code calls very few directly. And some operations are just not available outside of the sys namespace, in which case they must be used. Documented, no.
Thanks, but, what are they ? What can I use instead of ##sys#slot ? In the present case, is it ok to use them ?
> > When/if I complete the writing of such a library, would it be good to > add it to Eggs Unlimited ? > Have you an idea for its name ? ordered-mailbox ? Sure.
Ok, thanks ! mt | http://lists.gnu.org/archive/html/chicken-users/2007-01/msg00061.html | CC-MAIN-2016-36 | refinedweb | 320 | 74.29 |
The QNetworkProxy class provides a network layer proxy. More...
#include <QNetworkProxy>
Note: All the functions in this class are reentrant.
This class was introduced in Qt 4.1.

QNetworkProxy provides the method for configuring network layer proxy support to the Qt network classes. The currently supported classes are QAbstractSocket, QTcpSocket, QUdpSocket, QTcpServer, QHttp and QFtp.
The SOCKS5 support in Qt 4 is based on RFC 1928 and RFC 1929. The supported authentication methods are no authentication and username/password authentication. Both IPv4 and IPv6 are supported, but domain name resolution via the SOCKS server is not supported; i.e. all domain names are resolved locally.
This enum describes the types of network proxying provided in Qt.
While Socks5 proxying works for both Tcp and Udp sockets, Http proxying is limited to Tcp connections. Http proxying also doesn't support binding sockets.
See also setType() and type().
Constructs a QNetworkProxy with DefaultProxy type; the proxy type is determined by applicationProxy(), which defaults to NoProxy.
See also setType() and setApplicationProxy().
Constructs a QNetworkProxy with type, hostName, port, user and password.
Constructs a copy of other.
Destroys the QNetworkProxy object.
Returns the application level network proxying.
If a QAbstractSocket or QTcpSocket has the QNetworkProxy::DefaultProxy type, then the QNetworkProxy returned by this function is used.
See also setApplicationProxy(), QAbstractSocket::proxy(), and QTcpServer::proxy().
Returns the host name of the proxy host.
See also setHostName(), setPort(), and port().
Returns the password used for authentication.
See also user(), setPassword(), and setUser().
Returns the port of the proxy host.
See also setHostName(), setPort(), and hostName().
Sets the application level network proxying to be networkProxy.
If a QAbstractSocket or QTcpSocket has the QNetworkProxy::DefaultProxy type, then the QNetworkProxy set with this function is used.
See also applicationProxy(), QAbstractSocket::setProxy(), and QTcpServer::setProxy().
Sets the proxy type for this instance to be type.

See also type().
Sets the user name for proxy authentication to be user.
See also user(), setPassword(), and password().
Returns the proxy type for this instance.
See also setType().
Returns the user name used for authentication.
See also setUser(), setPassword(), and password().
Assigns the value of the network proxy other to this network proxy.
This function was introduced in Qt 4.2. | http://doc.trolltech.com/4.3/qnetworkproxy.html | crawl-002 | refinedweb | 333 | 54.39 |
- Import the example project
- Register the agent
- Create your GCP credentials
- Configure your project
- Provision your cluster
- Use your cluster
- Remove the cluster
Create a Google GKE cluster
Learn how to create a new cluster on Google Kubernetes Engine (GKE) through Infrastructure as Code (IaC). This process uses the Google and Kubernetes Terraform providers create GKE clusters. You connect the clusters to GitLab by using the GitLab agent for Kubernetes.
Prerequisites:
- A Google Cloud Platform (GCP) service account.
- A runner you can use to run the GitLab CI/CD pipeline.
Steps:
- Import the example project.
- Register the agent for Kubernetes.
- Create your GCP credentials.
- Configure your project.
- Provision your cluster.
Import the example project

The example project provides:

- A cluster on Google Cloud Platform (GCP) with defaults for name, location, node count, and Kubernetes version.
- The GitLab agent for Kubernetes installed in the cluster.
Register the agent

- From the Agent selection dropdown, select gke-agent and select Register an agent.
- GitLab generates a registration token for the agent. Securely store this secret token, as you will need it later.
- GitLab provides an address for the agent server (KAS), which you will also need later.
Create your GCP credentials
To set up your project to communicate to GCP and the GitLab API:
- Create a GitLab personal access token with api scope. The Terraform script uses it to connect the cluster to your GitLab group. Take note of the generated token. You will need it when you configure your project.
- To authenticate GCP with GitLab, create a GCP service account with the following roles: Compute Network Viewer, Kubernetes Engine Admin, Service Account User, and Service Account Admin. Both User and Admin service accounts are necessary. The User role impersonates the default service account when creating the node pool. The Admin role creates a service account in the kube-system namespace.
- Download the JSON file with the service account key you created in the previous step.
On your computer, encode the JSON file to base64 (replace /path/to/sa-key.json with the path to your key):
base64 /path/to/sa-key.json | tr -d \\n
- Use the output of this command as the BASE64_GOOGLE_CREDENTIALS environment variable in the next step.
Configure your project
Use CI/CD environment variables to configure your project.
Required configuration:
- On the left sidebar, select Settings > CI/CD.
- Expand Variables.
- Set the variable BASE64_GOOGLE_CREDENTIALS to the base64-encoded JSON file you just created.
- Set the variable TF_VAR_gcp_project to your GCP project name.
Optional configuration:
The file variables.tf contains other variables that you can override according to your needs:
- TF_VAR_gcp_region: Set your cluster's region.
- TF_VAR_cluster_name: Set your cluster's name.
- TF_VAR_cluster_description: Set a description for the cluster. We recommend setting this to $CI_PROJECT_URL to create a reference to your GitLab project on your GCP cluster detail page. This way you know which project was responsible for provisioning the cluster you see on the GCP dashboard.
- TF_VAR_machine_type: Set the machine type for the Kubernetes nodes.
- TF_VAR_node_count: Set the number of Kubernetes nodes.
- TF_VAR_agent_version: Set the version of the GitLab agent.
- TF_VAR_agent_namespace: Set the Kubernetes namespace for the GitLab agent.
Refer to the Google Terraform provider and the Kubernetes Terraform provider documentation for further resource options.

Use your cluster

After you provision the cluster, you can see your new cluster:

- In GCP: on your GCP console's Kubernetes list.
- In GitLab: on the left sidebar, select Infrastructure > Kubernetes clusters.
Quite a while ago I discussed using flat arrays and address calculations to store a tree in a simple array. The trick wasn’t new (I think it’s due to Williams, 1964 [1]), but is only really practical when we consider heaps, or otherwise very well balanced trees. If you have a misshapen tree, that trick doesn’t help you. It doesn’t help you either if you try to serialize a misshapen tree to disk.
But what if we do want to serialize arbitrarily-shaped trees to disk? Is it painful? Fortunately, no! Let’s see how.
First, let’s recall what a tree traversal is. There are of course many, many ways to visit all nodes in a tree, but the three usual depth-first ways—those found in textbooks anyways—are:
- Pre-order search, where the root is visited, then the left subtree, then the right subtree;
- In-order search, where the left subtree is visited first, then the root, then the right subtree;
- and Post-order search, where the left and right subtrees are visited first, and the root last.
Either traversals can be used to enumerate all nodes in a tree, they’re all equivalent in this regard. One or the other may be more convenient for the specific application you’re considering. For example, post-order may be more suitable for the evaluation of arithmetic expressions stored as trees. If you want to enumerate the contents of an ordered/search tree, then in-order will give you the values in increasing order. Sometimes you just don’t care: you can serialize a tree in any order you want: the complexity of encoding and decoding is comparable.
But if you want to serialize a tree, you have to make sure that the deserialized tree is reconstructed correctly, that is, that all nodes point to the right nodes. You can’t really save pointers, since when you will reconstruct the tree, nothing guaranties that the same memory locations will be used. One idea might be to assign a number to each node, and store these numbers rather than pointers, and somehow use that information to resew the nodes later. That might work, but 1) we need a first scan to assign numbers, 2) we need to write those to disk, and 3) that doesn’t simplify deserialization one bit.
Furthermore, the structure of the tree itself mostly dispenses us from worrying about pointers. We only need to encode structure, and structure in a tree is very simple to describe: a node, in a binary tree, can have zero, one, or two child(ren). We only need to encode the presence (or absence) of children, and the content of the node, and nothing else. OK, let’s demonstrate. Consider this tree:
Let’s now indicate on the tree, the null pointers by a simple 0 and non-null pointers by 1s:
If a 1 is associated with a pointer, that means it points to a subtree—with at least one node. If it’s a zero, then there is no subtree associated with the pointer. This gives us the idea that we can only save one bit—0 or 1—to indicate if there’s a subtree to be decoded or not. A pre-order (because, why not) decoding would go as follows:
- Decode the data contained in the node (you’ll need some way of encoding the data in the node, something like JSON or a binary representation),
- If the next bit is a 1, decode (recursively) the left subtree, if it’s a zero, there is no left subtree to decode;
- If the next bit is a 1, decode (recursively) the right subtree, likewise, it if’s a zero, there is no right subtree to decode.
(In- and post-order searches merely reorder the above steps.) The decoding will need some kind of iterator, or index, that advances linearly while decoding. Encoding proceeds pretty much the same: encode the data, encode (recursively) the left subtree, encode (recursively) the right subtree.
*
* *
It sounds simple, but is it actually simple? Sometimes simple ideas turns out to be rather complicated to implement right, but not this time! The encode and decode procedure (here prefixed by p_, as they are private methods) are given by:); }
The decoding procedure is a bit more complicated than the encoding because you have to check for subtrees, the “bit” is coded as a whole ASCII character in a string, and that some care must be taken to advance the “cursor” correctly.
On the above tree, the output of p_encoded is
a1b1c0001d1e001f00,
which, decoded back gives:
(a(b(c))(d(e)(f))),
which is a lame parentheses-based representation of a tree.
*
* *
A more complete Proof of Concept code is found below:
#include <iostream>
#include <iomanip>
#include <string>

// needs to_string to be overloaded for your data
//
template <typename T>
class tree
{
private:

    class tree_node
    {
    public:
        tree_node *left, *right;
        T data;

        tree_node(const T & new_data,
                  tree_node *new_left=nullptr,
                  tree_node *new_right=nullptr)
            : left(new_left),
              right(new_right),
              data(new_data)
        {}

        ~tree_node() { delete left; delete right; }
    };

    tree_node *root;

    // These two should be passed as template
    // arguments (that's a bit tedious for a
    // PoC).
    //
    T p_decode_data(const std::string & tree_code, size_t & offset) const
    {
        return tree_code[offset++]; // for now, char
    }

    std::string p_encode_data(const T & data) const
    {
        // works fine for 'char'
        return std::string(sizeof(T), data);
    }

    tree_node * p_decoded(const std::string & tree_code, size_t & offset) const
    {
        T data = p_decode_data(tree_code, offset);
        tree_node *left = nullptr, *right = nullptr;
        if (tree_code[offset++]=='1') left = p_decoded(tree_code, offset);
        if (tree_code[offset++]=='1') right = p_decoded(tree_code, offset);
        return new tree_node(data, left, right);
    }

    std::string p_encoded(tree_node * node) const
    {
        if (node==nullptr) return "";
        return
            p_encode_data(node->data)
            + (node->left  ? "1"+p_encoded(node->left)  : "0")
            + (node->right ? "1"+p_encoded(node->right) : "0");
    }

    void p_show(tree_node * node) const
    {
        if (node)
        {
            std::cout << "(" << node->data; // must have suitable overload!
            p_show(node->left);
            p_show(node->right);
            std::cout << ")";
        }
    }

public:

    void show() const { p_show(root); std::cout << std::endl; }

    std::string encoded() const { return p_encoded(root); }

    tree(const std::string & tree_code)
    {
        size_t offset = 0;
        root = p_decoded(tree_code, offset);
    }

    tree() : root(nullptr) {}

    ~tree() { delete root; }
};

int main()
{
    const std::string tree_code = "a1b1c0001d1e001f00";

    tree<char> t(tree_code);

    std::cout << tree_code << std::endl;
    t.show();
    std::cout << t.encoded() << std::endl;

    return 0;
}
*
* *
Is this the only way of serializing a tree? No. Standish [2] presents quite a few, but none quite as simple. But it’s not exactly all that simple either: a lot of complexity may be hidden in the encoding and decoding of the data contained in the nodes. Even something as banal as a string need special care… how do we take spaces into account? Do we use quotes? Then we need an escape mechanism to introduce quotes within the string… etc. Maybe a simple length+raw data encoding is sufficient. What if the data also contains pointers?!
[1] J. W. J. Williams — Algorithm 232: Heap Sort — C. ACM, vol. 7, n° 6 (1964) pp. 347–348
[2] Thomas A. Standish — Data Structure Techniques — Addison-Wesley (1980) | https://hbfs.wordpress.com/2016/09/06/serializing-trees/ | CC-MAIN-2017-13 | refinedweb | 1,113 | 59.23 |
Python Jumpstart by Building 10 Apps is a course designed as an introduction to Python. This course is not your typical introduction found on Coursera or EDX, which are usually designed around some weeks-long introduction that get you to objects around week 8. This course - which you can reasonably complete in a week if you are dedicated - will get you into the more advanced topics within hours, not weeks.
This is a 'learn by watching me do' course in which the author develops each application in front of your eyes. If you do the projects side-by-side, you will learn a lot. You won't learn everything that he knows, especially as it pertains to specific library knowledge, but you will get the flavor of the library and should have the proper vocabulary to explore StackOverflow and the library documentation to figure out the specifics.
There is a total of 7.2 hours of instruction for $69 with lifetime access. This doesn't sound like much course time, but it is quite information dense.
The author, Michael Kennedy, is an experienced Python developer who runs a weekly podcast Talk Python to Me in which he speaks with contributing members of the Python community. I highly recommend giving this a listen.
If you are new to Python or just haven't practiced for a while, this course is for you. Additionally, if you just missed some things along the way - 'lambdas' anyone? - you will find that the course material is organized in such a way that you can easily focus in on the particular topic that you missed. I'm an embedded C guy with very hack-ish knowledge of Python, so this course has been very helpful to me.
As an electronics guy, I didn't actually take any courses in programming beyond assembly. As a result, I was never exposed to actually building a program as a professional might. Kennedy takes you through each application, first sketching out the rough program or file 'flow' and then adding detail. By the third application, the viewer can predict the sequence of events that will start the course. The familiarity of format between the modules gives the user a familiar hand-hold as they move on to each new application.
Kennedy supplies his code in the form of a git repository. This is a great thing to reference from time-to-time, but do not copy/paste from his code. You will miss something. For instance, named tuples format:
import collections

# this one is wrong!
MovieResult = collections.namedtuple(
    'MovieResult',
    'Title', 'Poster', 'Type', 'imdbID', 'Year'
)

# this is correct
MovieResult = collections.namedtuple(
    'MovieResult',
    'Title, Poster, Type, imdbID, Year'
)

# see the difference?
Because of a basic misunderstanding in format, I simply missed the nuance of the format of named tuples and - without typing it myself - I would have had to find the issue on my own at some later time.
Kennedy performs his development in PyCharm - a Python IDE - and a significant amount of screen time is
dedicated to showing you how to get the most out of PyCharm. This was a bonus for me since I use PyCharm, but may be a drawback for you.
If you are on the edge, the ease with which Kennedy performs many basic and more advanced tasks with the keyboard-only may convince you that PyCharm is The Way. You should note that there is a 'Community Edition' of PyCharm that is free and is quite capable. For this series of videos, you likely won't notice the difference between the 'Pro' and 'Community' editions. You would if you were developing for web applications or other more advanced applications than just Python.
The lessons are still perfectly applicable to those who would prefer VIM or EMACS, but you might be annoyed by the continual references to how PyCharm will help you out. Overall, the choice to use PyCharm should enable new users to move forward without too many issues, seeing results quickly.
Kennedy has 7.2 hours in which to show you Python and he really pulls it off. This course covers what many 2-semester Python courses would cover. Just look at the curriculum to quell your doubts. If you are an in-betweener - like me - who doesn't want to tackle beginner-style 'for' loops but does want to delve deeper into Python than you have in the past, this is your guy.
To the right is one of the application outlines. The layout is very typical of each of the courses. The application is always introduced first. It is then sketched out and a bit of code is written. Through the coding process, Kennedy moves through with a fluid train of thought, giving the most obvious C-like solutions first, then refining to more "Pythonic" solutions. Somewhere just after introducing a new Python feature, Kennedy will have a specific video just to elaborate on a 'Core Concept'. After the core concept, application development continues as if it were never interrupted. I find this to be an effective teaching/learning format.
The course does start using file I/O in the last few applications and these files are not provided! One of them is a 2.5GB text file, so you
probably don't want to download it to test your program. You do want something, however, so I
forked the repository and added a couple of small files so that the bare bones
of the required files were present for a couple of applications. Look in /apps/
Kennedy does touch on application structure, but doesn't go quite far enough. When looking around on github, I often see applications packaged into a directory, then another application in another directory, with imports going from one to another in a fashion that I don't completely understand. I feel like 10 minutes dedicated to this topic would really help the new and semi-seasoned Python developer in his/her jumpstart.
There are two topics that I would like to have seen covered that didn't receive a mention: testing and profiling. Testing and profiling are an integral part of modern application development and merit a significant percentage of the Talk Python to Me air time. Since snap ci and other continuous integration services are frequently sponsoring episodes of the podcast, I expect that a few videos focused on testing would give the author an opportunity for sponsorship. I realize that testing in particular could easily be its own course, so I look forward to seeing this one on the course list soon.
A few videos on string manipulation and searching - such as with regular expressions - would have been appropriate, but it doesn't feel like it is missing. The course does cover using the CSV module, which accounts for half of the string search/manipulation that I perform. If you think that you know the CSV module, you should watch anyway. There is a cool little trick that makes unpacking CSV data easy and robust.
I'm not advanced enough to know if application deployment should be a course of its own or get its own half-hour here. Kennedy does touch on a method or two to get code to execute in Python2 and Python3 environments, but not the whole picture. A few minutes on application deployment would be very helpful.
If you are looking for a modern GUI course, you might want to pass on this one. All user interaction is completed in the command line, allowing the viewer to learn Python. Some courses/languages - I'm looking at you, Java - try to introduce the paradigm of GUI development to beginners still trying to learn the syntax and concepts of the language. Kennedy's simple user interaction allows the user to focus on Python manipulations, not frames and panes.
Most videos are in the 3-6min range, with the longest video at 13:21. These are pretty bite-sized and you could easily spread the information-dense 7.2-hour course out over a more lengthy amount of time. If you watch these straight-through, you will miss out on half of the value of the course. Take your time, pause the video, re-watch sections that you missed, and type out the examples yourself. Experienced programmers should expect to dedicate 12-ish hours to the task. Beginners should check out some books or tutorials for when they don't fully understand an example. I would expect new programmers to easily spend 30 hours or more learning the terminology and techniques.
I came at this course having developed a few small applications for my own purposes. I have exposure to most of the material already, so I was able to do about 3 applications/day without much of a problem.
There isn't anything in this course that isn't covered elsewhere for free. That being said, I have spent many hours learning the 'getting started gotchas' one hard lesson at a time. Had I gone through this a few years back, I suspect that I wouldn't have had some of the headaches that I suffered through.
For a beginner, I would recommend trying to develop one app/day. That may be ambitious, particularly on the last few applications. Try not to move forward without actually understanding what is being shown. The topics highlighted by the author as a 'Core Concept' should be understood thoroughly before advancing.
For the in-betweeners, I would still recommend watching this course through in its entirety and manually typing out the applications that have features that you aren't familiar with. I feel like I could run through some portion of these videos every few months and learn a bit more every time.
For the seasoned Python developer, this is likely a pass. Having said that, I would be very surprised if anyone - including Michael! - wasn't able to pick up a new trick or remember an old trick long-forgotten.
I highly recommend this course to developers of all vintages, but I suspect that it hits its sweetest spot with coders already familiar with at least one other object-oriented language.
My thanks to Michael Kennedy for publishing his video series and for permission to utilize his images and artwork. Keep up the great work! | http://forembed.com/review-python-jumpstart-by-building-10-apps.html | CC-MAIN-2018-39 | refinedweb | 1,719 | 62.17 |
#include "Tlc5940.h"

void setup()
{
  Tlc.init();
  randomSeed(analogRead(0));
}

void loop()
{
  Tlc.clear();
  for (int i = 0; i < 32; i++) {
    Tlc.set(random(0, 95), random(2047, 4095));
  }
  Tlc.update();
  delay(500);
}
The decoupling caps you need are 0.1 uF ceramic, one on each chip, as close to the power supply pins as possible. The best is to solder a surface-mount cap directly across the two pins. A photo of your setup would help identify other problems of layout or construction.
My direct experience with the 5941 chip is that the datasheet-recommended 0.1 uF is not enough. I believe it has to do with fluctuations in the shared GND when the chip turns on its 80 mA LEDs and so suddenly dumps a lot of power on the GND plane. But this is mostly guesswork, because my scope is limited to 100 MHz.
Terms defined: Singleton pattern, actual result (of test), assertion, caching, defensive programming, design pattern, dynamic loading, error (in a test), exception handler, expected result (of test), exploratory programming, fail (a test), fixture, global variable, introspection, lifecycle, pass (a test), side effect, test runner, test subject, throw (exception), unit test
We have written many small programs in the previous two chapters, but haven't really tested any of them. That's OK for exploratory programming, but if our software is going to be used instead of just read, we should try to make sure it works.
A tool for writing and running unit tests is a good first step. Such a tool should:
- find files containing tests;
- find the tests in those files;
- run the tests;
- capture their results; and
- report each test's result and a summary of those results.
Our design is inspired by tools like Mocha and Jest, which were in turn inspired by tools built for other languages from the 1980s onward (Meszaros 2007; Tudose 2020).
How should we structure unit testing?
As in other unit testing frameworks, each test will be a function of zero arguments so that the framework can run them all in the same way. Each test will create a fixture to be tested and use assertions to compare the actual result against the expected result. The outcome can be exactly one of:
Pass: the test subject works as expected.
Fail: something is wrong with the test subject.
Error: something wrong in the test itself, which means we don't know whether the test subject is working properly or not.
To make this work,
we need some way to distinguish failing tests from broken ones.
Our solution relies on the fact that exceptions are objects
and that a program can use introspection
to determine the class of an object.
If a test throws an exception whose class is
assert.AssertionError,
then we will assume the exception came from
one of the assertions we put in the test as a check.
Any other kind of exception indicates that the test itself contains an error.
How can we separate test registration, execution, and reporting?
To start, let's use a handful of global variables to record tests and their results:
// State of tests.
const HopeTests = []
let HopePass = 0
let HopeFail = 0
let HopeError = 0
We don't run tests immediately
because we want to wrap each one in our own exception handler.
Instead,
the function
hopeThat saves a descriptive message and a callback function that implements a test
in the
HopeTests array.
// Record a single test for running later.
const hopeThat = (message, callback) => {
  HopeTests.push([message, callback])
}
Independence
Because we're appending tests to an array, they will be run in the order in which they are registered, but we shouldn't rely on that. Every unit test should work independently of every other so that an error or failure in an early test doesn't affect the result of a later one.
Finally,
the function
main runs all registered tests:
// Run all of the tests that have been asked for and report summary.
const main = () => {
  HopeTests.forEach(([message, test]) => {
    try {
      test()
      HopePass += 1
    } catch (e) {
      if (e instanceof assert.AssertionError) {
        HopeFail += 1
      } else {
        HopeError += 1
      }
    }
  })
  console.log(`pass ${HopePass}`)
  console.log(`fail ${HopeFail}`)
  console.log(`error ${HopeError}`)
}
If a test completes without an exception, it passes.
If any of the
assert calls inside the test raises an
AssertionError,
the test fails,
and if it raises any other exception,
it's an error.
After all tests are run,
main reports the number of results of each kind.
Let's try it out:
// Something to test (doesn't handle zero properly).
const sign = (value) => {
  if (value < 0) {
    return -1
  } else {
    return 1
  }
}

// These two should pass.
hopeThat('Sign of negative is -1', () => assert(sign(-3) === -1))
hopeThat('Sign of positive is 1', () => assert(sign(19) === 1))

// This one should fail.
hopeThat('Sign of zero is 0', () => assert(sign(0) === 0))

// This one is an error.
hopeThat('Sign misspelled is error', () => assert(sgn(1) === 1)) // eslint-disable-line

// Call the main driver.
main()
pass 2
fail 1
error 1
This simple "framework" does what it's supposed to, but:
It doesn't tell us which tests have passed or failed.
Those global variables should be consolidated somehow so that it's clear they belong together.
It doesn't discover tests on its own.
We don't have a way to test things that are supposed to raise
AssertionError. Putting assertions into code to check that it is behaving correctly is called defensive programming; it's a good practice, but we should make sure those assertions are failing when they're supposed to, just as we should test our smoke detectors every once in a while.
How should we structure test registration?
The next version of our testing tool solves the first two problems in the original by putting the testing machinery in a class. It uses the Singleton design pattern to ensure that only one object of that class is ever created Osmani2017. Singletons are a way to manage global variables that belong together like the ones we're using to record tests and their results. As an extra benefit, if we decide later that we need several copies of those variables, we can just construct more instances of the class.
The file
hope.js defines the class and exports one instance of it.
This strategy relies on two things:
Node executes the code in a JavaScript module as it loads it, which means that it runs
new Hope() and exports the newly-created object.
Node caches modules so that a given module is only loaded once no matter how many times it is imported. This ensures that
new Hope() really is only called once.
Once a program has imported
hope,
it can call
Hope.test to record a test for later execution
and
Hope.run to execute all of the tests registered up until that point.
Finally,
our
Hope class can report results as both a terse one-line summary and as a detailed listing.
It can also provide the titles and results of individual tests
so that if someone wants to format them in a different way (e.g., as HTML) they can do so.
Who's calling?
Hope.test uses the
caller module
to get the name of the function that is registering a test.
Reporting the test's name helps the user figure out where to start debugging;
getting it via introspection
rather than requiring the user to pass the function's name as a string
reduces typing
and guarantees that what we report is accurate.
Programmers will often copy, paste, and modify tests;
sooner or later (probably sooner) they will forget to modify
the copy-and-pasted function name being passed into
Hope.test
and will then lose time trying to figure out why
test_this is failing
when the failure is actually in
test_that.
How can we build a command-line interface for our test manager?
Most programmers don't enjoy writing tests,
so if we want them to do it,
we have to make it as painless as possible.
A couple of
import statements to get
assert and
hope
and then one function call per test
is about as simple as we can make the tests themselves:
import assert from 'assert'
import hope from './hope.js'

hope.test('Sum of 1 and 2', () => assert((1 + 2) === 3))
But that just defines the tests—how will we find them so that we can run them?
One option is to require people to
import each of the files containing tests
into another file:
// all-the-tests.js
import './test-add.js'
import './test-sub.js'
import './test-mul.js'
import './test-div.js'

Hope.run()
...
Here,
all-the-tests.js imports other files so that they will register tests
as a side effect via calls to
hope.test
and then calls
Hope.run to execute them.
It works,
but sooner or later (probably sooner) someone will forget to import one of the test files.
A better strategy is to load test files dynamically.
While
import is usually written as a statement,
it can also be used as an
async function
that takes a path as a parameter and loads the corresponding file.
As before, loading each file runs its top-level code, which registers tests as a side effect:
import minimist from 'minimist'
import glob from 'glob'
import hope from './hope.js'

const main = async (args) => {
  const options = parse(args)
  if (options.filenames.length === 0) {
    options.filenames = glob.sync(`${options.root}/**/test-*.js`)
  }
  for (const f of options.filenames) {
    await import(f)
  }
  hope.run()
  const result = (options.output === 'terse')
    ? hope.terse()
    : hope.verbose()
  console.log(result)
}

main(process.argv.slice(2))
By default,
this program finds all files below the current working directory
whose names match the pattern
test-*.js
and uses terse output.
Since we may want to look for files somewhere else,
or request verbose output,
the program needs to handle command-line arguments.
The
minimist module does this
in a way that is consistent with Unix conventions.
Given command-line arguments after the program's name
(i.e., from
process.argv[2] onward),
it looks for patterns like
-x something
and creates an object with flags as keys and values associated with them.
Filenames in
minimist
If we use a command line like
pray.js -v something.js,
then
something.js becomes the value of
-v.
To indicate that we want
something.js added to the list of trailing filenames
associated with the special key
_ (a single underscore),
we have to write
pray.js -v -- something.js.
The double dash is a common Unix convention for signalling the end of parameters.
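To make the convention concrete, here is a toy re-implementation of the parsing behavior described above (the parse helper called in main is not shown in this excerpt). Real minimist handles far more cases, such as flag clustering, = values, and type coercion; this sketch only mirrors the '-x something' and '--' rules:

```javascript
// Toy sketch of minimist-style parsing (illustration only).
const toyParse = (args) => {
  const result = { _: [] }
  let onlyFilenames = false
  for (let i = 0; i < args.length; i += 1) {
    const arg = args[i]
    if (onlyFilenames || !arg.startsWith('-')) {
      result._.push(arg) // trailing filename
    } else if (arg === '--') {
      onlyFilenames = true // everything after '--' is a filename
    } else {
      const key = arg.replace(/^-+/, '')
      const next = args[i + 1]
      if ((next !== undefined) && !next.startsWith('-')) {
        result[key] = next // the '-x something' pattern
        i += 1
      } else {
        result[key] = true // a bare flag
      }
    }
  }
  return result
}

console.log(toyParse(['-v', 'something.js']))       // { _: [], v: 'something.js' }
console.log(toyParse(['-v', '--', 'something.js'])) // { _: [ 'something.js' ], v: true }
```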
Our test runner is now complete, so we can try it out with some files containing tests that pass, fail, and contain errors:
node pray.js -v
passes:
  /u/stjs/unit-test/test-add.js::Sum of 1 and 2
  /u/stjs/unit-test/test-sub.js::Difference of 1 and 2
fails:
  /u/stjs/unit-test/test-div.js::Quotient of 1 and 0
  /u/stjs/unit-test/test-mul.js::Product of 1 and 2
errors:
  /u/stjs/unit-test/test-missing.js::Sum of x and 0
Infinity is allowed
test-div.js contains the line:
hope.test('Quotient of 1 and 0', () => assert((1 / 0) === 0))
This test counts as a failure rather than an error
because JavaScript thinks the result of dividing by zero is the special value
Infinity
rather than an arithmetic error.
Loading modules dynamically so that they can register something for us to call later
is a common pattern in many programming languages.
Control flow goes back and forth between the framework and the module being loaded
as this happens
so we must specify the lifecycle of the loaded modules quite carefully.
Here is what happens when a pair of files test-add.js and test-sub.js are loaded by our framework:
- pray loads hope.js.
- hope.js creates a single instance of the class Hope.
- pray uses glob to find files with tests.
- pray loads test-add.js using import as a function.
- As test-add.js runs, it loads hope.js. Since hope.js is already loaded, this does not create a new instance of Hope.
- test-add.js uses hope.test to register a test (which does not run yet).
- pray then loads test-sub.js…
- …which loads Hope…
- …then registers a test.
- pray can now ask the unique instance of Hope to run all of the tests, then get a report from the Hope singleton and display it.
Exercises
Asynchronous globbing
Modify
pray.js to use the asynchronous version of
glob rather than
glob.sync.
Timing tests
Install the
microtime package and then modify the
dry-run.js example
so that it records and reports the execution times for tests.
Approximately equal
Write a function
assertApproxEqual that does nothing if two values are within a certain tolerance of each other but throws an exception if they are not:
# throws exception
assertApproxEqual(1.0, 2.0, 0.01, 'Values are too far apart')

# does not throw
assertApproxEqual(1.0, 2.0, 10.0, 'Large margin of error')
Modify the function so that a default tolerance is used if none is specified:
# throws exception
assertApproxEqual(1.0, 2.0, 'Values are too far apart')

# does not throw
assertApproxEqual(1.0, 2.0, 'Large margin of error', 10.0)
Modify the function again so that it checks the relative error instead of the absolute error. (The relative error is the absolute value of the difference between the actual and expected value, divided by the absolute value of the expected value.)
Rectangle overlay
A windowing application represents rectangles using objects with four values:
x and
y are the coordinates of the lower-left corner,
while
w and
h are the width and height.
All values are non-negative:
the lower-left corner of the screen is at
(0, 0)
and the screen's size is
WIDTH x HEIGHT.
Write tests to check that an object represents a valid rectangle.
The function
overlay(a, b) takes two rectangles and returns either a new rectangle representing the region where they overlap or null if they do not overlap. Write tests to check that overlay is working correctly.
Do your tests assume that two rectangles that touch on an edge overlap or not? What about two rectangles that only touch at a single corner?
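One possible overlay, written under the assumption that rectangles touching only along an edge or at a corner do not overlap (the last bullet asks you to make that choice explicitly):

```javascript
// Rectangles are {x, y, w, h}; returns the overlapping rectangle or null.
const overlay = (a, b) => {
  const left = Math.max(a.x, b.x)
  const right = Math.min(a.x + a.w, b.x + b.w)
  const bottom = Math.max(a.y, b.y)
  const top = Math.min(a.y + a.h, b.y + b.h)
  if ((right <= left) || (top <= bottom)) {
    return null // no area in common (touching edges count as no overlap)
  }
  return { x: left, y: bottom, w: right - left, h: top - bottom }
}

console.log(overlay({ x: 0, y: 0, w: 2, h: 2 }, { x: 1, y: 1, w: 2, h: 2 }))
// { x: 1, y: 1, w: 1, h: 1 }
console.log(overlay({ x: 0, y: 0, w: 1, h: 1 }, { x: 5, y: 5, w: 1, h: 1 }))
// null
```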
Selecting tests
Modify
pray.js so that if the user provides
-s pattern or
--select pattern
then the program only runs tests in files that contain the string
pattern in their name.
Tagging tests
Modify
hope.js so that users can optionally provide an array of strings to tag tests:
hope.test('Difference of 1 and 2', () => assert((1 - 2) === -1), ['math', 'fast'])
Then modify
pray.js so that if users specify either
-t tagName or
--tag tagName
only tests with that tag are run.
Mock objects
A mock object is a simplified replacement for part of a program
whose behavior is easier to control and predict than the thing it is replacing.
For example,
we may want to test that our program does the right thing if an error occurs while reading a file.
To do this,
we write a function that wraps
fs.readFileSync:
const mockReadFileSync = (filename, encoding = 'utf-8') => {
  return fs.readFileSync(filename, encoding)
}
and then modify it so that it throws an exception under our control.
For example,
if we define
MOCK_READ_FILE_CONTROL like this:
const MOCK_READ_FILE_CONTROL = [false, false, true, false, true]
then the third and fifth calls to
mockReadFileSync throw an exception instead of reading data,
as do any calls after the fifth.
Write this function.
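A sketch of one possible answer. To keep it runnable anywhere, the success path returns a placeholder string instead of actually delegating to fs.readFileSync:

```javascript
// The control array: true means "this call throws". The third and fifth
// calls fail, as do all calls past the end of the array.
const MOCK_READ_FILE_CONTROL = [false, false, true, false, true]
let mockCalls = 0

const mockReadFileSync = (filename, encoding = 'utf-8') => {
  const index = mockCalls
  mockCalls += 1
  const shouldFail = (index >= MOCK_READ_FILE_CONTROL.length) ||
    MOCK_READ_FILE_CONTROL[index]
  if (shouldFail) {
    throw new Error(`mock failure reading ${filename}`)
  }
  // A real mock would now return fs.readFileSync(filename, encoding);
  // the placeholder keeps this sketch self-contained.
  return `contents of ${filename}`
}
```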
Setup and teardown
Testing frameworks often allow programmers to specify a
setup function
that is to be run before each test
and a corresponding
teardown function
that is to be run after each test.
(setup usually re-creates complicated test fixtures, while teardown functions are sometimes needed to clean up after tests, e.g., to close database connections or delete temporary files.)
Modify the testing framework in this chapter so that if a file of tests contains something like this:
const createFixtures = () => {
  ...do something...
}

hope.setup(createFixtures)
then the function
createFixtures will be called
exactly once before each test in that file.
Add a similar way to register a teardown function with
hope.teardown.
Multiple tests
Add a method
hope.multiTest that allows users to specify
multiple test cases for a function at once.
For example, this:
hope.multiTest('check all of these', functionToTest, [
  [['arg1a', 'arg1b'], 'result1'],
  [['arg2a', 'arg2b'], 'result2'],
  [['arg3a', 'arg3b'], 'result3']
])
should be equivalent to this:
hope.test('check all of these 0',
  () => assert(functionToTest('arg1a', 'arg1b') === 'result1')
)
hope.test('check all of these 1',
  () => assert(functionToTest('arg2a', 'arg2b') === 'result2')
)
hope.test('check all of these 2',
  () => assert(functionToTest('arg3a', 'arg3b') === 'result3')
)
Assertions for sets and maps
Write functions
assertSetEqual and
assertMapEqual that check whether two instances of
Setor two instances of
Mapare equal.
Write a function
assertArraySame that checks whether two arrays have the same elements, even if those elements are in different orders.
Testing promises
Modify the unit testing framework to handle
async functions,
so that:
hope.test('delayed test', async () => {...})
does the right thing.
(Note that you can use
typeof to determine whether the object given to
hope.test
is a function or a promise.) | https://stjs.tech/unit-test/ | CC-MAIN-2021-39 | refinedweb | 2,718 | 64.81 |
Clojure Tutorial: misc notes
This page is WORK IN PROGRESS.
Rich Hickey fanclub
Clojure Transducer
saved for later reading.
ISeq is an interface
[Inside Clojure's Collection Model By Alex Miller. At , accessed on 2016-03-17 ]
to read
Java's class tree and interface tree and their inter-connections are really overwhelming
Clojure Doc Convention
Clojure docs use lots of conventions, but as far as I know they are not explained. (In contrast, Racket Scheme Lisp docs also have lots of conventions, but they are more logically sound and explicitly explained.)
here's a example of clojure doc:
Usage: (fn name? [params*] exprs*) (fn name? ([params*] exprs*) +)
?means optional.
*means 0 or more.
+means 1 or more.
Usage: (every-pred p) (every-pred p1 p2) (every-pred p1 p2 p3) (every-pred p1 p2 p3 & ps)
& somemeans 0 or more args.
clojure.core - Clojure v1.8 API documentation#clojure.core/fn
sort in clojure
sort a array/list
(def xx [7 3 1 8]) (sort xx ) ; (1 3 7 8) (sort (fn [a b] (> a b)) xx) ; (8 7 3 1)
what happens when you use “sort” on a map data type?
(def xx {:b 4, :a 1, :c 2, :d 3 }) xx ; {:c 2, :b 4, :d 3, :a 1} (sort xx) ; ([:a 1] [:b 4] [:c 2] [:d 3])
review function multi-arity
(time expr) → Evaluates expr and prints the time it took. Returns the value of expr. clojure.core/time
def can be used without value, that is, just declare.
clojure.core/bind
bind is for binding.
misc Clojure notepad
;; load the file yy.clj in directory xx (load "xx/yy")
;; compile the file yy.clj (compile 'xx.yy)
Clojure compiles to the directory stored in
*compile-path*. Default value is
"classes" (relative to current directory.)
clojure.core/*compile-path*
see also:
Clojure or lein by default runs a function named
-main.
In lein, you can specify in the defproject file in key
:main
gorilla-repl notes
rendering process steps
- get expression, send to Clojure to eval.
- turn the
*out* (or something like that) into a presentation X
- turn X into JSON and send to browser. The browser turns this into DOM.
this function
gorilla-renderable.core/render is the main one. Like
pr for repl, but for gorilla-repl.
gorilla-renderable.core/render is the sole function in the “Renderable” protocol.
[Rich Hickey Q&A By Michael Fogus. At , accessed on 2014-11-21 ]
What is Clojure edn?
edn is “extensible data notation”.
edn has Clojure's lisp syntax. edn is to Clojure as JSON is to JavaScript.
edn home at
to understand clojure in depth, the best is Rich Hickey's “Reference Documentation” essays
Any programing tutorial that mention patterns, agile, idioms, koan, ninja, zen, tao, monk, cartoons, mentions of Martin Fowler, needs to be garbage collected.
clojure problem: string not in core
string manipulation is the most important element in modern programing. This is the first thing you teach to beginners, about any language.
But with Clojure, it's not core part of the language. You need to load it as a lib, or, goto Java's string stuff.
Try to tell it to beginners. You either face the clojure namespace complexity, or dealing with one bag of the Java interoperation complexity.
PS: actually, just prefix the namespace. For example,
(clojure.string/upper-case "xyz").
One Weird Annoyance About Clojure: Java
one annoying thing about Clojure is that it runs on Java Virtual Machine. Clojure is actually a so-called hosted language, and this is by spec. (and JavaScript is a hosted language too. [see JavaScript Tutorial: Understand JavaScript in Depth]).
This practically means is that, Clojure isn't a “fixed” language. For example, when it's running on JVM, its regex is Java syntax, its string functions are Java language's string methods. When it runs in JavaScript world (as ClojureScript do), its regex is JavaScript's regex. [see JavaScript Regex Syntax]
But, that's not all. The MOST annoying thing is that, you actually need to know a lot about stinking Java in order to do any real world work with Clojure. (don't let any hacker type fanatics tell you otherwise) You need to know Java class/object system, its package system, its namespace scheme, Java Virtual Machine system, its compilation cycle, its tools and achive files. Even the official Clojure documentation, outright simply say things like “returns a Java xyz object.” with no further explanation, in lots of places.
it is not to be taken lightly, that when you learn Clojure, you actually need to gradually understand a lot about the Java ecosystem too.
but at least, thank god, it's a lisp that has industrial value, and is a Java replacement.
Clojure Tutorial: protocol
clojure parser instaparse, and excellency of clojure docs
A clojure parser lib, instaparse:, seems to be the topnotch parser. Instaparse is written by Mark Engelberg.
It has excellent tutorial too. Clear, to the point.
- No juvenile unix humor.
- No puerile perl drivel. [see Perl Documentation: the Key to Perl]
- No verbosity plus propaganda as GNU documentation, for example, emacs manual [see Problems of Emacs's Manual]
- No academic mumbo-jumbo, a la Haskell's documentation. [see Idiocy of Computer Language Docs: Unix, Python, Perl, Haskell]
so far i've seen, clojure community's doc is top-notch. The official documentation is excellent. The lib/build manager Leiningen's documentation is excellent. [see Clojure Leiningen Tutorial] doc by dummies, Technical writing at its worst
Clojure doc need cross-links, and example for some functions. 3rd party docs notwithstanding. For example, reading about
some->, what the hell is “thread”? no link to
->, and no example. Clojure Tutorial.
looks like Clojure solved another lisp problem: piping functions
(-> expr1 expr2 …) but Clojure calls it “threading”. LISP Syntax Problem of Piping Functions
persistent data structure
learned something new: persistent data structure, as in Clojure. (it's not “saved to disk”)
In computing, a persistent data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure. (A persistent data structure is not a data structure committed to persistent storage, such as a disk; this is a different and unrelated sense of the word “persistent.”).[1]
These types of data structures are particularly common in logical and functional programming, and in a purely functional program all data is immutable, so all data structures are automatically fully persistent.[1].
source: Wikipedia Persistent data structure
Clojure is one big copy of Wolfram Language
Clojure is one big copy of Wolfram Language (aka Mathematica), even down to many function names (those sequence functions).
Of course, it is a inferior copy.
- It provides many “reader” level syntax sugars, but not a complete syntax layer meta-expression as Wolfram Language. [see Wolfram Language, Mathematica]
- It provides a cleaned up lisp macros, but no comprehensive term-rewriting system called pattern matching.
- It has a linear command line interface (of which the hacker idiots calls REPL), but not a Notebook (For example, gorilla-repl is trying to provide. [see Interactive Clojure Notebook: gorilla-repl])
- It lacks integrated plotting/graphics. (again, gorilla-repl tries)
- Its syntax is nested structure, but not the source code file. Mathematica, the entire source code file (notebook) is one sexp, and can be programed.
- Clojure doesn't include a renderer, for example, relies on markdown, HTML, TeX, etc extra to write/process docs. Mathematica, it's part of the language, render/typeset document/presentation or math formulas more complex than TeX, in real time, and automatically, no need to learn special syntax. [see Math Typesetting, Mathematica, MathML]
- Clojure now does fractions and infinite precision arithemitcs automatically, but lacks the thousands functions for doing math, such as integration, derivative, Solve equations, matrix, ….
Google Plus discussion | http://xahlee.info/clojure/clojure_misc.html | CC-MAIN-2018-51 | refinedweb | 1,317 | 57.57 |
Lance's Whiteboard
Random scribbling about C#, Javascript, Web Development, Architecture, and anything else that pops into my mind.
My past year as an HTML5/JavaScript developer… Part 1
I’ve been fairly mum on work for the past year, but I figured I should do a brain-dump of some of the things I have learned and experienced as a HTML5/Javascript developer.
The pitfalls of GDD.
Over the years, I have sampled approaches to software development ranging from RAD, XP, Waterfall, Agile, Scrum, SOA, TDD, and have recently started looking more seriously at the BDD/DDD(D) camps. However, throughout my forays into this potpourri of acronyms and metaphors for programming, I continue to find myself falling back on the crutch of GDD – the least Agile and productive approach of all.
T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project.
I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template which was originally built for Visual Studio 2008.
Seadragon & Deep Zoom
I stumbled upon this today and definitely want to play with this further when I have time....
FW: Batch Updates and Deletes with LINQ to SQL
I'm currently on a project creating a proprietary data-migration tool using C# & Linq. I'm still new to Linq, but quickly discovered the challenges of doing mass-updates and deletes with Linq.
Minimum & Maximum Dates in code
When updating SQL columns that need a minimum or maximum date, consider using the defaults from the System.Data.SqlTypes namespace:
What we don't know "will" hurt us...
Argotic Syndication Framework 2008 released.
I love ClearContext!!
After several months of using the Free version of the ClearContext addon for Microsoft Outlook, I just can't imagine what I would do without it. It has reduced my email time, kept me more organized, and uncluttered my Inbox better & faster than any ad-hoc system I have devised in the past.
Manual CRUD operations with the Telerik RadGrid control. | https://weblogs.asp.net/lhunt | CC-MAIN-2018-22 | refinedweb | 339 | 60.55 |
Friendships and Intimate Relationships: Relationship Types, Attachment Styles, and Satisfaction Theories
Fight or Flight
Relationships-The Beautiful Struggle
When asked what made them happy, many respondents answered that having friends and other positive relationships was at the top of their list (Aronson, Wilson, & Akert, 2010). Relationships are what get us through our days. Imagine the person with whom you have the closest relationships. Can you imagine life without them?
In the article below, intimate relationships are discussed, including exchange and communal relationships, attachment styles, and three theories of relationship satisfaction. The beautiful struggle that is relationships; are they worth it?
Exchange Versus Communal Relationships
Intimate relationships develop on the basis of attraction (Aronson et al., 2010). Did you choose your significant other based on looks? The five factors that influence personal attraction are proximity, reciprocal liking, physical attractiveness, similarity, and evolution, which consists of the partner being able to reproduce (Aronson et al.). Once attraction brings two people together, what happens next? Read on to find out!
In exchange type relationships, the individuals in the relationship exchange benefits evenly (Aronson et al., 2010). When one person provides a favorable action or behavior, the other is expected to reciprocate (Aronson et al.). For instance, if one partner offers a back rub to the other, the one who offered, is expecting a favorable action in return, such as a massage or foot rub. The problem here is that partners keep track of their efforts, and feel neglected or taken advantage of when they are exerting more than they are getting in return (Aronson et al.).
In communal relationships, partners do things for the other without the need or expectation of repayment. Partners fulfill the other’s needs, and do not care whether or not they are paid back (Aronson et al., 2010). Following the example above, in this case, if one partner gives a back rub to the other, it is done out of kindness, and nothing is expected in return. Communal relationships are expected to be long lasting and intimate (Aronson et al.). These relationships are hard, because we have to be willing to give up our selfish desires. I don't know about you, but I would rather have my back rubbed than give a back rub! Speaking of back rubs, when I receive one, my husband uses this amazing tingler on my scalp, and it is worth a million bucks! It makes my whole body tingle, and for the price, it can't be beat!
Amazing Body Tingler
If you have never tried this before, I seriously recommend buying one now. I use them when I have a headache, and they work wonders! At that price, they can't be beat and you can get endless fun out of them. We use them on our toddler sometimes and the way it makes him squirm is hilarious! I also recommend trying it on your dog, you won't be disappointed!
Attachment Styles
Attachment styles refer to “the expectations people develop about relationships with others, based on the relationship they had with their primary caregiver when they were infants” (Aronson et al., 2010, p. 284). There are three different styles, and those are secure, avoidant, and anxious or ambivalent (Aronson et al.). Attachment styles have an impact on the level of intimacy shared in friendships and relationships. Those who grew up with the secure attachment style, for instance, are expected to have mature, long-lasting relationships with their partner (Aronson et al.). Those with the avoidant attachment style, on the other hand, have a hard time developing intimate, close relationships (Aronson et al.). Last, those with the anxious or ambivalent attachment style spend much of their time worrying. Though they crave closeness, they worry that their partner does not return the strong feelings they have for him or her (Aronson et al.).
Social Exchange Theory
Relationships are easier to maintain when the couple has similar attitudes (Aronson et al., 2010). This is because having similar attitudes helps the individuals in the relationship to feel that spending time together is worthwhile and satisfying (Aronson et al.). People also feel satisfied when they are around someone who likes them (Aronson et al.). In terms of the social exchange theory, how people sense a relationship depends on their “perceptions of the rewards they receive from it, their perceptions of the costs they incur, and their beliefs regarding what kind of relationship they deserve” (Aronson et al., p. 287). Relationships are more likely to last when the social rewards outweigh the costs (Aronson et al.). On the other hand, a relationship is more likely to end when the relationship experiences more turmoil than praise or validation (Aronson et al.).
My husband and I spend most of our time dealing with our children, but when we aren't, we like to play games together for fun! Neither one of us is a board game person, but we went out and bought one when we were stuck in a hotel one night, and have been addicted ever since! They cause much needed laughter and bring us close together, a feeling we miss when we are ships passing in the night.
Fun
The only real point to this game is fun. I highly recommend it if you and your significant other are looking for some humor or laughing.
Feisty
Strategy
We play this when the kids go to bed early and we are tired of sitting there watching TV every night. It actually gets good conversation going, and we don't just play for fun! To spice it up, play for back rubs or dates. I am typically the winner, so I have gotten a few foot massages and a movie night where I got to pick the movie from this game!
Equity Theory
Different from the social exchange theory, the equity theory highlights the need for fairness in relationships (Aronson et al., 2010). The equity theory believes that people in relationships want the rewards they experience to be equal to the rewards the other person in the relationship experiences (Aronson et al.). It is thought that equitable relationships are the most rewarding and stable (Aronson et al.). Relationships are maintained when both parties feel an equal amount of rewards from the relationship. On the other hand, relationships end when one partner is feeling over benefited, and the other is feeling under benefited (Aronson et al.). What that means is that one person is getting most of the rewards and yet incurring few costs, while the other is not getting many rewards, but sustaining many costs (Aronson et al.).
Penetration Theory
The social penetration theory refers to the corresponding actions that take place between a couple in the expansion of a mutual relationship (Taylor, 1968). The behaviors in reference include an interchange of little details such as viewpoints or values, an interchange of emotions bearing positive or negative affect, and mutual activities such as athletics or reading (Taylor). Relationships start off much like friendships, as two individuals take time getting to know one another. This is when casual conversations take place, before private and personal matters become the topic of discussion (Taylor). Relationships are maintained when the couple can predict the emotional reactions of one another and important matters can be discussed (Taylor). Relationships end, however, when costs exceed the benefits and there is a withdrawal of disclosure (Taylor).
Are You Thinking About Your Relationship?
Relationships are everywhere, and there are many different types. Two different types along with theories on satisfaction in relationships were discussed above. Exchange relationships versus communal relationships was one concept explored. Also explored were three different attachment styles; secure, avoidant, and anxious or ambivalent. Last, the three theories were social exchange theory, equity theory, and penetration theory.
Are you in a committed relationship? Maybe you just have a best friend, mom, sibling that you care for. Either way, relationships are a meaningful part of this life, and they are worth holding on to. Work hard, and remember, all relationships take work.
Love
If you could be granted someone to spend the rest of your life with or endless money until you die, which would you choose?
Does Your Relationship Look Like This?
| https://hubpages.com/relationships/Friendships-and-Intimate-Relationships-Relationship-Types-Attachment-Styles-and-Satisfaction-Theories | CC-MAIN-2017-43 | refinedweb | 1,367 | 52.8 |
Hello folks! As we know, dotty is the new Scala compiler (also known as Scala 3.0), which is coming with some new features and improvements. To get more details about dotty and its environment setup, please follow our beginner guide blog.
In this blog, I will describe the newly introduced union data type of dotty and its various properties.
A union is a data type, denoted by the “|” pipe infix operator, and it looks like A | B. A union type (A | B) includes all values of type A and also all values of type B.
The primary reason for introducing union types in Scala is that they allow us to guarantee that for every set of types, we can always form a finite least upper bound (lub). This is useful both in practice and in theory. The type system of Scala 3 (i.e. dotty) is based on the DOT calculus, which also has union data types.
A union type can also be joined with non-union types to form finite join types: we can define the join of a union type T1 | … | Tn as the smallest intersection type of base class instances of T1, …, Tn.
Syntactically, unions follow the same rules as intersections, but have a lower precedence than intersection(&).
Example of Dotty’s Union Types –
You can find the complete example of the Dotty union type here.
In this example, we have two case class models:
- UserName
- Password
Both have a common method lookup(), which displays the UserName or Password info based on the type it is called on.
We have also defined a getLoginInfo() method which takes a union type as an argument and returns a union type value.
def getLoginInfo(loginInfo: UserName | Password): Password | UserName = {
  loginInfo match {
    case u@UserName(_) =>
      u.lookup
      u
    case p@Password(_) =>
      p.lookup
      p
  }
}
Union types satisfy the commutative property, for example: A | B =:= B | A
val value1: UserName | Password = getLoginInfo(UserName("Narayan"))
val value2: Password | UserName = getLoginInfo(UserName("Narayan"))
if (value1 == value2) println("Commutative properties satisfy")
else println("Commutative properties does not satisfy")
Other Important points are –
- The union type also satisfies distributive properties over the intersection (&) type.
- The join operation on union types guarantees that the join is always finite.
- The members of a union type are also the members of its join.
- The union type “|” operator has lower precedence than both the intersection (&) and type inference pattern (“:”) operators.
- We can effectively use union types in pattern matching.
In the next blog we will describe a few more new data types of dotty, like type lambdas, match types, etc.
Stay tuned, happy learning!!!
References: | https://blog.knoldus.com/dotty-union-data-types/ | CC-MAIN-2021-43 | refinedweb | 438 | 64.71 |
Why does the BufferedHttpResponse stop processing requests?
I am doing a modification to the previous question I asked, because when I dug deeper I could find out where the request waits. I am testing Swift by generating synthetic latency inside the PUT method of obj/server.py.
My system consists of 5 storage nodes and 1 proxy. I am using replication degree 5 and have write quorum 1 and read quorum 5 for strong consistency, because it is a requirement of my test.
Then I inserted a synthetic latency (a while loop checking for primes, just 2-3 seconds) in only one of the storage nodes. What I expected to see is that the final result would not be impacted, because only one correct response is enough for a write to succeed.
Even though the proxy checks for the write quorum and breaks with only 1 correct response, when it initiates the second write request it waits at self._read_status() in the below function in common/bufferedhttp.py:
def expect_response(self):
    if self.fp:
        self.fp.close()
        self.fp = None
    self.fp = self.sock.makefile('rb', 0)
    version, status, reason = self._read_status()
    if status != CONTINUE:
        self._read_status = lambda: (version, status, reason)
        self.begin()
    else:
        self.status = status
        self.reason = reason.strip()
        self.version = 11
        self.msg = HTTPMessage(self.fp, 0)
        self.msg.fp = None
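One detail worth noting in the quoted function (an editorial aside): the assignment `self._read_status = lambda: (version, status, reason)` is a replay trick. The status line has already been consumed from the socket, so the method is swapped for a lambda that hands back the cached tuple on later calls. A stripped-down, runnable sketch of the same pattern, with a made-up class and sample data:

```python
class Response:
    """Mimics bufferedhttp's trick: once a non-interim status line has
    been consumed, _read_status is replaced so later calls replay it."""

    def __init__(self, lines):
        self._lines = iter(lines)   # stands in for the socket stream

    def _read_status(self):
        return next(self._lines)    # really consumes input

    def expect_response(self):
        status = self._read_status()
        if status != "HTTP/1.1 100 Continue":
            # The real status line is already consumed; stash it so the
            # next _read_status() call replays it instead of reading on.
            self._read_status = lambda: status

r = Response(["HTTP/1.1 201 Created", "Etag: abc"])
r.expect_response()
print(r._read_status())   # HTTP/1.1 201 Created  (replayed, not re-read)
print(r._read_status())   # HTTP/1.1 201 Created  (still the cached line)
```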
This happens, I guess, because the previous request is still in the PUT method due to the synthetic delay I added.
But I am confused. Is it the expected behaviour?
Why doesn't it allow the request to proceed?
What do the BufferedHttpConnection and BufferedHttpResponse do? | https://ask.openstack.org/en/question/62348/why-the-bufferedhttpresponse-stops-proceeding-requests/ | CC-MAIN-2021-04 | refinedweb | 266 | 60.92 |
I don’t know a thing about python - I was under the impression that I would be able to set some tool options and then create a button to ‘run’ them essentially. At the moment I just used the following - Straight from the script log -
import c4d
from c4d import gui

def main():
    c4d.CallCommand(431000015) # Bevel tool
    tool()[c4d.MDATA_BEVEL_RADIUS]=10
    tool()[c4d.MDATA_BEVEL_SUB]=0

if __name__=='__main__':
    main()
I’m just wanting to create a couple of presets to apply bevels & extrudes on selected edges / faces etc. Any help would be much appreciated.
Console - NameError: global name ‘tool’ is not defined. << that's one problem along with a few others I’m sure. | http://forums.cgsociety.org/t/python-script-log-tool-presets/1720101 | CC-MAIN-2018-39 | refinedweb | 113 | 65.32 |
Erik P. Olsen schrieb:
Thanks, so FQDN means that there MUST always be at least one dot in the hostname. What if I use epo.dk, which is a public domain owned by me, instead of your example olsen.intra? Does that cause any conflict?
To your first question: that a FQDN has at least one dot is a result of its definition. Chris already gave a link. In addition: if you use a valid, because publicly resolvable, domain name there is always the risk that as an effect a service request - here mail routing - goes a different route than you would expect and want to. I mean, if the resolvable domain name has DNS records pointing to different locations. To be specific: Sendmail does MX lookups and a query for your "epo.dk" domain gives a valid result:
$ host -t mx epo.dk
epo.dk mail is handled by 10 webhotel3.webhosting.dk.

Sure, you can try to avoid such pitfalls by doing specific setup "tricks". But isn't it easier to simply use a namespace for your LAN where there is no risk of it being publicly resolvable?
Alexander | https://listman.redhat.com/archives/fedora-list/2006-September/msg02688.html | CC-MAIN-2021-17 | refinedweb | 216 | 74.39 |
Here is my code to transform an elliptic curve in quartic form to Weierstrass form using WeierstrassForm_P2_112 and WeierstrassMap_P2_112.
We get the Weierstrass form along with the substitutions used for the transformation back to the quartic.
How to compute inverse transformation?
I cannot find anything in the documentation of WeierstrassForm_P2_112 or WeierstrassMap_P2_112.
Is the inverse transformation for this case implemented in Sage? - If not, why?
Isn't a one-way transformation useless if I cannot transform points found on the Weierstrass form back to points on the original quartic form?
Input:
from sage.schemes.toric.weierstrass import WeierstrassForm_P2_112
from sage.schemes.toric.weierstrass_covering import WeierstrassMap_P2_112

R.<x,y> = QQ[]
f = "y^2-(2*x^4+3*x^3+5*x^2+7*x+6)"
print "Elliptic curve in quartic form:", f, "\n"
f = R(f)
a, b = WeierstrassForm_P2_112(f, [x,y])
print "Coefficients of Weierstrass form:", (a,b), "\n"
print "Elliptic curve in Weierstrass form:", "Y^2-(X^3 + %s*X*Z^4 + %s*Z^6)" % (a,b), "\n"
X, Y, Z = WeierstrassMap_P2_112(f, [x,y])
print "Transformation: (X,Y,Z) =", (X,Y,Z), "\n"
print "Verifying transformation:"
print "Does the quartic form divide the Weierstrass form after transformation? Answer:", f.divides(-Y^2 + X^3 + a*X*Z^4 + b*Z^6)
Output:
Elliptic curve in quartic form: y^2-(2*x^4+3*x^3+5*x^2+7*x+6)

Coefficients of Weierstrass form: (-106/3, -911/27)

Elliptic curve in Weierstrass form: Y^2-(X^3 + -106/3*X*Z^4 + -911/27*Z^6)

Transformation: (X,Y,Z) = (-53/12*x^4 - 23*x^3 - 301/6*x^2 - 73/3*x - 31/4, 131/8*x^6 + 313/4*x^5 + 355/8*x^4 - 110*x^3 - 2295/8*x^2 - 1049/4*x - 367/8, y)

Verifying transformation:
Does the quartic form divide the Weierstrass form after transformation? Answer: True
| https://ask.sagemath.org/questions/49495/revisions/ | CC-MAIN-2020-50 | refinedweb | 313 | 55.13 |
Type: Posts; User: oldnewbie
Hi all:
Using MS Visual Studio 2008 C++ for Windows 32 (XP brand), I try to construct a POP3 client managed from a modeless dialog box.
As a first step I create a persistent object -say pop3-...
Paul:
Thanks again for your help.
Finally I've been able to find the error. The key has been your advice " or turn on the preprocessor generation and get the real code from the preprocessor...
Paul:
Thanks a lot for your answer. Although I'm not sure I have fully understood all your advice, I tried to follow it, so there is the new code -the error remains-.
I've posted in this...
Hi template gurus:
Using MS V2008 I found a problem that I'm unable to surmount. Here is a minimal reconstruction:
// header.h
#pragma once
#include <map>
struct pXXX {
hoxsiev:
Indeed it is. It's just the result of a newbie design and probably a poor solution to scroll the content of a dialog box which can grow and shrink in size -i.e. can include a variable number...
VladimirF:
I just added the LR_SHARED condition to my LoadImage functions, and although I've made just a quick pair of tests, the good old Task Manager suggests that you shot directly in the...
hoxsiev:
Because my WM_PAINT message treatment includes several routines -basically to move windows- and prior to any further investigation, do you suggest that the memory leak is in the process...
Not to say that I'm using MS VC++ and the plain vanilla API.
Hi all:
I have an application which uses some BS_OWNERDRAW style buttons, and in the dialog-procedure, something like this to draw them:
case WM_DRAWITEM:
{
LPDRAWITEMSTRUCT pdis =...
Unfortunately I can't help on that question, but I suppose that the application per se can't do that, because it goes against any security principle. I suppose that the application needs to be started...
May be interesting to read this "Nick on Silverlight and WPF"
Greetings.
ovidiucucu: thanks for your reply
My problem is not how to position or resize a child window in a dialog box. I believe it is my lack of a good understanding of the interaction between the WebBrowser...
Hello all:
I need to include a WebBrowser control in a window -more exactly, in a dialog box-. I'm using Visual C++ and the Windows API "bareback", I mean without MFC or ATL.
I managed to...
Thanks a lot. That unveils the mystery to me!!
Hi all:
Trying to follow this tutorial I've got this code:
// MS VCpp console application
long SetString (char*); // function prototype... | http://forums.codeguru.com/search.php?s=8728c4de19d68735e996785a8c5c1665&searchid=1921003 | CC-MAIN-2013-48 | refinedweb | 440 | 64.51 |
09 December 2010 20:25 [Source: ICIS news]
TORONTO (ICIS)--US-based industrial gases major Air Products on Thursday raised its bid for rival Airgas to $70/share, from $66.50/share, representing its “best and final” offer, it said.
Assuming a $70/share offer price and 84.1m Airgas shares, Air Products' offer would be worth $5.89bn (€4.41bn). By contrast, Airgas’ board has valued the company at $78/share, or around $6.6bn.
Airgas' stock traded down 4.9% to $62.74/share in New York at 14:36 local time (19:36 GMT), while Air Products' shares were higher by 1.5% at $88.34/share.
Air Products’ final offer is scheduled to expire on 14 January, but it reserved the right to further amend the expiration date, it said.
Air Products' very first offer from February in the long-running takeover battle was $60/share, or around $5.1bn.
In a statement, Air Products said the $70/share offer was its “best and final” offer for Airgas and would not be further increased, promising an end to the almost year-long fight between the two US industrial gases firms.
The latest offer provided a 61% premium to Airgas' closing price on 4 February 2010, the day before Air Products first announced an offer to acquire Airgas, it said.
"It is time to bring this matter to a conclusion, and we are today making our best and final offer for Airgas,” said Air Products CEO John McGlade.
“The Air Products board has determined that it is not in the best interests of Air Products shareholders to pursue this transaction indefinitely, and Airgas shareholders should be aware that Air Products will not pursue this offer to another Airgas shareholder meeting, whenever it may be held," he added.
"We are more than a year into this process, and the majority of the Airgas board has made it clear that they do not intend to negotiate a deal,” he continued.
“Accordingly, if Airgas shareholders want this compelling offer, they must make their voices heard now,” he said, adding that there were no other bidders for Airgas.
Airgas said it would review Air Products' revised offer and advised shareholders to take no further action at this time.
Additional reporting by Al Greenwood | http://www.icis.com/Articles/2010/12/09/9418212/air-products-hikes-bid-for-airgas-to-final-70share-offer.html | CC-MAIN-2014-15 | refinedweb | 385 | 61.67 |
Let me guess…
You’re one of those…
Those who are Android app developers, or those who just recently became interested in Android app development and are exploring different libraries to find the brilliant ones among them.
So today, we’re going to take a deep look into what is Retrofit and how to integrate it into your android application so that you can embrace its benefits.
But, before we get into this Retrofit 2.0 Android tutorial, let's first understand what exactly Retrofit is and how it benefits us.
Retrofit is one of the amazing tools that have been released into the open source community. It is a type-safe HTTP client for both Java and Android applications.
However, you may be wondering: there are still other options out there, such as Volley from Google or AsyncTask, so why not use them? Well then, let me share the main premise behind a type-safe HTTP client, which is that you only need to worry about the semantics of the queries that you send over the network, rather than the details of how to specify parameters correctly, construct URLs, and so forth.
And another benefit of Retrofit 2.0 is that it makes network calls easy by requiring you to write just a few interfaces, and that's all! Additionally, in Retrofit, network calling is far quicker than in the alternatives, which makes it a perfect and easy-to-learn library.
OK now, Enough with the theory part… Let’s begin with the step-by-step guide to understanding how to integrate it into your android app.
We need 3 things to integrate Retrofit to Android Application
- Retrofit Library
- Interface with networking call method
- POJO class to get data from server
Step 1
We need three libraries for retrofit
1.Retrofit Library
2.Gson Converter Library (to convert incoming server data into the POJO)
3.OkHttp Library (to get log info and set the connection timeout)
To add these libraries to the application, paste the following lines into the application's build.gradle, in the dependencies block.
compile 'com.squareup.retrofit2:retrofit:2.1.0'
compile 'com.squareup.retrofit2:converter-gson:2.0.2'
compile 'com.squareup.okhttp3:logging-interceptor:3.4.0'
Step 2
Step 3
Now we will make a POJO for the JSON data which will be coming from the server. In our case, this data will come from the server:

{
    "code": "200",
    "status_msg": "Msg found static page",
    "success": true,
    "page_title": "Privacy Policy",
    "page_slug": "Privacy-Policy",
    "page_description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum augue odio, varius ut gravida a, ultrices sit amet nunc. In hac habitasse platea dictumst. Praesent at commodo lorem. Nulla mollis nisi ut lectus volutpat molestie. Donec pretium, nulla id pulvinar tempus, neque ex molestie mi, in pretium leo nulla eu velit. In hac habitasse platea dictumst. Donec cursus sapien et lorem sagittis mollis. Nullam id nisl nec elit sagittis hendrerit. Mauris tristique nec est congue sodales. Praesent eget aliquam leo. Aliquam nec lorem est.\r\n\r\nMaecenas non massa ut turpis aliquet mollis. Nulla neque arcu, faucibus ut mollis a, fringilla non ex. In hac habitasse platea dictumst. Suspendisse turpis odio, feugiat non tortor ac, congue faucibus lacus. Proin vitae feugiat ligula. Donec nec felis ornare, porttitor sapien a, faucibus velit. Phasellus faucibus urna in nulla consectetur congue.\r\n\r\nAenean cursus lacus eu purus porttitor, ut aliquam diam posuere. Curabitur ultricies non justo non scelerisque. Phasellus nunc massa, tempus vitae vestibulum .",
    "added_date": "2016-07-05"
}
Put your JSON data in the white box.
By clicking on Preview, you will get the POJO.
Step 4 Now we will make a View to display data from the server.
Step 5 Now we will set up the Retrofit client.
Create one new Class called ApiHandler and one interface APIInterface
In ApiHandler we will create retrofit client.
Here we set the connection timeout to 30 seconds; also add your BASE URL here.
In APIInterface we add the HTTP methods and parameters. In my case I have a POST request to the server; as per your need, you can change it to GET, PUT, DELETE, etc.
public interface APIInterface {

    @FormUrlEncoded
    @POST("api/")
    Call<StaticPages> staticPagesApi(@FieldMap HashMap<String, String> requestBody);
}
Step 6 Now we have all things completed, so executing API Call in MainActivity.
public class MainActivity extends AppCompatActivity {

    private Toolbar toolbar;
    private TextView tvPage;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        toolbar = (Toolbar) findViewById(R.id.toolbar);
        tvPage = (TextView) findViewById(R.id.tvPage);

        Call<StaticPages> call = ApiHandler.getApiService().staticPagesApi(getParams());
        call.enqueue(new Callback<StaticPages>() {
            @Override
            public void onResponse(Call<StaticPages> call, Response<StaticPages> response) {
                StaticPages resData = response.body();
                if (resData.getSuccess()) {
                    toolbar.setTitle(resData.getPageTitle());
                    tvPage.setText(resData.getPageDescription());
                } else {
                    // Handle your error here
                }
            }

            @Override
            public void onFailure(Call<StaticPages> call, Throwable t) {
                // handle network call failure here
                t.printStackTrace();
            }
        });
    }

    private HashMap<String, String> getParams() {
        HashMap<String, String> param = new HashMap<>();
        param.put("action", "static_pages");
        param.put("page_slug", "Privacy-Policy");
        return param;
    }
}
Here, we need to pass two params action and page_slug to server.
After running, on success we have to handle the UI update.
This is how it looks after calling API
So, by following the above steps, you can now easily use Retrofit 2.0 for network calls. Also, the code for this Android Retrofit tutorial is available on GitHub; click here to get it.
If you still have any doubts, you can contact our android app developer to resolve your issue or you can hire android developer from Space-O Technologies, which is a leading Android App Development Company and we have helped many young and entrepreneurial minds to make their app ideas come to reality. You can also check out our portfolio here. | https://www.spaceotechnologies.com/android-tutorial-about-retrofit-integrate-network-calling/ | CC-MAIN-2019-47 | refinedweb | 937 | 57.87 |
pin performance
hi,
I have tested pin read performance
and it is like this:
read 1000 times pin value:
20558
21
this is my test file
from machine import Pin
import time

dht_pin = Pin('P9', mode=Pin.OPEN_DRAIN, pull=None)

am = time.ticks_ms()
au = time.ticks_us() # comment this line if you have not enabled this in firmware
for i in range(1001):
    dht_pin()
print(time.ticks_us()-au) # comment this line if you have not enabled this in firmware
print(time.ticks_ms()-am)
and I see that I can read one pin value in 21 us
but specification for e.g. dht11/22 is like this:
Bit '0' : ~54uS Low and ~24uS High
Bit '1' : ~54uS Low and ~70uS High
I see that I can miss the high value because its duration is 24 us
how can we improve pin read time?
For the test I enabled time_us() in the firmware
but previously tested this without any change
I see some possibilities for speeding things up with batch reads
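Besides batch reads, one cheap generic trick (my aside, not from the thread) is to hoist global and attribute lookups out of the tight loop and preallocate the result buffer. A rough CPython sketch with a made-up FakePin, so the absolute numbers will not match the board:

```python
import time

class FakePin:
    """Made-up stand-in for machine.Pin so the harness runs off-device."""
    def value(self):
        return 1

pin = FakePin()
N = 1000

def read_plain():
    vals = [0] * N
    for i in range(N):
        vals[i] = pin.value()   # global + attribute lookup on every pass
    return vals

def read_hoisted():
    read = pin.value            # bound method cached in a local variable
    vals = [0] * N
    for i in range(N):
        vals[i] = read()
    return vals

t0 = time.perf_counter()
a = read_plain()
t1 = time.perf_counter()
b = read_hoisted()
t2 = time.perf_counter()

assert a == b
print("plain:   %.2f us/read" % ((t1 - t0) / N * 1e6))
print("hoisted: %.2f us/read" % ((t2 - t1) / N * 1e6))
```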
I will try to write some code and back with results | https://forum.pycom.io/topic/418/pin-performance | CC-MAIN-2017-34 | refinedweb | 175 | 76.66 |
My memory is kind of foggy, but I thought that the FX in .Net FX stood for Framework eXtension because .Net is already...you know...a framework.
2. C++ can certainly use WPF today. Products under the Expression line were built that way.
Yeah, except that Visual Basic can't form function pointers or safely spawn threads.
C++/CLI has no ready access to XAML or the forms designers in Expression Blend or Visual Studio, and native C++ can't even use the classes.
Agreed and Ars also helped in fueling the unfounded fear. To make a deal because one person mentioned ONE application was in Javascript and HTML5 is just plain silly.
I just don't get this. If .NET is not going away, just say so...
The plans for W8 sound very exciting, and point to this being much more than an incremental update to W7.
They _never ever_ said or hinted otherwise. Maybe they should put out a list of all things that aren't changing in your eyes? DX isn't going anywhere, the Calculator app isn't going anywhere etc etc.
Drama attracts clicks right?
Agreed, and Ars also helped in fueling the unfounded fear. To make a big deal because one person mentioned that ONE application was in Javascript and HTML5 is just plain silly.
It was not just "one person"; it was the person who owns the entire Windows user experience (Julie Larson-Green). And it wasn't an "application"; she said that the entire developer platform was "based on" HTML5 and JavaScript.
Just because you *can* wrap WPF, doesn't mean it's readily accessible from non-.NET languages. Just like I've written code that uses a .NET RabbitMQ assembly...from Delphi (using a hand-written COM interop translation assembly).
Ready accessibility requires things like header files, LIB files, typelibs, IDL files, etc. Ability and ease are two completely different things.
I on the other hand, don't want to care. I want to target the biggest customer base and not worry about learning new frameworks every couple years. The only product to do this is Qt, which other than a version 3 to 4 change, has only added features and never caused me to revisit my old code. Now Qt is sporting QtQuick, which is a JavaScript API married to the Qt API for apps. Unlike the MS platform, all my old code still works, and I can target Windows, OSX, Linux, Android, Nokia, and with some work, iPhone.
Fundamentally, MS is keeping the computing universe from advancing by keeping the API churning. I should not ever have to solve a problem that has already been solved. But every change of framework forces us to go back and do it again. I liked the .NET CLI because it had this promise. I would have liked to see code become a greater commodity than OSs and computers have become. But keeping framework churn prevents that.
I wonder if/when other devs will come to the same conclusion as I have. The only change I'd like to have seen is a 4th generation language version of Qt to compete with C#, which is now available through Python wrappers.
Makes me think there might be the possibility of something real cool they are hiding. Even MS isn't this dumb to just sit there and let this fester without good reason. AT least I don't think they are.
A lot of us looked long and hard at Qt, too. It's a phenomenal framework, with some amazing features. Unfortunately, it always looks like a foreign application on nearly every platform, including Nokia's.
If this article is correct, and C++ gets access to WPF style features, that is the best news I've read all month. QML is nice, but XAML was just slick. And awesome.
Yeah, except that Visual Basic can't form function pointers or safely spawn threads.
Yeah, there's AddressOf, but you're being pretty brave if you want to start using that for threads and things. VB provides no thread safety guarantees.
...
Will it be possible to write immersive applications using C++ or .NET? Will it be possible to write immersive applications using XAML?
What about C++/CLI? C++/CLI is the more modern version of Managed C++. If you need to seamlessly interface managed and native code, you use C++/CLI, which generates both managed and native code. The switch between the two worlds is much less expensive (but not entirely free).
We use C++/CLI quite a lot, but unfortunately Visual Studio 2010 doesn't even provide IntelliSense for this language. So we have to use Visual Assist X instead. So much for the current state of tooling.
Last edited by JSawyer on Thu Jun 23, 2011 11:12 am
Makes me think there might be the possibility of something real cool they are hiding. Even MS isn't this dumb to just sit there and let this fester without good reason. AT least I don't think they are.
Like any big company, I am sure that there are tons of cool things that never see the light of day due to internal politics.
In fact, Microsoft announced its direction at the last PDC along with roadmaps for how WPF and Silverlight will fit in. In reality the HTML 5 support that Windows 8 depends on involves standards with a still uncertain future. Since IE 10 will be the Windows 8 desktop, Silverlight applications will gain a first class status that they did not have in the past. At least on Windows 8, nobody will have to worry about out of browser support for Silverlight. The other reality is that environments that support Silverlight will be far more widespread than those that support CSS3, Web Sockets, and HTML 5 graphics. Silverlight plugins may be used to provide Web Socket support to Javascript applications on back level browsers and those Javascript applications may use programming patterns that Microsoft introduced with WPF but failed to support in Silverlight.
It is true that Microsoft's new direction does imply a substantial change. It is true that the adoption of WPF did not happen at the rate Microsoft hoped for. It is also true that Office has yet to move much beyond 1990's technologies. But it is not true to say that Microsoft went backwards. Microsoft is moving Visual Studio on to WPF. I expect that trend will continue. WPF applications will continue to run in WPF windows embedded in a Windows 8 IE based desktop more or less the same way they do today with a Win 32 desktop.
The event model is more or less the same across DOM level 3, WPF, and Silverlight. So, whatever the event model turns out to be for touch, it is virtually certain to be supported on all three technologies.
My personal evaluation is that Microsoft has made a good decision. Over time I expect the Javascript/HTML environment will become dominant for UI presentation. But I doubt Microsoft will write anything other than code that manages the UI in Javascript. Microsoft actually has had very good customer adoption of .Net in customer server environments. With the new approach even local clients are likely to connect the UI to supporting functions across a local host Web Socket. C# is a very good language and much better than Javascript for any large scale project that does not have to run in a web browser. Nobody who is used to C# is going to want to adopt Javascript unless there is a very good reason for the shift.
While the new direction is significant, in many ways it is just a new twist on long established Microsoft trends. Ever since they started work on IE 8, it has been clear that Microsoft intended to offer a full function HTML based UI equivalent to what they were doing with XAML based markup. The idea of integrating browser functionality into Windows goes back even farther. The anti trust lawsuit and much of the controversy over the requirement to include IE in Windows revolved around Microsoft's claim that IE was an integral component of Windows. It will be interesting to see whether the Windows 8 desktop is a pluggable part.
It will also be interesting to see how quickly Windows 8 actually appears. The insistence that it was not committed for 2012 may prove more than academic. Both with the migration of Windows 95/98 client on to Windows NT and in the appearance of Longhorn/Vista there were substantial schedule slips. Microsoft is clearly under pressure to have a shell that works in the tablet/phone segment. I suspect they will find a way to get some kind of support in that area with Windows 8 next year. But it remains to be seen whether they will have Win 32 windows integrated well enough into an HTML 5 desktop next year to provide a new version of Windows ready to ship to their many customers who depend on Win 32 applications.
one of many jabs:
Makes me think there might be the possibility of something real cool they are hiding. Even MS isn't this dumb to just sit there and let this fester without good reason. AT least I don't think they are.
They were dumb enough to release the information when it was clearly not ready to be released
Nice picture as well lol
First, the comedy -- Windows really hasn't undergone significant under-the-hood changes in more than a decade, and with only a few buzzword substitutions this article could very well have been written 5 or more years ago about Longhorn. This is par for the course for Microsoft for their last several OS releases -- when the release date is a long way out we're promised a shower of revolutionary features, but as the release date creeps closer, they never materialize.
And the tragedy -- developers are again left in the dark. I can hardly even fathom working with a platform where all decisions about the future of the platform are made behind closed doors, and then the developers are simply told: this is how it's going to be. How does being silent about it help anyone? Wouldn't it make more sense to work WITH your developer community, announcing proposed changes and ideas and evaluating the community feedback? If I have a great idea for a new Windows application, should I even bother starting to write it today for today's Windows environment, or should I wait for these shiny new tools to build it? More importantly, how can I even begin to make that decision when third-party speculation (like this article) is the best indicator I have of exactly what those shiny new tools might be?
The butthurt would be priceless, and the eventual result would be wonderful for nearly everyone.
In any case, Windows 8 will finally have a modern UI toolkit, but unless devs get some UI design skills, most apps will still look like shit.
you mean blurry fonts and slow rendering? It's no wonder WinDiv ignored WPF. It was never as good as you make out here; it might have been intended to be, but the reality was altogether different. Hopefully DirectUI will be the answer, but it's still a shame XAML is the 'gui language' as it's so bloated and complicated.
They should modify XAML considerably to make it leaner and more efficient. Even binding a variable to a control is a large undertaking - in MFC/C++ I could just tie a variable to a control with 1 line of code, now I need an object with properties and a huge.long.namespaced.binding path, full of curly braces and 'voodoo'. It kind of reminds me of old-style raw Win32 programming with its GetWindowLong functions and WndProcs. XAML needs to be replaced with something much more streamlined and easy to develop without Expression.
Still, it all sounds good, though I wonder what this means for what the 'best' language to use is now. It used to be that C# was the "one true" one; now I'm thinking Windows development will be a lot more heterogeneous.
Has anyone hacked together support for creating daemons
with Go? This seems quite hard to do since forking
is non-trivial (e.g. with package init-functions
spawning new go-routines).
- Taru Karttunen
Can't you just make a program and then make a script to launch
multiple instances of it or background it etc.?
- jessta
--
=====================
r1, r2, err1 = RawSyscall(SYS_FORK, 0, 0, 0)
- Charle Demers
package main

import (
	"os"
	"syscall"
)

func main() {
	pid, err := fork()
	if err != 0 {
		os.Exit(int(err))
	}
	if pid == 0 {
		// pid == 0: I'm the child.
	} else {
		// pid != 0: I'm the parent and pid is the child process pid.
	}
}

func fork() (pid uintptr, err uintptr) {
	var r1, r2, err1 uintptr
	darwin := syscall.OS == "darwin"
	r1, r2, err1 = syscall.RawSyscall(syscall.SYS_FORK, 0, 0, 0)
	if err1 != 0 {
		return 0, err1
	}
	// Handle exception for darwin: r2 == 1 means this is the child
	if darwin && r2 == 1 {
		r1 = 0
	}
	return r1, 0
}
- Charle Demers
This won't work reliably in a real program. Once you've started
another thread, fork+exit in parent is either wrong or impossible,
depending on the operating system. Since imported packages
can start goroutines as part of initialization, and those will likely
cause new threads to be created soon after starting main, just
calling fork is not going to work in general.
By far the easiest solution for now is to use the shell's & feature.
I think at some point there will be an os/background package
that you can import and then use to "fork" into the background,
but it is subtle. I tried to put it together a few weeks ago and
got stuck on some detail (I don't remember what) and set it aside.
Russ
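The shell's & approach needs no support from the program itself. The other common workaround that avoids fork entirely is for the process to re-launch itself in the background via exec. A hypothetical sketch (the DAEMONIZED marker and the function name are made up, and this is unrelated to the os/background package mentioned in this thread):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// relaunchDetached starts a fresh copy of the current binary with the
// given marker variable set in its environment, so the new copy knows
// it is the background instance. If the marker is already set, we are
// that instance and there is nothing to do.
func relaunchDetached(marker string) (int, error) {
	if os.Getenv(marker) != "" {
		return os.Getpid(), nil // already the background copy
	}
	cmd := exec.Command(os.Args[0], os.Args[1:]...)
	cmd.Env = append(os.Environ(), marker+"=1")
	if err := cmd.Start(); err != nil {
		return 0, err
	}
	return cmd.Process.Pid, nil
}

func main() {
	if os.Getenv("DAEMONIZED") != "" {
		// Background copy: do the real work here, then exit.
		return
	}
	pid, err := relaunchDetached("DAEMONIZED")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("spawned background copy, pid", pid)
}
```

Because exec starts a brand-new process, none of the fork-after-threads problems described above apply; the parent simply exits once Start succeeds.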
I'll check out this os/background package when it's ready.
Impressive work by the way, keep it up!
Cheers!
On Feb 7, 12:54 am, Russ Cox <r...@golang.org> wrote: | https://groups.google.com/g/golang-nuts/c/KynZO5BQGks | CC-MAIN-2021-39 | refinedweb | 312 | 72.26 |
Webpack practice: increase the loading speed of a library by four times
2021-08-26 00:36:35 [toln]
Project background
The project is a library used within the company, packaged with webpack (webpack version: 5.40.0). It used to be distributed as an npm package, but the packaged bundle was large and slowed down page loading, so I decided to give up npm distribution, package it into js files, and reference them with script tags. npm packages and js files each have their own advantages and disadvantages.

Because this is an internal package with high loading-performance requirements, the js-file approach is a good fit.
Optimization steps
1. Dynamically import some modules
Modules that are relatively large and not necessarily used can be referenced through webpack dynamic imports, which reduces the volume of the initially loaded bundle. For example, before the modification:
import module1 from './module1';
After modification:
// Here we will use the module1 object
const module1 = await import(
  /* webpackChunkName: "module1" */ './module1'
);
This way webpack will package module1 as a standalone js file, which is only loaded when you import it.

webpack also supports prefetching modules, i.e. loading them while the browser is idle instead of when you import them. This is done with the <link> tag's rel attribute, using the preload and prefetch values.
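In webpack these hints are written as "magic comments" inside the dynamic import itself; webpack then emits the corresponding <link> tag. For example (module names as above; these directives only make sense to the webpack compiler, not to a plain browser):

```javascript
// Prefetched: downloaded while the browser is idle, for likely future use
import(/* webpackPrefetch: true */ './module1');

// Preloaded: requested in parallel with the parent chunk,
// for modules that will be needed immediately
import(/* webpackPreload: true */ './module2');
```

The hints only affect how the chunk is fetched; the promise returned by import() is used the same way as before.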
2. Module splitting
The previous method only applies to modules that will not be used immediately; it cannot be used for core modules. For example, three.js in our project is also relatively large. Through webpack's optimization configuration it can be packaged into a separate js file:
// ...
optimization: {
  // ...
  splitChunks: {
    cacheGroups: {
      three: {
        test: /[\\/]node_modules[\\/](three)[\\/]/,
        name: 'three',
        chunks: 'all',
        priority: 2,
      },
    },
  },
},
Other large modules can be split out in the same way.
3. Modify the module export method
The first two steps mainly split up the modules to enable delayed loading and parallel loading, which by itself already improves loading speed.

Now we need to change how the module is exported. Under npm package management, the packaged artifact was a CommonJS module: internally it used module.exports = myLibrary to export, and consumers brought it in with require('myLibrary'). Now that it is packaged as a js file imported directly in the browser, we need to expose myLibrary on the global object. (AMD or another browser-supported module syntax could also be used, but we are not using those at present.) Change webpack's output.library.type from commonjs to umd:
library: {
  name: 'myLibrary',
  type: 'umd',
},
4. Deploy the js files
We need to deploy the packaged artifacts to a static server. Configure webpack's publicPath:
// ...
output: {
  path: path.resolve(__dirname, 'dist'),
  publicPath: ``,
},
After each build, upload the dist directory to the corresponding path on the server. We currently use GitLab CI/CD to automate this deployment.
5. Call the library in a new way
Load the files with dynamically created script tags. The following is the loading function:
function loadScript(src) {
  return new Promise((resolve, reject) => {
    const scriptEle = document.createElement('script');
    scriptEle.type = 'text/javascript';
    // Dynamically created scripts default to async = true; set it to false
    // so they execute in insertion order (they still download in parallel)
    scriptEle.async = false;
    if (scriptEle.readyState) {
      // IE
      scriptEle.onreadystatechange = () => {
        if (scriptEle.readyState == 'loaded' || scriptEle.readyState == 'complete') {
          resolve();
        }
      };
    } else {
      scriptEle.onload = () => {
        resolve();
      };
      scriptEle.onerror = () => {
        reject(`The script ${src} is not accessible`);
      };
    }
    scriptEle.src = src;
    document.currentScript.parentNode.insertBefore(scriptEle, document.currentScript);
  });
}
Loading is parallel but execution is sequential. The execution order is:

runtime file --> split dependencies --> main file
6. Make better use of caching
Use the content hash as part of the packaged file name, so that the cache is invalidated when a file is modified.
output: {
  // ...
  filename: '[name].[contenthash].js',
},
Note that when we modify one of the files, the hash in the other files' packaged names can change as well.

For instance, suppose the project has three modules, each packaged into its own js file, with module dependencies as follows:

When we change module1, we expect only module1's js file name hash to change. In fact, the file names of all three files change. The reason is that we did not package the runtime separately: changing one file's name changes another file's content, setting off a chain reaction. Use the following configuration:
optimization: {
  runtimeChunk: 'single',
}
This generates a separate runtime file after packaging; that file also needs to be deployed and loaded (and executed before the other files).

Now when we modify module1, only module1's js file name and the runtime file change, so the cache is used much more effectively.
Author: toln
module Control.Concurrent.CHP.Enroll (Enrollable(..), furtherEnroll,
  enrollPair, enrollList, enrollAll, enrollAll_, enrollAllT,
  enrollOneMany) where

import Control.Concurrent.CHP.Base
import Control.Concurrent.CHP.Parallel

furtherEnroll :: Enrollable b z => Enrolled b z -> (Enrolled b z -> CHP a) -> CHP a
furtherEnroll (Enrolled x) = enroll x

enrollPair :: (Enrollable b p, Enrollable b' p') => (b p, b' p')
  -> ((Enrolled b p, Enrolled b' p') -> CHP a) -> CHP a
enrollPair (b0, b1) f = enroll b0 $ \eb0 -> enroll b1 $ \eb1 -> f (eb0, eb1)

enrollList :: Enrollable b p => [b p] -> ([Enrolled b p] -> CHP a) -> CHP a
enrollList [] f = f []
enrollList (b:bs) f = enroll b $ \eb -> enrollList bs $ f . (eb:)

-- | Given a command to allocate a new barrier, and a list of processes that use
-- that barrier, enrolls the appropriate number of times (i.e. the list length)
-- and runs all the processes in parallel using that barrier, then returns a list
-- of the results.
--
-- If you have already allocated the barrier, pass @return bar@ as the first parameter.
--
-- Added in version 1.7.0.
enrollAll :: Enrollable b p => CHP (b p) -> [Enrolled b p -> CHP a] -> CHP [a]
enrollAll = enrollAllT runParallel

enrollAllT :: Enrollable b p => ([a] -> CHP c) -> CHP (b p) -> [Enrolled b p -> a] -> CHP c
enrollAllT run mbar ps
  = mbar >>= flip enrollList (run . zipWith ($) ps) . replicate (length ps)

enrollOneMany :: Enrollable b p => ([Enrolled b p] -> CHP a)
  -> [(CHP (b p), Enrolled b p -> CHP c)] -> CHP (a, [c])
enrollOneMany p [] = p [] >>= \x -> return (x, [])
enrollOneMany p ((mbar, q) : rest) = do
  bar <- mbar
  enrollPair (bar, bar) $ \(b0, b1) -> do
    ((p', q'), rest') <- enrollOneMany (\bs -> p (b0:bs) <||> q b1) rest
    return (p', q' : rest')
--TODO there is probably a better way to implement the above

-- | Like enrollAll, but discards the results.
--
-- Added in version 1.7.0.
enrollAll_ :: Enrollable b p => CHP (b p) -> [Enrolled b p -> CHP a] -> CHP ()
enrollAll_ = enrollAllT runParallel_
I've fallen a few weeks behind on posting links to various articles and blog posts, so this post is a bit long. As you can see, the world of document format interoperability has been humming along regardless of my inattention ...
Monarch v10 released. Datawatch announced today the release of Version 10 of their flagship report mining and analysis tool, Monarch. They have continued to improve Monarch's ability to read and write XLSX files, and features of the latest release include all-new pivot-table support, read/write support for XLSM format, and read support for the XPS format. Behind the scenes, they've also moved from custom packaging code to the Open XML SDK, and they've been providing us with valuable feedback on what developers need from Open XML dev tools.
Getting an education in standards. Patrick Durusau has started series of posts on how to participate in standards process. The first post, My Standards Education, takes a look at some of the largest standards organizations and how you can get involved; his next post will cover how to learn the rules after you join a standards organization.
Alex Brown on standards reform. Not to be outdone by Patrick, Alex has also started an ambitious series of blog posts recently, on his suggestions for reform/evolution of international standards organizations. His first post on JTC 1 reform prompted a discussion of how standards work relates to nuclear fission and pub meetings, and his second post offers some thoughts on Webbifying the Standardisation Process.
Julien Chable in Anglais. Julien has started doing some posts in English, which really saves me a lot of copying and pasting to translation services. Here are two of interest to Open XML developers:
Information about the Open XML SDK. Zeyad Rajabi, the program manager for the Open XML SDK, is doing a series of guest posts on Brian Jones's blog about the SDK:
SpreadhseetGear announces Open XML support. Speaking of Open XML dev tools, SpreadsheetGear has announced enhanced supprt for Open XML and the XLSX format. The latest release of SpreadsheetGear allows developers to build dynamic dashboards in Excel and then generate ASP.NET or Windows forms applications based on those dashboards.
Developer tips from Eric White. Eric has posted several useful how-to articles on Open XML developer topics lately:
Converting DOCX to XAML. Michael Scherotter's Word 2007 XAML Generator was released on Codeplex last month. It's a Word plug-in that converts a document to XAML for use in WPF or Silverlight applications, with complete source code provided.
Adoption of document formats. Gray Knowlton has revisited an interesting topic: adoption data for Open XML and ODF formats.
Content Management Interoperability Services (CMIS). CMIS, a new standard for integration of ECM (enterprise content management) systems was jointly announced by Microsoft, EMC and IBM last month. This is an area where we're going to see a lot of action in the years ahead, as system like SharePoint become more pervasive in large organizations and users come to expect their document libraries and workflows to work seamlessly across applications and platforms. Ethan Gur-esh provides an overview of how and why CMIS has been developed, and Ryan Duguid has a good summary on the SharePoint team blog.
Reading Excel files from Linux. Chris Rae, one of the funniest people I know, has an interesting post on the Excel team blog about how to process XLSX files with Perl. I forgot to link to him because the post came out during my summer vacation, but it's a great example of how open tools can be used to solve a business problem when the data resides in a standardized XML format such as Open XML.
Last call for DII workshop this week. We're hosting a DII workshop in Redmond this Thursday and Friday, and there's still time to get in if you'd like to hear about how Office is approaching Open XML document interoperability and meet other implementers to discuss various interop topics. Chris Rae (above) is one of several Office PMs who will be presenting, and we'll also have presentations from non-Microsoft Open XML developers and architects, roundtable discussions, access to Office 2007 SP2 (for testing ODF interop, among other things), and a hosted dinner on Thursday evening. The workshop is free, so contact me if you'd like to attend and I'll get you on the list.
Hi Doug,
in MSDN I saw a somewhat high-level view of the OpenXML format as a container for arbitrary documents.
We're thinking about adopting OpenXML packaging for our own document file formats. The advantages we see are in using some of the existing infrastructure e.g. the summary information editor in vista's explorer.
Can you provide a summing-up of links to articles, whitepapers, case studies etc. regarding the use of Open XML, the APIs or the SDK for implementing non-MSOffice application specific document formats? Maybe this could be an interesting theme for a blog entry.
Thanks,
Thomas
Doug,
And this just in ... Microsoft Office 2007 SP2 Beta was released this morning.
:o)
Great idea, Thomas. I'll put together a post summarizing this type of information. Let me know if you have a link to the MSDN article you mentioned.
Have fun, Jesper. :-)
Ahh, after some cleaning up in my brain cells I found that it was the MSDN article about the System.IO.Packaging namespace describing an Office-independent view of an Open XML document (). | http://blogs.msdn.com/b/dmahugh/archive/2008/10/20/weeks-of-links-for-10-20-2008.aspx | CC-MAIN-2015-32 | refinedweb | 925 | 62.17 |
Chapter 11 examined the Form class, the central unit of programming a rich client application with Microsoft Visual C# .NET. This chapter introduces many of the basic controls that are used with Windows Forms, including buttons, list boxes, combo boxes, and labels, as well as the event handling mechanism used by controls to pass notifications to their parents. This chapter also covers containment, which allows specific controls, such as group box and panel controls, to contain other controls.
The classes in the System.Windows.Forms namespace provide a new way to develop forms-based applications for Windows. When I first started programming for the Windows platform, I had to write large amounts of code to route and handle messages sent from the operating system to control the application. Any error in the message routing code would break the application and lead to hours, if not days, of debugging. Copious amounts of C code were required to change even the simplest of control attributes. For example, creating a push button control with a specific color required hundreds of lines of code.
Windows Forms change all of that. With Windows Forms and the .NET Framework, building a rich client application is greatly simplified. You can build applications that are far richer than ever before, while writing less code. In fact, one of the great things about the Windows Forms Designer is how little code you actually need to write. All properties are exposed through the Properties window, enabling you to define the appearance and behavior of controls declaratively, rather than requiring you to write code. By reducing the amount of code you must write, Visual C# .NET and Windows Forms enable you to concentrate on what really counts—adding real functionality to your application. | https://etutorials.org/Programming/visual-c-sharp/Part+III+Programming+Windows+Forms/Chapter+12+Adding+Controls+to+Forms/ | CC-MAIN-2022-21 | refinedweb | 292 | 54.32 |
A const reference is a reference that may refer to a const object:
Example
#include <iostream>
#include <cstdio>

int main()
{
    const int ival = 1024;
    const int &refVal = ival; // ok: both reference and object are const
    int &ref2 = ival;         // error: non const reference to a const object
    getchar();
    return 0;
}
We can read from but not write to refVal. Thus, any assignment to refVal is illegal. This restriction should make sense: We cannot assign directly to ival and so it should not be possible to use refVal to change ival.
For the same reason, the initialization of ref2 by ival is an error: ref2 is a plain, nonconst reference and so could be used to change the value of the object to which ref2 refers. Assigning to ival through ref2 would result in changing the value of a const object. To prevent such changes, it is illegal to bind a plain reference to a const object.
A const reference can be initialized to an object of a different type or to an rvalue (Section 2.3.1, p. 45), such as a literal constant:
Example
#include <iostream>
#include <cstdio>

int main()
{
    int i = 42;
    // legal for const references only
    const int &r = 42;
    const int &r2 = r + i;
    std::cout << r;
    std::cout << "\n" << r2;
    getchar();
    return 0;
}
#include <FXIcon.h>
Inheritance diagram for FX::FXIcon:
Create an icon with an initial pixel buffer pix, a transparent color clr, and options opts. Use IMAGE_ALPHAGUESS to obtain the alpha color from the background color of the image; it has the same effect as IMAGE_ALPHACOLOR in the sense that the icon will be transparent for those colors matching the alpha color.
[virtual]
Destructor.
Create the server side pixmap, the shape bitmap, and the etch bitmap, then call render() to fill it with the pixel data from the client-side buffer.
After the server-side pixmap and bitmaps have been created, the icon is ready for drawing. Reimplemented from FX::FXImage.
Detach the server side pixmap, shape bitmap, and etch bitmap from the Icon.
Afterwards, the Icon is left as if it never had any server-side resources.
Destroy the server-side pixmap and the shape bitmap and etch bitmap.
The client-side pixel buffer is not affected.
Render the server-side pixmap, shape bitmap and etch bitmap for the icon from the client-side pixel buffer.
Resize both client-side and server-side representations (if any) to the given width and height.
The new representations typically contain garbage after this operation and need to be re-filled.
[inline]
Obtain transparency color.
Change transparency color. | http://fox-toolkit.org/ref14/classFX_1_1FXIcon.html | CC-MAIN-2017-22 | refinedweb | 205 | 64.91 |
The last time that I had to meet customer requirements for creating and rendering charts, I ended up building a complex, data-driven, dynamic wizard style application which was built on top of a large API wrapper (that I also wrote) around a client component of a third-party chart rendering application (already purchased) that ran on its own server. The final “submit” step in the wizard led to a large chunk of data being handed off to the third-party app server. My application then retrieved the image bytes from that server, and stored those bytes along with the chart metadata in our own database for both immediate rendering and further use. I made several videos (with engaging background music) for user instruction. One of the videos illustrated how the wizard could be used to graph multiple mathematical functions on multiple axes, to serve as a demonstration of how to use some of the more advanced features of the application. It was a tricky business and it involved several detailed steps. The proper parameters needed to be set just to get the scaling and positioning right. If you wanted to make slightly more than basic customizations, such as adding the color maroon to the default color palette, you had to change configuration files on the third-party app server. It was a massive undertaking, and it took several months to deliver a useful and useable application. At that time, there were a few Javascript charting tools that were starting to surface, but nothing rich and complex enough – let alone cheap enough to warrant replacing an already purchased product – to meet our users’ needs. I’ll dispense with the “back in my day” dialog now, as it is not only a dreadfully boring way to start a blog post, but also all of this occurred only 5 years ago. Nevertheless, that amount of time is an eternity in the world of software development. Fast forward to now. Today we have rich web-enabled client-side chart rendering tools that are quite powerful. 
Many of them can be used for free (double-check the licensing agreements first). These modern tools can create charts that are immensely complex, bind to dynamic server-side data, handle vast quantities of information, and they can be interactive. Charts that can talk to each other…even move.
In this blog post I will walk you through the creation of a function graphing tool in the form of a Grails application that uses Google Chart Tools via the Google Visualization Grails plugin. I will also be using the exp4j library to parse the mathematical expressions and calculate the results. Google Chart Tools provides us with a Javascript API to create and render a multitude of chart types, each with its own configuration options. The Google Visualization Grails plugin further simplifies this by providing us with an easy to use GSP tag library, which ultimately constructs the necessary Javascript Objects and Google API calls. To follow along, you only need to have Grails installed and configured. Most of these instructions are at the command line level. It should be simple enough to accomplish all of these steps in the IDE of your choosing. A link to the project source code is provided at the bottom of this post.
Let’s dive in!
Fire up a terminal window and navigate to a directory where you would like to create the application, and then type:
grails create-app graphing-calculator
Let’s start by taking care of our dependencies. We’ll grab exp4j first.
cd graphing-calculator/lib
wget
Now open BuildConfig.groovy (in grails-app/conf) and add the Google Visualization plugin to the plugins section.
plugins {
    compile ":google-visualization:0.6"
}
That’s all we need for the dependencies. Now it’s on to the fun stuff.
First, let’s create the primary controller that we will be using in this app:
grails create-controller calculator
Open the newly created CalculatorController.groovy file (grails-app/controllers) and add the following:
import de.congrace.exp4j.Calculable
import de.congrace.exp4j.ExpressionBuilder

class CalculatorController {

    def graph() {
        def graphData = [] //data points to be used in drawing the graph
        def expression = "3 * sin(x) - 2 / (x - 2)" //a mathematical function to evaluate
        def graphDataColumns = [['number', 'x'], ['number', 'f(x)=' + expression]] //types and labels for axes

        //evaluate the function on the interval [-5, 5]
        for(def varX=-5.0; varX<=5.0; varX+=0.1) {
            Calculable calc = new ExpressionBuilder(expression)
                    .withVariable("x", varX)
                    .build()
            def result = calc.calculate()
            graphData.add([varX, result]) //add the [x, f(x)] data point
        }

        //graphData and graphDataColumns are fed into google visualizations
        [graphData: graphData, graphDataColumns: graphDataColumns, expression: expression]
    }
}
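To see the shape of the data this action produces, here is a small stand-alone sketch in plain Java (illustrative only; Math functions stand in for exp4j's parsed expression, and the class and method names are not part of the Grails app):

```java
import java.util.ArrayList;
import java.util.List;

public class GraphDataSketch {

    // Build [x, f(x)] pairs for f(x) = 3*sin(x) - 2/(x - 2) on [-5, 5],
    // stepping by 0.1 just like the loop in the graph() action.
    static List<double[]> buildGraphData() {
        List<double[]> graphData = new ArrayList<>();
        for (double x = -5.0; x <= 5.0; x += 0.1) {
            double fx = 3 * Math.sin(x) - 2 / (x - 2);
            graphData.add(new double[] {x, fx});
        }
        return graphData;
    }

    public static void main(String[] args) {
        List<double[]> data = buildGraphData();
        // The first data point sits at the left edge of the interval
        System.out.println(data.get(0)[0]); // prints -5.0
    }
}
```

Each inner array corresponds to one row handed to the chart; in the Grails version, exp4j evaluates whatever expression string was supplied instead of a hard-coded formula.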
Create the file grails-app/views/calculator/graph.gsp and open it.
First, import the Google Visualization API.
<gvisualization:apiImport/>
Then add the following to the body:
<gvisualization:lineCoreChart elementId="linechart" columns="${graphDataColumns}" data="${graphData}"
    width="400" height="240" title="Function Graph" curveType="function"
    hAxis="${new Expando([title: 'x'])}"
    vAxis="${new Expando([title: 'y', viewWindowMode: 'explicit', viewWindow: new Expando([min: -5, max: 5])])}"/>
<div id="linechart"></div>
Now save it, go back to the command line and type:
grails run-app
After it starts up, open a browser window and go to
You should see something similar to the image below.
To understand how the chart is rendered, we need to look at the Google Chart Tools documentation for a line chart.
As you can see in the lineCoreChart tag, we pass in the following essential parameters: elementId, columns, and data. This signifies which element (div) to draw the chart in, the names of the axes, and the data points for those axes respectively.
The width, height, title, curveType, hAxis, and vAxis attributes are all part of the chart’s configuration options. Each chart has many configuration options and you can get really creative with them. Again, please refer to the documentation for details.
Setting curveType=”function” gives us the nice fluid curvy lines that we would see on a nice graphing calculator, rather than straight lines that directly go from point to point. The hAxis and vAxis attributes allow us to define attributes for the horizontal axis and vertical axis respectively.
Note: hAxis and vAxis are of type Object (a Javascript Object). In the above example, we pass an instance of the Groovy Expando class initialized with a Map of the attributes we want to define. The Google Visualization plugin will convert this to the appropriate Javascript Object that the Google Chart Tools API expects. For example, something that ultimately ends up looking like this in Javascript:
{title: 'Hello'}
Could be derived from this:
<gvisualization:lineCoreChart hAxis="${new Expando([title: 'Hello'])}" ... />
In this initial example, for the y-Axis, I wanted to limit the minimum and maximum grid lines that would be drawn. This is done by setting viewWindowMode to explicit, and then specifying values for the viewWindow.min and viewWindow.max attributes. As you can see in the example code, multiple instances of Expando are used to accomplish this.
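Conceptually, the plugin serializes those nested Expando properties into a JavaScript object literal. The following stand-alone Java sketch is illustrative only; it handles just a flat map of strings and numbers, not the nesting and escaping the real plugin deals with:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class OptionsSketch {

    // Render a flat map of chart options as a JavaScript object literal,
    // quoting string values -- e.g. {title: 'Hello'}.
    static String toJsObject(Map<String, Object> options) {
        return options.entrySet().stream()
                .map(e -> e.getKey() + ": "
                        + (e.getValue() instanceof String
                                ? "'" + e.getValue() + "'"
                                : e.getValue()))
                .collect(Collectors.joining(", ", "{", "}"));
    }

    public static void main(String[] args) {
        Map<String, Object> hAxis = new LinkedHashMap<>();
        hAxis.put("title", "Hello");
        System.out.println(toJsObject(hAxis)); // prints {title: 'Hello'}
    }
}
```

The real conversion also recurses into nested Expando instances, which is how viewWindow ends up as an inner object in the generated options.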
In your browser, if you view the source, you should see something like this:
visualization.draw(visualization_data, {hAxis: {title: 'x'}, vAxis: {title: 'y', viewWindowMode: 'explicit', viewWindow: {min: -5, max: 5}}, curveType: 'function', width: 400, height: 240, title: 'Function Graph'});
This is Javascript code that the Google Visualization Grails plugin generates in order to create and render the chart via the Google API.
Note: This is very useful for debugging. If you are having problems with the way a chart is rendering (or not rendering at all), simply view the source on the page and verify the accuracy of the content that is being sent along to the Google API.
Now let’s enhance our graphing calculator by allowing the input of mathematical expressions, sticking with just a single variable x.
In the CalculatorController, replace the expression definition with the following:
def expression = params.expression ?: "3 * sin(x) - 2 / (x - 2)"
Then in the browser append the following to the end of the url:
?expression=tan(x)
You should see something similar to this.
Enter your own expressions and play around with it a bit. Refer back to the exp4j documentation for information about acceptable input for mathematical expressions.
Step it up!
Now that you’ve seen the basics, let’s leverage the power of Grails to make a tool that’s more useful. It would certainly be nice to enter the expression in a form, define intervals, add additional variables and assign values to variables, etc. We could simply create a form to allow these inputs, but it also might be nice to store all of these inputs so that we can recall them at a later time. Let’s create some domain classes to do this.
grails create-domain-class graph
grails create-domain-class function
The Graph class will hold basic data about the rendering of the Graph itself. The Function class will store the data for a given function, including the mathematical expression.
Let’s open up Graph.groovy and add the following properties:
BigDecimal verticalLowerBound
BigDecimal verticalUpperBound
BigDecimal horizontalLowerBound
BigDecimal horizontalUpperBound
String title
String width
String height
For now, let’s make everything nullable, except for the “title”.
static constraints = {
    verticalLowerBound nullable: true
    verticalUpperBound nullable: true
    horizontalLowerBound nullable: true
    horizontalUpperBound nullable: true
    width nullable: true
    height nullable: true
}
Let’s design it so that we can graph multiple functions at once. We can do this by defining a one-to-many relationship to the Function class.
static hasMany = [functions: Function]
Now let’s open up the Function class and define a few attributes. For the sake of simplicity we’ll limit ourselves to two variables, x and y.
String expression
BigDecimal xValue
BigDecimal yValue
BigDecimal xLowerBound
BigDecimal xUpperBound
And for now, let’s make everything nullable, except for the “expression” field.
static constraints = {
    xLowerBound nullable: true
    xUpperBound nullable: true
    xValue nullable: true
    yValue nullable: true
}
Now to get rolling quickly, let’s generate controllers and views for our new domain objects by using that nice out-of-the-box Grails CRUD functionality.
grails generate-all graphing.calculator.Graph
grails generate-all graphing.calculator.Function
Note: If you make a mistake or wish to change a domain class later, you can run the above commands to regenerate the controllers and views. Just answer ‘a’ at the prompt to overwrite the existing files.
Now run the app with the grails run-app command and you should be able to do CRUD operations with the new domain classes.
Note: If you are using the Grails default datasource configuration, the data will be stored in a temporary database, in memory. The database and its data are destroyed when the application is terminated.
Create and persist an instance of Function first by going here:
Enter the following values:
X Lower Bound: -5
X Upper Bound: 5
Expression: x^2
And click “Create”
Now let’s create and persist an instance of Graph, by going here:
And enter the following values:
Vertical Lower Bound: -5
Vertical Upper Bound: 5
Horizontal Lower Bound: -5
Horizontal Upper Bound: 5
Width: 450
Height: 240
Functions: graphing.calculator.Function: 1 (make sure this is selected)
Title: First Function Graph
And click “Create”
Now let’s go back to the CalculatorController and add a method to use the persisted data. To get started, let’s open up the GraphController that Grails generated, and copy the contents of the “show” method, and paste them in a new method within the CalculatorController called graphStoredData.
def graphStoredData(Long id) {
    def graphInstance = Graph.get(id)
    if (!graphInstance) {
        flash.message = message(code: 'default.not.found.message', args: [message(code: 'graph.label', default: 'Graph'), id])
        redirect(action: "list")
        return
    }
    [graphInstance: graphInstance]
}
And then let's add all the content that's in the graph method, and rework it to use the values stored in the domain classes. The method should look like the following:

def graphStoredData(Long id) {
    def graphInstance = Graph.get(id)
    if (!graphInstance) {
        flash.message = message(code: 'default.not.found.message', args: [message(code: 'graph.label', default: 'Graph'), id])
        redirect(action: "list")
        return
    }

    def graphData = [] //data points to be used in drawing the graph
    def function = graphInstance.getFunctions()?.first()
    def expression = function?.expression //a mathematical function to evaluate
    def graphDataColumns = [['number', 'x'], ['number', 'f(x)=' + function?.expression]] //types and labels for axes

    //evaluate the function on the stored interval
    for(def varX=function?.xLowerBound; varX<=function?.xUpperBound; varX+=0.1) {
        Calculable calc = new ExpressionBuilder(expression)
                .withVariable("x", varX)
                .build()
        def result = calc.calculate()
        graphData.add([varX, result])
    }

    [graphData: graphData, graphDataColumns: graphDataColumns, expression: expression]
}
And finally add the instance of the Graph domain class to the model.
[graphData: graphData, graphDataColumns: graphDataColumns, expression: expression, graphInstance: graphInstance]
Now let’s make a copy of graph.gsp and call it graphStoredData.gsp, and open it.
Let’s keep it simple for now, by just replacing most of the literal values with the values from graphInstance.
<gvisualization:lineCoreChart elementId="linechart" columns="${graphDataColumns}" data="${graphData}"
    width="${graphInstance.width}" height="${graphInstance.height}" title="${graphInstance.title}" curveType="function"
    hAxis="${new Expando([title: 'x', viewWindowMode: 'explicit', viewWindow: new Expando([min: graphInstance.horizontalLowerBound, max: graphInstance.horizontalUpperBound])])}"
    vAxis="${new Expando([title: 'y', viewWindowMode: 'explicit', viewWindow: new Expando([min: graphInstance.verticalLowerBound, max: graphInstance.verticalUpperBound])])}"/>
<div id="linechart"></div>
Now, by going to the following url, we should see a nice parabola.
Play around and add new instances of Graph and Function. Then come back to the above url to view the results. Note that for now, only the first function stored in the Graph instance will be drawn.
Moving along, let’s create another instance of Function, with the same parameters as the first, but change the expression to:
-tan(x)
Now create a new Graph instance with the same parameters as the first Graph instance, but select Function 2 (and only Function 2) in the “Function” combo box.
View the graph and verify that it renders correctly. (you should not see a parabola this time)
Now let’s put together a multi-function Graph. Create a new Graph instance with the following parameter values:
Vertical Lower Bound: -20
Vertical Upper Bound: 20
Horizontal Lower Bound: -5
Horizontal Upper Bound: 5
Width: 500
Height: 300
Functions (Select both Function 1 and Function 2)
Title: Multi-Function Graph
Go to:
We still only see the first function drawn, because we haven’t done anything to enable multi-function graphing capabilities. For that, let’s revisit the graphStoredData method of the CalculatorController.
We will need to build up the chart columns and the data from the values of the domain classes. To do this, we’ll need to iterate over all of the instances of the Function class and make the necessary calculations. We can then store the results temporarily in a Map and then iterate over them later to create data points that resemble the following (which the Google API expects):
[point on x-Axis, result of function 1, result of function 2, ... result of function n]
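To make that row shape concrete, here is a stand-alone Java sketch (illustrative only; lambdas stand in for the parsed exp4j expressions) that evaluates several functions over a shared set of x values and builds one row per x:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public class MultiFunctionRows {

    // Build rows of the form [x, f1(x), f2(x), ...] over [-1, 1] in
    // steps of 0.5, mirroring the multi-function loop in graphStoredData().
    static List<List<Double>> buildRows(List<DoubleUnaryOperator> functions) {
        List<List<Double>> graphData = new ArrayList<>();
        for (double x = -1.0; x <= 1.0; x += 0.5) {
            List<Double> row = new ArrayList<>();
            row.add(x); // the point on the x-axis
            for (DoubleUnaryOperator f : functions) {
                row.add(f.applyAsDouble(x)); // one result column per function
            }
            graphData.add(row);
        }
        return graphData;
    }

    public static void main(String[] args) {
        // x^2 and sin(x) stand in for stored expressions
        DoubleUnaryOperator square = x -> x * x;
        DoubleUnaryOperator sine = Math::sin;
        List<List<Double>> rows = buildRows(List.of(square, sine));
        System.out.println(rows.get(2)); // prints [0.0, 0.0, 0.0]
    }
}
```

Every row starts with the shared x value and then carries one result column per function, which is exactly what the Google API expects for a multi-series line chart.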
At this point it’s a good idea to remove the expression from the model, since we can no longer guarantee that there will just be one.
Ultimately, the graphStoredData method should end up looking like this:

def graphStoredData(Long id) {
    def graphInstance = Graph.get(id)
    if (!graphInstance) {
        flash.message = message(code: 'default.not.found.message', args: [message(code: 'graph.label', default: 'Graph'), id])
        redirect(action: "list")
        return
    }

    def graphData = [] //data points to be used in drawing the graph
    def functions = graphInstance.getFunctions()
    def graphDataColumns = [['number', 'x']] //types and labels for axes
    def allResults = new ArrayList<HashMap>() //store the results for each function's calculations here

    functions?.each() { function ->
        def results = new HashMap()
        graphDataColumns.add(['number', 'f(x)=' + function?.expression]) //add the type and label for each function
        def expression = function?.expression //the mathematical function to evaluate

        //evaluate the function on the stored interval
        for(def varX=function?.xLowerBound; varX<=function?.xUpperBound; varX+=0.1) {
            Calculable calc = new ExpressionBuilder(expression)
                    .withVariable("x", varX)
                    .build()
            def result = calc.calculate()
            results.put(varX, result) //these results will be used to build up the data points
        }
        allResults.add(results)
    }

    //iterate over the results for each function and add the data points to graphData
    def keySet = new TreeSet(allResults.get(0)?.keySet()) //using TreeSet will sort the keys
    //each key represents a point on the x-Axis
    keySet.each() { key ->
        def finalResults = [] //we will build this up as [x, result 1, result 2, ...result n]
        finalResults.add(key) //the point on the x-Axis
        for(def i=0; i<allResults.size(); i++) {
            def results = allResults.get(i)
            finalResults.add(results.get(key)) //the result of the calculation for each function
        }
        graphData.add(finalResults)
    }

    //add the following to the model; graphData and graphDataColumns are fed into google visualizations
    [graphData: graphData, graphDataColumns: graphDataColumns, graphInstance: graphInstance]
}
Now, if we go back to the url again:
We should see both functions drawn on the graph.
Try adding another Function or two to the Graph instance and viewing it again. It should render correctly.
Finish it up!
The final step in this demonstration application is to bring it all together and incorporate a form into the main page. Most of the necessary changes are pretty trivial. So at this point, I encourage you to download the source code, have a look at it, run the application, and play around with it. You will learn a lot more by doing so. Below is a quick summary of the remaining changes.
1. A new GSP file named createAndDisplay.gsp with the following notable content
a. A <g:select> wired up to Graph.list() directly. The text displayed for each option is the expression of the first Function instance that belongs to the Graph instance. In addition, if the Graph instance has more than one function, an ellipsis (…) is displayed to give the user a hint that the graph drawn will have more than one function.
b. A small form for a Function instance, essentially borrowed straight from the _form.gsp file that was created during the grails generate-all step for the Function domain class.
2. New methods in CalculatorController
Most notably a save() method that creates a new Graph instance with some default parameters. The method then creates a new instance of Function from the parameters supplied on the form in createAndDisplay.gsp. It then attaches that Function instance to the Graph instance and saves it. Finally, it redirects back to createAndDisplay, passing along the new id so that the function can be graphed and rendered in the view. The index() method now redirects to createAndDisplay.
3. Final display changes
a. A div positioned to the right of the form that will display the graph, if the id parameter exists. This is simply done with the <g:include> tag.
<g:if test="${params.id}">
    <g:include controller="calculator" action="graphStoredData" id="${params.id}"/>
</g:if>
b. Navigation links added to createAndDisplay.gsp so that the user can easily access the CRUD functionality for the Graph and Function domain objects.
After all is said and done, we have ourselves a usable function graphing app!
Final Challenge!
Lastly, I would like to present you with a challenge. Build a Combo Chart that shows a curved line which is representative of some mathematical function, and bars that represent the area underneath the curve of that function using the <gvisualization:comboCoreChart/> tag. Nerd out and have fun with it!
Download the source from the project page on bitbucket:
One thought on “DIY Graphing Calculator: Google Visualization Grails Plugin”
a good job.
Very interesting. Could not put it down until I could no longer keep my eyes open anymore.
Concise and clear. As a newbie to groovy and google chart tools, I was able to follow this walk through line by line and implement it on a newly created groovy app.
In the coming week, I’m going to take on your combo challenge.
There is one syntax typo in this page. In the first section, in graph.gsp, the brackets do not match.
<gvisualization:lineCoreChart hAxis="${new Expando([title: 'x')]}" should be <gvisualization:lineCoreChart hAxis="${new Expando([title: 'x'])}"
Thanks Gedion. Let us know how it goes with the challenge!
I’ll try to get that typo fixed.
Fantastic article!
Thank you very much : this post made it so easy to understand how grails works with google API 🙂 | https://objectpartners.com/2013/01/22/diy-graphing-calculator-google-visualization-grails-plugin/ | CC-MAIN-2018-22 | refinedweb | 3,134 | 54.73 |
Introduction :- As you already know, the Internet is emerging as the most widely used medium for performing various tasks, such as online shopping, data exchange and bank transactions. All this information and data needs to be secured against unauthorized access from malicious and illegal sources. For this purpose, we use the authentication and authorization process. You can learn more below about authentication and authorization in an ASP.NET application.

The Membership Service
Normally, we need to write a large amount of code to create forms and user interfaces for authenticating users and displaying the desired page based on the roles or rights given to each user. Because this is very time consuming, Microsoft developed a new series of server controls, called login controls.

To use login controls in your website, you just need to drag and drop them onto the web page.

There is no need to write much code in the code-behind file. The login controls have built-in functionality for authentication and authorization of users. Microsoft introduced these services with ASP.NET 2.0. These controls help you build registration and login applications without writing any code-behind code or database logic.
The membership service is an important feature of ASP.NET that helps you validate and store user credentials.

The ASP.NET membership service helps to implement the following functionality in an application:
- To create new users and passwords
- To store membership information, such as username, password, address, email and supporting data
- To authenticate and authorize the visitors of the website
- To allow users to create and reset passwords
- To create a unique identification system for authenticated users
The following login controls, developed by Microsoft, are used in ASP.NET websites:-

- Login
- LoginView
- LoginStatus
- LoginName
- PasswordRecovery
- ChangePassword
- CreateUserWizard
1.) The Login Control :-

The Login control provides a user interface containing username and password fields. It authenticates the username and password and grants access to the desired services on the basis of the credentials.

There are some methods, properties and events used in this Login control. You can check them manually after dragging and dropping this control onto your web form, as given below:-
2.) The LoginView Control:-
The LoginView control is a web server control that is used to display two different views of a web page, depending on whether the user has logged on as an anonymous user or a registered user. If the user is authenticated, the control displays the appropriate view with the help of the following templates:

- AnonymousTemplate :- This template (the default) is displayed when a user opens the web page without logging in.
- LoggedInTemplate :- This template is displayed when the user is logged in.
- RoleGroups :- This template is displayed when the logged-in user is a member of a specific role (a defined role group).

Note:- You can customize these views by adding server controls, such as Label, HyperLink and TextBox, to the empty region of the LoginView control.
3.) The LoginStatus Control :-
The LoginStatus control indicates whether a particular user has logged on to the website or not. When the user is not logged in, this control displays the Login text as a hyperlink. When the user is logged in, this control displays the Logout text as a hyperlink.

To do this, the LoginStatus control uses the authentication section of the web.config file.

This control provides the following two views:-

- LoggedOut --> Displayed when the user is not logged in (shows the Login link).
- LoggedIn --> Displayed when the user is logged in (shows the Logout link).
4.) The LoginName Control :-
The LoginName control displays the name of the authenticated user. If no user is logged in, this control is not displayed on the page.

This control uses the Page.User.Identity.Name property to return the user's name.

You can drag and drop the LoginName control onto the page from the toolbox as shown below:-
5.) Passwordrecovery Control:-
The PasswordRecovery control is used to recover or reset the forgotten password of a user. This control does not display the password in the browser; instead, it sends the password to the email address that you specified at the time of registration.

This control includes three views, as given below:-

- UserName :- It refers to the view that accepts the username of a user.
- Question :- It accepts the security question asked of the user.
- Success :- It displays a message to the user that the retrieved password has been sent.

Note :-

- To retrieve and reset passwords you must set some properties inside the ASP.NET membership service.
- You can learn about its methods, properties and events from the PasswordRecovery class yourself.
6.) The CreateUserWizard Control :-

This control uses the membership service to create a new user in the membership data store. The CreateUserWizard control is provided by the CreateUserWizard class and can be customized by using template and style properties. Using this control, any user can easily create an account and log in to the web page.
You can drag and drop the CreateUserWizard control onto the web page as shown below:-
7.) The ChangePassword Control:-
Using this control, users can easily change their existing (old) password on the ASP.NET website. This control prompts the user to provide the current password first and then set the new password. If the old password is not correct, then the new password can't be set. This control also helps to send an email to the respective user about the new password. This control uses the ChangePassword class.
There are some steps to use the login control concepts in an ASP.NET application, as given below:-

Step 1 :- First open Visual Studio --> File --> New --> Select ASP.NET Empty Website --> OK --> Open Solution Explorer --> Add a new web form (login.aspx) --> Now drag and drop a Login control and a LoginView control on the page from the toolbox --> Add a HyperLink control in the LoginView's blank space as shown below:-

Step 2 :- Now open Solution Explorer --> Add a new web form (Registrationpage.aspx) --> Drag and drop CreateUserWizard and LoginView controls on the page --> Put a HyperLink control inside the blank space in the LoginView control as shown below:-

- Now select Complete from the CreateUserWizard Tasks as shown above --> Now double click on the Continue button and write the following C# code for navigation as given below:-
C# code:-

using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class Registrationpage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void ContinueButton_Click(object sender, EventArgs e)
    {
        Response.Redirect("login.aspx");
    }
}
NOTE :- You can also set this navigation URL from the properties of the Continue button instead of using the above code.
Step 3 :- Now add a new web form (Welcomepage.aspx) --> drag and drop LoginName, LoginStatus and LoginView controls on the page from the toolbox as shown below:-

Step 4 :- Now add another new web form (changepassword.aspx) --> drag and drop a ChangePassword control on the page from the toolbox as shown below:-

Step 5 :- Now add a new web form (PasswordRecovery.aspx) --> drag and drop a PasswordRecovery control from the toolbox as shown below:-

Step 6 :- Now open the web.config file and write the following code as given below:-
<?xml version="1.0"?>
<!-- For more information on how to configure your ASP.NET application, please visit -->
<configuration>
  <system.web>
    <authentication mode="Forms">
      <forms loginUrl="login.aspx" defaultUrl="Welcomepage.aspx" />
    </authentication>
    <authorization>
      <allow users="*"/>
    </authorization>
    <compilation debug="true" targetFramework="4.0"/>
  </system.web>
</configuration>
Step 7 :- Now run the application (press F5) --> Create an account first to access the website --> Press the Create User button --> After that, press the Continue button as shown below:-

Step 8 :- After step 7, the login.aspx page will be opened --> Now enter login credentials such as username and password --> Press the Login button --> You will see the following output as shown below:-
Note :-
- If you want to change the old password, you can change it. I have already built that into this application.
- If you want to recover your password, you can do that also.
- You can download this whole application from below and run it in Visual Studio.
- In the next tutorial, I will add administrative and role-based security to this application. Learn these concepts first; otherwise you may face some problems in the next tutorial.
- You can also customize this application according to your needs.
For More...
- How to implement XAML concepts in WPF with example
- How to display XML File data in Listbox using LINQ
- How to implement 3 tier architecture concepts in asp.net with real example
- How to run Linq query against array
- How build a software which can perform read,write,append and search operations
- How to use WCF Services in asp.net application
- How to implement caching and Ajx features in asp.net website
- How to create constraints in ADO.NET
- How to use validation control in asp.net website
- How to make composite application and use it in windows form application
Download Attached file
Wikiversity:Colloquium/archives/July 2009
New mediawiki skin
A new skin, called "Vector", is available for testing (go to Special:Preferences). This is a fruit of the Wikimedia Usability Project. More information here. -- (profile|chit chat|email) 23:01, 1 July 2009 (UTC)
VB 6 Course
--DaveFairhall 15:48, 1 July 2009 (UTC)I was following the VB6 tutorial course. But when I arrived at the Lesson page called "Event-Driven Design in VB6" it just ends and there are no more lesson links! Just a link to a computer programming Page which seems to bear no resemblance to the course I was on!!
Can anybody help??
DaveFairhall 15:48, 1 July 2009 (UTC)
- Wikiversity is still very much a work in progress, as you seem to have found out. If you post a URL, I can take a peek and see where the course ends.
- Since WV is a volunteer effort, it's important to get feedback on Learning Resources. It's the only way to know what needs work, what does work and in this case, what's missing. Historybuff 17:42, 1 July 2009 (UTC)
Hi,
Thank you for your speedy response. The page in question is :-
DaveFairhall 08:41, 2 July 2009 (UTC)
Wikimedia Foundation Gets $300K for Wikimedia Commons
To improve usability - good: -- Jtneill - Talk - c 10:11, 2 July 2009 (UTC)
license update
With the implementation of the license update we now have a number of pages that contain outdated or incorrect information. In particular Wikiversity:Copyrights but there are also a number of other pages that need to be proofread and changed. I'm not entirely clear on the terms of the new license and the transition so I'd like to discuss what is needed before proceeding. --mikeu talk 12:10, 6 July 2009 (UTC)
WV teamwork
I'd like to draw attention to Web 2.0 as a great example of collaborative editing here at Wikiversity. There were 10 participants who made great strides in improving this learning resource. See the difference. Another example is Introduction to the Dutch language diff with 4 participants. I'm bringing this up here to start a discussion thread on how we can encourage and foster more cooperatively developed learning resources. --mikeu talk 19:03, 8 July 2009 (UTC)
- Just like the clueless one said! We will create the content! CQ 19:55, 8 July 2009 (UTC)
most wanted
According to page hit statistics Principles of Management was the most viewed mainspace page during June, and it ranked #27 for the year. Stream/Circle was the #6 most viewed page for 2009 with an average of 406 hits/day, which is kind of ironic given that it was (mostly) page blanked from February until just today. [1] --mikeu talk 20:10, 8 July 2009 (UTC)
Mu301: mikeu and jtneill
I've been watching these guys.. following them around.
I think I've even rattled their cage.
Sbs:Iii: Wikiversity:Candidates for Custodianship is a masterpiece.
Time has shown me that v: is resilient.. stable.. civilized. Even helpful.
I know I've posted my share of red link vaporwarish nonesense, but I think there is a method to all of this madness.
CQ 06:51, 10 July 2009 (UTC)
Firefox 3.5 and Wikiversity
I've just suggested an optimization for Wikiversity talk:Introduction edit#Firefox 3.5 and Wikiversity. --JackPotte 20:47, 15 July 2009 (UTC)
file uploads disabled
One of the wikimedia servers has been having problems recently which is currently being fixed. "Uploading and generation of new thumb[nail images] will be temporarily disabled on Wikimedia sites" for a short time while these problems are resolved. The plan is for the work to be completed by 22:00 UTC or so. --mikeu talk 21:21, 15 July 2009 (UTC)
It seems the problems have not been completely fixed. This has been a recurring problem recently. For updates on the Wikimedia server status, please visit: --mikeu talk 01:12, 17 July 2009 (UTC)
Wikiversity on Twitter
Follow. Contribute. -- Jtneill - Talk - c 10:12, 2 July 2009 (UTC)
- Also see the testing that we are doing on the Sandbox Server which includes the Wikiversity Twitter RSS feed in a page. --mikeu talk 15:23, 18 July 2009 (UTC)
The Hunting of the Snark
Project proposal: Research on Henry Holiday's illustrations in Lewis Carroll's The Hunting of the Snark --Snark 18:12, 20 July 2009 (UTC)
about MediaWiki:Copyrightwarning
delete the text in order to confirm the change of licenses given by the foundation and keep the mediawiki-core-system-message.Crochet.david 10:33, 24 July 2009 (UTC)
Fair Use
Is fair use accepted in en.wikiversity or not? Can it be acceptable in beta.wikiversity? --Zepelin 14:19, 21 July 2009 (UTC)
- Can you be more specific, Zep? If you're talking about for images and files, I suggest looking for what you really need on the Wikimedia Commons, since everything there can be linked from Wikiversity anyway. The commons has a quicker more strict policy, so licensing and fair use issues have a larger community dealing with them. Another reason to connect at the Commons first is that we've been deleting redundant resources. Does that help? --CQ 16:27, 22 July 2009 (UTC)
- Thank you for kind reply. I hope to use Commons of course, but book covers or CD covers are banned in commons, but OK in en.wikipedia. And I need to use book cover image to show in beta.wikiversity (now I'm incubating ko.wikiversity in beta). So I hope to know the fair use policy in wikiversity, especially en.wikiversity. --Zepelin 01:32, 23 July 2009 (UTC)
- Hmmm. Fair use and this are all we have so far here at en.wikiversity.org and they relate to US laws. Whatever you do in beta should probably reflect the ko.wikiversity.org site as much as possible so you might want to take a look at this and this. I like the old saying "It's easier to get forgiveness than to get permission." If the book covers help your content and your intentions are "fair" to the book publishers, I don't see much of a problem. The objective is to "empower and engage people around the world to collect and develop educational content under a free content license..." so read Avoid copyright paranoia and then go for it. --CQ 03:49, 23 July 2009 (UTC)
- I see. Sadly ko.wikipedia is following S.Korea laws and fair use is still banned. I don't understand why en.wikipedia is following US laws(not UK / Ireland / Canada / Singapore and so on). Relations of copyright laws in Internet Era and country/language are complexed problem, I think. Anyway I'll follow the ko.wikipedia anyway. Thank you CQ. --Zepelin 09:01, 25 July 2009 (UTC)
- The English projects follow US law because the wikimedia servers are located in the US. Other language projects might limit what is allowed further in order to be compatible with the laws of a country where the language is natively used by the majority in order that the majority of natives might be able to use the work without legal problems. WMF policy allows works under fair use, fair dealings and other similar terms of countries under certain conditions while allowing individual projects to either forbid them completely or restricting under what conditions they can be allowed further. -- darklama 12:03, 25 July 2009 (UTC)
re-read {{Image copyright}}
Can someone re-read the template to find errors in spelling and grammar. Thanks. Crochet.david 09:25, 24 July 2009 (UTC)
- I found no obvious errors in spelling and grammar, but can it be simplified? If you can program a bot, it seems like you could automate some help getting information into a file description. I wish I knew how to write a script or program a bot. (I'm being facetious) Can you help at Wikiversity:Templates? --CQ 06:27, 26 July 2009 (UTC)
OER Search Discovery
"Open Educational Resources are a little tough to really define to everyone’s satisfaction, but we can defer the details. :) We’re generally talking about pedagogical materials (something that could be put to use in the classroom to teach students) available under some sort of open content license."
"On the content creation side, we can provide more ways to add useful metadata to our pages, making it easier for teachers and students searching through educational-themed portals to find them. MediaWiki already provides basic language and license information, but projects like WikiBooks and wikiversity (as well as other MediaWiki users like WikiEducator) could definitely benefit from a consistent way to specify the subject and target audience of lesson modules."
Read more at techblog.wikimedia.org
--mikeu talk 13:58, 11 July 2009 (UTC)
- On a related note, I'm looking for participants here at Wikiversity, at WikiEducator and at Google's knol site who wish to develop Geography learning objects that play off of shared google maps. See Introduction to Learning Objects for an overview and some discussion about metadata. --CQ 14:00, 19 July 2009 (UTC)
- OK. I updated OER Commons and Talk:OER Commons to include the recent discoveries we've been talking about here and below. --CQ 21:26, 26 July 2009 (UTC)
Election Notice
Ladies and Gentlemen,
As you may be aware, there is concern that the site notices:20, 19 July 2009 (UTC)
- FYI: The main page for the elections is: 2009 elections to the Wikimedia Board of Trustees. --mikeu talk 10:44, 19 July 2009 (UTC)
- The voting begins 28 July. To vote there are minimum requirements for having at least 600 edits before 1 June 2009 and at least 50 edits made between 1 January - 1 July 2009. See information for voters for details. Once the voting starts you should go the wikimedia project where you are most active and use Special:Securepoll to vote. Feel free to ask here if you have any questions. --mikeu talk 23:48, 26 July 2009 (UTC))
- Is it possible to get Wikiversity mapped into the interwiki map? That would be handy especially for translation efforts. Is there an interwiki link shortcut for strategy.wikimedia.org yet? CQ 21:16, 29 July 2009 (UTC)
- Created/added link to this new wiki at Wikiversity:Strategy. -- Jtneill - Talk - c 00:02, 30 July 2009 (UTC)
- The process is interesting. I hinted at Wikiversity:Strategy at the Village Pump there. CQ 01:38, 30 July 2009 (UTC)
User:Sfan00_IMG/Wikiversity_All_Subject_Original_Research_Desk
I'd started a page with some possible ideas for research projects Wikiversity contributors too could help guide or advise on.
Any thoughts? Sfan00 IMG 15:12, 30 July 2009 (UTC)
I was a bit disappointed due to the incompleteness of Wikiversity results listed in. I've emailed about this via their feedback link. Others might consider doing the same - do your favourite WV resources show in the DiscoverEd search? -- Jtneill - Talk - c 10:31, 23 July 2009 (UTC)
- Wikiversity content for the most part is "not ready for primetime" anyway. That is, of course, my opinion, and subject to debate. WikiEducator does a whole lot better on the CC-OER search. Even adding "wikiversity" to the search term doesn't help. At least our "Bloom Clock" did OK ranking second. I think our Technical writing and Web design units are pretty good, but they don't show at all. Even searching "technical writing wikiversity" yields this and "web design wikiversity" yields this.
- My disappointment is in Wikiversity's development as a learning community with all of the resources it began with as a promising part of Wikimedia. Quality must improve or Wikiversity doesn't stand a chance. Discipline on the part of contributors and dialogue as a community in my judgment are severely deficient. Merit (and search engine positioning) has to be earned through concerted effort. --CQ 13:52, 23 July 2009 (UTC)
- Actually the first reference to the bloom clock is also a link to Wikiversity ;-). The quality issue is a big one, of course, but at least part of the solution (IMO) would involve clearing out a lot of the abandoned stubs and then working to organize the more promising content. We were making headway a year ago on that task, but the most active contributor to the effort (McCormack) left the project, and I'm not sure whether he left any notes about the structure of that reorganization. --SB_Johnny talk 15:44, 23 July 2009 (UTC)
- "learning community" gets a token link on the main page, then 99% of the main page is an attempt to pretend that Wikiversity is Wikipedia. Wikipedia has always enforced the rule that the Wikipedia project is about its article content and that Wikipedia is not about its participants. Wikiversity needs to put its participants first and promote the "learn by doing" approach that Wikiversity was founded on. CQ: if you want a viable Wikiversity learning community then we should start by dumping the current Wikiversity main page. --JWSchmidt 15:57, 23 July 2009 (UTC)
Among other things these comments relate to an interesting dilemma - is a good WV resource one that is "complete" and high quality - or could it be one that it is rather "incomplete" and inviting of active participation? I suspect and hope WV is about both these options and more. DiscoverEd is beta and could do with input - hopefully it could do a better crawl of WV (e.g., I searched for some of our "complete" and "quality" resources and didn't find them listed) and then hopefully it could have a useful ranking algorithm to approximately reflect relevance/quality in comparison to other OERs. -- Jtneill - Talk - c 03:07, 24 July 2009 (UTC)
- Do we know how they're calculating? I.e., is it the amount of content, the amount of contributors, length of time? We also have a couple of projects that are in some ways largish but are exclusively in the user namespace. --SB_Johnny talk 09:07, 24 July 2009 (UTC)
- CC-OER-search is Nutch and is open source. Have a look at the Nutch FAQ. It gets more technical as it goes and links out to all sorts of tasty morsels. The FAQ is a little outdated so some of the links are 404s but the devs have provided a nice overview here. If you want to delve, follow the hints, such as the Lucene Similarity class. --CQ 11:55, 25 July 2009 (UTC)
"Where do the results come from?"
"Results come from institutional and third party repositories who have expended time and resources curating the metadata. These curators either create or aggregate educational resources and maintain information about them." [2]
- For example: Astronomy Project likely shows up at #2 on a search for Wikiversity on DiscoverED because somebody entered metadata here at OER Commons. The #1 hit is Storyboard Artwork Project which has metadata here and the Bloom clock is here. All those OER Commons pages are feeding to DiscoverED; they are not crawling our site like Google. You might want to take a closer look at my post above about OER Search Discovery. The DiscoverED site is not likely to include results from wv unless we have metadata on our server, or somebody else creates metadata about our learning resources somewhere else. --mikeu talk 04:29, 26 July 2009 (UTC)
- I received an email reply to my query: " The issue has been logged at under the DiscoverEd category. Please let us know any future issues you may discover. Also, if you have any inclusion requests regarding missing feeds, feel free to add them to." -- Jtneill - Talk - c 23:25, 29 July 2009 (UTC)
- I received further reply that Wikiversity is not providing a suitable feed - does this make sense to anyone? How can we provide a suitable feed? -- Jtneill - Talk - c 15:02, 1 August 2009 (UTC)
- We need to submit OPML formatted feed(s): -- Jtneill - Talk - c 15:08, 1 August 2009 (UTC)
- Oh. hey. lookie! There's a Browse: Collection: Wikiversity page with the whole ball of wikiversity wax. --CQ 05:27, 26 July 2009 (UTC)
Apology
I make a full and honest apology to the wikimedia community for my kick-ass stupidity. Maybe I'm banned across Wikimedia. But I certainly was on Wikiquote, wikisource and wikibooks! Apologies to everyone. --Poetlister 2BY24 16:27, 20 July 2009 (UTC)
I accept your apologies =) --195.229.242.58 17:31, 10 August 2009 (UTC)
What did you do? -- Jtneill - Talk - c 23:28, 10 August 2009 (UTC)
- See w:Wikipedia:Wikipedia Signpost/2008-09-15/Poetlister. The person making the comment is almost certainly not Poetlister.--SB_Johnny talk 08:31, 14 August 2009 (UTC)
I accept it, now you need to do something to me for the acceptance. Enlil Ninlil 06:03, 14 August 2009 (UTC) | https://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/July_2009 | CC-MAIN-2019-43 | refinedweb | 2,905 | 62.78 |
MSP Object instantiation crash
I’m working on an MSP object. I copied the project from a known-good Xcode project and I’ve compared it with the examples, but I keep getting obex errors when I try to load it into Max which, from what I’ve read, usually indicate that there’s something amiss in the setup. The problem is that I’m getting this with a very straightforward setup.
I went through the Xcode project and swapped over everything that had the name of the old project in it, and I’ve cleaned, etc. Can anyone provide some insight?
Here’s the relevant code:
#include "ext.h"
#include "ext_obex.h"
#include "z_dsp.h"
static t_class *foo_class;
typedef struct _foo {
t_pxobject m_obj;
// rest of the structure’s fields
} t_foo;
// Prototypes
t_int *foo_perform(t_int *w);
void foo_dsp(t_foo *x, t_signal **sp, short *count);
void *foo_new(void);
void foo_mode(t_foo *x, int mode);
void foo_assist(t_foo *x, void *b, long m, long a, char *s);
void foo_float(t_foo *x, double f);
int main(void)
{
// NEW METHOD
t_class *c;
c = class_new("foo~", (method)foo_new, (method)dsp_free, sizeof(t_foo), NULL, 0);
class_dspinit(c); // new style object version of dsp_initclass();
class_addmethod(c, (method)foo_dsp,"dsp", A_CANT, 0);
class_addmethod(c, (method)foo_mode, "mode", A_LONG, 0);
class_addmethod(c, (method)foo_float, "float", A_FLOAT, 0);
class_addmethod(c, (method)foo_assist, "assist", A_CANT, 0);
class_register(CLASS_BOX, c); // register class as a box class
foo_class = c;
return 0;
}
void *foo_new(void) {
t_foo *c = (t_foo *) object_alloc(foo_class);
dsp_setup((t_pxobject *)c,3); // Three inlets: input, freq, res
// Always crashes here…
// Create outlet
outlet_new((t_object *)c, "signal"); // Create outlet
//c->freq = 60.f;
//c->res = 0.5;
return (c);
}
t_int *foo_perform(t_int *w) {
return (w+6)
}
void foo_dsp(t_foo *x, t_signal **sp, short *count) {
x->smprate = sp[0]->s_sr;
x->vecSize = sp[0]->s_n;
x->invsmp = 1./x->vecSize; //
x->invSR = 1.f/sp[0]->s_sr;
dsp_add(foo_perform, 5, sp[0]->s_vec, sp[1]->s_vec, sp[2]->s_vec, sp[3]->s_vec,x);
}
Here’s the crash log:
0 com.cycling74.MaxMSP 0x000b7e0c class_obexoffset_get + 6
1 com.cycling74.MaxMSP 0x000b8b8f object_obex_get + 29
2 com.cycling74.MaxMSP 0x000b8beb object_obex_lookup + 39
3 com.cycling74.MaxAPI 0x00ebb9d2 object_obex_lookup + 45
4 com.cycling74.MaxAudioAPI 0x0e591677 z_dsp_setup + 68
5 com.apple.PM_foo_ 0x0f7fbcd2 foo_new + 50
I did, but to no avail. Thanks, anyways!
OK, now that I think on it, that was a quite silly idea. I created a new project and compiled your submitted code, and I can report that the code itself compiles and runs fine.
According to the crash log, it crashes on
dsp_setup when it tries to guess the location of the obex member of your struct with
class_obexoffset_get (which AFAIK calls the ANSI C
offsetof macro). Since this seems to be (at least for me) some sort of aligning issue with the
t_foo struct, it gives the impression that
t_foo actually contains more fields than those you have posted. Could you please verify this?
Also, did you try copying an SDK-example (after verifying that that particular example compiles fine, of course) and just replacing its code with your one?
It seems an interesting bug, though.
Best,
Ádám
I spent some time debugging it today. You are correct, I have quite a few more fields in the struct. I have two tables in the struct:
float tab[4096];
float ftom_tab[4096];
and the code compiles when I remove the second one. (!?) I went through it methodically, disabled variables one at a time, and that definitely makes the difference between crash and no crash.
I’m new to C (coming from Java), so I suspect I’m making the kind of mistakes that Java won’t allow you to make. It’s probably time to go brush up on pointers…
That’s correct. Allocating in the yourobject_new() method, freeing in the yourobject_free() method as mentioned by vanille béchamel is the way to go.
Good to know, and many thanks for everyone who helped me! I’ll try it this afternoon.
Thanks everyone for the contributions. This did the trick. As a brief follow-up to vanille béchamel’s post: (and for anyone who’s searching this later…)
void myobject_free (t_myobject *x)
{
dsp_free((t_pxobject *)x); // Call this first to prevent max from crashing when freeing while DSP is on
sysmem_freeptr (x->freq);
}
Forums > Dev | https://cycling74.com/forums/topic/msp-object-instantiation-crash/ | CC-MAIN-2015-22 | refinedweb | 718 | 63.39 |
11.5: Fixing MyHashMap
- Page ID
- 12796
The problem with MyHashMap is in size, which is inherited from MyBetterMap:
public int size() { int total = 0; for (MyLinearMap<K, V> map: maps) { total += map.size(); } return total; }
To add up the total size it has to iterate the sub-maps. Since we increase the number of sub-maps, k, as the number of entries, n, increases, k is proportional to n, so size is linear.
And that makes put linear, too, because it uses size:
public V put(K key, V value) { V oldValue = super.put(key, value); if (size() > maps.size() * FACTOR) { rehash(); } return oldValue; }
Everything we did to make put constant time is wasted if size is linear!
Fortunately, there is a simple solution, and we have seen it before: we have to keep the number of entries in an instance variable and update it whenever we call a method that changes it.
You’ll find my solution in the repository for this book, in MyFixedHashMap.java. Here’s the beginning of the class definition:
public class MyFixedHashMap<K, V> extends MyHashMap<K, V> implements Map<K, V> { private int size = 0; public void clear() { super.clear(); size = 0; }
Rather than modify MyHashMap, I define a new class that extends it. It adds a new instance variable, size, which is initialized to zero.
Updating clear is straightforward; we invoke clear in the superclass (which clears the sub-maps), and then update size.
Updating remove and put is a little more difficult because when we invoke the method on the superclass, we can’t tell whether the size of the sub-map changed. Here’s how I worked around that:
public V remove(Object key) { MyLinearMap<K, V> map = chooseMap(key); size -= map.size(); V oldValue = map.remove(key); size += map.size(); return oldValue; }
remove uses chooseMap to find the right sub-map, then subtracts away the size of the sub-map. It invokes remove on the sub-map, which may or may not change the size of the sub-map, depending on whether it finds the key. But either way, we add the new size of the sub-map back to size, so the final value of size is correct.
The rewritten version of put is similar:
public V put(K key, V value) { MyLinearMap<K, V> map = chooseMap(key); size -= map.size(); V oldValue = map.put(key, value); size += map.size(); if (size() > maps.size() * FACTOR) { size = 0; rehash(); } return oldValue; }
We have the same problem here: when we invoke put on the sub-map, we don’t know whether it added a new entry. So we use the same solution, subtracting off the old size and then adding in the new size.
Now the implementation of the size method is simple:
public int size() { return size; }
And that’s pretty clearly constant time.
When I profiled this solution, I found that the total time for putting n keys is proportional to n, which means that each put is constant time, as it’s supposed to be. | https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Think_Data_Structures_-_Algorithms_and_Information_Retrieval_in_Java_(Downey)/11%3A_HashMap/11.05%3A_Fixing_MyHashMap | CC-MAIN-2021-10 | refinedweb | 509 | 70.94 |
Home Information Classes Download Usage Mail List Requirements Links FAQ Tutorial
STK TCP socket server class. More...
#include <TcpServer.h>
STK TCP socket server class.
This class provides a uniform cross-platform TCP socket server interface. Methods are provided for reading or writing data buffers to/from connections.
TCP sockets are reliable and connection-oriented. A TCP socket server must accept a connection from a TCP client before data can be sent or received. Data delivery is guaranteed in order, without loss, error, or duplication. That said, TCP transmissions tend to be slower than those using the UDP protocol and data sent with multiple write() calls can be arbitrarily combined by the underlying system.
The user is responsible for checking the values returned by the read/write methods. Values less than or equal to zero indicate a closed or lost connection or the occurence of an error.
by Perry R. Cook and Gary P. Scavone, 1995–2014.
Extract the first pending connection request from the queue and create a new connection, returning the descriptor for the accepted socket.
If no connection requests are pending and the socket has not been set non-blocking, this function will block until a connection is present. If an error occurs, -1 is returned. | https://ccrma.stanford.edu/software/stk/classstk_1_1TcpServer.html | CC-MAIN-2014-23 | refinedweb | 209 | 57.77 |
L0pht Security Advisory
Advisory released Jan 27 1997
Application: Solaris libc getopt(3)
Vulnerability Scope: Solaris 2.5 distributions
Severity: Non-priveledged users can exploit a vulnerability
in the getopt(3) routine inside libc. As most SUID programs
in Solaris are dynamically linked, users can gain root
priveledges.
Author: mudge_at_l0pht.com
Overview:
A buffer overflow condition exists in the getopt(3) routine. By supplying
an invalid option and replacing argv[0] of a SUID program that uses the
getopt(3) function with the appropriate address and machine code instructions,
it is possible to overwrite the saved stack frame and upon return(s) force
the processor to execute user supplied instructions with elevated permissions.
Description:
While evaluating programs in the Solaris Operating System environment
it became apparent that changing many programs trust argv[0] to never
exceed a certain length. In addition it seemed as though getopt was
simply copying argv[0] into a fixed size character array.
./test >>& ccc
Illegal instruction (core dumped)
Knowing that the code in ./test was overflow free it seemed that the problem
must exist in one of the functions dynamically linked in at runtime through
ld.so. A quick gander through the namelist showed a very limited range of
choices for the problem to exist in.
00020890 B _end
0002088c B _environ
00010782 R _etext
U _exit
00010760 ? _fini
0001074c ? _init
00010778 R _lib_version
000105ac T _start
U atexit
0002088c W environ
U exit
0001067c t fini_dummy
0002087c d force_to_data
0002087c d force_to_data
000106e4 t gcc2_compiled.
00010620 t gcc2_compiled.
U getopt
00010740 t init_dummy
00010688 T main
Next we checked out getopt() - as it looked like the most likely
suspect.
#include <stdio.h>
main(int argc, char **argv)
{
int opt;
while ((opt = getopt(argc, argv, "a")) != EOF) {
switch (opt) {
}
}
}
>gcc -o test test.c
>./test -z
./test: illegal option -- z
Note the name it threw back at the beggining of the error message. It was
quite obvious that they are just yanking argv[0]. Changing argv[0] in
the test program confirms this.
for (i=0; i< 4096; i++)
buffer[i] = 0x41;
argv[0] = buffer;
With the above in place we see the following result:
>./test -z
[lot's of A's removed]AAAAAAAAA: illegal option -- z
Bus error (core dumped)
By yanking out the object file from the static archive libc that is supplied
with Solaris our culprit was spotted [note - we assumed that libc.a was
built from the same code base that libc.so was].
> nm getopt.o
U _dgettext
00000000 T _getopt
00000000 D _sp
U _write
00000000 W getopt
U optarg
U opterr
U optind
U optopt
U sprintf
U strchr
U strcmp
U strlen
Here we see one of the infamous non-bounds-checking routines: sprintf();
More than likely the code inside getopt.c looks something like the following:
getopt.c:
char opterr[SOMESIZE];
...
sprintf(opterr, argv[0]...);
Thus, whenever you pass in a non-existant option to a program that uses getopt
you run into the potential problem with trusting that argv[0] is smaller
than the space that has been allocated for opterr[].
This is interesting on the Sparc architecture as getopt() is usually called
out of main() and you need two returns [note - there are certain situations
in code on Sparc architectures that allow you to switch execution to your
own code without needing two returns. Take a look at the TBR for some
enjoyable hacking] due to the sliding register windows. Some quick analysis
of SUID programs on a standard Solaris 2.5 box show that most of these
programs exit() or more likely call some form of usage()-exit() in the
default case for getopt and thus are not exploitable. However, at least
two of these programs provide the necessary returns to throw your
address into the PC :
passwd(1)
login(1)
On Solaris X86 you do not need these double returns and thus a whole world of
SUID programs allow unpriveledged users to gain root access:
(list of programs vulnerable too big to put here. sigh.)
Exploit:
$./exploit "/bin/passwd" 4375 2> foo
# id
uid=0(root) gid=1(other)
[ note: the source code for the exploit will be made available on the page in a couple of days. Hey, we have
day jobs and sometimes spare time is impossible to come by. ]
Fixes:
For those with source:
If you are one of the few people who have a source code license the fix
should be fairly simple. Replace the sprintf() routine in getopt.c with
snprintf() and rebuld libc.
Super Ugly kludge fix:
If you don't have the source code available (like most of us), one solution
is to use adb to change the name for getopt with something like getopz,
yank a publicly available getopt.c, and put it in place of getopt.
If anyone can tell me how to yank the object files out of dynamically
linked libraries it would be appreciated as you suffer performance
hits among larger problems by doing this from the static library Sun
provides as, of course, it is not PIC code.
Thanks:
Special thanks go out to ][ceman for his co-work on this project.
mudge_at_l0pht.com
---
--- | http://seclists.org/bugtraq/1997/Jan/0090.html | crawl-002 | refinedweb | 864 | 67.69 |
Last week I wrote about using AWS's Machine Learning tool to build your models from an open dataset. Since then, feeling I needed more control over what happens under the hood – in particular over which kinds of models are trained and evaluated – I decided to give Microsoft's Azure Machine Learning a try.
In case you’re not yet familiar with Microsoft Azure, we are talking about a Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) solution created by Microsoft back in 2010. Cloud Academy has some excellent courses introducing you to the platform.
Azure Machine Learning was launched in February this year, and was immediately recognized as a game changer in applying the power of the cloud to solve the problem of big data processing.
Since I'm new to the Azure world, I decided to start my 1-month free trial (which includes $150 of credit). My dashboard was up and running within a few minutes, during which I gladly took a quick tour of the dashboard features.
After just a few clicks I discovered that Azure Machine Learning offers a fairly independent environment to work on. To be precise, it’s called ML Workspace and is where all your ML-related objects live, although we will still be able to monitor our ML web services directly from the general Azure dashboard.
This was quite simple: every time you want to create something you’ll find a “+ New” button on the bottom left part of your dashboard. Here I looked for “Machine Learning” in the “Data Services” section. The creation process is straightforward: provide names for the Workspace and for the new storage account.
You can always access your Azure Machine Learning Studio, which is where I spent most of my time. The workspace UI is coherent and clear, although the dependency between items might not always be immediately evident. Let’s briefly describe what kind of items you’re going to work on.
As with my previous article about Amazon Machine Learning, I will be using an open dataset, specifically designed for Human Activity Recognition (HAR). It contains 10,000 records, each defined by 560 features (input columns), plus one target column that represents the activity type we want to classify. The authors have manually labeled each row with one of six possible activity types, which means we are trying to solve a multiclass problem.
Once again, we’ll be manipulating the raw dataset to create one single csv file (see this Python script), although we could also have used one of these formats as input for our Dataset object:
But those aren't the only ways to bring data into Azure Machine Learning: we might also use a special Reader object in our experiments to retrieve data from a number of external sources.
OK, let’s import our csv file (without a header row) and create a new Dataset object. This step might take a while, based on your network speed, but you can continue working on your experiment.
Unexpectedly, working on Experiments is quite fun. As soon as you create your first blank experiment, you’re shown a very user-friendly UI with a lot of building blocks to link together in order to train and evaluate your model.
You have full control over what’s going on and each input and processing phase can be configured or removed at any time.
First things first: I dragged in my Dataset object (I named it “HAR dataset (csv)”, as you can see in the screenshot above) and, of course, not much happened. This is when I had to decide which model to use.
You can find all the available models in the “Machine Learning > Initialize model” menu on your left. Here I selected “Classification” and focused on the Multiclass models (Decision Forest, Decision Jungle, Logistic Regression, and Neural Network).
For the purpose of this demonstration, let's assume I directly chose "Logistic Regression" although, in truth, I came to this conclusion only after training and evaluating both Logistic Regression and Neural Network (as I will show at the end of this article).
Note: when you drag the “Logistic Regression” block into your experiments, in practice you are only creating a model configuration object. It will contain all your model parameters, which you can always edit, even after running an experiment.
In order to actually train a model, a “Train Model” object is needed. You can find it in the “Machine Learning > Train” menu.
Each block can be connected to a set of inputs and outputs (the experiment won't even run if any required connection is left empty). Our Train Model block takes as input an Untrained Model (our model config) and a Dataset object. The UI will really help you, as you can see the input/output type by simply hovering your mouse over the connection slot.
We’re not yet done with training. We still need to let our new block know which is the target column we’re trying to predict.
On the Train Model properties sidebar, I can click on "Launch column selector" and choose the "Col1" variable as the target column. Note that, since we uploaded a header-less csv file, our columns are not named, but automatically receive an incremental number.
But it’s not going to be quite so simple. We want to create a model and then be able to understand how it performs. A general solution is splitting your dataset into two random sets (let’s say 70/30), using 70% of your dataset to train a robust model, and then testing it against the remaining 30%. This is to ensure that your model behaves well with new data.
Not a big deal. We can just add a new "Split" block. It takes only one input (our Dataset object) and gives you two output links, based on the "Fraction of rows" parameter. You can try to tune it across several experiment runs and see what changes (maybe 60/40? or 50/50?).
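Conceptually, the Split block is doing something like the following plain-Python sketch (purely to illustrate the idea; Azure ML handles this for you inside the block):

```python
import random

def split_rows(rows, fraction=0.7, seed=42):
    """Randomly partition rows into (train, test) by the given fraction."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # shuffle deterministically for repeatability
    cut = int(len(rows) * fraction)
    return rows[:cut], rows[cut:]

train, test = split_rows(range(100), fraction=0.7)
print(len(train), len(test))  # 70 30
```

The fixed seed mirrors the "Random seed" option you'll find on the Split block's properties, which makes an experiment run reproducible.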
Now, what can we do with our “Train Model” block output? It is, in fact, a trained model, so we’ll put it to work. We add a “Score Model” block to verify that it can take the remaining 30% of our dataset and generate actual predictions.
Once everything is configured, we can finally run our experiment (yes, by clicking the big “Run” button at the bottom!). Incredibly, it takes just a few seconds (about ten seconds with this dataset). Now you can quickly inspect the results by clicking on the output of our new “Score Model” block, and then on “Visualize”.
Here we can find the predicted class for each of our 3,090 rows, together with statistics about each column, including mean, median, min, max, standard deviation, and a graphical distribution of its values (histogram).
Of course, manually reading through 3,000 rows is not how we evaluate our model: we’ll need to add one last “Evaluate Model” block (“Machine Learning > Evaluate” menu).
We’ll quickly re-run the experiment, click on the evaluation block output, and then “Visualize”.
Here we find very useful data about our model accuracy, precision, and recall. More intuitively, you can graphically visualize how the 6 possible classes have been classified in the testing set by looking at the Confusion Matrix. I got pretty good results using Logistic Regression, ranging from 94% to 100% precision. You may also notice particular patterns, for example, those “squares” around the classes 1, 2, 3 and 4 and 5. This happens when classes are, indeed, confused by your model. In our case 1, 2, and 3 are the “walking” classes, while 4 and 5 are “sitting” and “standing”. We expected to have some problems with their predictions the way we had with Amazon Machine Learning. With Amazon Machine Learning, however, we achieved a 100% precision for the “laying down” activity!
In order to finally use our model for real-time predictions, we need to expose it as a Web Service. In fact, Web Services can be created for the result of an experiment inside Azure Machine Learning. Luckily, converting the already created experiment to your first Web Service is fairly painless, requiring only a click on “Create Scoring Experiment”. You can also enjoy a beautiful animation by watching this video if you just can’t wait for your own.
One thing I didn’t notice at first is that every experiment has a “Web Service switch”. Eventually, you can define your training/testing procedure and your web service within the same experiment, switching back and forth between the two visualizations (they simply highlight/blur some input/output blocks).
We’re almost there. There’s just one more task before publishing our web service: we need to name “Web Service Input” and “Web Service Output” blocks. I named mine “record” and “result.” These names will be used as JSON fields in the web service response.
We can finally run the experiment one last time and click on “Publish Web Service”.
Azure Machine Learning automatically creates an ad-hoc API key and a default endpoint for your published web service. That really is all you’ll need. At the very bottom of your web service’s public documentation page, you’ll find sample code for C#, Python, and R.
Here is the simple Python script I coded based on the default one:
import urllib2 import json #web service config webservice = 'YOUR_WS_URL' api_key = 'YOUR_WS_KEY' #dataset config labels = { '1': 'walking', '2': 'walking upstairs', '3': 'walking downstairs', '4': 'sitting', '5': 'standing', '6': 'laying' } n_classes = len(labels) n_columns = 562 #read new record from file with open('record.csv') as f: record_str = f.readline() #POST data input_data = { "Inputs": { "record":{ "ColumnNames": ["Col%s" % n for n in range(1,n_columns+1)], #dynamic "Values": [ ['0'] + record_str.split(',') ] #leading zero }, }, "GlobalParameters": {} } #request config headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+api_key)} req = urllib2.Request(webservice, json.dumps(input_data), headers) try: data = json.loads(urllib2.urlopen(req).read()) result = data['Results']['result']['value']['Values'][0] #too deep? label = result[-1] #last item is our classified class stats = result[-(n_classes+1):-1] # M probabilities, one for each class print("You are currently %s. (class %s)" % (labels[label], label) ) for i,p in enumerate(stats): print("Scored Probabilities for Class %s : %s" % (i+1, p)) except urllib2.HTTPError as e: print("The request failed with status code: " + str(e.code)) print(e.info()) print(json.loads(e.read()))
I did notice that Azure Machine Learning causes a lot of data to travel a great deal across the network. The POST request is pretty heavy of course since we are sending more than 500 input values (with their auto-generated variable name).
What surprised me is how unnecessarily heavy their response format – a JSON table – is, since it contains the input record we’ve just sent, together with the predicted class and all the other classes’ reliability measure. Indeed, in our case, it contains about 18KB of data and the HTTPS call takes about seven seconds (versus the two seconds we experienced with Amazon Machine Learning).
UPDATE: as our reader, Dmitry clarified with his comment below, we can easily reduce the size of every response by placing a “Project Columns” module right before the “Web Service Output” one.
This way – projecting only “all scores” and “all labels” – I managed to cut the response size down to only 560B, although the response time is still about 7 seconds.
{ "Results": { "result": { "type": "table", "value": { "ColumnNames": [ "Col1", "Scored Probabilities for Class \"1\"", "Scored Probabilities for Class \"2\"", "Scored Probabilities for Class \"3\"", "Scored Probabilities for Class \"4\"", "Scored Probabilities for Class \"5\"", "Scored Probabilities for Class \"6\"", "Scored Labels" ], "ColumnTypes": [ "Numeric", "Numeric", "Numeric", "Numeric", "Numeric", "Numeric", "Numeric", "Categorical" ], "Values": [ [ "0", "0.00638132821768522", "0.970375716686249", "0.0132338870316744", "0.000609736598562449", "0.00881590507924557", "0.000583554618060589", "2" ] ] } } } }
Anyway, what we’re after is the “Scored Labels” column (the very last one!). It contains an integer that represents the predicted class for the given record.
Furthermore, the M previous columns contain “Scored Probabilities for Class N”: as already mentioned for AWS ML, these values might help you take more advanced decisions in case the selected prediction by your model is not reliable enough.
Note: there exists an Azure SDK for Python – and even a read-only Azure Machine Learning client – but it can’t be used to access your Web Service. I can see how my new web service is not particularly hard to query, but honestly, I would rather access it with a well structured and tested client than using raw http(s) requests and JSON parsing.
If your model works fine and you (and your team) have access to its endpoint and API key, your web service will start serving predictions on new data. You may want to monitor its load.
You can find all your Azure Machine Learning web services on the general Azure dashboard, inside the “Web Services” section of your Azure Machine Learning Workspace.
Here you can find a detailed visualization of all the requests (predictions), for each configured endpoint.
Furthermore, you can create new endpoints and configure their Throttle level and Diagnostics Trace level. Of course, this can be used to monitor separately or to grant/remove temporary access to your Web Service. Note that you can’t delete your default endpoint. For example, I have created a “dev” endpoint with a “Low” Throttle level and “All” Diagnostics Trace level, to be used for development, so it won’t be confused with production loads and statistics.
As promised, I’m going to discuss how to create an experiment to train and test more than one model at once.
Basically, you just have to add more parallel blocks to your diagram and use the output of your first blocks multiple times. Once you have configured and scored two different models, you can reuse the Evaluate Model block, which can take an additional Scored Dataset object as input.
Let’s see how I trained and evaluated both Logistic Regression and Neural Network models with the very same dataset.
While my goal was to reproduce the same ML model I created on AWS last week, I’ve found out that Azure Machine Learning is much more than just Machine Learning as a Service. You can, in practice, create any data science workflow, using a ready-to-use set of modules to manipulate and analyze your data, including:
Once again, you can learn how to use all these amazing tools by exploring the Azure Machine Learning Gallery. You can even add your own experiments to the list.
If you want to get learn more on Azure Machine Learning, this is your go-to learning path: Introduction to Azure Machine Learning....
What is Amazon Machine Learning and how does it work"Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology.”UPDATES: I've published a new hands-on lab on Cloud Academy! You can give it a try for free and st... ... | https://cloudacademy.com/blog/azure-machine-learning/ | CC-MAIN-2019-18 | refinedweb | 2,545 | 59.53 |
This tutorial covers how to build a fullstack application that allows users to sign up or login, then post tweets to a global feed. You can find the code for the completed app here.
A demo of what we'll be building is currently deployed at fullstack-twitter.onrender.com
Before we get started, make sure you have node and yarn installed.
Note: `prisma migrate` isn't ready for production use. Use Hasura instead.
First, create a new npm project
```shell
mkdir fullstack-twitter-clone
cd fullstack-twitter-clone
npm init -y
```
Now, add dependencies for next.js and react, as well as some typed development dependencies
```shell
yarn add next react react-dom
yarn add --dev typescript @types/react @types/node
```
Now, we create the designated `pages` directory that Next.js uses for file-based routing.

```shell
mkdir pages
```
Every file within the `pages` directory is compiled into its own route, so `index.tsx` can be visited at `/`, `about.tsx` at `/about`, and so on.

Let's add the following component to our first page, `index.tsx`.
```tsx
// pages/index.tsx
export default () => <div>hello, world!</div>
```
Now, run the Next.js development server

```shell
npx next
```

and visit http://localhost:3000 to see our first component in action. We should have a barebones unstyled webpage with "hello, world!" in the top left.
Now that our React code has the client up and running, let's use Next.js's API routes to write a backend handler in the designated `api` directory within `pages`.

```shell
mkdir pages/api
```
Create a file, `feed.ts`, within the `api` directory and write a simple function that returns an (empty, for now) feed.

```ts
// pages/api/feed.ts
export default (req, res) => res.json({ feed: [] })
```
Head to http://localhost:3000/api/feed and you should see some JSON in your browser.

```json
{ "feed": [] }
```
Let's make our feed more interesting by adding some fake tweets to
feed.ts
// pages/api/feed.ts export default (req, res) => { const feed = [ { text: "Wow not having to configure and transpile typescript is one of the best parts of next.js", author: { username: "john" }, }, { text: "I'm a firm believer that dark mode should be a universal default on the web", author: { username: "jill" }, }, ] res.json(feed) }
Bonus: you can take a sneak peek at the feed endpoint of the production app, which our endpoint will eventually build up to, at fullstack-twitter.onrender.com/api/feed
Visit your browser again and you should see the fake tweets being rendered as raw JSON.
Note: I have a browser extension, JSON formatter, installed that prettifies raw JSON, like in the screenshot above.
The real power with this approach is that we can write frontend and backend code in the same place, in the same language, and split the logic accordingly. All of the source code goes into the `pages` directory, and the backend code is limited to the `api` directory. Each separate file, whether a frontend page or a backend route, is compiled into its own endpoint, and the two work together to power a fullstack application.
To pull them together, we query the new `api/feed` endpoint from the `pages/index.tsx` page, and show our list of tweets to the user. We're going to use a small library called SWR for our data fetching, which handles caching, locally changing data during POST requests, and revalidation. The power of SWR and its ability to simplify handling cached data on the frontend will soon become obvious.
Also, we want our app to be beautiful on more than just the inside, so let's use Ant Design to bootstrap our interface's styles.
First, we install both libraries
```shell
yarn add swr antd
```
Create a top-level `components` directory, and a `util` directory within that. Inside `util`, create `fetcher.tsx` and `hooks.tsx`.
Within `fetcher.tsx` we have

```tsx
// components/util/fetcher.tsx
export const fetcher = (url, data = undefined) =>
  fetch(window.location.origin + url, {
    method: data ? "POST" : "GET",
    credentials: "include",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(data),
  }).then(r => r.json())
```
This basically abstracts away the complexity of POST and GET requests when using SWR, so the requests within the components themselves won't clutter up our react code.
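As a rough sketch of what that abstraction buys us, the request-shaping logic boils down to a tiny pure function (the `requestOptions` name here is just for illustration — it isn't part of the tutorial's code):

```javascript
// Sketch of the fetcher's request-shaping logic, extracted as a pure
// function so it's easy to see: a payload means POST, no payload means GET.
function requestOptions(data = undefined) {
  return {
    method: data ? "POST" : "GET",
    credentials: "include",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(data), // undefined for GET, so no body is sent
  }
}

console.log(requestOptions().method) // "GET"
console.log(requestOptions({ text: "hi" }).method) // "POST"
```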
In `hooks.tsx` we add

```tsx
// components/util/hooks.tsx
import useSWR from "swr"
import { fetcher } from "./fetcher"

export function useFeed() {
  const { data: feed } = useSWR("/api/feed", fetcher)
  return { feed }
}
```
Finally, let's pull this all together in `components/Feed.tsx`, rendering each tweet in Ant Design's `Card` component.

```tsx
// components/Feed.tsx
import { Card } from "antd"
import { useFeed } from "./util/hooks"

export const Feed = () => {
  const { feed } = useFeed()
  return feed ? (
    <>
      {feed.map(({ id, text, author }, i) => (
        <Card key={i}>
          <h4>{text}</h4>
          <span>{author.username}</span>
        </Card>
      ))}
    </>
  ) : null
}
```
This renders the same contents as the JSON endpoint, demonstrating that the data is being retrieved correctly. Finally, we can render the feed in `pages/index.tsx`.

```tsx
// pages/index.tsx
import { Col, Row } from "antd"
import { Feed } from "../components/Feed"

export default () => (
  <Row>
    <Col md={{ span: 10, offset: 8 }}>
      <Feed />
    </Col>
  </Row>
)
```
For one last detail, we need to import Ant Design's CSS stylesheet into our app, so that it's automatically included in all our pages. We do this with a special file, `_app.js`, in the `pages` directory, which Next.js uses to wrap all of the other pages.

```js
// pages/_app.js
import "antd/dist/antd.css"

export default function MyApp({ Component, pageProps }) {
  return <Component {...pageProps} />
}
```
Note: you'll have to restart your development server for changes to `_app.js` to take effect.
Now visit http://localhost:3000 and we'll see the naked data from our backend being rendered.
Our twitter app won't work if all users can do is read tweets, so we need to give them a way to create them too. Let's add a form component that users can use to add new tweets. Inside `components`, create `CreateTweetForm.tsx`.
Notice the naming conventions, which are entirely for the sake of organization and can be changed to your liking:

- Components are capitalized TSX files (`Feed.tsx`)
- Pages are lowercased TSX files (`index.tsx`)
- API routes are lowercased TS files (`feed.ts`)
In `CreateTweetForm.tsx` we call the same `useFeed()` hook as in `Feed.tsx`, and we additionally make use of the `mutate` export from SWR. This allows us to change the local state of our feed to reflect the change, even before it's registered by the server, so the user can see their new tweet right away.
```tsx
// components/CreateTweetForm.tsx
import { Button, message, Row, Col, Input } from "antd"
import { mutate } from "swr"
import { fetcher } from "./util/fetcher"
import { useState } from "react"
import { useFeed } from "./util/hooks"

export const CreateTweetForm = () => {
  const [input, setInput] = useState("")
  const { feed } = useFeed()
  return (
    <form
      style={{ padding: "2rem" }}
      onSubmit={async e => {
        e.preventDefault()
        // we include "false" here to ask SWR not to revalidate the cache with
        // the feed returned from the server. we'll remove this after the next section
        mutate(
          "/api/feed",
          [{ text: input, author: { username: "Marshall Mathers" } }, ...feed],
          false
        )
        setInput("")
      }}
    >
      <Row>
        <Col>
          <Input value={input} onChange={e => setInput(e.target.value)} />
        </Col>
        <Col>
          <Button htmlType="submit">Tweet</Button>
        </Col>
      </Row>
    </form>
  )
}
```
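The optimistic-update idea in that `onSubmit` handler reduces to plain data manipulation: build a new cache value with the fresh tweet first, without mutating the original array (SWR compares by reference). A minimal sketch, where the `prependTweet` helper is hypothetical and not part of SWR:

```javascript
// Build the next cache value for the feed: newest tweet first,
// original array left untouched.
function prependTweet(feed, tweet) {
  return [tweet, ...feed]
}

const cached = [{ text: "older tweet" }]
const next = prependTweet(cached, { text: "brand new tweet" })

console.log(next[0].text) // "brand new tweet"
console.log(cached.length) // 1 -- the old cache value is not mutated
```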
Import `CreateTweetForm` into the index page and render it directly above the `Feed` component. Be sure to test this on localhost to ensure you can spam your feed with every thought you desire.
The only problem, you may have noticed, is that tweets don't stick around if you refresh your browser. This is because we are currently adding new tweets to SWR's local cache, but nothing is being sent to the backend or stored anywhere that persists independently of browser sessions and devices.
Essentially, we need a way for:

- the frontend to send new tweets to the backend
- the backend to store them somewhere persistent
Now that we've got our app working nicely on single-player instances, we need to make it immune to the effects of time and refreshes by storing all of our users' data somewhere persistent. We do this by bringing in our old friend, the database. With it, we'll use Prisma to handle the datamodel, access the data, and give us type safety throughout the application.
We start by adding prisma to our project
```shell
yarn add --dev @prisma/cli
```
Then initialize the prisma project with

```shell
npx prisma init
```
You'll notice that a
prisma directory was created, and within it a
schema.prisma file and a
.env. You can ignore the latter for now, since we'll be using sqlite to get started and prototype faster, and switching to postgres later as we prepare for deployment.
To configure Prisma to use SQLite and to point the Prisma client to a local SQLite file on your machine, update the `datasource` and `generator` blocks within `schema.prisma`

```prisma
datasource sqlite {
  provider = ["sqlite", "postgresql"]
  url      = "file:./dev.db"
}

generator client {
  provider      = "prisma-client-js"
  binaryTargets = ["native"]
}
```
Now, we can add a model for new tweets
```prisma
// prisma/schema.prisma
model Tweet {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
  updatedAt DateTime @default(now())
  text      String
}
```
When creating a tweet, the `id` will be automatically generated and assigned an integer, starting from 1 and incrementing from there, and the `createdAt` and `updatedAt` fields will automatically be filled with timestamps at the moment of creation. So, all we need to do is pass a valid string into the `text` field, and we'll have created our first `Tweet` entry.
So, we can create a tweet with

```ts
const tweet = await prisma.tweet.create({ data: { text: "Hello, Twitter!" } })
```
Now let's put this to use to allow users to create tweets.
Before we begin, let's add some scripts to `package.json` to make it easier for us to call `prisma migrate` commands, as well as a few more to facilitate the build process for when we deploy our app to production.

```json
"scripts": {
  "migrate:save": "prisma migrate save --experimental",
  "migrate:up": "prisma migrate up --experimental",
  "postinstall": "prisma generate",
  "generate": "prisma generate",
  "dev": "next",
  "start": "next start",
  "build": "next build"
},
```
Now that we're set up, we can create the sqlite database file, run the migration to create the new table, and then generate the prisma client to create and access tweets.
First, we create the sqlite file and save the migration.
```shell
yarn migrate:save
```
Respond Yes when asked if you'd like to create a new sqlite file, then give your migration a name, like "Create tweet model". Then, we run the migration against our database.
```shell
yarn migrate:up
```
Finally, we can generate the Prisma client, which lives in the `node_modules` directory and is generated on the fly (usually in a postinstall hook) to give us up-to-date typesafe access to our data.

Create the client by running

```shell
yarn generate
```
This peeks into our schema file for the models defined, generates the client in `node_modules/@prisma/client`, and concludes with some output dictating exactly how we can use it in our code.
Within the `api` directory, create another directory, `tweet`, and within that, `create.ts`. This will be another backend function that takes some `text` and gives us back a tweet object.
```ts
// pages/api/tweet/create.ts
import { PrismaClient } from "@prisma/client"

const prisma = new PrismaClient()

export default async (req, res) => {
  const { text } = req.body
  const tweet = await prisma.tweet.create({ data: { text } })
  res.json(tweet)
}
```
Notice that we are assuming the `text` to be attached to the body of the request.
Now, let's try calling this function from our frontend. After the call to `mutate` in `CreateTweetForm.tsx`, add

```tsx
// components/CreateTweetForm.tsx
fetcher("/api/tweet/create", {
  text: input,
})
```
Remember to `import { fetcher } from "./util/fetcher"` at the top of the file.
Now, try creating another tweet in the browser and head to the Network tab of the console to see the results. You should see a request titled "create", after the suffix of the endpoint; click it to view the response. If the request worked, you'll see a response JSON object with `id`, `createdAt`, and `text` fields.
Now that we can create tweets in our database, let's change our feed API function to retrieve tweets from the database instead of giving us back hardcoded data. Open `pages/api/feed.ts` and change the contents to

```ts
// pages/api/feed.ts
import { PrismaClient } from "@prisma/client"

const prisma = new PrismaClient()

export default async (req, res) => {
  const tweets = await prisma.tweet.findMany({
    orderBy: { createdAt: "desc" },
  })
  res.json(tweets)
}
```
and you'll now be able to create tweets, refresh the page, and see them live on.
Our app can't compete with twitter if you can't log in and no one knows whose posts are whose, can it? Let's fix this by giving users a way to log in.
We're going to allow users to sign up, encrypt their passwords with bcrypt then authorize their device by attaching a server-side HttpOnly cookie to their requests.
Basically, we're gonna build a safe and secure way to allow users to sign in with passwords while making sure they or we don't get hacked.
First, we introduce the `User` model in `schema.prisma`

```prisma
// prisma/schema.prisma
model User {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
  username  String   @unique
  password  String
  tweets    Tweet[]
}
```
You'll notice that every
user has a
tweets field that corresponds to an array of Tweets. This allows us to access a users tweets as simply as with
user.tweets. Also, let's add the other side of the relationship by adding
authorId and
author fields to
Tweet.
```diff
  // prisma/schema.prisma
  model Tweet {
    id        Int      @id @default(autoincrement())
    createdAt DateTime @default(now())
    text      String
+   authorId  Int
+   author    User     @relation(fields: [authorId], references: [id])
  }
```
Now, we can save and run our database migrations to apply our changes. Let's use the scripts we added to `package.json` earlier.

```shell
yarn migrate:save
yarn migrate:up
yarn generate
```
Then, we add our new dependencies

```shell
yarn add bcrypt jsonwebtoken cookie
```
We're going to handle authentication and reflect this to the user by rendering a
Profile component on the page, which will show the user's details if they're logged in, and a
Let's start by creating
Profile.tsx
```tsx
// components/Profile.tsx
import { Row, Col, Button, message } from "antd"
import { SignupForm } from "./SignupForm"
import { useMe } from "./util/hooks"
import { useState } from "react"

export const Profile = () => {
  const { me } = useMe()
  const [loading, setLoading] = useState(false)
  if (!me) return null
  return (
    <Row style={{ padding: "1.5rem" }}>
      {!me.username ? (
        <SignupForm />
      ) : (
        <Col>
          Logged in as: <strong>{me.username}</strong>
          {/* TODO: we'll add a logout button here */}
        </Col>
      )}
    </Row>
  )
}
```
You'll notice we're using a new hook,
useMe. As you can guess, this will return the currently authenticated user. Let's go ahead and add this hook to our hooks utility.
// components/util/hooks.tsx import { User } from "@prisma/client" // useFeed function export function useMe() { const { data: me }: { data?: User } = useSWR("/api/me", fetcher) return { me } }
Notice that we're importing the
User interface from prisma and applying it to the return type of the hook. This will give us typesafety through the frontend when working with our data, and is one of the most powerful advantages of using Prisma with React Hooks.
Also, our use of SWR will automatically deduplicate uses of `useMe` since they have the same key, `/api/me`. This means we can call `useMe` in several different components, and our app will only make a single request to the backend.
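The deduplication behaviour is easy to picture as a cache keyed by URL: the first caller for a key triggers the fetch, everyone else gets the same result back. A toy version (not SWR's actual implementation):

```javascript
// Toy request cache: one fetch per key, shared by all callers.
const cache = new Map()

function dedupedFetch(key, doFetch) {
  if (!cache.has(key)) {
    cache.set(key, doFetch(key))
  }
  return cache.get(key)
}

let requests = 0
const fakeFetch = () => {
  requests += 1
  return Promise.resolve({ username: "jill" })
}

dedupedFetch("/api/me", fakeFetch)
dedupedFetch("/api/me", fakeFetch) // served from cache, no second request
console.log(requests) // 1
```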
We'll implement the
/api/me endpoint right after we've built the signup form and endpoints.
Then we can create the form itself

```tsx
// components/SignupForm.tsx
import { Row, Col, Button, message, Input } from "antd"
import { useState } from "react"
import { mutate } from "swr"
import { fetcher } from "./util/fetcher"

export const SignupForm = ({}) => {
  const [username, setUsername] = useState("")
  const [password, setPassword] = useState("")
  const [login, setLogin] = useState(false)
  const [loading, setLoading] = useState(false)
  return (
    <Row>
      <Col>
        <h3>Sign up</h3>
        <form
          onSubmit={async e => {
            e.preventDefault()
            if (username.length === 0 || password.length === 0) {
              message.error(
                "Uh oh: you can't have a blank username or password."
              )
            }
            setLoading(true)
            const { data, error } = await fetcher(
              `/api/${login ? "login" : "signup"}`,
              {
                username,
                password,
              }
            )
            if (error) {
              message.error(error)
              setLoading(false)
              return
            }
            await mutate("/api/me")
          }}
        >
          <div>
            <Input value={username} onChange={e => setUsername(e.target.value)} />
            <Input value={password} onChange={e => setPassword(e.target.value)} />
          </div>
          <div>
            <Button htmlType="submit" loading={loading}>
              {login ? "Login" : "Sign up"}
            </Button>
          </div>
          <div>
            <a onClick={() => setLogin(!login)}>
              {login ? "New? Sign Up" : "Already a user? Log In"}
            </a>
          </div>
        </form>
      </Col>
    </Row>
  )
}
```
Notice that our signup form also serves as a login form, and can switch between the two. Also, it will post to the endpoint `/api/signup` if the user is signing up, and to `/api/login` otherwise. As you may have guessed, now we'll have to create these two API files to handle the signing up and logging in process themselves.
Let's start with
passwordwith Bcrypt
jsonwebtokenwith the
idand
usernameof the user and the
JWT_SECRETfrom the environment.
```ts
// pages/api/signup.ts
import bcrypt from "bcrypt"
import jwt from "jsonwebtoken"
import cookie from "cookie"
import { PrismaClient } from "@prisma/client"

const prisma = new PrismaClient()

export default async (req, res) => {
  const salt = bcrypt.genSaltSync()
  const { username, password } = req.body
  let user
  try {
    user = await prisma.user.create({
      data: {
        username,
        password: bcrypt.hashSync(password, salt),
      },
    })
  } catch (error) {
    res.json({ error: "A user with that username already exists 😮" })
    return
  }
  // sign a token with the user's id and username, and attach it
  // to the response as an HttpOnly cookie
  const token = jwt.sign(
    { id: user.id, username: user.username },
    process.env.JWT_SECRET
  )
  res.setHeader(
    "Set-Cookie",
    cookie.serialize("token", token, {
      httpOnly: true,
      path: "/",
    })
  )
  res.json(user)
  return
}
```
Now, we can implement a similarly-structured `login` route.

```ts
// pages/api/login.ts
import { PrismaClient } from "@prisma/client"
import bcrypt from "bcrypt"
import jwt from "jsonwebtoken"
import cookie from "cookie"

const prisma = new PrismaClient()

export default async (req, res) => {
  const { username, password } = req.body
  const user = await prisma.user.findOne({
    where: { username },
  })
  if (user && bcrypt.compareSync(password, user.password)) {
    // sign a token and attach it as an HttpOnly cookie, just like signup
    const token = jwt.sign(
      { id: user.id, username: user.username },
      process.env.JWT_SECRET
    )
    res.setHeader(
      "Set-Cookie",
      cookie.serialize("token", token, {
        httpOnly: true,
        path: "/",
      })
    )
    res.json(user)
  } else {
    res.json({ error: "Incorrect username or password 🙁" })
    return
  }
}
```
Before we forget, create a `.env` file in the top-level directory and add

```shell
JWT_SECRET=appsecret123
```
Replace `appsecret123` with some less-guessable combination of characters, and restart your development server.
The `me` Endpoint
Finally, we can build `/api/me`

```ts
// pages/api/me.ts
import jwt from "jsonwebtoken"
import { PrismaClient } from "@prisma/client"

const prisma = new PrismaClient()

export default async (req, res) => {
  const { token } = req.cookies
  if (token) {
    const { id, username } = jwt.verify(token, process.env.JWT_SECRET)
    const me = await prisma.user.findOne({ where: { id } })
    res.json(me)
  } else {
    res.json({})
  }
}
```
Also, we can return the author of each tweet in the feed, so that we can render their usernames

```ts
// pages/api/feed.ts
import { PrismaClient } from "@prisma/client"

const prisma = new PrismaClient()

export default async (req, res) => {
  const tweets = await prisma.tweet.findMany({
    orderBy: { createdAt: "desc" },
    include: { author: { select: { username: true, id: true } } },
  })
  res.json(tweets)
}
```
If you're using VSCode and hover over the
tweets variable, typescript will show us that the feed is now of type
const tweets: (Tweet & { user: { author: { username: string id: number } } })[]
So we can update our `useFeed` hook to return the same type, the same as we did for `useMe` earlier.

```tsx
// components/util/hooks.tsx
import { Tweet, User } from "@prisma/client"
import useSWR from "swr"
import { fetcher } from "./fetcher"

export function useFeed() {
  const { data: feed }: { data?: (Tweet & { author: User })[] } = useSWR(
    "/api/feed",
    fetcher
  )
  return { feed }
}

export function useMe() {
  const { data: me }: { data?: User } = useSWR("/api/me", fetcher)
  return { me }
}
```
One last thing: we need to attach the logged-in user to each tweet that's created, as its author. We do this by using the `token` the same way we do in `/api/me`, and then using the Prisma client's `connect` property.
```ts
// pages/api/tweet/create.ts
import { PrismaClient } from "@prisma/client"
import jwt from "jsonwebtoken"

export default async (req, res) => {
  const prisma = new PrismaClient()
  const { token } = req.cookies
  if (token) {
    // Get authenticated user
    const { _id, username } = jwt.verify(token, process.env.JWT_SECRET)
    const { text } = req.body
    const tweet = await prisma.tweet.create({
      data: { text, author: { connect: { username } } },
    })
    res.json(tweet)
  } else {
    res.json({ error: "You must be logged in to tweet." })
  }
}
```
Now, test the app on localhost to make sure you can sign up and create new tweets.
It would be nice if users could delete their tweets after they've posted them, as well as log themselves out. Let's create some button components to facilitate this.
To know if a user can delete a tweet, we need to check if the user is the tweet's author. Then we can make a call to a new API endpoint for deleting a tweet, and then locally mutate the cache to remove the tweet from the feed.
Create a new component, `DeleteButton.tsx`, in `components`.

```tsx
// components/DeleteButton.tsx
import { Button } from "antd"
import { mutate } from "swr"
import { fetcher } from "./util/fetcher"

export const DeleteButton = ({ id, feed }) => (
  <Button
    style={{ float: "right" }}
    danger
    type="dashed"
    onClick={async () => {
      await fetcher("/api/tweet/delete", { id })
      await mutate(
        "/api/feed",
        feed.filter(t => t.id !== id)
      )
    }}
  >
    x
  </Button>
)
```
and import and render it in the feed component.

```diff
  // components/Feed.tsx
+ import { DeleteButton } from "./DeleteButton";
  ...
  <Card key={i}>
+   {me && author.id === me.id && (
+     <DeleteButton id={id} feed={feed} />
+   )}
    <h4>{text}</h4>
    <span>{author.username}</span>
  </Card>
```
Now, let's create the backend half of the equation. Create a new API route for
/api/tweet/delete that takes the
id from the body of the request, then passes it to prisma's
delete method, and returns the (now empty) tweet.
// pages/api/tweet/delete import { PrismaClient } from "@prisma/client" const prisma = new PrismaClient() export default async (req, res) => { const { id } = req.body const tweet = await prisma.tweet.delete({ where: { id }, }) res.json(tweet) return }
Test to make sure you can now delete your own (and only your own) tweets.
Great! Now let's follow the very same process to facilitate logging out.
Create a `LogoutButton.tsx`

```tsx
// components/LogoutButton.tsx
import { Button, message } from "antd"
import { mutate } from "swr"
import { fetcher } from "./util/fetcher"
import { useState } from "react"

export const LogoutButton = () => {
  const [loading, setLoading] = useState(false)
  return (
    <Button
      loading={loading}
      onClick={async () => {
        setLoading(true)
        const { data, error } = await fetcher("/api/logout")
        if (error) {
          message.error(error)
          setLoading(false)
          return
        }
        await mutate("/api/me")
      }}
    >
      Log Out
    </Button>
  )
}
```
Then import and render it in `Profile`

```diff
  // components/Profile.tsx
+ import { LogoutButton } from "./LogoutButton";
  ...
  <Col>
    Logged in as: <strong>{me.username}</strong>
    <br />
+   <LogoutButton />
  </Col>
```
For our final step, we create the logout API route

```ts
// pages/api/logout.ts
import { serialize } from "cookie"

export default (req, res) => {
  const cookie = serialize("token", "", {
    maxAge: -1,
    path: "/",
  })
  res.setHeader("Set-Cookie", cookie)
  res.json({ loggedOut: true })
}
```
Try logging out, and behold: our app is complete.
Don't hesitate to DM me on twitter or email me at me@kunal.sh 🙂 | https://kunal.sh/posts/building-a-fullstack-twitter-clone | CC-MAIN-2020-40 | refinedweb | 3,753 | 56.96 |
Flare-On 2020: Fidler.
Challenge
Welcome to the Seventh Flare-On Challenge!
This is a simple game. Win it by any means necessary and the victory screen.
The file is an x64 binary:
root@kali# file fidler.exe fidler.exe: PE32+ executable (console) x86-64, for MS Windows
As the prompt mentioned, there’s also a Python version with the supporting files:
root@kali# ls 1_-_fidler.7z controls.py fidler.exe fidler.py fonts img Message.txt
Running It
Because this is a CTF, it’s worth giving this a run in a Windows VM. A password prompt pops:
Guessing at a password pops up an “FBI” warning:
On a Linux VM, I did
pip3 install pygame and then ran
python3 fidler.py , and I get the same behavior. I’ll work out of this one because if I need to modify the program (and I will want to), it’s just easier than patching compiled Python.
RE
Find Password
Because I have the source I can take a look. The first two functions defined are
password_check(input) and
password_screen.
password_check gives away the password:
def password_check(input): altered_key = 'hiptu' key = ''.join([chr(ord(x) - 1) for x in altered_key]) return input == key
I can jump into a Python repl and get the key:
>>>>> ''.join([chr(ord(x) - 1) for x in altered_key]) 'ghost'
Gave Overview
On entering that password, now it goes to the game screen:
Clicking on the cat adds one coin:
If I have at least 10 coins, I can click the Buy button, and then I get 10 less coins, but now my coins increase at a rate of one per second. Buying a second autoclicker will have my coins increase at two per second, etc. Even buying autoclickers as fast as possible, getting to 100 billion is going to take forever.
Static Analysis
There’s a function towards the bottom of the code,
decode_flag(frob):
def decode_flag(frob): return ''.join([chr(x) for x in decoded_flag])
This function is called in
victory_screen(token), and the unmodified
token variable is passed into it.
victory_screen is called from
game_screen(), at the top of the main
while True loop:
while not done: target_amount = (2**36) + (2**35) if current_coins > (target_amount - 2**20): while current_coins >= (target_amount + 2**20): current_coins -= 2**20 victory_screen(int(current_coins / 10**8)) return ...[snip rest of the game]...
There’s a
target_amount variable that is set to 103079215104. To get to the call of
victory_screen, first,
current_coins must be greater than
target_amount minus 220, 103078166528. If
current_coins is more than 220 more than
target_amount, it will subtract 220 from
current_coins until it isn’t. Then,
victory_screen is called, with
token as the floor of
current_coins / 108. That means to get to
victory_screen,
current_coins will be between 103078166529 and 103080263679 (inclusive). Anything in that range will result in a
token of 1030:
>>> ((2**36 + 2**35) + 2**20 - 1) // 10**8 1030 >>> ((2**36 + 2**35) - 2**20 + 1) // 10**8 1030
Solve
Decrypt Flag
I can solve from here knowing the token and having the
decrpyt_flag function. In fact, I can just import the Python file and call the function:
root@kali# python3 Python 3.8.5 (default, Aug 2 2020, 15:09:07) [GCC 10.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import fidler pygame 1.9.6 Hello from the pygame community. >>> fidler.decode_flag(1030) 'idle_with_kitty@flare-on.com'
Modify the Program
Alternatively, I can modify the program to win. At the top of the code, the
current_coins variable is initialized to 0:
import pygame as pg pg.init() from controls import * current_coins = 0 current_autoclickers = 0 buying = False
I’ll just drop 110,000,000,000 in:
current_coins = 110000000000
Now, on running the game, it goes right to the victory screen:
Flag: idle_with_kitty@flare-on.com | https://0xdf.gitlab.io/flare-on-2020/fidler | CC-MAIN-2022-40 | refinedweb | 640 | 72.36 |
#!/usr = "OmniWeb";#### LEAVE EVERYTHING BELOW AS IS ####$filter = "MacOSX";print "Search string: ";$keyword = <STDIN>;chomp($keyword);$url = " mode=Quick&OS_Filter=".$filter."&search=".$keyword."&x=0&y=0";$todo = "\'Tell application \"$browser\" to getURL \"$url\"\'";system("osascript -l AppleScript -e $todo");
AppleScript or OmniWeb.
In OW, I have shortcuts to search Bartleby.com's copies of American Heitage and Roget's, as well as VT for OS X, OS 9, and Windows. Example:
vt@
I type "vt frogblast" and it returns the results without me having to load that bloated-ass front page first.
If you use another browser, use an AppleScript:
on run
set theSearch to text returned of (display dialog "Enter search term:" default answer ""
set theURL to "" & theSearch & "&x=0&y=0"
tell application "insert browser here"
activate
GetURL theURL -- you may have to use OpenURL instead ...
end tell
end run
That's the string for OS X searches, but you can use any of them.
-/-
Mikey-San
Thanks - the applescript is very useful. It nows sits in my ScriptMenu.
I want to make one of these for IMDB: what's the trick for finding the right URL?
Well, for Bartleby and VT, you simply reverse-engineer the URL you see after you enter a search string.
IMDB is a bit different, it seems. If you have the time, you can try breaking down their search form at:
Not as straigtforward as VT, unfortunately, but upon first glance, not necessarily impossible.
-/-
Mikey-San
for search by title:
set theURL to "?" & theSearch
for search by name:
set theURL to "?" & theSearch
vt@
mode=Quick&OS_Filter=MacOSX&search=%@x=0&y=0
Has anybody succeeded with this script? I get the same error as dm2243:
% Search_sh
Search string: ccm
syntax error: A Ò can't go after this identifier. (-2740)
I have it working with Mozilla, even though the command
reports that Mozilla gets various numbered errors in
execution.
Works perfectly with Mozilla if you change "getURL"
to "OpenURL" -- in the second to last line following
"$todo".
Very cool.
#!/usr = "Mozilla";
#### LEAVE EVERYTHING BELOW AS IS ####
$filter = "MacOSX";
print "Search string: ";
$keyword = <STDIN>;
chomp($keyword);
$url = "".$filter."&search=".$keyword."&x=0&y=0";
$todo = "\'Tell application \"$browser\" to OpenURL \"$url\"\'";
system("osascript -l AppleScript -e $todo");
Mozilla offers a great method of using keywords to do this same thing: Create a new bookmark with this url:
Open the bookmark manager and look at the properties of your new bookmark. Give it a short, easy to type keyword name and close the window. Say for example we give it the name "vts". Now whenever I want to search Version "Molasses" Tracker for a name I just enter my keyword + search string:
vts what_i'm_looking_for
The %s in the url acts as the variable replaced at runtime in the URL string. Great feature than can be used most anywhere. Here's another example - the url for MacOSXHints is:
You can get more info here:
Thanks!
Yep, the problem is the syntax of the AppleScript part:
for Navigator use get URL
for Mozilla use OpenURL
for IE and OmniWeb use getURL
For other browsers have a look at the Application Dictionary in Script Editor.
Hi all,
You can actually simplify things a bit and create an applescript that works will all browsers, since the perl script from daeley is actually using AppleScript to do most of the work.
Instead of using GetURL, openURL, ... use:
system "osascript -e 'tell application \\"$browser\\" to open location \\"$url\\"'";
If you want this to run in your default browser, I believe you can just use:
system "osascript -e 'open location \\"$url\\"'";
I've tested this with IE, OmniWeb, Mozilla and Chimera.
Tried this script modification and it didn't work. Got the following errors:
Scalar found where operator expected at /Users/balston/bin/vt line 25, near ""osascript -e 'tell application \\"$browser"
(Missing operator before $browser?)
syntax error at /Users/balston/bin/vt line 25, near ""osascript -e 'tell application \\"$browser"
Backslash found where operator expected at /Users/balston/bin/vt line 25, near "$browser\"
(Missing operator before \?)
Scalar found where operator expected at /Users/balston/bin/vt line 25, near "" to open location \\"$url"
(Missing operator before $url?)
Backslash found where operator expected at /Users/balston/bin/vt line 25, near "$url\"
(Missing operator before \?)
Execution of /Users/balston/bin/vt aborted due to compilation errors.
I replaced the entire line:
$todo = "\'Tell application \"$browser\" to getURL \"$url\"\'";
with the suggested line with no luck. I'm probably just don't know what the heck I'm doing. I don't know Perl or AppleScript very well at all. But, any input would be appreciated.
Or just visit MacUpdate ()
You can bookmark the power search page at:
or simple add keywords to the end of this URL:
I use this as a bookmark for searching OSX listings,
javascript:void(search=prompt('Enter%20text%20to%20search%20using%20Version%20Tracker.',''));if(search)void(location.href=''+escape(search))
note: all one line
Wow, that works great in Netscape 7. Thanks!
I wish I could remember where I found the first one of these so I could properly credit the source, but I can't. I've made a few others on the same theme and find them very useful...
VersionTracker Classic:
javascript:void(search=prompt('Enter%20text%20to%20search%20using%20Version%20Tracker.',''));if(search)void(location.href=''+escape(search))
Google:
javascript:void(q=prompt('Enter%20text%20to%20search%20using%20Google.',''));if(q)void(location.href=''+escape(q))
etc.
Here is the bookmarklet for VT OS X:
javascript:void(search=prompt('Enter%20text%20to%20search%20using%20Version%20Tracker.',''));if(search)void(location.href=''+escape(search))
The only difference being an extra X ...
This works great in Chimera.
I assume you are not using OmniWeb. In OS 9, I had bookmarklets in FinderPop that would pop up only when I was in IE, which pleased me to no end. You haven't gotten bookmarklets to work in OmniWeb, have you? I have had no luck, presumably because of the tepid javascript support.
No, sorry, I'm not. I've tried OmniWeb a few times and maybe it's just my system (G3 Pismo 400MGHZ / 320MB Ram / OSX 10.1.5) but I've found OmniWeb god-awful slow and cumbersome feeling. I'll have to stick to IE, at least til Chimera is a little more mature :)
There's a built-in Sherlock plug-in for VersionTracker in case you all forgot.
There's a built-in Sherlock plug-in for VersionTracker in case you all forgot.
(FYI, there's also one for the IMDB, for the poster who wanted to write a search script)
Here's a better version that allows you to type "vt blah" or "vt 'blah blah'" (I aliased the script to "vt" in my shell). Anyway, it takes arguments from the CLI and doesn't force you to enter return twice, plus it can do multiple searches at once.
#!/usr/bin/perl -w
$filter = "MacOSX";
$browser = "OmniWeb";
if(!@ARGV)
{
print "Search string: ";
chomp($keyword = <STDIN>);
@args = split ' ', $keyword;
}
else
{@args = @ARGV}
for my $search (@args)
{
$url = "".$filter."&search=".$search;
$todo = "\'Tell application \"$browser\"\\nactivate\\ngetURL \"$url\"\\nend tell\'";
system("/usr/bin/osascript -l AppleScript -e $todo");
}
I threw together this quick python translation for
anyone that is interested:
#!/usr/bin/env python
import os
import sys
browser = "Navigator"
filter = "MacOSX"
keyword = sys.argv[1]
url = "" % (filter, keyword)
todo = ""'Tell application "%s" to Get URL "%s"'"" % (browser, url)
os.system("osascript -l AppleScript -e '%s'" % todo)
Save as vt.py and just change
the permissions "chmod 0755 vt.py" and run as follows:
./vt.py "keyword"
substitute your search term for "keyword" without any quotes. For
example:
./vt.py database
Good Luck.
SA
Visit other IDG sites: | http://hints.macworld.com/article.php?story=2002072923444892 | CC-MAIN-2017-04 | refinedweb | 1,298 | 64 |
Encapsulation simply means binding object state(fields) and behaviour(methods) together. If you are creating class, you are doing encapsulation. In this guide we will see how to do encapsulation in java program, if you are looking for a real-life example of encapsulation then refer this guide: OOPs features explained using real-life examples.
For other OOPs topics such as inheritance and polymorphism, refer OOPs concepts
Lets get back to the topic.
What is encapsulation?
TheSSN(int ssn))and read (for example. Lets see an example to understand this concept better.
Example of Encapsulation in Java
How to implement encapsulation in java:
In above example all the three data members (or data fields) are private(see: Access Modifiers in Java) which cannot be accessed directly. These fields can be accessed via public methods only. Fields
empName,
ssn and
empAge are made hidden data fields using encapsulation technique of OOPs.
Advantages of encapsulation
- It improves maintainability and flexibility and re-usability: for e.g. In the above code the implementation code of.
- The fields can be made read-only (If we don’t define setter methods in the class) or write-only (If we don’t define the getter methods in the class). For e.g. If we have a field(or variable) that we don’t want to be changed so we simply define the variable as private and instead of set and get both we just need to define the get method for that variable. Since the set method is not present there is no way an outside class can modify the value of that field.
- User would not be knowing what is going on behind the scene. They would only be knowing that to update a field call set method and to read a field call get method but what these set and get methods are doing is purely hidden from them.
Encapsulation is also known as “data Hiding“.
i am really confused, in java we cannot have two public methods in a same source file. one method should only be public.
@Asiri, No we can have any number of public methods in a class. We cannot have more than one public class in the same source file.
doGet() vs doPost() methods are used to change implementation of form, such as doGet() method exposed the all form submitting data into URL bar at page run time.and doPost() method not exposed form data in url bar at run time its more secureful……its my according….
Thanks for very good post
Hi,
Thanks for this post. This is really good and easy to understand. But, can you please explain this with a real time scenarios. because this is easy to understand but how practically this is useful. And where exactly this is needed.
Thanks.
Hi
I can understand the concept of encapsulation and program. But how could u say that. It hide the data but we can use the function. And I have a doubt is so that without using get and set method we can’t achieve encapsulation.
public class Encapsulation1{
public static void main(String args[]){
Encapsulation obj = new Encapsulation();
obj.setEmpName(“Mario”);
obj.setEmpAge(32);
obj.setEmpSSN(112233);
System.out.println(“Employee Name: ” + obj.getEmpName());
System.out.println(“Employee SSN: ” + obj.getEmpSSN());
System.out.println(“Employee Age: ” + obj.getEmpAge());
}
When i am executing above code.the following error has been occurring.Help me to solve this error …This is Error i am getting.
The public type Encapsulation1 must be defined in its own file
Encapsulation obj = new Encapsulation(); //is error
Encapsulation1 obj = new Encapsulation1(); //rectified
Save the file name as Encapsulation1.java
you can not use the key word encapsulation..
so you can rename it with any other name….
very good explanation of encapsulation and how and why it works nice job
what is the difference between abstraction and encapsulation while they both are hiding the implementation from user.plz answer me.
“Abstraction is implemented using interface and abstract class while Encapsulation is implemented using private and protected access modifier.”
Abstraction identifies which information should be visible as well as which information should be hidden. Encapsulation packages the information in such a way as to hide what should be hidden, and make visible what is intended to be visible. I hope that helps.
we use abstraction when we are not sure about certain implementation.
for e.g -> car surely need fuel but may or maynot need ac & wiper features.
so we define implementation for only fuel() and make ac(),wiper()as abstract.i.e no implementation.
encapsulation-> provides data protection/hiding.
->Technically in encapsulation, the variables or data of a class is hidden from any other class and can be accessed only through any member function of own class in which they are declared.
-> Encapsulation can be achieved by: Declaring all the variables in the class as private and writing public methods in the class to set and get the values of variables.
can you please explain the difference between the terms–> Bean,POJO,JavaBean,EJB,Entity,DataTransferObject,springBean–
can you please explain the difference between the private public an protected and encapsulation inheritance and polymorphism with simple examples?
@Ravi… Hi, for Public modifier we can access the methods or varibles from another class to another class. But when we going for Private Modifier we can’t access from another class but we can access within the class.
Inheritance: There is a class called Class A and Class B. We can use the Class A methods in Class B by using the extends Keyword. For eg. public class classA
{public static void add(){…}
}
public class classB extends class A
{
public static Void sub(){…}
public static void main(String[] args){ add();sub()}}
Very nice article.
One question:
Within a single file 2 (two) public class is possible ?
As per my knowledge only one public class we can declare .
To make variables read-only, you can also declare them as “final” and set them in constructor, making them private is not the only way :) . | https://beginnersbook.com/2013/05/encapsulation-in-java/ | CC-MAIN-2017-51 | refinedweb | 1,007 | 56.66 |
This is the Disposition of Comments for the Candidate Recommendation XKMS Working Drafts:
XML Key Management Specification (XKMS) Version 2.0
XML Key Management Specification (XKMS) Version 2.0 Bindings
Appendix C of the XKMS Version 2 Candidate Recommendation, entitled Sample Protocol Exchanges, contains examples of key derivations, some of which appear not to be accurate. I enclose my suggested corrections below.
Section 8.1 (Use of Limited-Use Shared Secret Data) says that "All space and control characters are removed." Given sections C.1.2 and C.1.3, this suggests that a hyphen is a control character. For the sake of clarity I propose using "punctuation characters" instead of or in addition to "control characters".
Also, it might be more appropriate to call the derived quantities "Secret Keys" as opposed to "Private Keys".
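A minimal sketch of the conversion step discussed above — lowercasing the shared secret and dropping whitespace and punctuation such as the grouping hyphens — might look like this in Python. The function name is mine, not from the spec, and the lowercasing is inferred from the worked examples rather than stated in the quoted sentence.

```python
import string

def convert_auth_data(auth_data: str) -> bytes:
    """Normalize a limited-use shared secret before key derivation:
    drop whitespace/punctuation and fold to lower case (Section 8.1)."""
    kept = (ch.lower() for ch in auth_data
            if ch not in string.whitespace and ch not in string.punctuation)
    return "".join(kept).encode("ascii")

# "3N9CJ-JK4JK-S04JF-W0934-JSR09-JWIK4" -> b"3n9cjjk4jks04jfw0934jsr09jwik4",
# matching the Converted Authentication Data bytes in the C.1.2 example.
```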
C.1.2 Bob Registration Authentication Key

Authentication Data:
    3N9CJ-JK4JK-S04JF-W0934-JSR09-JWIK4

Converted Authentication Data:
    [33][6e][39][63][6a][6a][6b][34] [6a][6b][73][30][34][6a][66][77]
    [30][39][33][34][6a][73][72][30] [39][6a][77][69][6b][34]

Key = HMAC-SHA1 (Converted Authentication Data, 0x1):
    [92][33][7c][7c][3e][8d][3b][7a] [cf][11][59][89][36][64][56][69]
    [95][4f][8f][d7]
C.1.3 Bob Registration Private Key Encryption

Authentication Data:
    3N9CJ-K4JKS-04JWF-0934J-SR09JW-IK4

Converted Authentication Data:
    [33][6e][39][63][6a][6b][34][6a] [6b][73][30][34][6a][77][66][30]
    [39][33][34][6a][73][72][30][39] [6a][77][69][6b][34]

First Block = HMAC-SHA1 (Converted Authentication Data, 0x4):
    [78][f1][e7][b1][b3][fd][0c][bc] [96][04][e7][01][4f][33][78][d3]
    [0b][c8][5f][bd]

Key = First Block XOR 0x4:
    [7c][f1][e7][b1][b3][fd][0c][bc] [96][04][e7][01][4f][33][78][d3]
    [0b][c8][5f][bd]

Second Block = HMAC-SHA1 (Converted Authentication Data, Key):
    [1e][7f][e1][b0][ab][d0][f8][09] [2e][28][f3][9d][14][a8][d0][83]
    [2e][ab][ea][22]

Final Private Key:
    [78][f1][e7][b1][b3][fd][0c][bc] [96][04][e7][01][4f][33][78][d3]
    [0b][c8][5f][bd][1e][7f][e1][b0]
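The two-block expansion shown in the C.1.3 example above — a first HMAC-SHA1 block, a chaining key formed by XORing the fixed byte 0x4 into that block (the listed values show only the first octet changing), a second HMAC-SHA1 block under the chaining key, and a final 24-byte key taken from the concatenation — can be sketched as follows. The HMAC argument order (converted authentication data as the MAC key, the one-byte value as the message) is my reading of the HMAC-SHA1(data, value) notation and should be treated as an assumption.

```python
import hmac
from hashlib import sha1

def expand_key(converted: bytes, fixed: int = 0x04, out_len: int = 24) -> bytes:
    """Sketch of the C.1.3-style expansion to a 24-byte (3DES-sized) key."""
    # First Block = HMAC-SHA1(Converted Authentication Data, 0x4); argument
    # order (data as the MAC key) is an assumption, not stated normatively.
    first = hmac.new(converted, bytes([fixed]), sha1).digest()
    # Key = First Block XOR 0x4: the example values show the fixed byte
    # folded into the first octet only ([78] -> [7c]).
    chain = bytes([first[0] ^ fixed]) + first[1:]
    # Second Block = HMAC-SHA1(Converted Authentication Data, Key).
    second = hmac.new(converted, chain, sha1).digest()
    # Final Private Key: the whole first block plus enough of the second
    # block to reach 24 bytes.
    return (first + second)[:out_len]
```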
C.1.4 Bob Recovery Private Key Encryption

Authentication Data:
    A8YUT vuhhu c9h29 8y43u h9j3i 23

Converted Authentication Data:
    [61][38][79][75][74][76][75][68] [68][75][63][39][68][32][39][38]
    [79][34][33][75][68][39][6a][33] [69][32][33]

Private Key:
    [91][8c][67][d8][bc][16][78][86] [dd][6d][39][19][91][c4][49][6f]
    [14][e2][61][33][6c][15][06][7b]
C.2.1 Alice

C.2.2 Bob
The proposed corrections were made to the Editor's draft.
Appendix C of the XKMS Version 2 Candidate Recommendation, entitled Sample Protocol Exchanges, contains examples of key derivations.
It might be more appropriate to call the derived quantities "Secret Keys" as opposed to "Private Keys".
The proposed corrections were made to the Editor's draft.
There is another error in Section C.2.2 that I missed:
This line:
Base 64 Encoding of Pass Phrase Stage 1 PHx8li2SUhrJv2e1DyeWbGbD6rs=
should read:
Base 64 Encoding of Pass Phrase Stage 1 8GYiVK8zBD5E0q9Rq2Y/Gci0Zpo=
The proposed corrections were made to the Editor's draft.
There is a part of the Schema defining the "RequestSignatureValue" element in the Compound Request Section (par[127]) which I think should appear before the beginning of this section, in par[126].
The proposed corrections were made to the Editor's draft.
In the definition of the "StatusRequest" element (par[132]) it is said that it inherits the element attributes of "PendingRequestType", and the same can be understood from the Schema. However, the "ResponseId" attribute -which is already part of "PendingRequestType"- is defined there. To make it more confusing it is said to be Optional whereas in "PendingRequestType" it was Required. Should this reference be removed from there?
The proposed corrections were made to the Editor's draft.
Maybe this is not so important, but in the Data Encryption Example (par[146]) a key is bound to bob@example.com but then in par[147] the name used is bob@bobcorp.test. Of course the example is perfectly understandable but maybe both paragraphs should be consistent.
Typo in example - Complete fixed para [146].
This issue is deliberately not addressed. Do let us know if this causes any implementation problems in your code.
Stephen Farrell: I can imagine the union vs. intersection result being influenced by who's asking, from where, about whom, with which UseKeyWith, etc. I could also imagine a responder treating all such cases as an error for validate and doing a union for locate!?
For example, the request was <xkms:UseKeyWith Is the result >
This issue is deliberately not addressed by the XKMS specification. Do let us know if this causes any implementation problems in your code.
I assume there is a requirement on implementations to ensure that the signature(s) in a message actually refer(s) to the XKMS content. That's probably pretty obvious, but I can see some fairly trivial attacks against implementations that just check a signature is valid without ensuring that the reference actualy refers to the XKMS message.
Is this something worth mentioning in the security section?
Yes, it should be mentioned if it's not, so best is probably to add this to the issues list so it gets properly checked.
Section 10.11 was added to the spec, with the following text:
The Implementation of XKMS MUST check for signature value reference in the to-be-signed data when using a signed SOAP message. Also, Implementations MUST ensure that all bytes in the XKMS messages ex. from .... must be included in hashing and in the resulting signature value of the message.
Section 8.2.1 of Part 1 of the XKMS spec covers an element <RSAKeyPair> intended to carry public and private parameters of an RSA key pair.
However, the adjoining schema fragment (as well as the complete XKMS schema) does not define this element. Instead it defines another element, <RSAKeyValue>.
RSAKeyPair appears more appropriate for the intended purpose, so I'd favor a schema update, assuming this is not too late.
There is also a ds:RSAKeyValue in dsig so we don't want to reuse that element name either. I'd agree that we should change to properly use RSAKeyPair in the spec and schema, which is, as you point out, a substantive, though v. small, change to the schema.
There're associated changes required in the examples at the end of section 6.4 and in C.3.1, C.3.2 and C.3.3. Those should get caught as part of our "PR spec will contain samples actually used in interop" approach.
The proposed corrections were made to the Editor's draft.
Am not sure if this has been raised before, but I've been playing with schema validation of the various messages and have run into a problem with Xerces rejecting messages because of the (amongst others) KeyUsage Elements. In particular, the schema defines the KeyUsageType enumeration as follows :
<simpleType name="KeyUsageType">
  <restriction base="QName">
    <enumeration value="xkms:Encryption"/>
    <enumeration value="xkms:Signature"/>
    <enumeration value="xkms:Exchange"/>
  </restriction>
</simpleType>
I'm not a huge expert in XMLSchema, but my understanding is that enumeration values are literal. So if I use a different qualifier (or even no qualifier) it will fail strict validation.
E.g. the snippet
<xk:KeyUsage xmlns:xk="http://www.w3.org/2002/03/xkms#">xk:Signature</xk:KeyUsage>
will fail, whereas
<xkms:KeyUsage xmlns:xkms="http://www.w3.org/2002/03/xkms#">xkms:Signature</xkms:KeyUsage>
will succeed.
I think KeyBindingStatus will also have the same problem.
Am I misunderstanding XMLSchema? If not - do we really need to enumerate these values in the schema?
The Editor's draft of the XKMS schema was updated to use open enumerations and the QNames were changed to URI values. The open enumeration technique is described in. The rationale for the change is given at
Paragraph [179] refers to TemplateKeyBinding; shouldn't this be PrototypeKeyBinding?
The proposed corrections were made to the Editor's draft.
Paragraph [115], "nonce attribute" should be "Nonce attribute".
The proposed corrections were made to the Editor's draft.
Paragraph [193], "both elements" should be "both attributes".
The proposed corrections were made to the Editor's draft.
Paragraph [149], the text states that KeyName and KeyValue are requested; however, in [150] only KeyValue is actually specified in the request.
The proposed corrections were made to the Editor's draft. KeyName was droped from p. [149].
Paragraph [109] 3.2.4 Element <PendingNotification> Table has a column URI; use Mechanism instead?
The proposed corrections were made to the Editor's draft.
Part-2, paragraph [50] LocateRequest message contains invalid RespondWith value <RespondWith>Multiple</RespondWith>.
The line was suppressed from the example.
Paragraph [67] "with a value of true" should be "with a value of 1".
The proposed corrections were made to the Editor's draft.
Paragraph [68] SOAP 1.1 namespace URI in message should be.
The proposed corrections were made to the Editor's draft.
The SOAP 1.2 namespace URI refers to a working draft [] throughout. Shouldn't this be ?
The proposed corrections were made to the Editor's draft.
RegisterResult and RecoverResult may both contain signatures over encrypted data; however, the order of these operations is not explicitly stated in the spec.
Given the PrivateKey schema fragment, I'm inclined to draw the conclusion that only encrypt-then-sign is required. Is this the intention and if so does this warrant a clarifying statement to that effect?
Speculation:
I believe the (un-encrypted) RSAKeyPair is deliberately omitted from PrivateKey so as to *allow* implementations to mitigate the risk of disclosure of sensitive stuff through, say, the use of special purpose cryptographic hardware that, apart from its primary purpose, also can be programmed to extract the private key components from the surface syntax of an RSAKeyPair element. I imagine that this design *could* stand in the way of supporting sign-then-encrypt in XKMS - assuming that generating/verifying an enveloped signature is performed over a schema valid document, which is the only way I have explored.
A new paragraph was added to the specification to remove the ambiguity:
[372a] Implementations supporting encryption of Private Key Data MUST support Shared Secret. Use of Shared Secret is detailed in section 8.1.
In the process of trying to get my head around the compound messaging I discovered what I believe is an inconsistency in the spec.
The last sentence in Section 2.8, which goes like this:
"Alternatively a client MAY issue a compound request containing multiple inner pending requests corresponding to requests which were originally made independently."
is in conflict with both the schema and also with text elsewhere in the spec - PendingRequests are not allowed in a CompoundRequest.
If this feature is required then the schema needs to be updated otherwise we'll get away with removing the offending sentence.
Deleted last sentence in para[83].
In the section 3.5.2 of the spec [1] (Element <StatusResult>), p[134], there are explanations about three attributes (Success, Failed and Pending) and what do they mean in the case of a *compound request*. Nothing is stated about a simple request scenario. As I understand it, a code of Success="1" or Failed="1" or Pending="1" would be returned, but maybe a bit of clarification would be appreciated.
[1]
Modified p. [134] in Section 3.5.2 to describe the ResultType attributes for non-compound requests. That is, {Success, Failed, Pending} describe in this case the status of the request operation that has completed and are indicated in the ResultMajor attribute.
I?).
[1]
Removed the OCSP row of the table in #3.2.3 and the related line of schema - fixed para [104] and [106] and updated schema.
Following our client-server tests, Tommy and I were discussing the number of OpaqueData elements that the specification *intends* to allow in an OpaqueClientData element.
It seems that the way the schema currently stands, multiple OpaqueData children are allowed for an OpaqueClientData element,
<sequence maxOccurs="unbounded">
  <element ref="xkms:OpaqueData" minOccurs="0"/>
</sequence>
, but currently only the first one is handled by Tommy's implementation and so we would like to get confirmation that that's not the expected behaviour.
I remember there was a mail from Tommy [1] suggesting a mention of the child element OpaqueData. On the other hand, the addition of "including its children" has clarified the text, so that might be enough.
[1]
Fixed Para 94: inserted ", including its children," to clarify further
In part-1 of the spec, many examples quote an XKMS service called
I propose that we change this to something more neutral like
or
To avoid fallout if ever the real owners of that domain (which is registered) complain to us.
Probably a good idea. I prefer your first alternative since exists and refers to rfc2606 whereas the 2nd one just gives a dns error.
FWIW, xmltrustcenter.org was a place where VeriSign put copies of specs etc. prior to the WG starting so I guess the chances of them complaining or of the domain changing hands are both v. small. I just looked for the 1st time in ages and the front page there is quite out of date so again the change is probably a good idea.
The proposed corrections were made to the Editor's draft.
In part 1, par. 122 gives a table that defines the ResultMinor codes. There are some cells in the middle column (Possible Major Codes) that are empty. It can be interpreted that that case is when there is no MajorCode.
I think that, rather, it's an explanation of the ResultMinor code by itself.
In all cases, it looks confusing.
Added phrase "Generic Description:" to the cells which don't have a corresponding major result in para[122].
Section 4, Security Bindings, of part-2 is not clear. It requires more explanation text. There is no description of what is meant by "variant" on the tables or why the variant column doesn't give variants to all the options in the tables.
Looks like it needs more editing.
N.B. JK: This issue was announced as closed early on, but required some extra edits since then (done by the Editor, with my feedback). I used the last message announcement and the changelog extract to say it was closed.
Updated and consolidated the tables and text in this section, adding more clarifications where needed. Among the applied changes, we can cite the following:
Removed the URIs related to client authentication modes (which were non-existent and non-normative) in p. [72] and replaced them with text.
Deleted sentence in p. [71] related to profile URIs. Changed "Variants" to "Client Authentication Modes" in p. [74].
Added the definition of payload security to part-1, Section 1.2.
Deleted Row in table [71b] which associated the Request/Response correspondence with the XKMS element Request/MessageDigest as it was an error.
Deleted Row in table [71b] which associated the Request/Response correspondence with the XKMS element Request/MessageDigest as it was an error.
In the tables [71b] and [74a], for Replay Attack, DOS, and Non Repudiation Protection, added "Any" as the client authentication mode.
Changed -- in table [71b] with "Not Applicable"
In table below p. [71b], for the Replay Attack, XKMS element column, added qualifier - "in two-phase protocol"
In the table p. [72], changed Request/MessageDigest to Request.Signature with MAC - which is more accurate
Changed table column name "XKMS element" to "Comment" in pp. [71b] and [74a]
How is the shared secret "holder" in an NotBoundAuthentication intended to be identified?
Apart from altering the schema (adding a "Name" attribute) the only reasonable option seems to be, to combine these two pieces of information and include their base64 encoding in the Value attribute.
For example, a protocol defined out of scope to XKMS and identified by the URI urn:example-protocol:username-password specifies that the Value attribute carries a username/password pair separated by a ':' would take the form of the following instance fragment
<NotBoundAuthentication Protocol="urn:example-protocol:username-password" Value="YWxpY2U6c2VjcmV0"/>
Not sure if the KeyName would be best there, since I'd rather keep the key and auth-id names separate, but in any case, there's Tommy's b64 idea or how about "secret+sfarrell@cs.tcd.ie" (like people use to filter emails). I could also imagine using (whatever's the official term for) a CGI parameter in the URI itself ("").
So, I'd say we're ok not to change the schema for this one - there's enough flexibility for what is probably a corner case.
> Not sure if the KeyName would be best there,
I second that. It seems to me that the KeyInfo in the PrototypeKeyBinding is intended to communicate information to be bound to the key pair being registered.
> So, I'd say we're ok not to change the schema for this one -
>; there's enough flexibility for what is probably a corner case.
I am of the same opinion.
> Tommy's b64 idea
I think the prose could be clearer: - while the schema allows for NotBoundAuthentication be used in any X-KRSS message section 7.1.3 paragraph says that NotBoundAuthentication is for registration only.
- section 7.1.5 paragraph [296] makes liberal use of the phrase "limited use shared secret" ; I don't like the innuendo of that and suggest that replacing this with simply "authentication data" would be more appropriate. Sure, using a limited use shared secret even as per section 8.1 may well be part of the Protocol, but this is specified by the Protocol and therefore out of scope in this spec.
Fixed pp. [291] and [296] - replaced "registration request" with "X-KRSS request"; used the phrase "limited use shared secret" as an example for "authentication data".
[1]
The spec sometimes mentions AbstractRequestType, and sometimes mentions RequestAbstractType (but obviously means the same type). These should be unified.
Replaced AbstractRequestType with RequestAbstractType in paras [111] and [128] in the Editor's draft[1].
[1]
I have *no* objections against the way in which these were handled.
In paragraphs 188 and 189, the semantics of the ValidityInterval element when used inside a UnverifiedKeyBinding element could be clearer. Currently, the spec says that an UnverifiedKeyBinding "describes" a key binding, but "makes no assertion regarding the status of the key binding."
So, what does "the time interval in which the key binding relationship is asserted" mean in a context where no assertion is made?
Modified explanation for ValidityInterval in par. [189] of part-1[1] as follows:
ValidityInterval [Optional]
The time interval for which the key binding relationship is requested to be asserted.
[1]
I have *no* objections against the way in which these were handled.
This protocol basically consists in sending a double hash (the RevocationCodeIdentifier) of a (presumably) user-typable or even user-picked pass phrase as part of the PrototypeKeyBinding element of a RegistrationRequest, and later sending a simple hash of the pass phrase (the RevocationCode) to authenticate a RevocationRequest. The spec does not contain any confidentiality requirements for these messages, and does not suggest encrypting them; paragraph 288 specifically talks about sending the RevocationCode "as plaintext."
An attacker with access to a RevocationCodeIdentifier can launch an offline dictionary attack to recover either the RevocationCode or the pass phrase. The intended recipient of these identifiers can also, of course, launch an offline dictionary attack. These dictionary attacks are an issue when the underlying pass phrases contain too little entropy.
Further, eavesdroppers with access to RevocationCodeIdentifier elements are able to determine whether two revocation codes are identical. If multiple key bindings are associated with the same revocation code identifier (and, hence, the same revocation code) and an eavesdropper observes one revocation transactions, then this eavesdropper is able to create valid revocation requests for any of the other key bindings with identical revocation code.
Based on these observations, I would recommend the following changes to the specification text:
* paragraph 288: "The double MAC calculation ensures that the <RevocationCode> value may be sent as plaintext without the risk of disclosing a value which might have been used by the end-user as a password in another context."
I would suggest to strike this text, for several reasons:
- As far as low-entropy passwords are concerned, the double MAC calculation doesn't ensure anything: These passwords are made accessible to an offline dictionary attack and can subsequently be compromised. (They aren't technically disclosed, though.)
- The text seems to indirectly encourage re-use of shared secrets across different contexts. It shouldn't.
- The text seems to suggest the use of low-entropy passwords to generate the revocation code. It shouldn't.
Paragraph 363 suggests that "the shared secret SHOULD contain a minimum of 32 bits of entropy if the service implements measures to prevent guessing of the shared secret, and a minimum of 128 bits of entropy otherwise."
It should, at the very least, be made crystal clear that "measures to prevent guessing of the shared secret" should include strong confidentiality protections for revocation code identifiers and revocation codes, to provide safeguards against the dictionary attacks outlined above, and to protect against attackers recognizing deliberate or accidental collisions of revocation codes.
* The security considerations part of the document should make clear that implementations should not re-use revocation codes across different key bindings (regardless of the amount of entropy used when generating them). Note that strong confidentiality protection of RevocationCodeIdentifier and RevocationCode elements would also help against this problem.
Other than the resolutions suggested, (which look fine at first glance), ought we put in place a better (though non-interoperable!) fix for the "able to determine whether two revocation codes are identical" threat which looks, now that he says it, a bit sloppy?
Fixes could be:
- use a salt, chosen at registration time and available for querying (needs a new operation, so yuk) at revocation time
- use some keybits, which is fine so long as you haven't lost the key (key loss being quite likely in a revocation scenario)
- lodge foo=H(H(pwd),X) where X is a random integer between 0 and 15, and at revocation time send all 16 possibles, resulting in an 1-in-4 chance of a foo-collision given the same pwd? (terrible hack and I've probably gotten the math wrong as usual, plus, the 1st revocation exposes the collision in any case, but at least the collision isn't apparent from the responder DB)
All sound like a bit too much work at this stage, so would anyone like to do any of these, or is there a better idea?
Deleted the first sentences in para [288] as suggested.
Added clarification statement in para [363] about guessing of shared secrets.:
[363]...Implementations should not re-use revocation codes across different key bindings (regardless of the amount of entropy used when generating them). Note that strong confidentiality protection of RevocationCodeIdentifier and RevocationCode elements would also help against this problem....
[1]
I have *no* objections against the way in which these were handled.
I have a technical question. It concerns Revoke, Reissue, and Recover requests. These types of requests contain a reference to their own types of KeyBindings (RevokeKeyBinding, ...) extensions from KeyBindingType base type. And this type contain a Status element. In your examples, for these types of requests, Status is always set to "Intermediate".
I do not understand the utility of this field in requests. Would it be possible to imagine other values than "Intermediate" in requests, and if so, in which cases ?
Frederic's question would no doubt be best answered by someone with a better perspective of the XKMS schema's evolution than myself, however, I am willing to offer my personal view:
I didn't find any occurences of the "Intermediate" so I assume that Frederic meant "Indeterminate". As a result, I don't think this is an editorial issue.
It looks like the primary intended use of the KeyBindingType is in the KeyBinding element in a ValidateResult. It is not clear how a server can make use of the Status element in Revoke, Reissue anad Recover operations and in any case, I think the server should probably not rely on the client's opinion of the Status of a key binding.
The fact that the {Revoke, Reissue, Recover}KeyBinding elements are all of type KeyBindingType has the unfortunate(?) effect that the Status element is required in all of these elements. Instead they should be of types derived directly from UnverifiedKeyBindingType (or KeyBindingAbstractType); this would allow for information set elements better suited for each of the intended purposes.
Also, the addition of a RevocationReason info set element in a RevokeRequest would be welcome - at least X509 and PGP supports this notion.
All of this obviously implies schema changes and it is getting late for that.
The following paragraph was added to the Editor's draft[1]:
[206a]Note that the X-KRSS {Revoke, Reissue, Recover} KeyBinding elements are all of type KeyBindingType, which requires a Status element (c.f. Section 7). In the case of Reissue, Revoke, and Recover requests, servers MAY ignore the Indeterminate <Status> status value and Clients MAY set Indeterminate as status value.
[1]
Section '7.1.7 Element PrivateKey' does not specify whether or not to include a Type attribute in the EncryptedData [1]. I have seen this specified in other specs that make use of EncryptedData, most recently in SAML 2.0. The difference is that in XKMS, it is not intended that the EncryptedData be used with a decrypt-and-replace operation. Instead, it seems, the content of the EncryptedData (the RSAKeyPair markup) is to be treated as a separate document.
The fact that everybody reported interoperability despite the fact that my own server implementation and that of SQLData use different values for the Type attribute has led me to believe that the usefulness of the Type attribute in this application is limited.
However, since it is possible/likely that the RSAKeyPair is pre-appended with an XML declaration (^lt;?xml ... ?>) I am thinking it would be appropriate to require a Type attribute value of *if* the Type attribute is present. The MimeType attribute, while advisory, should be "text/xml" if present.
The following text was added to the description of the xenc:EncryptedData element:
The Type attribute SHOULD be present and, if present, MUST contain a value of. The MimeType attribute, SHOULD be "text/xml".
Due to the contradictory wording of Section '4.4.5 The PGPData Element' of XMLSIG[2] it is not clear whether or not PGP packet types other than Key Material Packet's are allowed in a PGPKeyPacket. In order to support XKMS clients that want to perform "trust computations" themselves (as opposed to delegating this to an XKMS service), access to SignaturePackets and TrustPackets would be useful. This is an XMLSIG issue, but I thought it would be prudent to mention it here anyway.
[2]
This issue was declined as the working group feels that this is a XML DSig [1] issue. It has also been mentioned that this issue could be adressed as an XML DSig Errata.
[1]
N.B. This topic was discussed with Tommy, who is part of the XKMS WG, on the 8 March 2005 XKMS Teleconference (minutes). No acknowledge from Tommy was sent to the declined announce as Tommy agreed with it directly in the meeting.
As the ProofOfPossession element is optional, an additional ResultMinor #ProofOfPossessionRequired would enhance clarity in situations where a client does not include this element for a server that requires it.
The proposed changes were accepted and added to the Editor's draft[1].
[1]
If a server does not support the TimeInstant element, it should indicate a failure *unless* it includes the optional ValidityInterval. The spec does not currently require this. Here too an additional result code may be useful.
A new minor result code, TimeInstantNotSupported, was added to the Editor's draft[1], A server returns this code when it doesn't support optional the TimeInstant element, rather than just silently discard it (updated pars. [213] and [122]. Updated the description of the TimeInstant element and corresponding result codes to schema.)
[1]
The AuthenticationType is currently a xsd:sequence consisting of two optional elements, KeyBindingAuthentication and NotBoundAuthentication. I can't think of a reasonable usage scenario where both of these would be present (or both absent). An xsd:choice may be a better ... choice. Alternatively, some constraining text would do the job.
The WG didn't want to modify the XKMS schema at this point of time. Added sentence "XKMS Responders do not have to support both of these optional elements in a request message." to para 291 of part-1[1].
[1]
RespondWith is only advisory so this is not a big deal but ...
RespondWith's should be discouraged/disallowed in request types for which the corresponding result type does not contain any key binding types. i.e. CompoundRequest, PendingRequest and StatusRequest. The CompoundRequest could be an exception, in which case it should be stated that RespondWith's in the containing request are applied to all inner requests.
Updated para [112a], [128a] and [132] of part-1[1] by adding that the <RespondWith> element cannot be present in these request elements.
@@@ JK: verify with Tommy.
[1]
After some thought and analysis, my feeling is that XKMS has an ambiguous use of the term [Optional] when referring to elements and attributes.
The interpretations that this term can have here are:
- An element or attribute that a client or a server may choose to include in a message.
- Implementation of a given element or attribute is required/recommended/optional
The point I want to make here is that we may have an optional element, but whose implementation is required by a server. If the implementation is optional, the server may then decide to ignore it. However, it should not ignore it if the implementation is required.
In few cases, the spec actually says that the implementation is optional. Most of the time it just says optional and it lets the reader the guesswork as to whether the inclusion of the element/attribute or its support by a client/server is optional.
In my opinion, this is is a source of confusion and of potential interoperability problems. The spec should be more precise here, while still leaving freedom of choice to the user.
You'll find here below a list (n.b. left in the original message [1]) of all the elements and attributes that are optional. I think it would really be good to have some extra text that says if their implementation is optional, recommended or required. We could add this as an appendix too.
Defined a new ResultMinor code, OptionalElementNotSupported. A server returns this code when it doesn't support a given feature. See par. [352a] of part-1[1].
[1]
(summarizing this issue reported by Tommy for archival purposes)
The X-KRSS message defines the KeyBindingAuthentication element that lets a server authenticate the key binding element within an X-KRSS request. The content of this element has a ds:Signature calculated with an HMAC using a preshared secret.
The XKMS CR specification doesn't define how to identify the preshared secret. One developer did it using ds:KeyInfo.Keyname, while another one used UseKeyWith with a request can notify the server which shared secret it used. One implementation used ds:Keyinfo.Keyname where another one used UseKeyWith with certain values to make it work.
In order to avoid interoperability problems, it would be good if the XKMS recommended how to do this. Tommy's proposal to use ds:KeyInfo.Keyname for this makes sense to me.
Added the following paragraph to the Editor's draft[1]:
[277a]Clients and Responders MAY use dsig:KeyName for HMAC validation. Alternatively, they may use other Identity related information derived from security binding, such as the Sender's IP address.
[1]
Deprecated the shared string canonicalization algorithm in favor of the StringPrep SASL profile. Indicated that binary data doesn't need to be canonicalized.
The changes were made to Section 8.1, p. 329a-c of the Editor's spec[1].
[1]
The spec. is not clear for what happens in an asynchronous request if a user sends a ResponseMechanism=Pending without including a PendingNotification element (which is optional). This seems to imply that the client needs to poll the server using the StatusRequest. This is not clear in the spec.
The spec says that the service MAY use the notification mechanism indicated by the client, it does not have to do it. Therefore, it may be prudent of a client implementation to not rely only on notification to indicate when to issue the PendingRequest but to also to poll using StatusRequest, after all the notification may never arrive.
Part-1 of the draft specification[1] was modified to add a more explicit polling/notification description and the text in pp. [55] and [56] and in Section 2.5. A new section, 2.5.2, was added to cover the Status Request polling procedure. The asynchronous example given in this section was revised and an SMTP notification example was added.
[277a]Clients and Responders MAY use dsig:KeyName for HMAC validation. Alternatively, they may use other Identity related information derived from security binding, such as the Sender's IP address.
[1]
Last update: $Date: 2005/06/23 13:48:57 $. | http://www.w3.org/2001/XKMS/Drafts/cr-issues/issues.html | CC-MAIN-2016-40 | refinedweb | 5,613 | 54.52 |
Big O Notation: Calculate Time & Space Complexity
Efficiency in algorithms is very important when it comes to scaling. The total execution time and space required by an algorithm can be the difference between a seamless experience and one that is laggy and unresponsive. Let's quickly run through a practical example: an algorithm that calculates whether a number is prime or not.
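The prime-checking code itself isn't reproduced in this version of the post, but a naive trial-division check (the `PrimeCheck` class and `isPrime` method names here are my own sketch, not the original author's) might look something like this:

```java
public class PrimeCheck
{
    // Naive trial division: try every candidate divisor from 2 up to n - 1.
    // The loop can run up to n - 2 times, so the work grows linearly with n: O(n).
    public static boolean isPrime(long n)
    {
        if (n < 2)
        {
            return false; // 0, 1, and negative numbers are not prime
        }
        for (long i = 2; i < n; i++)
        {
            if (n % i == 0)
            {
                return false; // found a divisor, so n is not prime
            }
        }
        return true;
    }

    public static void main(String[] args)
    {
        System.out.println(isPrime(7));        // small input: effectively instant
        System.out.println(isPrime(27644437)); // large prime: tens of millions of loop iterations
    }
}
```

With small inputs the loop exits almost immediately, but a prime like 27644437 forces the loop to run all the way up to n, which is exactly the worst case that Big O cares about.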
If we stick to relatively tiny numbers we can expect a result in a second or two. However, if we use a number like 27644437 this will take significantly longer; it may even throw an error depending on the data type used to initialize the variable.
Big O notation is how programmers describe the performance of an algorithm. As programmers we are concerned with the worst-case scenario: the really big prime number cases that will break our algorithms. With this in mind we will learn how to quickly calculate the Big O notation of an algorithm.
How to calculate Big O
To calculate Big O, there are five steps you should follow:
- Break your algorithm/function into individual operations
- Calculate the Big O of each operation
- Add up the Big O of each operation together
- Remove the constants
- The highest term will be the Big O of the algorithm/function
Simple, let’s look at some examples then.
public class Add
{
    public static void main(String[] args)
    {
        int num1 = 8; // initialize num1 to be 8
        int num2 = 4; // initialize num2 to be 4
        System.out.println(num1 + num2); // Output should be 12
    }
}
// Don't worry if you don't know Java syntax; just know we are getting the sum of the two numbers
Our program returns the sum of two variables that were initialized in the lines above it. There are three operations, each O(1), which would add up to O(3), but we remove the constants to calculate the complexity quickly. So the time complexity would be O(1), or constant.
The time complexity is O(1), or constant time, because the program doesn't loop and each operation happens exactly once. No matter how big the numbers get, it will still perform the same number of operations.
Okay, so now let’s try a harder example.
public class NestedLoop
{
    public static void main(String[] args)
    {
        int x = 0; // This will increment and serve as the outer counter
        while (x < 10)
        {
            System.out.println("Outer counter: " + x);
            int y = 5; // This will decrement and serve as the inner counter;
                       // it must be reset on every pass of the outer loop
            while (y > 0)
            {
                System.out.println("Inner counter: " + y);
                y = y - 1;
            }
            x = x + 1;
        }
        System.out.println("Loop Completed.");
    }
}
Now we could look at all the operations and evaluate each one, but we know we only need the highest term, and the constants are irrelevant. So let's find the highest term. A loop means that its body will run repeatedly until a condition is met.

The outer loop can quickly be assessed as O(n). However, in order for each pass of that loop to resolve, we need to look at the nested loop inside, which also runs until its own condition is met: another O(n).
O(n) * O(n) = O(n²)
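To make that multiplication concrete, you can count how many times the innermost statement actually executes. (This `countOperations` helper is my own illustration, not part of the original article.)

```java
public class CountOps
{
    // Runs two nested loops of n iterations each and returns how many
    // times the innermost statement executed: n * n.
    public static int countOperations(int n)
    {
        int count = 0;
        for (int x = 0; x < n; x++)
        {
            for (int y = 0; y < n; y++)
            {
                count = count + 1; // one unit of work per inner iteration
            }
        }
        return count;
    }

    public static void main(String[] args)
    {
        System.out.println(countOperations(10)); // prints 100, i.e. 10²
    }
}
```

Doubling n quadruples the count, which is the practical meaning of O(n²).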
Summary
Big O notation can be hard to understand, but it comes up very often in interviews and when scaling sizable solutions. The mark of a great programmer is understanding the limits of their builds and improving them. Solving a problem can seem frustrating enough, but it is important to think about these ideas of run time and space required.
For more resources check out these links here; they helped me a lot in understanding this concept.
A Beginner's Guide to The Big O Notation | Hacker Noon: "Simply put Big O Notation is used to describe the performance of an algorithm. Big O specifically is used to describe…" (hackernoon.com)
What Is A CSS Framework Anyway? (42:04)
- 0:00
[MUSIC]
- 0:02
[SOUND] [UNKNOWN]
- 0:08
Well, the first thing I'll say is sorry about yesterday.
- 0:16
I've been feeling ill since the weekend.
- 0:19
I had to work, from Monday, here which was fantastic.
- 0:21
And I worked at [INAUDIBLE] and it was like a scene from The Exorcist.
- 0:24
So, it was probably safest not to give the talk.
- 0:27
Did anyone see the pre-recorded version I made?
- 0:30
Right.
- 0:30
So, sorry if that's fairly terrible quality but I thought
- 0:34
it was a good idea to, at least, have that prepared.
- 0:36
But I do get to give it live today.
- 0:38
So yeah, I'm Harry.
- 0:40
Thank you all for coming, and sorry for messing you around and stuff.
- 0:42
Feel really terrible about that.
- 0:45
My talk is What is a CSS Framework Anyway?
- 0:47
And it's going to be a fairly critical,
- 0:50
but hopefully quite balanced look at CSS frameworks.
- 0:53
Why we use them.
- 0:54
Why we might not use them.
- 0:56
There are a lot of opinions about CSS frameworks, most of
- 0:59
them negative, so I'm gonna be looking at why that might be.
- 1:04
And looking at ways we can perhaps build
- 1:06
them better, make them work more effectively for us.
- 1:12
So actually, my name's Harry, I'm a consultant front-end architect, which
- 1:14
is a fairly pretentious job title I picked sometime last year.
- 1:19
In my work, I travel around quite a lot, meeting new companies.
- 1:23
And my job is basically to help them write CSS better.
- 1:27
In doing this, I found out a lot about CSS frameworks.
- 1:30
How people try and use them to solve their problems.
- 1:33
How they create problems for people where they wouldn't expect them.
- 1:37
Where they come up against really, really
- 1:38
unusual things [INAUDIBLE] false promise I think.
- 1:44
A lot of people hedge all their bets on a CSS framework,
- 1:48
only to find that they don't work as well as perhaps they thought they might.
- 1:52
And it's been really, really interesting finding this kind
- 1:54
of stuff out, over the last six or so months.
- 1:57
It should be a fairly self-critical talk as well.
- 2:00
I've actually got my own CSS framework, which I will mention briefly.
- 2:03
But this isn't gonna be a talk that disses Bootstrap or, or slams Foundation.
- 2:07
This should be a fairly balanced talk discussing even the work I've done, and
- 2:11
a very good example of what not to do in the CSS framework world.
- 2:16
Yes like I said I have been working for myself for a few months now and
- 2:19
it's in this time I really learned a
- 2:21
lot about why frameworks can really hold organizations back.
- 2:25
A lot of the organizations I have worked with have
- 2:29
tried writing their own CSS frameworks and they struggle with that.
- 2:33
You know, writing a library just for their own use.
- 2:36
A lot of people struggle with trying to use third party frameworks.
- 2:40
You know their mileage really is limited by using a third party library.
- 2:45
So that's the kind of stuff we're going to cover in the talk today.
- 2:49
It's fairly important that I put a disclaimer in here, because
- 2:53
like I mentioned I do actually own my own CSS framework.
- 2:56
This talk shouldn't be an advert for
- 2:58
Inuitcss, and like I said, Inuitcss should be
- 3:02
a fairly solid example of what not to
- 3:04
do in, in, writing third party css frameworks.
- 3:08
Because I've been working on this for three years and
- 3:10
I've managed to get a lot of things wrong with it.
- 3:12
So we'll discuss that in quite a bit of detail later on as well.
- 3:18
I think it might be a good idea to start with a brief history of CSS frameworks.
- 3:23
But it will be really, really brief.
- 3:24
This won't be a history lesson.
- 3:26
I'm not even sure, just how factually accurate a lot of this is.
- 3:29
but, that's not important.
- 3:30
It's just to try and set the scene.
- 3:31
And help try and prove a few points.
- 3:34
I asked for a show of hands in the recording
- 3:36
yesterday which completely doesn't work if I'm not in the room.
- 3:38
But the idea again today is if you could you know, just put your hand up.
- 3:42
Who remembers YUI, the Yahoo!
- 3:44
User Interface library?
- 3:46
Hands up who's familiar with YUI?
- 3:48
Not as many people as I would have expected.
- 3:51
YUI began life in 2005.
- 3:53
I couldn't find an exact month, but 2005.
- 3:57
So a long, long time ago.
- 3:58
It was probably one of the first,
- 4:02
UI libraries obviously that I can really remember.
- 4:06
I couldn't find anything really predating it.
- 4:09
It's one of the first packaged-up UI toolkits that
- 4:12
anyone ever came up with as far as I'm aware.
- 4:15
I think it's actually more than just a
- 4:17
CSS framework, it had JavaScript stuff in there.
- 4:20
And apparently the entire download, if you actually downloaded YUI, it was 11.2 meg.
- 4:24
So, when you say that Bootstrap's bloated just think back to YUI.
- 4:30
Skip forward a little bit,
- 4:34
we get blueprint.
- 4:35
Now who remembers blueprint?
- 4:38
So few more hands.
- 4:40
So when I got into the web, around, well, it was around 2006, 2007,
- 4:44
I remember blueprint being fairly big [UNKNOWN] it was thought out quite a lot,
- 4:50
it was a really useful tool for,
- 4:51
for developers because, 2007, in internet years
- 4:55
is such a long time ago, and the browser landscape was much more hostile.
- 5:00
We've got a lot more varied browsers today.
- 5:03
But back in August 2007, the best version of IE was IE7.
- 5:09
So what Blueprint aimed to do was not be an all singing,
- 5:12
all dancing UI toolkit with fancy CSS buttons and [UNKNOWN] drop downs.
- 5:18
It's a really humble, problem solving, framework.
- 5:21
It had a reset in there.
- 5:23
It had some base typographical styles.
- 5:26
It had a grid system.
- 5:26
It had really simple, humble things.
- 5:29
that tried to solve problems that existed in a very hostile landscape.
- 5:35
in actually doing my research and finding content for
- 5:37
these slides, I, I found the article announcing Blueprint
- 5:42
that the developer of Blueprint actually wrote
- 5:44
when he had announced that he was open sourcing it.
- 5:48
And he said this: I started looking at existing CSS frameworks
- 5:51
trying to find one that was right for me, the only viable
- 5:54
option was the Yahoo! UI Library, but YUI is just way too bloated
- 5:57
for what I want from a framework; and that was in 2007.
- 6:01
Put your hands up if you think that sounds like a 2014 problem.
- 6:06
That sounds like something people say every day today.
- 6:09
I want to know how on Earth we had this
- 6:11
problem in 2007, and we're still having it 7 years later.
- 6:14
It's absolutely crazy that this is still the
- 6:17
number one reason why people don't use CSS frameworks.
- 6:21
This should have been solved 7 years ago,
- 6:22
but it's still cropping up every day today.
- 6:25
And you can't actually find this article for real online.
- 6:30
The U.R.L. down in the bottom right,
- 6:32
is the Wayback Machine's archive of this article.
- 6:35
I just find it, really, really interesting to read that.
- 6:38
Looking back seven years if no one had told me the
- 6:40
date I would have assumed that was written two months ago.
- 6:43
I find it really interesting that this is a
- 6:45
problem that we've had for so long and no one's
- 6:48
fundamentally solved it yet.
- 6:51
And actually getting the content together for these
- 6:54
slides as well, I fired out a quick tweet.
- 6:56
I asked my followers what they thought was wrong with CSS frameworks.
- 6:59
You know, why don't people use them, what are their problems as they see them.
- 7:04
And
- 7:07
I got a lot of replies, which is fantastic,
- 7:08
I got a hell of a lot of replies.
- 7:09
Right, this is the Tweet I put out.
- 7:11
Why don't you use a CSS framework, what is wrong with them in your eyes.
- 7:15
If you hit that URL you'll see all of the replies a lot, because I did get loads.
- 7:19
Because people've got hundreds and hundreds of opinions on CSS frameworks.
- 7:24
And I just picked a few choice examples to try
- 7:27
and help strengthen and back up some points
- 7:31
in this talk.
- 7:32
So if you want to see every response you
- 7:33
could just follow the trail from that URL.
- 7:37
again, more show of hands.
- 7:38
So, when a tweet comes up and I read it out,
- 7:40
can you just put your hand up if you would agree with
- 7:42
the sentiment of that tweet, cuz I'd like to see what
- 7:44
you guys think, see what extent you guys agree with the criticisms.
- 7:48
So first response was.
- 7:51
Bloat slash ubiquitous design, usually.
- 7:53
I like to start with something smaller first and
- 7:56
build upon it, which is why I made cardinal.
- 7:59
So bloat and ubiquitous design, would that be a
- 8:00
reason why perhaps you might not use a framework?
- 8:03
And there's no right or wrong answer.
- 8:05
I mean, if you disagree with this, then keep your hand down.
- 8:07
But I find it really interesting to see that.
- 8:09
A lot of the problems we have with CSS frameworks
- 8:11
are very shared problems, so it's weird that they even exist still.
- 8:15
If everyone's having these problems, why on Earth do they still exist?
- 8:17
Surely it's the sensible thing to do to solve the shared problems first.
- 8:20
So yeah, this is a really common one: bloat and ubiquitous design.
- 8:24
So ubiquitous design, I'm guessing, is about the fact that,
- 8:28
if you use the same CSS framework everywhere,
- 8:30
surely every site will start looking the same.
- 8:32
A really common criticism of things like Bootstrap you know.
- 8:35
A lot of designers say that every site looks like Bootstrap now.
- 8:39
And that's not necessarily Bootstrap's fault at all.
- 8:43
The next one I've got: usually they do so much more than
- 8:46
I actually need, and there's a lot of [INAUDIBLE] extra or unnecessary code.
- 8:49
Yup, see?
- 8:53
Bloated frameworks still exist.
- 8:55
And they existed seven years ago, they still exist today.
- 8:57
So it's crazy that we haven't solved these problems.
- 9:00
So yeah, extra unnecessary code.
- 9:02
That was a bad thing.
- 9:04
Why would you download 10,000 lines of CSS if you only need 80 of them?
- 9:10
But they're always far too opinionated.
- 9:11
So opinionated frameworks are, are an interesting point.
- 9:15
Always far too opinionated when they're not.
- 9:18
So, not as many people agree with that one.
- 9:20
Opinionated prescriptive.
- 9:22
They say a lot about how things should be done.
- 9:25
They're fairly... well, yeah, prescriptive is probably the best word for that.
- 9:30
It is really quite interesting, I think
- 9:31
frameworks should be opinionated in some areas, and
- 9:34
we will discuss that, but yeah that's a
- 9:37
really common argument is how opinionated frameworks are.
- 9:45
So I do like them as a base which I
- 9:46
build upon, but most people want the full thing these days.
- 9:49
Like Bootstrap (overkill).
- 9:52
I don't know to what extent I agree that everybody wants Bootstrap.
- 9:55
Because from where I stand, it seems like everybody hates it.
- 9:59
But a lot of frameworks do have a lot in them.
- 10:03
The full thing.
- 10:04
And we'll discuss that in a little bit as well.
- 10:06
But this, this first bit's interesting.
- 10:08
I do like them as a base which I build upon.
- 10:10
That's exactly what a framework should be.
- 10:11
A framework should go no further than that.
- 10:13
And that's another sort of concept we're going to discuss.
- 10:18
Maybe frameworks have gone too far.
- 10:20
Look at Bootstrap, it's not a framework.
- 10:21
It's a theme off the shelf.
- 10:23
So very similar to the previous three,
- 10:25
frameworks going too far, overstepping their mark.
- 10:27
We're going to discuss the difference between a framework and a UI toolkit.
- 10:31
But a framework has a very, very finite,
- 10:35
definite limit of how far it should go.
- 10:37
Once you go beyond that, it stops being a framework.
- 10:40
So look at Bootstrap, it's not a framework, it's a theme off the shelf.
- 10:43
My intention here as well isn't to pick on Bootstrap.
- 10:45
You could replace Bootstrap with any number of frameworks, I imagine.
- 10:49
And so this, this isn't anti-Bootstrap; I think
- 10:51
it gets mentioned the most because it's the most popular.
- 10:56
Most projects are unique, so there's this idea of, you know, if all
- 10:59
designs look different, how can you possibly
- 11:01
use third-party CSS to accommodate different [INAUDIBLE]?
- 11:04
Who'd agree with that one?
- 11:06
I would, I definitely agree with that one.
- 11:08
It's hard to transplant a look and feel
- 11:10
across ten different, very different looking web sites.
- 11:14
Probably impossible to do that.
- 11:17
Similar then, not a huge amount of things I do on one site carry over to the next.
- 11:20
I haven't found a framework that covers what does.
- 11:22
So that's the same kind of thing: how are you supposed to write CSS
- 11:26
styling one widget that can be applied to any number of websites?
- 11:31
So this is a bit of both.
- 11:32
Nothing in them that matches the
- 11:34
designs, and the hassle of unpicking someone else's decisions.
- 11:37
So it's this opinionated thing again.
- 11:39
Who's actually worked with a framework and found themselves undoing things in it?
- 11:43
And thinking, oh, why is this in there?
- 11:45
How do they know that I want that much padding on my buttons?
- 11:48
I have to undo that.
- 11:49
And you know, that's one thing.
- 11:50
The rest of the framework decides all these things for you.
- 11:52
And it's to a point where you're barely using the framework at all.
- 11:55
You've, you've picked it apart so much that
- 11:56
you're not really left with much of it.
- 12:00
This is a really good reply for, well two reasons, because he's numbered them.
- 12:05
One: I find them limiting.
- 12:07
A framework that limits you is literally
- 12:09
the opposite of what a framework should be.
- 12:12
A framework is a productivity tool.
- 12:13
A framework should make you faster.
- 12:15
It should be an enabler.
- 12:17
A framework limiting you is doing the complete opposite
- 12:20
of what it should be right out of the starting gate.
- 12:23
That's its fundamental downfall.
- 12:26
Two: I feel their real strength is when you design around the framework, which no
- 12:29
one really does, so again, this sounds
- 12:31
like this idea of people just using things
- 12:33
off the shelf, wanting the whole thing you know, the idea that no one's actually
- 12:37
using frameworks to that full potential, people just
- 12:39
assume the framework will do everything for them.
- 12:43
The last one: I think it's because designing
- 12:45
a framework under one set of conditions is hard enough.
- 12:47
Designing for all is impossible.
- 12:49
Now, this is really interesting because a lot
- 12:51
of the clients I've worked with are building
- 12:53
UI toolkits, so they've got their own set of buttons they might use on every site.
- 12:58
Writing that for themselves is difficult.
- 13:00
Writing that in a third-party manner is almost impossible.
- 13:02
So
- 13:07
So, to wrap all these up into some bullet points: far too bloated.
- 13:11
Most projects are unique, so how can standard CSS accommodate bespoke user interfaces?
- 13:16
Far too prescriptive and opinionated, these frameworks are doing far too much.
- 13:18
I didn't ask for this, why is this framework doing this.
- 13:22
They try and be all things to all men.
- 13:23
This idea that, you know, it's a theme off the shelf.
- 13:26
This framework's going too far.
- 13:28
And and they get in your way.
- 13:31
Bloat's really interesting.
- 13:32
And bloat's bad for two reasons.
- 13:34
The sheer amount of code you're sending over the wire is bad for performance.
- 13:37
So you don't want to be sending 120 kilobytes
- 13:39
of framework CSS when you only need 20 of them.
- 13:43
So in a literal sense, bloat is bad for performance.
- 13:46
Having too much in there, is, is literally too much.
- 13:51
Well, I think another, more interesting angle to bloat in
- 13:53
frameworks is that the size of the file system is bad for developers.
- 13:57
It promotes unhealthy codebases.
- 14:00
I was working with a client recently, and
- 14:02
they were using a framework, a popular one.
- 14:05
And I opened the assets directory, and all
- 14:08
of a sudden, I saw hundreds of Sass files for their CSS.
- 14:13
And I was like wow, this, this project is a lot bigger than I imagined.
- 14:16
Like, you, are you using all of this?
- 14:18
And everyone was like I don't really know, I think
- 14:19
we just downloaded it and we use bits of it.
- 14:21
I was like how much of it do you use, can we get rid of any of this?
- 14:23
And the next guy was like we just try not to touch it.
- 14:26
And that's a really dangerous place to be. Okay,
- 14:28
by a show of hands, does that sound familiar?
- 14:31
That's terrible.
- 14:32
You've got a code base which completely mis-sells what the project is like.
- 14:37
I was like Jesus, how in the hell have you written so much CSS.
- 14:40
Well, we haven't, it's just there, it's what we get.
- 14:43
So it's really bad in terms of
- 14:45
your working environment, you know, when you're opening
- 14:47
up your code base and you've got 100 CSS files and you're using three of them.
- 14:51
It's just a real bad sign.
- 14:56
You need projects.
- 14:57
So this is the problem with using
- 14:58
or confusing CSS frameworks with UI Toolkits.
- 15:02
I think there's a big problem where
- 15:04
people are mis-selling UI Toolkits as frameworks.
- 15:07
Something that has a whole design in it can't be reused very effectively.
- 15:11
So this is a real problem.
- 15:14
Are fully designed components reusable across designs?
- 15:17
Of course they aren't.
- 15:18
How can you have one designed set of buttons on one site, and
- 15:21
then have them on another site, and not be sympathetic to the design?
- 15:25
Unless it's very highly configurable, it's just not doable.
- 15:29
This is why object-oriented CSS is a really good thing.
- 15:32
Who actually works with object-oriented CSS?
- 15:36
Cool.
- 15:36
So, you'll be familiar with the idea of separating structure from skin, and
- 15:39
the idea that you can actually build
- 15:41
design patterns with no cosmetic design whatsoever.
- 15:43
So object-oriented CSS can be transplanted across different
- 15:47
projects because it doesn't actually have any cosmetics.
- 15:50
It doesn't have the look and feel baked into it.
- 15:54
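To make that concrete, here is a tiny, hypothetical sketch of the structure/skin separation; the class names are my own for illustration, not from any particular framework:

```css
/* Structure: the reusable object. No cosmetics at all, so it can
   be transplanted between projects untouched. */
.btn {
    display: inline-block;
    padding: 0.5em 1em;
    text-align: center;
}

/* Skin: the project-specific cosmetics, layered on separately. */
.btn--primary {
    background-color: #4a90d9;
    color: #fff;
}
```

In the markup you'd combine them, e.g. `<a class="btn btn--primary">`, swapping only the skin from project to project.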
Opinionated.
- 15:55
They need to be opinionated to an extent; there'd be no point in
- 15:59
using an un-opinionated framework
- 16:01
because it'd have no reason for existing.
- 16:03
A framework with no opinions might as well do nothing, if it's
- 16:06
just gonna sit back and say, well here, do whatever you want.
- 16:08
There'd be no point in using one anyway.
- 16:10
But it has to be opinionated in the right places.
- 16:13
Opinionated in telling you what color to use for your buttons is a bad thing.
- 16:17
But opinionated in terms of, well here's
- 16:19
a sensible naming convention, that's a good thing.
- 16:22
So a framework being opinionated, it is important, but only so far.
- 16:26
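As an example of that "good" kind of opinion, a framework might impose a naming convention like BEM without dictating a single pixel of design; this sketch is mine, not from the talk's slides:

```css
/* Block: a standalone component. */
.media {}

/* Element: a part that only makes sense inside its block,
   marked with a double underscore. */
.media__img {}

/* Modifier: a variant of the block, marked with double hyphens. */
.media--reversed {}
```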
So, by and large, with any product.
- 16:33
Any product at all, I'm not just talking on the web.
- 16:35
Anything, really, that is successful
- 16:38
typically does just one job very, very well.
- 16:40
As soon as you start to try to do everything, the [UNKNOWN] gets diluted.
- 16:46
Each thing it does, it becomes less and less effective at doing.
- 16:49
So when you've got a framework, or rather when you've got something
- 16:52
that's responsive, mobile-first, Sass-based, you know, 101 different components.
- 16:58
Selling that as a framework is very
- 17:00
dangerous, because a framework can't do all of this.
- 17:03
They're very at odds with each other.
- 17:06
And, yeah.
- 17:06
They get in your way.
- 17:07
So this is a really bad place to be
- 17:09
because a framework is meant to be a productivity tool.
- 17:11
It's meant to be a helping hand.
- 17:13
A framework getting in your way is the last thing it should do.
- 17:17
So this is a really, really bad thing.
- 17:18
And you might as well have not used a
- 17:19
framework at all because now you're in a situation where.
- 17:22
You're working in spite of a CSS framework, rather than because of it.
- 17:25
A CSS framework should make you work more effectively.
- 17:27
If it gets in your way, and you have to undo things.
- 17:30
And it's tripping you up, and it's hampering your productivity, it's
- 17:33
literally failing its one job, which is to make you more productive.
- 17:39
A couple more.
- 17:39
Yes, after reading all of the replies:
- 17:41
We need a framework that's really customizable,
- 17:42
and just spits out the stuff you need.
- 17:44
Someone should really start working on that.
- 17:46
[SOUND] No, I apologize.
- 17:50
That is the only slide that bigs me up.
- 17:53
One thing I found really interesting is Inuit got
- 17:55
a lot of things really wrong, for a long time.
- 17:57
Once I started to actually look at
- 17:59
what people cited as the problems with frameworks.
- 18:02
And it turns out those are the kind of things we should be solving.
- 18:04
I'm not there yet, by any means.
- 18:06
But seeing the responses to this tweet,
- 18:09
is actually really good market research for me.
- 18:10
If anyone's thinking of writing their own CSS framework,
- 18:14
I'd suggest keeping that kind of stuff in mind.
- 18:16
The thing that everybody complains about, that's
- 18:18
the thing we should probably be solving.
- 18:21
I really appreciate this reply.
- 18:24
Who agrees, at least with the sentiment of, this tweet?
- 18:29
Okay, well, that's interesting.
- 18:30
I really appreciate this because ego is an issue.
- 18:34
I, it's my job to write CSS, so how much of
- 18:38
a fraud would I look if I was using someone else's framework?
- 18:41
So I'd say ego is a massive issue.
- 18:42
I don't need a framework; CSS is easy, I can write my own.
- 18:45
Which I find really interesting when you think
- 18:47
about other open-source projects that we use every day.
- 18:50
You know, most people don't write their own programming language,
- 18:52
they'll go and use PHP because someone's done it for them.
- 18:55
But yeah, I think this is a really
- 18:56
interesting and we'll cover this one a bit more.
- 18:59
So we all have very, very strong opinions about CSS frameworks.
- 19:03
Or most of us do.
- 19:03
And I wonder why that is.
- 19:06
I'm not a psychologist by any means.
- 19:09
And also, the next section isn't
- 19:10
necessarily true as in like factually true.
- 19:14
But from my own observations and from anecdotal evidence, I think there are
- 19:19
a few reasons why we might have such strong opinions on CSS frameworks.
- 19:24
CSS is something that we can all do ourselves.
- 19:26
You'll find that most software engineers started out writing HTML and CSS.
- 19:30
Most designers, to an extent, can write HTML and CSS.
- 19:32
And developers, I mean, it's our job to write HTML and CSS.
- 19:36
So, if we can all do it, if it's something we can
- 19:38
all do, you know, we don't really need any help with it.
- 19:41
Surely we wouldn't need help writing CSS, it's our job.
- 19:47
It's easy, so it's easy to have an opinion.
- 19:48
I put easy in quotes because obviously, CSS is getting much more difficult.
- 19:52
You know, long-tail legacy browsers, responsive design.
- 19:56
So with this one, there's a theory.
- 19:59
I can't remember its official name, but it's
- 20:01
something like the bike-shed theory, which states that
- 20:04
as things get more complicated they get discussed less.
- 20:08
So conversely, easy things get discussed you know, like flogging a dead horse.
- 20:12
They just get discussed over and over again.
- 20:15
The theory loosely states that you've got a company you know,
- 20:19
a big, you know, a big science company.
- 20:23
And they're discussing two things on an agenda.
- 20:25
The first one is do we build a bike shed for the employees?
- 20:28
And they have a discussion which lasts two
- 20:30
hours about, you know, do we spend £5000 on
- 20:32
a bike shed, do we need one, can't they just chain the bikes up to the railings?
- 20:36
So for two hours, they discuss a £5000 spend.
- 20:39
But the same company could discuss in 15 minutes their
- 20:42
entire approach to buying a new nuclear reactor for $200 million.
- 20:46
And I see this all the time in the work I do.
- 20:48
We'll have, I've actually had, like, hours and
- 20:51
hours of conversation about using a framework, or
- 20:53
using a UI Toolkit, or writing our own
- 20:55
third-party-esque internal library.
- 20:59
It can take hours.
- 21:01
And I've seen entire database schema discussed over IM.
- 21:05
So as things get more difficult, we tend to discuss them less.
- 21:10
The fact that there are now so many tells you everything you
- 21:12
need to know and I think this, XKCD comic really sums this up.
- 21:17
If you can't read that from the back, the slides are
- 21:20
available online already but this is a really, really good comic.
- 21:24
just summing up the problem with competing standards.
- 21:28
And this is certainly true in the, CSS framework world.
- 21:33
So, why do we have so many?
- 21:36
again, I don't necessarily know this is a fact.
- 21:38
But, educated guesses and observation tell me there are probably a few reasons.
- 21:44
As well as not being a psychologist I'm also not
- 21:47
a poet, but I'm aware of this really, really good metaphor.
- 21:50
I use this metaphor quite a lot.
- 21:53
Water, water, every where, nor any drop to drink.
- 21:56
This is from a poem called The Rime of the Ancient Mariner.
- 22:03
It discusses this sort of tragic
- 22:06
and horrible irony that these sailors are trapped
- 22:12
on a boat surrounded by hundreds and hundreds
- 22:14
of miles of water, none of it's drinkable.
- 22:16
Which could be kind of phrased a bit more like, how on earth do we
- 22:19
have so many CSS frameworks but we can't decide on a single one to use.
- 22:23
I just got a five minute call then.
- 22:25
So this talk is 25 minutes long.
- 22:26
So I'm gonna have to hurry up a lot.
- 22:30
So opinionated developers.
- 22:32
Subtle differences in frameworks.
- 22:33
You know?
- 22:33
This is, this is Bootstrap but with, three spaces instead of tabs.
- 22:37
Selfishness.
- 22:38
I'm really guilty of this one.
- 22:39
And this isn't me dissing you guys and other developers, I'm firmly in this camp.
- 22:44
Selfishness.
- 22:45
I want to use my framework.
- 22:47
I want people to use mine, I want to be number one on Hacker News.
- 22:50
It's this idea about you know, well everyone
- 22:52
else is getting their framework, I want one.
- 22:55
Perception of inability.
- 22:57
So again, like if I use someone else's framework,
- 22:59
I might look like I can't write CSS myself.
- 23:02
So I go out and write my own so that, people know that I can write CSS.
- 23:07
A really interesting thought experiment I did:
- 23:09
I was comparing CSS frameworks to other open-source frameworks
- 23:12
and how we end up getting things so drastically wrong.
- 23:16
Things like jQuery and Symfony, everyone uses these all the time.
- 23:19
There are communities around WordPress, but we've still got
- 23:23
so much bad stuff to say about CSS frameworks.
- 23:25
It's like we're completely trailing behind.
- 23:29
I picked the two biggest CSS
- 23:32
frameworks and mine, because I'm the worst offender.
- 23:36
April 2011, I launched inuit, three years almost to the day.
- 23:40
Actually it's almost inuit's birthday.
- 23:42
I'm on version six already.
- 23:44
Anyone familiar with semantic versioning?
- 23:46
Semver, you know it.
- 23:46
That means I have rewritten inuit six times in three years.
- 23:50
That's, that's just terrible, that's just whimsical, oh, well
- 23:53
I don't use IDs anymore so I'll re-write the framework.
- 23:55
Or [INAUDIBLE] is nice, I'll put underscores in there.
- 23:58
You know, it's crazy, I've re-written this framework six entire times in three years.
- 24:03
Foundation, released in October 2011, is on its fifth major revision.
- 24:06
Bootstrap and inuit were released within days of each other.
- 24:10
Bootstrap's not doing anywhere near as badly, but it's still on version three.
- 24:14
Three major rewrites in three years, that's incredible
- 24:16
right, it's really, really fast but irresponsible development.
- 24:21
Compare that to things like jQuery, born in 2006, it's only on version two.
- 24:24
Two major rewrites in, what is that, eight years.
- 24:28
Symfony, the PHP framework, October 2005.
- 24:32
Still on version two itself.
- 24:33
Two major rewrites in that amount of time.
- 24:35
How on earth do CSS frameworks just keep plowing ahead
- 24:37
when people are getting other open-source projects so right?
- 24:42
WordPress right, nearly 11 years old, 11 years and it's on version three.
- 24:48
How on earth did inuit get to version six in three
- 24:51
years when WordPress, being 11 years old, is only on version three?
- 24:56
The answer to that in a roundabout way is this.
- 24:59
The semver.org website, one of the first FAQs is,
- 25:03
if even the tiniest backwards-
- 25:06
incompatible change to the public API requires a major version
- 25:09
bump, won't I end up on version 42 very rapidly?
- 25:12
And I read this and I thought, well yeah, that's why
- 25:14
inuit's on version six, yeah, things have changed, that's fine, you know.
- 25:17
Their answer to this really hit the nail on the head.
- 25:20
This is a question of responsible development and
- 25:22
foresight, which is something I clearly didn't have.
- 25:25
I didn't have a product road map I didn't have
- 25:27
a goal, I didn't have a sensible development plan for inuit.
- 25:30
So that's why mine's on version six already.
- 25:32
And I think this is a problem across a
- 25:34
lot of front end open-source, particularly the CSS framework landscape.
- 25:40
So CSS frameworks vs UI Toolkits then.
- 25:43
There is a sort of difference.
- 25:45
It's a very important difference as well.
- 25:46
Who's familiar with this film?
- 25:49
I had to include this meme.
- 25:51
Anyone know what this film is?
- 25:53
Nowhere near enough.
- 25:55
Homework is to go watch the Princess Bride this weekend, fantastic film.
- 25:59
Framework.
- 26:00
You keep using that word, I don't think it means what you think it means.
- 26:03
The actual definition, I apologize for using a meme as
- 26:05
well, I know we're not children, but it seemed really appropriate.
- 26:08
A framework, in its narrow meaning, is a
- 26:09
basic structure underlying a system, concept, or text.
- 26:13
You've got a UI Toolkit which has a certain suite
- 26:15
of buttons and a carousel that works in a certain way.
- 26:17
That's not an underlying concept, that's not an underlying structure.
- 26:21
That's a UI Toolkit, and unfortunately UI Toolkits that
- 26:24
have got fully designed components are sold as CSS frameworks.
- 26:28
UI Toolkits aren't a bad thing.
- 26:30
There's a reason they exist: they help people out.
- 26:32
But when they are sold as frameworks, you get people like my clients hedging
- 26:35
their bets, and thinking they can really use these to scale massive products.
- 26:39
Only to find they weren't CSS frameworks at all.
- 26:44
I did a really quick sweep of the current framework
- 26:46
and UI Toolkit landscape, and it's important to note that
- 26:48
"UI Toolkit", I don't think, is an actual phrase that
- 26:51
gets used as much, but that's what I call them.
- 26:54
So this is a very non-scientific diagram as well.
- 26:58
But over on this side, SUIT CSS from Nicholas Gallagher.
- 27:00
Anyone familiar with SUIT?
- 27:03
Handful of people.
- 27:04
A really decent, decoupled framework; it is a framework.
- 27:08
It's got decent principles behind it.
- 27:11
Low specificity, high reusability.
- 27:15
So it's a framework in its truest sense.
- 27:17
Inuit, include my own there, is a framework.
- 27:19
It's a little more opinionated.
- 27:20
It has a managed architecture.
- 27:22
So it's decoupled but it has a managed architecture that brings it all together.
- 27:27
Pure and Topcoat are from I think, Yahoo and Adobe, respectively.
- 27:31
They kind of bridge the gap.
- 27:32
So they've got principles of low specificity, high reusability,
- 27:37
but have a little bit of UI in them.
- 27:39
So they will actually be very prescriptive in how they look.
- 27:42
And on this side, we've got things
- 27:44
like Bootstrap, Foundation, which are fully designed toolkits.
- 27:47
And I say that, you know, at the time of writing.
- 27:49
This could well change, I don't know what the
- 27:50
product road map is for either Bootstrap or Foundation.
- 27:53
But these are fully designed things, so
- 27:56
that's why people get into difficulties trying
- 27:58
to write their own UI on top of something that's already got UI in it.
- 28:02
But there is definitely room for both.
- 28:06
A UI Toolkit's a fairly full-on product.
- 28:08
You'd probably implement it, rather than work upon it.
- 28:10
You'd drop a UI Toolkit into a site and use it as-is.
- 28:14
And it'd do the majority of the work for you.
- 28:17
They're great if you need a UI out of the
- 28:19
box which is a very, very common thing to need.
- 28:21
I'll cover that in the next slide.
- 28:23
If you're not great with design or front-end dev, and you need someone to
- 28:26
do that all for you, then a UI Toolkit will really help out there.
- 28:30
Prototyping: a lot of people use things like Bootstrap for making prototypes.
- 28:34
That's really useful.
- 28:36
This guy's a friend of mine.
- 28:38
That's actually him.
- 28:39
That's all real.
- 28:40
He knows I'm doing this, but that's all, you know, factual.
- 28:42
That's his actual name.
- 28:43
That's a photograph of him.
- 28:45
And he's a really, really great software engineer.
- 28:48
And every weekend, he'll create some new crazy thing on
- 28:51
Node.js or [UNKNOWN] or whatever he's working with at the moment.
- 28:54
And he needs a quick GitHub Pages website to put this on.
- 28:58
He's gonna turn to Bootstrap.
- 28:59
There's a reason I use WordPress for a
- 29:01
blogging engine, because I don't wanna write PHP.
- 29:02
There's a reason someone like Nick would use
- 29:05
Bootstrap, because he doesn't wanna write CSS, and that's totally fine.
- 29:09
People give Bootstrap a lot of stick.
- 29:11
I think what Bootstrap's done, has been
- 29:14
really, really great for empowering people like Nick.
- 29:17
People who do need a UI for something, a quick marketing site.
- 29:21
Whenever I hear from any developers dissing
- 29:23
Bootstrap, I always think, Bootstrap's not for you.
- 29:26
Bootstrap's not a product for someone who wants to write their own CSS.
- 29:29
It's for people like Nick.
- 29:30
It's for people who do need a helping hand with user interface.
- 29:35
Conversely, a CSS framework should be a helping hand.
- 29:38
It should guide and lead decisions.
- 29:39
It shouldn't come in and make those decisions for you.
- 29:41
It should kind of steer you in the right direction.
- 29:43
We discussed, you know, a CSS framework
- 29:45
should be a productivity tool, an underlying concept.
- 29:47
So a CSS framework should be a helping hand.
- 29:52
It's more of a concept.
- 29:52
When we looked at the definition of
- 29:54
framework, it's a concept, underlying a system.
- 29:58
And it should make up the minority of a project:
- 30:01
you might start with the CSS framework, but the
- 30:03
good news is, you still get to write a lot of your own CSS on top of it.
- 30:08
These are good if you perhaps, you know, you've not worked
- 30:11
on a site where, scalability or architecture has been a problem.
- 30:16
But you've started working on a project where you do need a helping hand.
- 30:19
A framework with opinions in those areas can really, really, help you out.
- 30:23
They're good if you're not particularly good with
- 30:25
CSS but you do have your own designers.
- 30:27
So if you need a helping hand getting started with CSS but you've got your
- 30:30
own design to implement, a CSS framework is probably gonna be a lot of help.
- 30:36
A really simple, very [UNKNOWN], high-level comparison, I guess, of a
- 30:42
UI Toolkit to a CSS framework is that a UI Toolkit will do a lot of the work for you.
- 30:47
It will come on and say, this is what your buttons look like, this
- 30:50
is what your carousel looks like. A framework would give you a helping hand.
- 30:54
A friend that will tell you, hey look
- 30:55
maybe if you're building buttons you should use this.
- 30:59
The UI Toolkit can be a fairly quick win.
- 31:01
You can download Bootstrap or Foundation or something like that and drop
- 31:04
it into a project and be up and running in 15 minutes.
- 31:07
On the other hand, a CSS framework is very much a fairly large commitment.
- 31:10
Because a CSS framework is an underlying concept, it will tell you
- 31:13
how you work on that site for the next two, three years.
- 31:16
So a CSS framework isn't something that could really be taken lightly.
- 31:19
Something that will influence how you actually fundamentally
- 31:21
work on CSS for the lifetime of that project.
- 31:26
A UI Toolkit will typically tell you what, so what do my buttons look like?
- 31:29
You know, what does my top bar look like?
- 31:32
A framework will tell you how.
- 31:33
It will tell you how do I build these buttons, how do I build a top bar?
- 31:37
You know, how will I build a carousel?
- 31:39
A UI Toolkit [INAUDIBLE] gets the job done.
- 31:42
A framework gets the job started.
- 31:44
A framework allows you to write a lot of your own stuff effectively on top of it.
- 31:50
A real quick look at what's wrong with CSS then.
- 31:52
A bit of a departure real quickly, but nearly everything is wrong with CSS.
- 31:56
For a start, it's really old.
- 31:58
And we can't change it.
- 31:59
So CSS was written at a time when the web was much smaller.
- 32:02
It's not really suitable for today's products.
- 32:06
But we can't change it, we're stuck with how CSS works.
- 32:09
It operates in a global namespace.
- 32:11
Now this is like a quasi-technical one.
- 32:14
But you can't fully sandbox or encapsulate CSS, it's leaky.
- 32:17
It interferes with other parts of the page or site.
- 32:21
So, you know, it operates in
- 32:24
a global namespace, which really makes your working life a nightmare.
- 32:28
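A small, contrived example of that leakiness, assuming a third-party stylesheet is loaded alongside your own:

```css
/* Somewhere inside a framework you've pulled in: */
h2 {
    text-transform: uppercase;
    margin-bottom: 2em;
}

/* That rule now hits every h2 on the whole site, including ones in
   components the framework knows nothing about, so you end up
   writing undo-styles like this: */
.sidebar h2 {
    text-transform: none;
    margin-bottom: 0.5em;
}
```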
It's based on inheritance, it cascades, the cascade needs managing.
- 32:32
It's very loose, which is really good for the web
- 32:34
in general, you know, CSS is really easy to learn.
- 32:36
I probably wouldn't be where I am if CSS wasn't so easy to learn.
- 32:40
But it makes it also very easy to write badly.
- 32:43
So, source order: it's really important, the order
- 32:46
in which you write your CSS, it's fairly critical.
- 32:49
But the problem is when you've got
- 32:50
something like specificity, that undoes everything.
- 32:53
Specificity undoes all of the above, so we're
- 32:56
left with a tool which is something like this.
- 32:59
With an old, loose, leaky,
- 33:01
globally-operating, inheritance-based
- 33:03
language, which is entirely
- 33:05
dependent on source order, except when
- 33:06
you introduce its own worst feature, specificity.
- 33:10
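A contrived illustration of specificity trumping source order:

```css
#header .nav a {
    color: red;
}

/* This rule comes later in the source, so by source order alone it
   should win, but the selector above is more specific (it contains
   an ID), so the links stay red. */
.nav a {
    color: blue;
}
```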
These are the things we need to be solving.
- 33:12
These are the things that are wrong with CSS.
- 33:14
So CSS frameworks shouldn't be about the cool, fun stuff.
- 33:16
Frameworks shouldn't tell us how to make [UNKNOWN] xyz.
- 33:20
CSS frameworks should be solving the things that
- 33:23
everybody has a problem with, on every project.
- 33:26
So a CSS framework should be an enabler.
- 33:29
It should be a helping hand; it should teach you and guide
- 33:32
you how to do things in the way that you want to.
- 33:35
You should pick a CSS framework whose principles you agree with, and
- 33:39
then carry on to do your own work in that manner.
- 33:41
It should enable you to do your own thing.
- 33:44
It should be a helping hand, a school of thought.
- 33:46
So remember that dictionary definition of framework.
- 33:49
A framework should be a concept, it should be a group of ideas.
- 33:53
You could even have a framework that isn't a single line of code.
- 33:56
You can have a framework which just purely deals architectural concepts.
- 34:00
My work from mundow discussed some CSS architectural concepts, and
- 34:03
those are kind of frameworks, they are ideas for writing CSS.
- 34:07
So a CSS framework doesn't need, even need to contain a single line of code.
- 34:12
Highly configurable.
- 34:13
So, of course, you will have some opinions in there.
- 34:15
If you make you know, if you have a default buttons object perhaps that
- 34:19
just gives you a padded link you have to pick a value for that padding.
- 34:22
If you're gonna put that in a framework, make it configurable.
- 34:27
Design free.
- 34:27
We don't have a framework that does any design because that
- 34:30
means it's not very useful to be used across many projects.
- 34:32
So we need to keep as much design as possible out of our CSS frameworks.
- 34:36
And we need to keep all the code that's in them fairly ignorant and agnostic.
- 34:39
Ignorance is a, a word I use quite a lot when discussing
- 34:41
code because ignorant code is typically really good quality third party code.
- 34:46
It doesn't care where it lives, you know, or it might be moved to.
- 34:50
Anything that is ignorant is typically
- 34:52
really, really good for a third-party code.
- 34:55
So it sounds really unsexy and really dull,
- 34:58
but we don't want to solve the fun bits
- 35:00
we want to solve the difficult ones, if we are going to use or write a CSS framework.
- 35:06
I also want to be able to decide my own typographic scale.
- 35:09
I also want to sort of, I also want
- 35:11
to decide what color buttons I'm gonna, I'm gonna use.
- 35:13
That's a thing that I enjoy doing.
- 35:15
Those are things that I want to do myself.
- 35:17
So don't solve those for people who want to write it themselves.
- 35:21
Solve the difficult bits for them.
- 35:22
You probably get a lot more a lot more love if you actually solve the boring
- 35:26
bits for people, because they get to have,
- 35:28
get, they get to do the fun bits themselves.
- 35:31
So what should we keep out of CSS frameworks?
- 35:36
Well, a CSS framework shouldn't be a final call.
- 35:39
It shouldn't be you know, a fully designed solution.
- 35:42
It shouldn't be, well it shouldn't be a final solution.
- 35:45
It shouldn't be, everything.
- 35:46
It shouldn't be the be-all and end-all.
- 35:49
It should not be a UI Toolkit.
- 35:50
UI Toolkits have their place.
- 35:51
They can and should exist.
- 35:53
But we need to make sure we make a firm decision on which one we're working with.
- 35:58
again, it shouldn't be designed or prescriptive.
- 36:00
A CSS framework shouldn't tell people how things, work.
- 36:04
They should tell people how to do things themselves.
- 36:10
yeah, so, a CSS framework, but by
- 36:12
and large any framework whatsoever, shouldn't do
- 36:15
you work for you, it should allow you to do your own work faster.
- 36:18
Remember the tweet, the guy said he found CSS frameworks limiting.
- 36:21
This is a really bad idea for a CSS framework to, to do
- 36:24
your work for you, because it's making decisions that you then have to unpick.
- 36:29
So a CSS framework or any framework at all, should be
- 36:31
an enabler to make you work a lot, lot more efficiently.
- 36:35
It should do the things that you need solving, get out of
- 36:37
the way, and allow you to hammer out work on top of it.
- 36:42
So should I write my own CSS framework?
- 36:45
Maybe we should.
- 36:46
A lot of people ask me this, a lot
- 36:47
of clients ask me this, should we write our own?
- 36:49
And the answer isn't no.
- 36:50
The answer is definitely not no.
- 36:52
But I have a fairly perhaps brutal
- 36:54
but certainly objective checklist of, of questions that
- 36:58
you should probably answer before you decide
- 37:00
whether you should write your own CSS framework.
- 37:04
What is its reason for being?
- 37:06
Do we actually need one?
- 37:08
And this isn't me trying to trip people up.
- 37:10
This isn't me being horrible and saying you know, do we need another framework?
- 37:13
If it has a reason for being, if it has a genuine, compelling
- 37:16
reason for existing, and the answer is yes, you know, we should make it.
- 37:22
If the answer's no, it might not be the answer you want to hear.
- 37:24
But if there isn't a reason for that framework to exist, perhaps it shouldn't.
- 37:28
Would it be similar or inspired by, anything that already exists?
- 37:33
There's no point making a complete new
- 37:34
framework if it's similar to something else.
- 37:37
Perhaps it might be in everyone's best interest if you
- 37:39
join that open-source community, and try and make that framework better.
- 37:44
Will it be a framework or a UI Toolkit.
- 37:45
Now, there's room for both.
- 37:47
We've discussed that, you know.
- 37:48
And if you want to make a UI Toolkit,
- 37:49
perhaps you've got a really nice design style that you
- 37:52
think could be rolled out across many different sites, or
- 37:54
perhaps you're actually writing a, a toolkit for internal use.
- 37:57
Perhaps you work on a product where you want to standardize all your buttons.
- 38:02
But it is important to decide which one and then stick to it.
- 38:05
If you start writing a framework but then sort of depart into UI Toolkit
- 38:10
territory this is when people will start
- 38:13
to resent the decision you've made for them.
- 38:15
So it's important to know which one you're actually going
- 38:17
to be building, and there is definitely room for both.
- 38:20
Will you open-source it?
- 38:22
This is a really important question.
- 38:24
Open-source isn't just putting something on GitHub.
- 38:26
Open-source is about managing a community.
- 38:29
I didn't think of this anywhere near enough when I launched Inuit.
- 38:32
I just thought, well, put it on, GitHub and if people used it, that'd be cool.
- 38:35
Open-source is a massive responsibility.
- 38:38
I didn't have a product roadmap, I didn't have, a game plan.
- 38:41
That's why Inuit's on version six after three years.
- 38:44
Because I didn't take open-source seriously enough.
- 38:46
So if you are going to open-source you need
- 38:48
to be prepared to deal with the ramifications of that.
- 38:53
Will you expect people to use it?
- 38:55
It's fine if you're gonna write something as a starting point for yourself.
- 39:00
You have that and don't open source it or have it on GitHub but don't market it.
- 39:04
If you expect people to use your framework, you have to deal
- 39:06
with things like you've had a really terrible day at the office
- 39:08
and it's been, it's been a nightmare, and you get home and
- 39:11
all of the sudden you've got 20 pull requests to deal with.
- 39:14
Open-source is a big commitment from a personal point of view.
- 39:17
The amount of stuff around Inuit I have to ignore or have to leave stagnant
- 39:21
for months at a time because I'm busy,
- 39:23
is really, really irresponsible of me, it's terrible.
- 39:26
And this is something I wish I thought of at the beginning.
- 39:28
Of course I want everyone to use my framework.
- 39:30
But then I have to deal with the actual, the ramifications of, I have to
- 39:34
deal with the fact that people are gonna
- 39:35
submit pull requests I might not agree with.
- 39:37
Or people are gonna have opinions that are good for the project, that are good for
- 39:41
Inuit, but I can't let my own ego let those things in, because it's not my code.
- 39:45
It's that selfish developer thing.
- 39:47
This is a massive problem I had with Inuit and, if I
- 39:49
had to start at square one, I'd do things very, very differently.
- 39:55
So just to wrap up then.
- 39:56
Do we really mean framework?
- 39:59
When we discuss the actual framework, do we
- 40:01
actually mean framework or are we discussing UI toolkits?
- 40:04
Having a grasp of the difference in
- 40:06
the two can make discussions far more meaningful.
- 40:08
And it can mean we can avoid a lot
- 40:09
of the problems we typically encounter when using CSS frameworks.
- 40:15
Make sure your framework does very, very little.
- 40:17
And if not, try and do less.
- 40:19
If you look at CSS Inuit CSS's
- 40:22
retrospective product roadmap, it's basically been removing stuff.
- 40:26
I started off doing quite a lot of proscriptive things
- 40:28
in Inuit, and I realized that people don't want that.
- 40:30
Even I couldn't use my own framework for a period because it was too prescriptive.
- 40:35
So I, my product roadmap has been just taking things out.
- 40:39
Solve the tricky bits.
- 40:40
It sounds really dull and really boring, but the tricky
- 40:43
bits are the ones that people want solving for them.
- 40:45
No one wants to use your really cool accessory buttons,
- 40:47
because they want to have the fun of building their own.
- 40:50
Get out of the way.
- 40:51
So we discussed the CSS frameworks and underlying concept.
- 40:54
Do your thing and get out of the way.
- 40:56
Enable people to work effectively on top of your work.
- 41:00
yeah, let people do the fun stuff because of you.
- 41:03
No one wants to start a new project and have to worry about, well
- 41:05
you know, how are we gonna scale it and what directors do we have?
- 41:08
If you can solve that for them, they can get straight
- 41:10
away and make the fun stuff because you helped them out previously.
- 41:16
Get a goal, a purpose, and a roadmap.
- 41:18
It's the number one thing I did wrong with Inuit, is not having a decent roadmap.
- 41:21
And as an open-source project, it's really suffered because of that.
- 41:24
It's the reason it's on version
- 41:25
six already, and that's not responsible development.
- 41:28
For me that's fine, but when people start using Inuit CSS and
- 41:31
a new version comes out every six months, then upgrading it becomes difficult.
- 41:36
They resent the, the speed of change because all
- 41:38
of a sudden they're using outdated firmware they can't upgrade.
- 41:43
So having a decent road map in place for a CSS
- 41:45
framework is vital if you expect or want it to take off.
- 41:50
That's me to wrapping up, so thank you very much for listening.
- 41:53
Sorry about you know, how much do I [UNKNOWN] again
- 41:55
in this smaller room, but yeah thank you very much.
- 41:57
[NOISE] | https://teamtreehouse.com/library/what-is-a-css-framework-anyway | CC-MAIN-2016-44 | refinedweb | 10,322 | 82.44 |
In a previous tutorial, I demonstrated how to create a sliding puzzle game with HTML5 canvas.
To save time I hard-coded the starting tile positions. Game play would be better if the tiles were randomized, but doing so would have led to complications that would require a separate tutorial to explain.
This is that tutorial.
There are a number of ways to randomize the tiles. I’ll look at a few options and discuss their strengths and weaknesses, as well as the problems that arise and how to overcome them.
One simple method is to initialize the puzzle in a solved state, then repeatedly call a function to slide a random piece into the empty space.
function initTiles() { var slideLoc = new Object; var direction = 0; for (var i = 0; i < 30; ++i) { direction = Math.floor(Math.random()*4); slideLoc.x = emptyLoc.x; slideLoc.y = emptyLoc.y; if (direction == 0 && slideLoc.x > 0) { slideLoc.x = slideLoc.x - 1; } else if (direction == 1 && slideLoc.y > 0) { slideLoc.y = slideLoc.y - 1; } else if (direction == 2 && slideLoc.x < (tileCount - 1)) { slideLoc.x = slideLoc.x + 1; } else if (direction == 3 && slideLoc.y < (tileCount - 1)) { slideLoc.y = slideLoc.y + 1; } slideTile(emptyLoc, slideLoc); } }
In this case we’re sliding 30 tiles, twice the total number of tiles in the 4×4 puzzle, and yet most of the pieces remain in their original locations. To get anything resembling randomness we would need many more iterations.
That’s not an efficient way to randomize the puzzle. Ideally, we’d like to move each piece only once. We could initialize the puzzle to a solved state, then iterate through the tiles, swapping each one with a tile chosen at random.
function initTiles() { for (var i = 0; i < tileCount; ++i) { for (var j = 0; j < tileCount; ++j) { var k = Math.floor(Math.random() * tileCount); var l = Math.floor(Math.random() * tileCount); swapTiles(i, j, k, l); } } } function swapTiles(i, j, k, l) { var temp = new Object(); temp = boardParts[i][j]; boardParts[i][j] = boardParts[k][l]; boardParts[k][l] = temp; }
Not only does this method give us a much more random-looking configuration, it does so in fewer lines of code. This algorithm, however, has two serious flaws. The first problem is subtle. Although swapping each tile with a random location is much more efficient than simply sliding pieces into the empty slot, this still is not a truly random algorithm. Some starting positions will show up much more frequently than others.
In a 2×2 puzzle, some starting configurations will occur 87% more often than others. Add a third row and some configurations appear five times as often as others—and it continues to get worse as more tiles are added. Fortunately, there’s a way to achieve true randomness without adding extra complexity. It’s known as the Fisher-Yates algorithm.
function initTiles() { var i = tileCount * tileCount - 1; while (i > 0) { var j = Math.floor(Math.random() * i); var xi = i % tileCount; var yi = Math.floor(i / tileCount); var xj = j % tileCount; var yj = Math.floor(j / tileCount); swapTiles(xi, yi, xj, yj); --i; } }
The mathematics of the Fisher-Yates are beyond the scope of this tutorial, but it does give every tile an equal chance to appear in any square. Using this algorithm, the puzzle is as random as the
Math.random() function can get.
But swapping tiles randomly—with the Fisher-Yates algorithm or any other—leads to another problem. Half of all possible tile configurations give us a puzzle that can never be solved. To prevent unleashing an unsolvable puzzle on an innocent user, we need yet another algorithm.
Before I introduce this algorithm, I need to define two terms: inversion and polarity. An inversion is a pair of tiles that are in the reverse order from where they ought to be. The polarity of a puzzle is whether the total number of inversions among all tiles is even or odd. A puzzle with 10 inversions has even polarity; a puzzle with 7 inversions has odd polarity.
The solved puzzle has zero inversions (and even polarity) by definition. If we swapped two neighboring tiles from a solved puzzle, we would have one inversion.
In this game the board is configured as a two-dimensional array, each piece represented by its x/y coordinates.
But to work with inversions and polarity we’ll think of it as a one-dimensional array. We can convert each tile’s coordinates to a single number n with the formula n = y * w + x, where w is the width. Pictured as a single-dimension array the tiles are numbered like this.
Now let’s consider a randomized puzzle. It might look like this.
There are 19 inversions. 1 is inverted with 0.
To get this total, we need a function to count the inversions for each tile.
function countInversions(i, j) { var inversions = 0; var tileNum = j * tileCount + i; var lastTile = tileCount * tileCount; var tileValue = boardParts[i][j].y * tileCount + boardParts[i][j].x; for (var q = tileNum + 1; q < lastTile; ++q) { var k = q % tileCount; var l = Math.floor(q / tileCount); var compValue = boardParts[k][l].y * tileCount + boardParts[k][l].x; if (tileValue > compValue && tileValue != (lastTile - 1)) { ++inversions; } } return inversions; }
Now we can iterate through the tiles and keep a running sum of the inversions.
function sumInversions() { var inversions = 0; for (var j = 0; j < tileCount; ++j) { for (var i = 0; i < tileCount; ++i) { inversions += countInversions(i, j); } } return inversions; }
Sliding a tile sideways does not change the number of inversions; the empty square has no number, so swapping it with an adjacent tile will always leave us with the same number of inversions. However, we might change the number of inversions when sliding a tile up or down. For example, if we slide the 6 tile down, we reduce the number of inversions from 19 to 17.
The rule is that sliding a tile up or down will change its relationship with w – 1 tiles, where w is the width of the puzzle. So for the 3×3 puzzle, we are changing the tile’s relationship with two other tiles. This may result in a reduction of two inversions, an increase of two inversions, or no change. In the puzzle above, for example, sliding tile 5 up would have left us with 19 inversions, as it would gain an inversion with 4 and lose an inversion with 7.
A puzzle that starts with an even number of inversions will always have an even number of inversions; a puzzle with an odd number of inversions will always have an odd number of inversions. This is true not just for the 3×3 puzzle, but for any puzzle with an odd width. If we’re ever going to reach zero inversions, we must start with an even number.
Since we’ve already calculated the number of inversions, a simple function will tell us whether the puzzle is solvable.
function isSolvable() { return (sumInversions() % 2 == 0) }
The example above is not solvable, since 19 is not even. But suppose the first two tiles were reversed?
Now we start with 18 inversions. The 3 and 6 are no longer inverted, but everything else remains the same. We have a solvable puzzle.
This gives us an elegant solution that preserves the puzzle’s true randomness—every unsolvable puzzle is paired with a unique solvable puzzle that differs only in the first two tiles.
if (!isSolvable()) { swapTiles(0, 0, 1, 0); initEmpty(); }
Unfortunately, this won’t work if one of the swapped tiles is the empty square. We’ll need special code to deal with that situation.
if (!isSolvable()) { if (emptyLoc.y == 0 && emptyLoc.x <= 1) { swapTiles(tileCount - 2, tileCount - 1, tileCount - 1, tileCount - 1); } else { swapTiles(0, 0, 1, 0); } initEmpty(); }
If the empty square is in one of the first two locations, we instead swap the last two tiles. This slightly skews the randomness, but we’re still much closer than any other algorithm can get us.
There’s just one problem remaining. If the width of the puzzle is an even number, sliding a tile up or down reverses the polarity. This is because, as we saw above, the tile changes its relationship with w – 1 tiles.
In order for the puzzle to be solvable, it must have an even polarity when the empty square is on the bottom row (assuming the empty square is on the bottom row when the puzzle is solved). When the empty square is on the next row up, the puzzle is solvable if the polarity is odd. So for an even-width puzzle, we must sum the inversions plus the distance between the empty row and the bottom row.
function isSolvable(width, height, emptyRow) { if (width % 2 == 1) { return (sumInversions() % 2 == 0) } else { return ((sumInversions() + height - emptyRow) % 2 == 0) } }
Now we must edit the line that calls this function.
if (!isSolvable(tileCount, tileCount, emptyLoc.y + 1))
There are a couple things to note here.
First, because the
emptyLoc array is zero-based, we need to add one before comparing it with the height.
Second, for a square puzzle we don’t technically need two parameters for height and width; they are the same value, and we’re passing the
tileCount variable to both. But separating them in the function clarifies which dimension is used in each equation. If we were to make a rectangular puzzle, we’d know where to use width and where to use height.
Randomizing a sliding puzzle turns out to be more work than creating the puzzle in the first place, but it’s worth the effort for the better game play it provides. You can see an example of a randomized puzzle here. | https://www.sitepoint.com/randomizing-sliding-puzzle-tiles/ | CC-MAIN-2017-43 | refinedweb | 1,625 | 64.71 |
Building a Java program
Posted on March 1st, 2001 extension com, edu, org, net, etc., was capitalized by convention, so the library would appear: COM.bruceeckel.utility.foibles. Partway through the development of Java 1.2, however, it was discovered that this caused problems and so now the entire package name is lowercase.
This mechanism in Java means that all of your files automatically live in their own namespaces, and each class within a file automatically has a unique identifier. (Class names within a file must be unique, of course.) the definition for that class exists in more than one file..Vector;
to tell the compiler that you want to use Java’s Vector class. However, util contains a number of classes and you might want to use several of them without declaring them all explicitly. This is easily accomplished by using ‘ *’ to indicate a wildcard:, this.
There are no comments yet. Be the first to comment! | http://www.codeguru.com/java/tij/tij0037.shtml | CC-MAIN-2016-22 | refinedweb | 157 | 67.15 |
MenuItem
Since: BlackBerry 10.0.0
#include <bb/system/MenuItem>
To link against this class, add the following line to your .pro file: LIBS += -lbbsystem
A menu entry that can be invoked directly or may contain additional sub-menu entries.
A menu item contains information (labels and an icon) that can be used to present the menu to the user. It also provides the action to take when the item is selected. Selecting a menu item can either initiate the invocation of a target using specific data, MIME type, and/or URI, or it can result in the presentation of a submenu to give the user more options to choose from.
Overview
Public Functions Index
Public Functions
QUrl
Returns a path to a localized icon file that represents the menu item.
A path to a localized icon file that represents the menu item.
BlackBerry 10.0.0
MenuItemInvokeParams
Returns the invocation parameters that will be used to invoke the action associated with this menu item when it is selected.
The invocation parameters for this menu item.
BlackBerry 10.0.0
QString
Returns the localized label describing the menu item.
The localized label describing the menu item.
BlackBerry 10.0.0
MenuItem &
QUrl
Returns an optional path to a localized icon file that represents the menu item.
QString
Returns an optional secondary label describing the menu item.
The localized secondary label describing the menu item, or an empty string if not present.
BlackBerry 10.0.0
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/cascades/bb__system__menuitem.html | CC-MAIN-2017-26 | refinedweb | 261 | 57.57 |
Is there any specific reason why "using namespace std" must not be used?
Does it cause interference with other functions for example? I only know that std is used for cout, cin and endl. So using namespace std must only affect these functions right? (Also what else needs to be specified as std::?)
-> I'm asking because if I were to convert code from Quincy (a much older compiler that doesn't have namespace), I would have to change each and every cout which could be tedious.
So if it has no short-term disadvantages then I would not bother editing the code from Quincy to suit Visual C++ I would instead just write "using namespace std;"
Cheers. | https://cboard.cprogramming.com/cplusplus-programming/176471-why-shouldnt-using-namespace-std-used.html?s=2519a737d6733c839936e9119a985d77 | CC-MAIN-2019-04 | refinedweb | 118 | 71.75 |
Specifies search criteria as request body parameters.
GET /twitter/_search { "query" : { "term" : { "user" : "kimchy" } } }
The search request can be executed with a search DSL, which includes the Query DSL, within its body.
<index>
- (Optional, string) Comma-separated list or wildcard expression of index names used to limit the request.

allow_partial_search_results
- (Optional, boolean) Set to false to fail the request if only partial results are available. Defaults to true, which returns partial results in the event of timeouts or partial failures. You can override the default behavior for all requests by setting search.default_allow_partial_results to false in the cluster settings.

batched_reduce_size
- (Optional, integer) The number of shard results that should be reduced at once on the coordinating node. This value should be used as a protection mechanism to reduce the memory overhead per search request if the potential number of shards in the request can be large.

ccs_minimize_roundtrips
- (Optional, boolean) If true, the network round-trips between the coordinating node and the remote clusters will be minimized when executing cross-cluster search requests. See Cross-cluster search reduction for more. Defaults to true.

from
- (Optional, integer) Starting document offset. Defaults to 0.

request_cache
- (Optional, boolean) If true, the caching of search results is enabled for requests where size is 0. See Shard request cache.

search_type
- (Optional, string) The type of the search operation. Available options:
  query_then_fetch
  dfs_query_then_fetch

size
- (Optional, integer) The number of hits to return. Defaults to 10.

terminate_after
- (Optional, integer) The maximum number of documents to collect for each shard, upon reaching which the query execution will terminate early.

timeout
- (Optional, time units) Explicit timeout for each search request. Defaults to no timeout.
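As an illustrative sketch, the body parameters above are ordinary JSON keys that can be combined in a single request body. The following Python snippet (the index, query, and values are invented for illustration) builds a paginated body that any HTTP client could send to /<index>/_search:

```python
import json

# Hypothetical request body: pagination (from/size), a per-request
# timeout, and a simple term query, matching the parameters described
# above. The values are made up for illustration.
body = {
    "from": 10,        # starting document offset
    "size": 5,         # number of hits to return
    "timeout": "2s",   # explicit per-request timeout
    "query": {"term": {"user": "kimchy"}},
}

# Serialize to JSON for the body of a GET/POST to /<index>/_search.
payload = json.dumps(body)
print(payload)
```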
Allows to return the doc value representation of a field for each hit, for example:
GET /_search { "query" : { "match_all": {} }, "docvalue_fields" : [ "my_ip_field", { "field": "my_keyword_field" }, { "field": "my_date_field", "format": "epoch_millis" } ] }
Doc value fields can work on fields that have doc values enabled, regardless of whether they are stored.
* can be used as a wildcard, for example:
GET /_search { "query" : { "match_all": {} }, "docvalue_fields" : [ { "field": "*_date_field", "format": "epoch_millis" } ] }
Note that if the fields parameter specifies fields without doc values, it will try to load the value from the fielddata cache, causing the terms for that field to be loaded into memory (cached), which will result in more memory consumption.
Custom formats
While most fields do not support custom formats, some of them do:
- Date fields can take any date format.
- Numeric fields accept a DecimalFormat pattern.
By default fields are formatted based on a sensible configuration that depends on their mappings: long, double and other numeric fields are formatted as numbers, keyword fields are formatted as strings, date fields are formatted with the configured date format, etc.
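As a concrete illustration of the epoch_millis format used in the example above, this Python sketch shows the value a date would be rendered as (milliseconds since the Unix epoch):

```python
from datetime import datetime, timezone

# epoch_millis renders a date as the number of milliseconds elapsed
# since 1970-01-01T00:00:00Z. For example, 2020-01-01T00:00:00Z:
dt = datetime(2020, 1, 1, tzinfo=timezone.utc)
epoch_millis = int(dt.timestamp() * 1000)
print(epoch_millis)  # 1577836800000
```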
On its own, docvalue_fields cannot be used to load fields in nested objects — if a field contains a nested object in its path, then no data will be returned for that docvalue field. To access nested fields, docvalue_fields must be used within an inner_hits block.
Enables explanation for each hit on how its score was computed.
GET /_search { "explain": true, "query" : { "term" : { "user" : "kimchy" } } }
Allows to collapse search results based on field values. The collapsing is done by selecting only the top sorted document per collapse key. For instance the query below retrieves the best tweet for each user and sorts them by number of likes.
GET /twitter/_search
{
  "query": {
    "match": {
      "message": "elasticsearch"
    }
  },
  "collapse": {
    "field": "user",
    "inner_hits": {
      "name": "last_tweets",
      "size": 5,
      "sort": [{ "date": "asc" }]
    },
    "max_concurrent_group_searches": 4
  },
  "sort": ["likes"]
}
See inner hits for the complete list of supported options and the format of the response.
It is also possible to request multiple inner_hits for each collapsed hit. This can be useful when you want to get multiple representations of the collapsed hits.
GET /twitter/_search
{
  "query": {
    "match": {
      "message": "elasticsearch"
    }
  },
  "collapse": {
    "field": "user",
    "inner_hits": [
      {
        "name": "most_liked",
        "size": 3,
        "sort": ["likes"]
      },
      {
        "name": "most_recent",
        "size": 3,
        "sort": [{ "date": "asc" }]
      }
    ]
  }
}
collapse cannot be used in conjunction with scroll, rescore or search after.
Second level of collapsing is also supported and is applied to inner_hits. For example, the following request finds the top scored tweets for each country, and within each country finds the top scored tweets for each user.
GET /twitter/_search
{
  "query": {
    "match": {
      "message": "elasticsearch"
    }
  },
  "collapse": {
    "field": "country",
    "inner_hits": {
      "name": "by_location",
      "collapse": { "field": "user" },
      "size": 3
    }
  }
}
Response:
{
  ...
  "hits": [
    {
      "_index": "twitter",
      "_type": "_doc",
      "_id": "9",
      "_score": ...,
      "_source": {...},
      "fields": { "country": ["UK"] },
      "inner_hits": {
        "by_location": {
          "hits": {
            ...,
            "hits": [
              { ... "fields": { "user": ["user124"] } },
              { ... "fields": { "user": ["user589"] } },
              { ... "fields": { "user": ["user001"] } }
            ]
          }
        }
      }
    },
    {
      "_index": "twitter",
      "_type": "_doc",
      "_id": "1",
      "_score": ...,
      "_source": {...},
      "fields": { "country": ["Canada"] },
      "inner_hits": {
        "by_location": {
          "hits": {
            ...,
            "hits": [
              { ... "fields": { "user": ["user444"] } },
              { ... "fields": { "user": ["user1111"] } },
              { ... "fields": { "user": ["user999"] } }
            ]
          }
        }
      }
    },
    ...
  ]
}
Second level of collapsing doesn’t allow inner_hits.
Elasticsearch uses Lucene’s internal doc IDs as tie-breakers. As internal doc IDs might be completely different across replicas of the same data, you may occasionally see that documents with the same sort values are not consistently ordered when using pagination. For deep scrolling it is therefore more efficient to use the Scroll or Search After APIs.
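To see why tie-breaking on internal doc IDs produces inconsistent ordering, consider this small Python sketch: two replicas hold the same documents but assign different internal orderings, so a stable sort on a tied sort value returns the tied documents in different orders:

```python
# The same three documents, but each replica assigns its own internal
# (Lucene) doc IDs, which act as the tie-breaker for equal sort values.
replica_a = [("doc1", 5), ("doc2", 5), ("doc3", 7)]   # (id, likes)
replica_b = [("doc2", 5), ("doc1", 5), ("doc3", 7)]

def top_hits(replica):
    # Sort by likes descending; Python's stable sort keeps the
    # replica's internal order for ties, mimicking the tie-break.
    return [doc_id for doc_id, _ in sorted(replica, key=lambda d: -d[1])]

print(top_hits(replica_a))  # ['doc3', 'doc1', 'doc2']
print(top_hits(replica_b))  # ['doc3', 'doc2', 'doc1']
```

The two replicas agree on the top-scoring document but disagree on the order of the tied ones, which is exactly the pagination inconsistency described above.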
Highlighting
You can specify highlighter settings globally and selectively override them for individual fields.
GET /_search
{
  "query" : {
    "match": { "user": "kimchy" }
  },
  "highlight" : {
    "number_of_fragments" : 3,
    "fragment_size" : 150,
    "fields" : {
      "body" : { "pre_tags" : ["<em>"], "post_tags" : ["</em>"] },
      "blog.title" : { "number_of_fragments" : 0 },
      "blog.author" : { "number_of_fragments" : 0 },
      "blog.comment" : { "number_of_fragments" : 5, "order" : "score" }
    }
  }
}
Specify a highlight query
You can specify a highlight_query to take additional information into account when highlighting. For example, the following query includes both the search query and rescore query in the highlight_query. Without the highlight_query, highlighting would only take the search query into account.
GET /_search
{
  "stored_fields": [ "_id" ],
  "query" : {
    "match": {
      "comment": {
        "query": "foo bar"
      }
    }
  },
  "rescore": {
    "window_size": 50,
    "query": {
      "rescore_query" : {
        "match_phrase": {
          "comment": {
            "query": "foo bar",
            "slop": 1
          }
        }
      },
      "rescore_query_weight" : 10
    }
  },
  "highlight" : {
    "order" : "score",
    "fields" : {
      "comment" : {
        "fragment_size" : 150,
        "number_of_fragments" : 3,
        "highlight_query": {
          "bool": {
            "must": {
              "match": {
                "comment": {
                  "query": "foo bar"
                }
              }
            },
            "should": {
              "match_phrase": {
                "comment": {
                  "query": "foo bar",
                  "slop": 1,
                  "boost": 10.0
                }
              }
            },
            "minimum_should_match": 0
          }
        }
      }
    }
  }
}
Set highlighter type
The type field allows to force a specific highlighter type. The allowed values are: unified, plain and fvh.
The following is an example that forces the use of the plain highlighter:
GET /_search { "query" : { "match": { "user": "kimchy" } }, "highlight" : { "fields" : { "comment" : {"type" : "plain"} } } }
Configure highlighting tags
By default, the highlighting will wrap highlighted text in <em> and </em>. This can be controlled by setting pre_tags and post_tags, for example:
GET /_search { "query" : { "match": { "user": "kimchy" } }, "highlight" : { "pre_tags" : ["<tag1>"], "post_tags" : ["</tag1>"], "fields" : { "body" : {} } } }
When using the fast vector highlighter, you can specify additional tags; their "importance" follows the order in which they are listed.
GET /_search { "query" : { "match": { "user": "kimchy" } }, "highlight" : { "pre_tags" : ["<tag1>", "<tag2>"], "post_tags" : ["</tag1>", "</tag2>"], "fields" : { "body" : {} } } }
You can also use the built-in
styled tag schema:
GET /_search { "query" : { "match": { "user": "kimchy" } }, "highlight" : { "tags_schema" : "styled", "fields" : { "comment" : {} } } }
Highlight on source
Forces the highlighting to highlight fields based on the source even if fields
are stored separately. Defaults to
false.
GET /_search { "query" : { "match": { "user": "kimchy" } }, "highlight" : { "fields" : { "comment" : {"force_source" : true} } } }
Highlight in all fields
By default, only fields that contains a query match are highlighted. Set
require_field_match to
false to highlight all fields.
GET /_search { "query" : { "match": { "user": "kimchy" } }, "highlight" : { "require_field_match": false, "fields": { "body" : { "pre_tags" : ["<em>"], "post_tags" : ["</em>"] } } } }
Combine matches on multiple fields
Each field highlighted can control the size of the highlighted fragment
in characters (defaults to
100), and the maximum number of fragments
to return (defaults to
5).
For example:
GET /_search { "query" : { "match": { "user": "kimchy" } }, "highlight" : { "fields" : { "comment" : {"fragment_size" : 150, "number_of_fragments" : 3} } } }
On top of this it is possible to specify that highlighted fragments need to be sorted by score:
GET /_search { "query" : { "match": { "user": ": "kimchy" } }, "highlight" : { "fields" : { "comment" : { "fragment_size" : 150, "number_of_fragments" : 3, "no_match_size": 150 } } } }
Highlight using the postings list highlighter
When using the
plain highlighter, you can choose between the
simple and
span fragmenters:
GET twitter/_search { "query" : { "match_phrase": { "message": "number 1" } }, "highlight" : { "fields" : { "message" : { "type": "plain", "fragment_size" : 15, "number_of_fragments" : 3, "fragmenter": "simple" } } } }>" ] } } ] } }
GET twitter/_search { "query" : { "match_phrase": { "message": "number 1" } }, "highlight" : { "fields" : { "message" : { "type": "plain", "fragment_size" : 15, "number_of_fragments" : 3, "fragmenter": "span" } } } }>" ] } } ] } }
If the
number_of_fragments option is set to
0,
NullFragmenter is used which does not fragment the text at all.
This is useful for highlighting the entire contents of a document or field..
Allows to configure different boost level per index when searching across more than one indices. This is very handy when hits coming from one index matter more than hits coming from another index (think social graph where each user has an index)..
The parent-join.
PUT test { "mappings": { "properties": { "comments": { "type": "nested" } } } } PUT test/_doc/1?refresh { "title": "Test title", "comments": [ { "author": "kimchy", "number": 1 }, { "author": "nik9000", "number": 2 } ] } POST test/_search { "query": { "nested": { "path": "comments", "query": { "match": {"comments.number" : 2} }, "inner_hits": {} } } }
An example of a response snippet that could be generated from the above search request:
{ ..., "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 1.0, "hits": [ { "_index": "test", "_type": "_doc", "_id": "1", "_score": 1.0, "_source": ..., "inner_hits": { "comments": { "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 1.0, "hits": [ { "_index": "test", "_type": "_doc", "_id": "1", "_nested": { "field": "comments", "offset": 1 }, "_score": 1.0, "_source": { "author": "nik9000", "number": 2 } } ] } } } } ] } }. doc values fields. Like this:
PUT test { "mappings": { "properties": { "comments": { "type": "nested" } } } } PUT test/_doc/1?refresh { "title": "Test title", "comments": [ { "author": "kimchy", "text": "comment text" }, { "author": "nik9000", "text": "words words words" } ] } POST test/_search { "query": { "nested": { "path": "comments", "query": { "match": {"comments.text" : "words"} }, "inner_hits": { "_source" : false, "docvalue_fields" : [ "comments.text.keyword" ] } } } }
If a mapping has multiple levels of hierarchical nested object fields each level can be accessed via dot notated path.
For example if there is a
votes nested field and votes should directly be returned
with the root hits then the following path can be defined:
PUT test { "mappings": { "properties": { "comments": { "type": "nested", "properties": { "votes": { "type": "nested" } } } } } } PUT test/_doc/1?refresh { "title": "Test title", "comments": [ { "author": "kimchy", "text": "comment text", "votes": [] }, { "author": "nik9000", "text": "words words words", "votes": [ {"value": 1 , "voter": "kimchy"}, {"value": -1, "voter": "other"} ] } ] } POST test/_search { "query": { "nested": { "path": "comments.votes", "query": { "match": { "comments.votes.voter": "kimchy" } }, "inner_hits" : {} } } }
Which would look like:
{ ..., "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 0.6931472, "hits": [ { "_index": "test", "_type": "_doc", "_id": "1", "_score": 0.6931472, "_source": ..., "inner_hits": { "comments.votes": { "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 0.6931472, "hits": [ { "_index": "test", "_type": "_doc", "_id": "1", "_nested": { "field": "comments", "offset": 1, "_nested": { "field": "votes", "offset": 0 } }, "_score": 0.6931472, "_source": { "value": 1, "voter": "kimchy" } } ] } } } } ] } }
This indirect referencing is only supported for nested inner hits.
The parent/child
inner_hits can be used to include parent or child:
PUT test { "mappings": { "properties": { "my_join_field": { "type": "join", "relations": { "my_parent": "my_child" } } } } } PUT test/_doc/1?refresh { "number": 1, "my_join_field": "my_parent" } PUT test/_doc/2?routing=1&refresh { "number": 1, "my_join_field": { "name": "my_child", "parent": "1" } } POST test/_search { "query": { "has_child": { "type": "my_child", "query": { "match": { "number": 1 } }, "inner_hits": {} } } }
An example of a response snippet that could be generated from the above search request:
{ ..., "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 1.0, "hits": [ { "_index": "test", "_type": "_doc", "_id": "1", "_score": 1.0, "_source": { "number": 1, "my_join_field": "my_parent" }, "inner_hits": { "my_child": { "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 1.0, "hits": [ { "_index": "test", "_type": "_doc", "_id": "2", "_score": 1.0, "_routing": "1", "_source": { "number": 1, "my_join_field": { "name": "my_child", "parent": "1" } } } ] } } } } ] } }
Exclude documents which have a
_score less than the minimum specified
in
min_score:
GET /_search { "min_score": 0.5, "query" : { "term" : { "user" : "kimchy" } } }
Note, most times, this does not make much sense, but is provided for advanced use cases.
Each filter and query can accept a
_name in its top level definition.
GET /_search { "query": { "bool" : { "should" : [ {"match" : { "name.first" : {"query" : "shay", "_name" : "first"} }}, {"match" : { "name.last" : {"query" : "banon", "_name" : "last"} }} ], "filter" : { "terms" : { "name.last" : ["banon", "kimchy"], "_name" : "test" } } } } }
The search response will include for each hit the
matched_queries it matched on. The tagging of queries and filters
only make sense for the
bool query.
The
post_filter is applied to the search
hits at the very end of a search
request, after aggregations have already been calculated. Its purpose is
best explained by example:
Imagine that" } } }
Controls a
preference of the.
A possible use case would be to make use of per-copy caches like the request cache. Doing this, however, runs contrary to the idea of search parallelization and can create hotspots on certain nodes because the load might not be evenly distributed anymore.
The
preference is a query string parameter which can be set to:
For instance, use the user’s session ID
xyzabc123 as follows:
GET /_search?preference=xyzabc123 { "query": { "match": { "title": "elasticsearch" } } }
This can be an effective strategy to increase usage of e.g. the request cache for unique users running similar searches repeatedly by always hitting the same cache, while requests of different users are still spread across all shard copies..
The query element within the search request body allows to define a query using the Query DSL.
GET /_search { "query" : { "term" : { "user" : "kimchy" } } }.likes.
Allows to return/_search?scroll=1m { "size": 100, "query": { "match" : { "title" : "elasticsearch" } } }/_search?scroll=1m { "slice": { "id": 0, "max": 2 }, "query": { "match" : { "title" : "elasticsearch" } } }.
Pag/_search { "size": 10, "query": { "match" : { "title" : "elasticsearch" } }, "search_after": [1463538857, "654323"], "sort": [ {"date": "asc"}, {"tie_breaker_id": "asc"} ] }.
There.
Returns the sequence number and primary term of the last modification to each search hit. See Optimistic concurrency control for more details.
GET /_search { "seq_no_primary_term": true, "query" : { "term" : { "user" : "kimchy" } } }
Allows": { "properties": { "post_date": { "type": "date" }, "user": { "type": "keyword" }, "name": { "type": "keyword" }, "age": { "type": "integer" } } } }
GET /my_index/_search { "sort" : [ { "post_date" : {"order" : "asc"}}, "user", { "name" : "desc" }, { "age" : "desc" }, "_score" ], "query" : { "term" : { "user" : "kimchy" } } }"}} ] }.. If
includes is not empty, then only fields that match one of the
patterns in
includes but none of the patterns in
excludes are provided in
_source. If
includes is empty, then all fields are provided in
_source,
except for those that match a pattern in
excludes.
GET /_search { "_source": { "includes": [ "obj1.*", "obj2.*" ], "excludes": [ "*.description" ] }, "query" : { "term" : { "user" : "kimchy" } } }
The
stored.
On its own,
stored_fields cannot be used to load fields in nested
objects — if a field contains a nested object in its path, then no data will
be returned for that stored field. To access nested fields,
stored_fields
must be used within an
inner_hits block.. | https://www.elastic.co/guide/en/elasticsearch/reference/7.3/search-request-body.html | CC-MAIN-2020-40 | refinedweb | 2,332 | 62.78 |
piyo alternatives and similar packages
Based on the "Game" category.
Alternatively, view piyo alternatives based on common mentions on social networks and blogs.
LambdaHack9.8 9.9 piyo VS LambdaHackHaskell game engine library for roguelike dungeon crawlers; please offer feedback, e.g., after trying out the sample game with the web frontend at
haskanoid9.6 2.4 piyo VS haskanoidA free and open source breakout clone in Haskell using SDL and FRP, with Wiimote and Kinect support.
Allure9.4 9.1 piyo VS AllureAllure of the Stars is a near-future Sci-Fi roguelike and tactical squad combat game written in Haskell; please offer feedback, e.g., after trying out the web frontend version at
rattletrap9.3 9.2 piyo VS rattletrap:car: Parse and generate Rocket League replays.
dominion9.2 0.0 piyo VS dominionA Dominion simulator in Haskell
FunGEn9.0 2.7 piyo VS FunGEnA lightweight, cross-platform, OpenGL-based 2D game engine in Haskell
HGE2D8.9 0.0 piyo VS HGE2D2D game engine written in Haskell
Nomyx-Core8.8 0.0 piyo VS Nomyx-CoreThe Nomyx game
Nomyx8.8 0.0 piyo VS NomyxThe Nomyx game
ecstasy8.7 0.0 piyo VS ecstasy:pill: a GHC.Generics-based entity component system
SFML8.5 0.0 piyo VS SFMLLow level Haskell bindings for SFML 2.x
Monadius8.4 0.0 piyo VS Monadius2-D arcade scroller
falling-turnip8.3 0.0 piyo VS falling-turnipfalling sand game with regular parallel arrays.
octane8.1 0.0 piyo VS octaneParse Rocket League replays.
reflex-sdl28.0 0.0 piyo VS reflex-sdl2A minimal host for sdl2 based reflex apps.
ActionKid7.8 0.0 piyo VS ActionKidA video game framework for haskell
werewolf7.8 0.0 piyo VS werewolfA game engine for running werewolf in a chat client
hsudoku7.8 0.0 piyo VS hsudokuA native gtk sudoku game written in haskell
Hipmunk7.7 0.0 L5 piyo VS HipmunkHaskell binding for Chipmunk, 2D physics engine.
chessIO7.6 8.5 piyo VS chessIOFast haskell chess move generator library and console UCI frontend
yampa20487.6 0.0 piyo VS yampa20482048 game clone using Yampa FRP library
halma7.3 0.0 piyo VS halmaChinese Checkers Haskell library, GUI application and Telegram bot
hoodie7.3 0.0 piyo VS hoodieA toy roguelike game in Haskell
macbeth-lib7.3 1.9 piyo VS macbeth-libA beautiful FICS client
affection7.2 5.5 piyo VS affectionA simple Game Engine using SDL
sylvia7.2 0.0 piyo VS sylviaLambda calculus visualization
htiled7.1 0.0 piyo VS htiledImport from the Tiled map editor.
Monaris7.1 0.0 piyo VS MonarisA simple tetris clone
battleships7.0 0.0 piyo VS battleshipsAngewandte Funktionale Programmierung
call6.9 0.0 piyo VS callNot an insufficient game engine
Ninjas6.8 0.0 piyo VS NinjasHaskell game where multiple players attempt to blend in with computer controlled characters while being first to visit the key locations on the board.
hs20486.6 0.0 piyo VS hs2048:1234: A 2048 game clone in Haskell.
werewolf-slack6.6 0.0 piyo VS werewolf-slackA chat interface for playing werewolf in Slack
fwgl6.5 0.0 piyo VS fwglGame engine
h20486.4 0.0 piyo VS h2048An Implementation of Game 2048
gloss-game6.4 0.0 piyo VS gloss-gameA convenience wrapper around the Gloss library to make writing games in Haskell even easier
wxAsteroids6.3 0.0 piyo VS wxAsteroidsA demonstration of how to use wxHaskell
animate-frames6.0 0.0 piyo VS animate-frames🎞️ Sprite frames to spritesheet & metadata
SpacePrivateers5.9 0.0 piyo VS SpacePrivateersSimple roguelike set in space
Liquorice5.9 0.0 piyo VS LiquoriceHaskell embedded domain-specific language (eDSL) for the algorithmic construction of maps for the computer game "Doom"
MazesOfMonad5.8 0.0 piyo VS MazesOfMonadsimple game
layers-game5.7 0.0 piyo VS layers-gameA prototypical 2d platform game
ixshader5.6 0.0 piyo VS ixshaderA shallow embedding of the OpenGL Shading Language in Haskell
postgresql-simple-url5.6 0.0 piyo VS postgresql-simple-urlHeroku helpers for pulmurice server
gore-and-ash5.6 0.0 piyo VS gore-and-ashAttempt to build game engine with networking in Haskell using FRP as core idea.
battleship5.4 0.0 piyo VS battleshipBattleship... Pure.. Functional... Haskell + MongoDB + TypeScript + React...
netwire-input-glfw5.3 0.0 piyo VS netwire-input-glfwMonadInput instances for GLFW based netwire programs
animate-preview5.2 0.0 piyo VS animate-preview🔬 Dynamic viewer for sprite animation
gore-and-ash-demo5.2 0.0 piyo VS gore-and-ash-demoDemostration game for Gore&Ash engine
betris5.2 2.2 piyo VS betrisA horizontal version of tetris for braille display users
Scout APM: A developer's best friend. Try free for 14-days
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest.
Do you think we are missing an alternative of piyo or a related project?
README
Piyo🐤
Haskell game engine like fantasy console. Inspired by PICO-8.
[](pictures/piyo.png)
NOTE: UNDER DEVELOPMENT
Feature
- minimum but simple api
- pure update funciton
- japanease bitmap font support
It supports PICO-8 like drawing api.
Assuming Specification
For now below.
- display: 192x192
- sprite size: 12x12
- map size: 16x16
Requirements
SDL2 needed.
For OS X
Install middle wares with homebrew.
brew install sdl2 sdl2_gfx sdl2_image sdl2_mixer sdl2_ttf
For others
It may work ..!
Minimal code
import Piyo instance Game () where draw _ = [ cls Yellow , txt Black "Hello, World!" 48 48 ] main :: IO () main = piyo () Nothing
[](pictures/demo.png)
Data Flow Schematic
Functions are called in order update, draw, sound every frame.
[](pictures/flow.svg)
Indexing
Index number starts with 0. Not 1.
Examples
Sample working code at [Examples](examples)
Development in the future
- Redesign assets format
- Enrich sound api
- Add useful state update functions
- FPS management
- Support ore key action
- Sprite and map editor
- Export function for distribution
- etc... | https://haskell.libhunt.com/piyo-alternatives | CC-MAIN-2021-49 | refinedweb | 979 | 59.6 |
First time here? Check out the FAQ!
Hi,
I was wondering if the current OpenCV Python had GPU support yet ?
OR is there a faster way to calculate convolved variance ?
MaxFrom3DArray = numpy.amax(imgArray, axis=0) # where imgArray is a 3D array
Back2ImMax = Image.fromarray(MaxFrom3DArray, 'P')
Back2ImMax.save(os.path.join(MaxFromMulti, filename), "TIFF")
ForVariance = cv2.imread((MaxFromMulti + filename), cv2.IMREAD_UNCHANGED)
wlen = 40
def winVar(img, wlen):
wmean, wsqrmean = (cv2.boxFilter(x, -1, (wlen, wlen),
borderType=cv2.BORDER_REFLECT) for x in (img, img*img))
return wsqrmean - wmean*wmean
windowVar = winVar(ForVariance, wlen)
numpy.set_printoptions(threshold='nan')
print windowVar
This takes hours in Python, and ages using python multi-threading, with CPU cores maxed out.
It takes a fraction of a second and hardly any cpu usage when serialised in c sharp.
Doesn't something seem a bit off about that ?
Thanks in advance
TWP
I am following the tutorial here...
and
from this line
keypoints = detector.detect(im)
I get the following error
in <module>
keypoints = detector.detect(back2im)
SystemError: error return without exception set
Any ideas as to how to get past this?
best
TWP
This worked :-) Thank you Eshirima
Hi, I received the following errors in a pip install of OpenCV in Ubuntu 16.04 using Microsoft Visual Studio
[Pylint] C0103:Invalid constant name "detector"
[Pylint] E1101:Module 'cv2' has no 'SimpleBlobDetector' member
For the following line of code
detector = cv2.SimpleBlobDetector( )
cv2.imshow("Keypoints", im_with_keypoints)
I have tried cv instead of cv2 aswell, and I still get an error. What would you suggest ?
Kind regards | https://answers.opencv.org/users/255458/timwebphoenix/?sort=recent | CC-MAIN-2019-30 | refinedweb | 259 | 52.76 |
Universal Windows apps enable you to target every Windows device in one solution. You develop once, share most of your code, and deploy on Windows, Windows Phone, or Xbox.
The goal is to maximize the reuse of code. You can share code, user controls, styles, strings, and other assets between the Windows Phone and Windows 8 projects in Visual Studio. This reduces the effort needed in building and maintaining an app for each type of device.
Introduction
From a developer's perspective, a universal Windows app is not a single binary that runs on multiple platforms. Rather, it takes the form of a Visual Studio solution containing multiple projects, one project for each targeted platform in addition to a shared project containing code and resources shared between platforms. A lot of code can be shared between the projects as Windows Phone 8.1 implements the majority of the WinRT APIs that Windows 8.1 implements.
You can create a Windows Phone application using the Silverlight runtime (version 8.0 or 8.1) or the WinRT runtime (the one from universal windows apps). The WinRT runtime enables you to create one application that will run on Windows, Windows Phone, and even Xbox One.
We are using the XAML framework to develop an app for multiple platforms. In the current version there is an API convergence of 90%, but there still is a small set not converged yet. Windows Phone features only available in the Silverlight framework are:
- Lenses support
- VoIP support
- Camera capture task
- Clipboard APIs
- Lock screen wallpaper API
In this tutorial, I will use a universal Windows app template to create a Hex Clock app, a precise hexadecimal color clock. It goes through the whole 24 hours color range, from #000000 to #235959. With every tick of the clock, the app's background changes to the color corresponding to the current time converted to hexadecimal. It uses the same implementation as Hex Color JS Clock to generate the hex code of the current time.
The design was inspired by a Windows Phone 7 clock app tutorial on Tuts+. While the clock app only targets Windows Phone, we will use its design to make a similar app for Windows Phone 8.1 and Windows 8.1. The below screenshot shows what we are going to build.
In this tutorial, I will discuss the following topics which are relevant to developing universal Windows apps:
- the structure of universal Windows apps
- switching startup projects in Visual Studio
- context switcher for universal Windows apps in the Visual Studio editor
- how to write cross-platform code in the shared project
- how to add support for Windows or Windows Phone to an existing project
- building a universal Windows app from scratch
1. Structure of Universal Windows Apps
A universal Windows app is a collection of three projects enclosed in an optional solution folder. The Windows and Windows Phone projects are platform projects and are responsible for creating the application packages (.appx), targeting the respective platforms. These projects contain assets that are specific to the platform being targeted.
The shared project is a container for code that runs on both platforms. They don’t have a binary output, but their contents are imported by the platform projects and used as part of the build process to generate the app packages (.appx).
The screenshot below shows the solution that Visual Studio creates when you choose the project template for a Blank App (Universal Apps).
Visual Studio 2013 Update 2 introduces the new feature that is universal Windows apps. Download and install this update before you start building universal Windows apps.
2. Switching Startup Projects
When you run the solution, the project that runs is the one that's selected as the startup project. To set the startup project, right-click on the project node in the Solution Explorer and choose the option Set as Startup Project. You can quickly switch the startup project from the Debug target drop-down that now enumerates all the possible projects in the solution.
The project that you choose is shown in bold in the Solution Explorer. The available debug targets change when switching startup projects.
- When the Windows project is the startup project, the Debug target drop-down displays options for the Windows Simulator or Local Machine.
- When the Windows Phone project is the startup project, the drop-down displays options for Device as well as various emulators.
3. Context Switcher in Code Editor
When writing code in a shared project, you can use the project context switcher in the navigation bar to select the platform you are actively targeting, which in turn customizes the IntelliSense experience in the code editor.
If you use an API in shared code that is not supported on both platforms, an error message will identify this API when you build the project. You don't have to build the project to confirm that you’re using cross-platform APIs.
The following screenshot shows an example of the warning icons and IntelliSense for a type that is supported only in Windows Phone apps.
4. Cross-Platform Code in Shared Project
In the shared project, you typically write code that is common to both platforms. To isolate sections of code that are platform-specific, use the
#ifdef directive. The constants
WINDOWS_APP and
WINDOWS_PHONE_APP are predefined for you.
The following are the conditional compilation constants that you can use to write platform-specific code:
When you're writing code in the shared project, the Visual Studio code editor uses a context that targets one platform or the other. In C#, the IntelliSense that you see as you write code is specific to the context of the code editor, that is, specific to Windows or to Windows Phone.
5. Adding Support for Windows/Windows Phone you want to add support for Windows 8.1.
To add support for one type of device or another, in the Solution Explorer, right-click on the project and choose Add Windows Phone 8.1 or Add Windows 8.1.
Here, Visual Studio adds a new Windows Phone or Windows project to the solution. A shared project is also automatically created for you.
The following screenshot shows a solution after adding a Windows Phone project to an existing Windows project. The shared project that is added to the solution is initially empty.
Note that if you create an app using a universal Windows app template, the shared project already contains the App.xaml file.
Step 1: Move Files To the Shared Project
You can move any code that you want to share between apps to the shared project. For example, you can move the Common, DataModel, and Strings folders to the shared project. You can even move App.xaml to the shared project.
You may however receive some compiler errors about the code that you move into the shared project. You can resolve these errors by configuring your new app project to have the same set of references as your initial.
Step 2: Share App.xaml
When you create a new solution for a universal Windows app, Visual Studio places App.xaml in the shared project. If you convert an existing project to a universal Windows app, you can move App.xaml to the shared project manually. You will have to set the build action property of the page to ApplicationDefinition after moving the file. Here are the steps involved:
- In the Solution Explorer, in the shared project, select the App.xaml file.
- Select the View > Properties window.
- In the Properties window, in the Build Action drop-down list, select ApplicationDefinition.
You also have to decide how you want to open the first page of your app. If you share the App.xaml file and want to use a different start page for each app, you have to add
#ifdef directives as shown below.
#if WINDOWS_APP if (!rootFrame.Navigate(typeof(HubPage))) #endif #if WINDOWS_PHONE_APP if (!rootFrame.Navigate(typeof(WindowsPhoneStartPage))) #endif { throw new Exception("Failed to create initial page"); }
6. Get Started Writing a Universal Windows App
Step 1: Project Setup
Firstly, pick a project template for a universal Windows app in the New Project dialog box. The following screenshot shows the universal Windows app project templates that are currently available for C#.
Give the project a name. I will use Hex Clock Pro for my project.
Step 2: Building the User Interface
For the most part, the user interface work takes place in the platform-specific projects, allowing you to craft a user interface that look great on PC, tablets, and phones, but that share common data, resources, components, and even view-models.
Instead of building separate user interfaces for the Windows Phone 8.1 and Windows 8.1 versions of Hex Clock Pro, I define a common design in the shared project. I just have to make a few changes in the XAML of the clock app on Tuts+ to make it work for both platforms.
<Canvas x: <Canvas.RenderTransform> <CompositeTransform Rotation="-30"/> </Canvas.RenderTransform> <Canvas x: <Canvas.RenderTransform> <CompositeTransform/> </Canvas.RenderTransform> <TextBlock x: <TextBlock x: <TextBlock x: </Canvas> <Rectangle x: <Rectangle x: <TextBlock x: <TextBlock x: <TextBlock x: <TextBlock.RenderTransform> <CompositeTransform/> </TextBlock.RenderTransform> </TextBlock> <TextBlock x: </Canvas>
Step 3: Sharing Code
As discussed earlier, code which is common to both platforms can be put in the shared project. Code that uses platform-specific APIs needs to be placed in one of the platform-specific projects. You can even use
#ifdef directives to include platform-specific code in a shared file.
As the Hex Clock Pro app doesn't use any APIs that are platform-specific, I can put all the code in the shared project.
Hiding the Status Bar
In MainPage.xaml.cs in the shared project, we have used the
#ifdef directive to isolate code specific to Windows Phone.
I have used the
DispatcherTimer class to call an initial tick when the
LayoutRoot grid is loaded. The
timer object calls the
timer_Tick function on every tick of the clock.
try { DispatcherTimer timer = new DispatcherTimer(); timer.Tick += timer_Tick; timer.Interval = new TimeSpan(0, 0, 0, 1); timer.Start(); timer_Tick(null, null); //Call an initial tick } catch { }
The
timer_Tick function updates the displayed time in the app and, at the same time, it updates the background color.
Updating the Background Color
The background color is set to a hexadecimal color that corresponds to the current time.
HexColour color = new HexColour(hexTime); SolidColorBrush bgBrush = new SolidColorBrush(Color.FromArgb(color.A, color.R, color.G, color.B)); LayoutRoot.Background = bgBrush;
An object of the
HexColour class is initialized with the current time, returning the corresponding RGB values. The constructor of the
HexColour class sets the A, R, G, B values for the specified color.
public HexColour(string hexCode) { if (hexCode == null) { throw new ArgumentNullException("hexCode"); } if (!Regex.IsMatch(hexCode, HEX_PATTERN)) { throw new ArgumentException("Format must be #000000 or #FF000000 (no extra whitespace)", "hexCode"); } // shave off '#' symbol hexCode = hexCode.TrimStart('#'); // if no alpha value specified, assume no transparency (0xFF) if (hexCode.Length != LENGTH_WITH_ALPHA) hexCode = String.Format("FF{0}", hexCode); _color = new Color(); _color.A = byte.Parse(hexCode.Substring(0, 2), NumberStyles.AllowHexSpecifier); if (_color.A < 50) _color.A = 50; _color.R = byte.Parse(hexCode.Substring(2, 2), NumberStyles.AllowHexSpecifier); _color.G = byte.Parse(hexCode.Substring(4, 2), NumberStyles.AllowHexSpecifier); _color.B = byte.Parse(hexCode.Substring(6, 2), NumberStyles.AllowHexSpecifier); }
Adding Animations and Effects
I have imitated the initial animation used in the previous clock app on Tuts+ and it is initialized when the
LayoutRoot is loaded.
Storyboard sb = (Storyboard)this.Resources["IntialAnimation"]; sb.BeginTime = TimeSpan.FromSeconds(0.1); sb.Begin();
This is all we need to build the Hex Clock Pro app. The app uses 100% shared code. You just need to generate separate app packages for both platforms. The app looks very similar on Windows Phone and uses the same XAML code for its user interface.
Note that I have added all XAML and C# code in the shared project, but when I deploy either the Windows app or the Windows Phone app, the code in the shared project is merged internally with the platform-specific projects.
Conclusion
Most of the code for the Windows app and the Windows Phone app is shared, and while the user interfaces are separate, they are similar enough that building both is less work than building two user interfaces from scratch.
If I had built a Windows Phone version of Hex Clock Pro for Windows Phone 7 or 8, it would have been a lot more work since Windows Phone 7 contains no WinRT APIs and Windows Phone 8 contains only a small subset.
With Windows 10, we will see more convergence, which means one API—the WinRT API—for multiple platforms, and a high degree of fidelity between user interface elements for each platform that doesn't prevent developers from using platform-specific elements to present the best possible experience on every device. Feel free to download the tutorial's source files to use as reference. Hex Clock Pro is also available in the marketplace for Windows Phone 8.1 and Windows 8.1.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/creating-your-first-universal-windows-app--cms-23122 | CC-MAIN-2017-13 | refinedweb | 2,206 | 64.41 |
- Author:
- alia_khouri
- Posted:
- August 7, 2008
- Language:
- Python
- Version:
- .96
- settings path
- Score:
- 0 (after 0 ratings)
In the past, whenever I had a script that I wanted to properly configure the settings for, I would use something like the following idiom at the top of the script:
import sys, os; dirname = os.path.dirname # sys.path.insert(0, dirname(dirname(__file__))) sys.path.insert(0, dirname(__file__)) os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'
Notice that this is a relative setting to
__file__ variable in the script. The djangopath function is an attempt to do away with the above such that I can now write the following:
from lib import djangopath; djangopath(up=2, settings='myapp.settings')
This seems to work for me, but it assumes that you are packaging your script inside your projects/apps. If they are elsewhere then you may need to resort to another method (e.g. absolute paths, etc.)
AK
More like this
- cron/console bootstrap django by nstrite 8 years, 1 month ago
- Zope testing django layer by grahamcarlyle 7 years, 8 months ago
- Command Line Script Launcher by dakrauth 7 years, 2 months ago
- Use django-admin.py instead of manage.py by whiteinge 7 years, 3 months ago
- Django Settings Splitter & Local Settings loader by rudude 1 year, 7 months ago
Please login first before commenting. | https://djangosnippets.org/snippets/948/ | CC-MAIN-2015-40 | refinedweb | 223 | 63.09 |
*args and **kwargs in python :
In this python tutorial, we will discuss about two frequently used argument keywords ’args’ and ‘**kwargs’ . One thing we should keep in mind that we can use any name instead of ‘args’ and ‘kwargs’ for these variables. That means you can use any name like ’myargs’ , ‘**mykwargs’ etc. Let me show you what are the purpose of these two variables :
What is *args:
*args is used to pass multiple argument values to a function. For example , following method is used to find the average of two numbers :
def find_avg(a,b): average = (a+b)/2 print ("average is ",average) find_avg(2,3)
We can pass any two number and it will print out the average result. But if you want to find average of 3 numbers, do you want to write a different method for it ? Again for 4 numbers ? Ofcourse we don’t want same styled method for each cases. What we want is we will pass any number of numbers to the same method and it will calculate the average. To achieve this, we can use ‘*args’ that allows to pass any number of arguments to a function. Following example shows how :
def find_avg(*numbers): sum = 0 for i in numbers : sum += i print ("average is ",sum/(len(numbers))) print (numbers) find_avg(2,3) find_avg(2,3,4) find_avg(1,2,3,4,5,6,7,8,9,10)
It will give the following output :
average is 2.5 (2, 3) average is 3.0 (2, 3, 4) average is 5.5 (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
That means, we can pass any number of arguments if we place a ’’ before an argument name.Also , the second print statement shows that the values are stored as ‘tuple’ in the argument ’numbers’. Now what will happen if we place ’**’ ?
What is **kwargs :
So, ‘*args’ is used to pass variable arguments to a function. Similarly, ‘**kwargs’ is used to pass variable ‘key-value’ arguments to a function. Example :
def print_values(**values): print (values) print_values(one = 1, two = 2)
If you run this program, it will print :
{'one': 1, 'two': 2}
That means, it passes a dictionary as an argument. As it is a dictionary, all dictionary operations can be performed. We can pass any number of key-value pairs :
def print_values(**values): for key, value in values.items(): print("{} = {}".format(key,value)) print_values(one = 1,two = 2,three = 3,four = 4,five = 5)
Output :
one = 1 two = 2 three = 3 four = 4 five = 5
Example to use *args and **kwargs in Function Calls :
We have seen that both ’args’ and ‘**kwargs’ are used to receive variable number of arguments in a function .Similarly, we can also put different number of arguments in ’args’ or ‘*kwargs’ and pass them to a different function. Previously, we have seen that if ’args’ is used in function definition, parameters are converted to a tuple and save it in ‘args’. So, to pass ‘*args’ as parameter, we need to assign it as a tuple of items.Example :
def avg_of_two(a,b): print ((a+b)/2) def avg_of_three(a,b,c): print ((a+b+c)/3) var1 = (1,2) avg_of_two(*var1) var2 = (1,2,3) avg_of_three(*var2)
In this example, we have put some numbers in a tuple and assign it to a variable ‘var1’ and ‘var2’ . Next, we have passed the variable with a ’*’ before it. That’s it. The function definition handles all. e.g. after passing ‘var2’ to ‘avgofthree’ funciton, it assigns automatically 1 to ‘a’, 2 to ‘b’ and 3 to ‘c’.
Same thing can be achieved for ’kwargs’ as well. For ‘*args’, we were passing one tuple to the function . Now for ’kwargs’, we will pass one dictionary.Example :
def avg_of_two(a,b): print ((a+b)/2) def avg_of_three(a,b,c): print ((a+b+c)/3) var1 = {'a':1,'b':2} avg_of_two(**var1) var2 = {'a':1,'b':2,'c':3} avg_of_three(**var2)
The output is same as above example.
Using *args , **kwargs and other variables :
We can use *args , **kwargs and other normal variables in a same function. Example :
def show_details(a,b,*args,**kwargs): print("a is ",a) print("b is ",b) print("args is ",args) print("kwargs is ",kwargs) show_details(1,2,3,4,5,6,7,8,9) print("-----------") show_details(1,2,3,4,5,6,c= 7,d = 8,e = 9) print("-----------")
The output is :
a is 1 b is 2 args is (3, 4, 5, 6, 7, 8, 9) kwargs is {} ----------- a is 1 b is 2 args is (3, 4, 5, 6) kwargs is {'c': 7, 'd': 8, 'e': 9}
That’s all for *args and **kwargs in python. Please drop a comment below and follow us on facebook or twitter :)
Similar tutorials :
- Python program to delete all files with specific extension in a folder
- | https://www.codevscolor.com/args-kwargs-python-difference/ | CC-MAIN-2020-29 | refinedweb | 811 | 68.6 |
Haskell/Libraries/Maps
The module
Data.Map provides the
Map datatype, which allows you to store values attached to specific keys. This is called a lookup table, dictionary or associative array in other languages.
Motivation[edit]
Very often it would be useful to have some kind of data structure that relates a value or list of values to a specific key. This is often called a dictionary after the real-world example: a real-life dictionary associates a definition (the value) to each word (the key); we say the dictionary is a map from words to definitions. A filesystem driver might keep a map from filenames to file information. A phonebook application might keep a map from contact names to phone numbers. Maps are a very versatile and useful datatype.
Why not just
[(a, b)]?[edit]
You may have seen in other chapters that a list of pairs (or 'lookup table') is often used as a kind of map, along with the function
lookup :: [(a, b)] -> a -> Maybe b. So why not just use a lookup table all the time? Here are a few reasons:
- Working with maps gives you access to a whole load more useful functions for working with lookup tables.
- Maps are implemented far more efficiently than a lookup table would be, specially in terms of lookup speed.[1]
Library functions[edit]
The module Data.Map provides an absolute wealth of functions for dealing with Maps, including setlike operations like unions and intersections. The full list can be found in the core libraries documentation.
Example[edit]
The following example implements a password database. The user is assumed to be trusted, so is not authenticated and has access to view or change passwords.
{- A quick note for the over-eager refactorers out there: This is (somewhat) intentionally ugly. It doesn't use the State monad to hold the DB because it hasn't been introduced yet. Perhaps we could use this as an example of How Monads Improve Things? -} module PassDB where import qualified Data.Map as M import System.Exit type UserName = String type Password = String type PassDB = M.Map UserName Password -- PassBD is a map from usernames to passwords -- | Ask the user for a username and new password, and return the new PassDB changePass :: PassDB -> IO PassDB changePass db = do putStrLn "Enter a username and new password to change." putStr "Username: " un <- getLine putStrLn "New password: " pw <- getLine if un `M.member` db -- if un is one of the keys of the map then return $ M.insert un pw db -- then update the value with the new password else do putStrLn $ "Can't find username '" ++ un ++ "' in the database." return db -- | Ask the user for a username, whose password will be displayed. viewPass :: PassDB -> IO () viewPass db = do putStrLn "Enter a username, whose password will be displayed." putStr "Username: " un <- getLine putStrLn $ case M.lookup un db of Nothing -> "Can't find username '" ++ un ++ "' in the database." Just pw -> pw -- | The main loop for interacting with the user. mainLoop :: PassDB -> IO PassDB mainLoop db = do putStr "Command [cvq]: " c <- getChar putStr "\n" -- See what they want us to do. If they chose a command other than 'q', then -- recurse (i.e. ask the user for the next command). We use the Maybe datatype -- to indicate whether to recurse or not: 'Just db' means do recurse, and in -- running the command, the old datbase changed to db. 'Nothing' means don't -- recurse. db' <- case c of 'c' -> fmap Just $ changePass db 'v' -> do viewPass db; return (Just db) 'q' -> return Nothing _ -> do putStrLn $ "Not a recognised command, '" ++ [c] ++ "'." 
return (Just db) maybe (return db) mainLoop db' -- | Parse the file we've just read in, by converting it to a list of lines, -- then folding down this list, starting with an empty map and adding the -- username and password for each line at each stage. parseMap :: String -> PassDB parseMap = foldr parseLine M.empty . lines where parseLine ln map = let [un, pw] = words ln in M.insert un pw map -- | Convert our database to the format we store in the file by first converting -- it to a list of pairs, then mapping over this list to put a space between -- the username and password showMap :: PassDB -> String showMap = unlines . map (\(un, pw) -> un ++ " " ++ pw) . M.toAscList main :: IO () main = do putStrLn $ "Welcome to PassDB. Enter a command: (c)hange a password, " ++ "(v)iew a password or (q)uit." dbFile <- readFile "/etc/passdb" db' <- mainLoop (parseMap dbFile) writeFile "/etc/passdb" (showMap db') | https://en.wikibooks.org/wiki/Haskell/Hierarchical_libraries/Maps | CC-MAIN-2016-40 | refinedweb | 753 | 72.87 |
How to build a chatbot with Preact and Wit.ai
You will need a recent version of Node and npm installed on your machine. A basic understanding of React or Preact will be helpful.
In this tutorial, we will consider how to build a realtime chatbot that incorporates NLP using Preact, Wit.ai and Pusher Channels. You can find the entire source code of the application in this GitHub repository.
Chatbots have become more and more prevalent over the past few years, with several businesses taking advantage of them to serve their customers better.
Many chatbots integrate natural language processing (NLP) which adds a more human touch to conversations, and helps them understand a wider variety of inputs.
Prerequisites
Before you continue, make sure you have Node.js, npm and
curl installed on your computer. You can find out how to install Node.js and npm here.
The versions I used while creating this tutorial are as follows:
- Node.js v10.4.1
- npm v6.3.
Also investigate how to install
curl on your favorite operating system, or use this website.
Finally, you need to have a basic understanding of JavaScript and Preact or React, but no prior experience with Pusher or Wit.ai is required.
Getting started
Let’s bootstrap our project using the preact-cli tool which allows us to quickly get a Preact application up and running.
Open up your terminal, and run the following command to install
preact-cli on your machine:
npm install -g preact-cli
Once the installation completes, you’ll have access to the
preact command that will be used to setup the project. Run the following command in the terminal to create your Preact app:
preact create simple preact-chatbot
The above command will create a new directory called
preact-chatbot and install
preact as well as its accompanying dependencies. It may take a while to complete, so sit tight and wait. Once it’s done, you should see a some information in the terminal informing you of what you can do next.
Next, change into the newly created directory and run
npm run start to start the development server.
Once the application compiles, you will be able to view it at When you open up that URL in your browser, you should see a page on your screen that looks like this:
Create your application frontend with Preact
Open up
index.js in your text editor, and change its contents to look like this:
// index.js import './style'; import { Component } from 'preact'; export default class App extends Component { constructor(props) { super(props); this.state = { userMessage: '', conversation: [], }; this.handleChange = this.handleChange.bind(this); this.handleSubmit = this.handleSubmit.bind(this); } handleChange(event) { this.setState({ userMessage: event.target.value }); } handleSubmit(event) { event.preventDefault(); const msg = { text: this.state.userMessage, user: 'user', }; this.setState({ conversation: [...this.state.conversation, msg], }); fetch(' { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ message: this.state.userMessage, }), }); this.setState({ userMessage: '' }); } render() { const ChatBubble = (text, i, className) => { const classes = `${className} chat-bubble`; return ( <div key={`${className}-${i}`} class={`${className} chat-bubble`}> <span class="chat-content">{text}</span> </div> ); }; const chat = this.state.conversation.map((e, index) => ChatBubble(e.text, index, e.user) ); return ( <div> <h1>Realtime Preact Chatbot</h1> <div class="chat-window"> <div class="conversation-view">{chat}</div> <div class="message-box"> <form onSubmit={this.handleSubmit}> <input value={this.state.userMessage} onInput={this.handleChange} </form> </div> </div> </div> ); } }
If you have some experience with Preact or React, the above code should be straightforward to understand. The state of the application is initialized with two values:
userMessage which contains the value of whatever the user types into the input field, and
conversation which is an array that will hold each message in the conversation.
The
handleChange function runs on every keystroke to update
userMessage which allows the displayed value to update as the user types. When the user hits the
Enter button the form will be submitted and
handleSubmit will be invoked.
handleSubmit updates the
conversation state with the contents of the user’s message and sends the message in a
POST request to the
/chat endpoint which we will soon setup in our app’s server component, before clearing the input field by setting
userMessage to an empty string.
Add the styles for the application
Let’s add the styles for the app’s frontend. Open up
style.css in your editor and replace its contents with the following styles:
// style.css html, body { font: 14px/1.21 'Helvetica Neue', arial, sans-serif; font-weight: 400; box-sizing: border-box; } *, *::before, *::after { box-sizing: inherit; margin: 0; padding: 0; } h1 { text-align: center; margin-bottom: 40px; } .chat-window { width: 750px; margin: auto; border: 1px solid #eee; } .conversation-view { width: 100%; min-height: 300px; padding: 20px 40px; } .message-box { width: 100%; background-color: #d5d5d5; padding: 10px 20px; } .text-input { width: 100%; border-radius: 4px; border: 1px solid #999; padding: 5px; } .chat-bubble { font-size: 20px; margin-bottom: 20px; width: 100%; display: flex; } .chat-bubble.ai { justify-content: flex-end; } .chat-bubble.ai .chat-content { background-color: #eec799; } .chat-content { display: inline-block; padding: 8px 15px; background-color: #bada55; border-radius: 10px; }
Now, the application should look like this:
Setup your Wit.ai application
Head over to the Wit.ai website and create a free account.
Once you are signed in, hit the + icon at the top right of the page to create a new application. Enter your app name and click the +Create App button at the bottom.
You should see the following page once your app has been created.
Create your first entity
Wit.ai uses entities to help you understand user queries and extract meaningful information from them. Let’s setup an entity that will enable our bot to understand common greetings like “Hi” or “Hello”.
Type the word “Hello” in the “User says…” input field, then select the “wit/greetings” entity in the Add a new entity field. Use the dropdown on the right to set the value of the entity to true.
Once done, hit the Validate button to add the entity to your application. You can repeat the steps for other greetings such as “Hi”, “Hey”, “Good morning” etc.
If you click on the wit/greetings entry at the bottom, you will be directed to the entity page that contains all the expressions under that entity.
Create a custom entity
wit/greetings is an example of a built-in entity. These built-in entities are prefixed by
wit/, and are defined to extract common expressions from messages. Things like age, money, email address, location and the likes are all covered by Wit.ai’s built-in entities.
You can train our bot to understand other things that the built-in entities do not cover. For example, let’s add an entity that allows our bot to understand a request for a joke.
Type “Tell me a joke” in the User says… input field, and add a new entity called “getJoke”. As before, use the dropdown on the right to set the value of the entity to true and hit the Validate button.
Test your Wit.ai chatbot with curl
Go to the settings page, and type “Hello” in the input field that says Type something to cURL, then copy the command to your clipboard using the copy icon on the right.
Open a terminal window and paste in the command, then press Enter. This would produce some output in your terminal that shows the entity that your query matches.
Set up the server component
We need to setup a server component so that we can pass messages sent through the frontend of the app to Wit.ai for processing.
Run the following command to install the dependencies we’ll be needing on the server side:
npm install --save express body-parser cors node-wit dotenv
Next, create a new file called
server.js in the root of your project directory and paste in the following code to set up a simple express server:
// server.js require('dotenv').config({ path: 'variables.env' }); const express = require('express'); const bodyParser = require('body-parser'); const cors = require('cors'); const { Wit } = require('node-wit'); const client = new Wit({ accessToken: process.env.WIT_ACCESS_TOKEN, }); const app = express(); app.use(cors()); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.post('/chat', (req, res) => { const { message } = req.body; client .message(message) .then(data => { console.log(data); }) .catch(error => console.log(error)); }); app.set('port', process.env.PORT || 7777); const server = app.listen(app.get('port'), () => { console.log(`Express running → PORT ${server.address().port}`); });
We’ve also set up a
/chat endpoint that receives messages from the frontend of our app and sends it off to the Wit message API. Whatever response is received is then logged to the console.
Before you start the server, create a
variables.env file in the root of your project directory. You should add this file to your
.gitignore so that you do not commit it into your repository by accident.
Here’s how your
variables.env file should look like:
// variables.env PORT=7777 WIT_ACCESS_TOKEN=<your server access token>
You can grab your Wit.ai server access token by heading to the settings under API Details.
Save the file and run
node server.js from the root of your project directory to start the server.
Now, send a few messages in the chat window, and watch the terminal where your Node server is running. You should see some output in your terminal that shows the entity that your query matches.
Set up responses for your chatbot
Now that user messages are being passed on to Wit.ai successfully, we need to add a way to detect which entity was matched and send an appropriate response to the user.
We’ll achieve that by setting up a
responses object that contains a variety of responses for each entity that we defined, and then send a random message when the appropriate entity is matched.
Inside the
/chat route and under the
message variable, paste in the following code:
// server.js const responses = { greetings: ["Hey, how's it going?", "What's good with you?"], jokes: [ 'Do I lose when the police officer says papers and I say scissors?', 'I have clean conscience. I haven’t used it once till now.', 'Did you hear about the crook who stole a calendar? He got twelve months.', ], }; const firstEntityValue = (entities, entity) => { const val = entities && entities[entity] && Array.isArray(entities[entity]) && entities[entity].length > 0 && entities[entity][0].value; if (!val) { return null; } return val; }; const handleMessage = ({ entities }) => { const greetings = firstEntityValue(entities, 'greetings'); const jokes = firstEntityValue(entities, 'getJoke'); if (greetings) { return console.log(responses.greetings[ Math.floor(Math.random() * responses.greetings.length) ]); } if (jokes) { return console.log(responses.jokes[ Math.floor(Math.random() * responses.jokes.length) ]); } return console.log('I can tell jokes! Say \'tell me a joke\'') };
Then change the line that says
console.log(data) to
handleMessage(data):
// server.js client .message(message) .then(data => { handleMessage(data); }) .catch(error => console.log(error));
Once we find an entity that matches, a random message from the appropriate property in the
responses object is logged to the console. Otherwise the default response is logged.
Set up Pusher Channels for realtime responses
Now, let’s integrate Pusher into the app so that our bot can respond to the user in realtime. Head over to the Pusher website and sign up for a free account. Select Channels apps on the sidebar, and hit Create Channels app to create a new app.
Once your app is created, retrieve your credentials from the API Keys tab, then add the following to your
variables.env file:
PUSHER_APP_ID=<your app id> PUSHER_APP_KEY=<your app key> PUSHER_APP_SECRET=<your app secret> PUSHER_APP_CLUSTER=<your app cluster>
Integrate Pusher Channels into your Preact application
First, install the Pusher Channels client library by running the command below:
npm install pusher-js
Then import it at the top of
index.js:
import Pusher from 'pusher-js';
Next, we’ll open a connection to Channels and use the
subscribe() method from Pusher to subscribe to a new channel called
bot. Finally, we’ll listen for the
bot-response on the
bot channel using the
bind method and update the application state once we receive a message.
Don’t forget to replace the
<your app key> and
<your app cluster> placeholder with the appropriate details from your Pusher account dashboard.
// index.js componentDidMount() { const pusher = new Pusher('<your app key>', { cluster: '<your app cluster>', encrypted: true, }); const channel = pusher.subscribe('bot'); channel.bind('bot-response', data => { const msg = { text: data.message, user: 'ai', }; this.setState({ conversation: [...this.state.conversation, msg], }); }); }
Trigger events from the server
Add the Pusher server library though npm:
npm install pusher
Then import it at the top of
server.js:
// server.js const Pusher = require('pusher'); const pusher = new Pusher({ appId: process.env.PUSHER_APP_ID, key: process.env.PUSHER_APP_KEY, secret: process.env.PUSHER_APP_SECRET, cluster: process.env.PUSHER_APP_CLUSTER, encrypted: true, });
Change the
handleMessage function to look like this:
// server.js onst handleMessage = ({ entities }) => { const greetings = firstEntityValue(entities, 'greetings'); const jokes = firstEntityValue(entities, 'getJoke'); if (greetings) { return pusher.trigger('bot', 'bot-response', { message: responses.greetings[ Math.floor(Math.random() * responses.greetings.length) ], }); } if (jokes) { return pusher.trigger('bot', 'bot-response', { message: responses.jokes[ Math.floor(Math.random() * responses.jokes.length) ], }); } return pusher.trigger('bot', 'bot-response', { message: 'I can tell jokes! Say \'tell me a joke\'', }); };
Stop the node server if it is currently running by pressing
Ctrl + C in the terminal and restart it with
node server.js. Now you can go ahead and test your bot! Send messages like “hey”, or “Tell me a joke” and you will get replies from the bot.
Conclusion
You have now learned how easy it is to create a chatbot that incorporates natural language processing with Wit.ai and how to respond in realtime with Pusher Channels.
Thanks for reading! Remember that you can find the source code of this app in this GitHub repository.
August 16, 2018
by Ayooluwa Isaiah | https://pusher.com/tutorials/chatbot-preact-witai/ | CC-MAIN-2022-21 | refinedweb | 2,344 | 58.38 |
I use JPA (Hibernate) with Spring.
When i want to lazy load a Stirng property i use this syntax:
@Lob
@Basic(fetch = FetchType.LAZY)
public String getHtmlSummary() {
return htmlSummary;
}
Hi
I am a newbie to Java persistence and Hibernate.
What is the difference between FetchType LAZY and EAGER in Java persistence?
Thanks
In my UserTbl entity, I have this
@OneToOne( cascade = CascadeType.REMOVE, mappedBy = "user", fetch=FetchType.EAGER )
private RetailPostUserTbl retailPostUser;
@OneToOne( cascade = CascadeType.REMOVE, mappedBy = "user", fetch=FetchType.EAGER )
private BackOfficeUserTbl backOfficeUser;
Hi I've made two entity classes named Designaton.java and Module.java. In Designation class there is a toMany relationship. I tried to Fetch data using FetchType.EAGEr as well as FetchType.LAZY also, ...
I am using a complex native query to obtain multiple objects from the database.
I am also using the following annotations in my Entity to obtain the results
from that query:
@SqlResultSetMapping(
...
Hi, I have two objects managed by hibernate with annotations Code: @Entity @Table(name = "person") public class Person { .......................... @OneToMany(fetch = FetchType.EAGER, targetEntity = Address.class, cascade = CascadeType.ALL) @JoinColumn(name = "idforeign_add") public Set
Hi, When you are trying to get entityOne.getEntityTwo() , as you said hibernate will return proxy. The proxy holds a reference of session object which was jused to fetch entity two. Now if that session is closed before you call getEntityThree(), you will get LazyInitializationException. In case of Eager , real object and not proxy is returned and hence things will ...
Hi, When switiching to annotations it seems that there is a disctinction between using fetch=FetchType.EAGER vs LazyCollection(LazyCollectionOption.FALSE) - at least in one-to-many associations using a JoinColumn. Using LazyCollectionOption.FALSE results in usual performance/behavior similar to when using lazy="true" with xdoclet. Using FetchType.EAGER however results in larger SQL queries and slowdown. As far as hibernate is concerned, what exactly is the difference between ...
I had started off my JEE5 experience using TopLink Essentials which for the most part was pretty painless. I noticed a few features that looked to be useful in the Hibernate JPA provider implementation such as @IndexColumn and I must say my painless experience has become very painful. There has been several things that had broke when I made the switchover ...
Hi, I'm having a problem using FetchType.LAZY on hibernate 3.x where it does not call the database every time I want to use a OneToMany collection... Here is an example: class MyRoutine { ... @OneToMany private List steps; ... } When I update a step, and call getSteps() on MyRoutine it retrieves the data lazily as it is supposed to. However ... | http://www.java2s.com/Questions_And_Answers/JPA/Fetch/FetchType.htm | CC-MAIN-2015-14 | refinedweb | 434 | 52.26 |
This post was inspired by an article I read in the Feb. 2005 issue of Better Software: "Double Duty" by Brian Button. The title refers to having unit tests serve the double role of testing and documentation. Brian calls this Agile Documentation. For Python developers, this is old news, since the doctest module already provides what is called "literate testing" or "executable documentation". However, Brian also introduces some concepts that I think are worth exploring: Test Lists and Test Maps.
Test Lists
A Test List tells a story about the behavior expected from the module/class under test. It is composed of one-liners, each line describing what a specific unit test tries to achieve. For example, in the case of a Blog management application, you could have the following (incomplete) Test List:
- Deleting all entries results in no entries in the blog.
- Posting single entry results in single valid entry.
- Deleting a single entry by index results in no entries in the blog.
- Posting new entry results in valid entry and increases the number of entries by 1.
- Etc.
I find it very valuable to have such a Test List for every Python module that I write, especially if the list is easy to generate from the unit tests that I write. I will show later in this post how the combination of doctest and epydoc makes it trivial to achieve this goal.
Test Maps
A Test Map is a list of unit tests associated with a specific function/method under test. It helps you see how that specific function/method is being exercised via unit tests. A Test Map could look like this:
Testmap for method delete_all_entries:
- test_delete_all_entries
- test_delete_single_entry
- test_post_single_entry
- test_post_two_entries
- test_delete_first_of_two_entries
- test_delete_second_of_two_entries
Generating Test Lists
As an example of a module under test, I will use the Blog management application that I discussed in several previous posts. The source code can be found here. I have a directory called blogmgmt which contains a module called blogger.py. The blogger module contains several classes, the main one being Blogger, and a top-level function called get_blog. I also created an empty __init__.py file, so that blogmgmt can be treated as a package. I wrote a series of doctest-based tests for the blogger module in a file I called testlist_blogger.py. Here is part of that file:
"""
Doctest unit tests for module L{blogger}
"""
def test_get_blog():
"""
get_blog() mimics a singleton by always returning the same object.
Function(s) tested:
- L{blogger.get_blog}
>>> from blogger import get_blog
>>> blog1 = get_blog()
>>> blog2 = get_blog()
>>> id(blog1) == id(blog2)
True
"""
def test_get_feed_title():
"""
Can retrieve the feed title.
Method(s) tested:
- L{blogger.Blogger.get_title}
>>> from blogger import get_blog
>>> blog = get_blog()
>>> print blog.get_title()
fitnessetesting
"""
def test_delete_all_entries():
"""
Deleting all entries results in no entries in the blog.
Method(s) tested:
- L{blogger.Blogger.delete_all_entries}
- L{blogger.Blogger.get_num_entries}
>>> from blogger import get_blog
>>> blog = get_blog()
>>> blog.delete_all_entries()
>>> print blog.get_num_entries()
0
"""
def test_post_new_entry():
"""
Posting new entry results in valid entry and increases the number of entries by 1.
Method(s) tested:
- L{blogger.Blogger.post_new_entry}
- L{blogger.Blogger.get_nth_entry_title}
- L{blogger.Blogger.get_nth_entry_content_strip_html}
- L{blogger.Blogger.get_num_entries}
>>> from blogger import get_blog
>>> blog = get_blog()
>>> init_num_entries = blog.get_num_entries()
>>> rc = blog.post_new_entry("Test title", "Test content")
>>> print rc
True
>>> print blog.get_nth_entry_title(1)
Test title
>>> print blog.get_nth_entry_content_strip_html(1)
Test content
>>> num_entries = blog.get_num_entries()
>>> num_entries == init_num_entries + 1
True
"""
Each unit test function is composed of a docstring and nothing else. The docstring starts with a one-line description of what the unit test tries to achieve. The docstring continues with a list of methods/functions tested by that unit test. Finally, the interactive shell session output is copied and pasted into the docstring so that it can be processed by doctest.
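These docstring-only tests can be executed with the standard library's doctest machinery. As a minimal, self-contained sketch (using a made-up test_addition function rather than the blogger tests), here is how the examples embedded in a docstring are found and run:

```python
import doctest

def test_addition():
    """
    Adding integers gives their sum.

    >>> 2 + 2
    4
    >>> sum([1, 2, 3])
    6
    """

# Find the interactive-session examples embedded in the docstring and
# run them, comparing actual interpreter output against the expected output.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(verbose=False)
for test in finder.find(test_addition):
    runner.run(test)

print("attempted=%d failed=%d" % (runner.tries, runner.failures))
```

In practice you would simply put `doctest.testmod()` at the bottom of the test module; the explicit finder/runner pair above just makes the mechanics visible.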
For the purpose of generating a Test List, only the first line in each docstring is important. If you simply run
epydoc -o blogmgmt testlist_blogger.py
you will get a directory called blogmgmt that contains the epydoc-generated documentation. I usually then move this directory somewhere under the DocumentRoot of one of my Apache Virtual Servers. When viewed in a browser, this is what the epydoc summary page for the testlist_blogger module looks like (also available here):
Module blogmgmt.testlist_blogger
Doctest unit tests for module
blogger
This is exactly the Test List we wanted. Note that epydoc dutifully generated it for us, since in the Function Summary section it shows the name of every function it finds, plus the first line of that function's docstring. The main value of this Test List for me is that anybody can see at a glance what the methods of the Blogger class are expected to do. It's a nice summary of expected class behavior that enhances the documentation.
So all you need to do to get a nicely formatted Test List is to make sure that you have the test description as the first line of the unit test's docstring; epydoc will then do the grungy work for you.
If you click on the link with the function name on it, you will go to the Function Detail section and witness the power of doctest/epydoc. Since all the tests are copied and pasted from an interactive session and included in the docstring, epydoc will format the docstring very nicely and it will even color-code the blocks of code. Here is an example of the detail for test_delete_all_entries.
Generating Test Maps
Each docstring in the testlist_blogger module contains lines such as these:
Method(s) tested:
- L{blogger.Blogger.post_new_entry}
- L{blogger.Blogger.get_nth_entry_title}
- L{blogger.Blogger.get_nth_entry_content_strip_html}
- L{blogger.Blogger.get_num_entries}
(the L{...} notation is epydoc-specific and represents a link to another object in the epydoc-generated documentation)
The way I wrote the unit tests, each of them actually exercises several functions/methods from the blogger module. Some unit test purists might think these are not "real" unit tests, but in practice I found it is easier to work this way. For example, the get_blog function is called by each and every unit test in order to retrieve the same "blog" object. However, I am not specifically testing get_blog in every unit test, only calling it as a helper function. The way I see it, a method is tested when there is an assertion made about its behavior. All the other methods are merely called as helpers.
So whenever I write a unit test, I manually specify the list of methods/functions under test. This makes it easy to then parse the testlist file and build a mapping from each function/method under test to a list of unit tests that test it, i.e. what we called the Test Map.
For example, in the testlist_blogger module, the Blogger.delete_all_entries method is listed in the docstrings of 6 unit tests: test_delete_all_entries, test_delete_single_entry, test_post_single_entry, test_post_two_entries, test_delete_first_of_two_entries, test_delete_second_of_two_entries. These 6 unit tests represent the Test Map for Blogger.delete_all_entries. It's easy to build the Test Map programmatically by parsing the testlist_blogger.py file and creating a Python dictionary with the methods under test as keys and the corresponding lists of unit tests as values.
An issue I had while putting this together was how to link a method in the Blogger class (for example Blogger.delete_all_entries) to its Test Map. One way would have been to programmatically insert the Test Map into the docstring for that method. But this would mean that every time a new unit test is added that tests that method, the Test Map will change and thus the module containing the Blogger class will get changed. This is unfortunate, especially when the files are under source control. I think a better solution, and the one I ended up implementing, is to have a third module called for example testmap_blogger that will be automatically generated from testlist_blogger. A method M in the Blogger class will then link to a single function in testmap_blogger. That function will contain in its docstring the Test Map for the Blogger method M.
Again, an example to make all this clearer. Here is the docstring of the Blogger.delete_all_entries method in the blogger module:
"""
Delete all entries in the blog
Test map (set of unit tests that exercise this method):
- L{testmap_blogger.testmap_Blogger_delete_all_entries}
"""
Here is the epydoc-generated documentation for the Blogger.delete_all_entries method (in the Method Details section):
delete_all_entries(self)
Delete all entries in the blog
Test map (set of unit tests that exercise this method):
I manually inserted in the docstring an epydoc link to a function called testmap_Blogger_delete_all_entries in a module called testmap_blogger. Assuming that the testmap_blogger module was already generated and epydoc-documented, clicking on the link will bring up the epydoc detail for that particular function, which contains the 6 unit tests for the delete_all_entries method:
testmap_Blogger_delete_all_entries()
Testmap for
blogger.Blogger.delete_all_entries:
testlist_blogger.test_delete_all_entries
testlist_blogger.test_delete_single_entry
testlist_blogger.test_post_single_entry
testlist_blogger.test_post_two_entries
testlist_blogger.test_delete_first_of_two_entries
testlist_blogger.test_delete_second_of_two_entries
Here is the programmatically generated testmap_blogger.py file.
To have all this mechanism work, I use some naming conventions:
- The module containing the Test Maps for module blogger is called testmap_blogger
- In testmap_blogger, the function containing the Test Map for method Blogger.M from the blogger module is called testmap_Blogger_M
- In testmap_blogger, the function containing the Test Map for function F from the blogger module is called testmap_F
- In the docstring of the testmap function itself there is a link which points back to the method Blogger.M; the name of the link needs to be blogger.Blogger.M, otherwise epydoc will not find it
Here's an end-to-end procedure for using the doctest/epydoc combination to write Agile Documentation:
1. We'll unit test a Python module we'll call P which contains a class C.
2. We start by writing a unit test for the method C.M1 from the P module. We write the unit test by copying and pasting a Python shell session output in another Python module called testlist_P. We call the unit test function test_M1. It looks something like this:
def test_M1():
"""
Short description of the behavior we're testing for M1.
Method(s) tested:
- L{P.C.M1}
>>> from P import C
>>> c = C()
>>> rc = c.M1()
>>> print rc
True
"""
The testlist_P module has a "main" section of the form:
if __name__ == "__main__":
import doctest
doctest.testmod()
This is the typical doctest way of running unit tests. To actually execute the tests, we need to run "python testlist_P.py" at a command line (for more details on doctest, see a previous blog post).
3. At this point, we fleshed out an initial implementation for method M1 in module P. In its docstring, we add a link to the test map:
def M1(self):
"""
Short description of M1
Test map (set of unit tests that exercise this method):
- L{testmap_P.testmap_C_M1}
"""
Note that I followed the naming convention I described earlier.
4. We programmatically generate the Test Map for module P by running something like this: build_testmap.py. It will create a file called testmap_P.py with the following content:
def testmap_C_M1():
"""
Testmap for L{P.C.M1}:
- L{testlist_P.test_M1}
"""
5. We run epydoc:
epydoc -o P_docs P.py testlist_P.py testmap_P.py
A directory called P_docs will be generated; we can move this directory to a public area of our Web server and thus make the documentation available online. When we click on the testlist_P
module link, we will see the Test List for module P. It will show something like:
Module P_docs.testlist_P
Doctest unit tests for module P
When we click on the test map link inside the docstring of method C.M1, we see:
testmap_C_M1()
Now repeat steps 2-5 for method M2:
6. Let's assume we now unit test method M2, but in the process we also test method M1. The function test_M2 will look something like this:
def test_M2():
"""
Short description of the behavior we're testing for M2.
Method(s) tested:
- L{P.C.M1}
- L{P.C.M2}
>>> from P import C
>>> c = C()
>>> rc = c.M1()
>>> print rc
True
>>> rc = c.M2()
>>> print rc
True
"""
We listed both methods in the "Method(s) tested" section.
7. We add a link to the testmap in method M2's docstring (in module P):
def M2(self):
"""
Short description of M2
Test map (set of unit tests that exercise this method):
- L{testmap_P.testmap_C_M2}
"""
8. We recreate the testmap_P file by running build_testmap.py. The testmap for M1 will now contain 2 functions: test_M1 and test_M2, while the testmap for M2 will contain test_M2:
def testmap_C_M1():
"""
Testmap for L{P.C.M1}:
- L{testlist_P.test_M1}
- L{testlist_P.test_M2}
"""
def testmap_C_M2():
"""
Testmap for L{P.C.M2}:
- L{testlist_P.test_M2}
"""
9. We run epydoc again:
epydoc -o P_docs P.py testlist_P.py testmap_P.py
Now clicking on testlist_P will show:
Module P_docs.testlist_P
Doctest unit tests for module P
Clicking on the test map link inside the docstring of method C.M1 shows:
testmap_C_M1()
10. Repeat steps 2-5 for each unit test that you add to the testlist_P module.
Conclusion
I find the combination doctest/epydoc very powerful and easy to use in generating Agile Documentation, or "literate testing", or "executable documentation", or whatever you want to call it. The name is not important, but what you can achieve with it is: a way of documenting your APIs by means of unit tests that live in your code as docstrings. It doesn't get much more "agile" than this. Kudos to the doctest developers and to Edward Loper, the author of epydoc. Also, kudos to Brian Button for his insightful article which inspired my post. Brian's examples used .NET, but hopefully he'll switch to Python soon :-)
If you want to see the full documentation I generated for my blogmgmt package, you can find it here.
3 comments:
Cool post. Last year Brian Marick and I hosted a workshop at XP Agile Universe on this topic. I posted some of the findings from that workshop on my blog. If you haven't seen these already, they might be of interest to you.
-Jonathan
Sir, the article given here is very good. Since I don't have any idea about agile, please can you explain me the basic idea behind Agile Testing? Thank you...
A good blog to read. You clever in any posts so interesting to read | http://agiletesting.blogspot.com/2005/02/agile-documentation-with-doctest-and.html | CC-MAIN-2018-05 | refinedweb | 2,407 | 64.2 |
Sometimes people come up with some creative solutions to solve their problems. The correct way to monitor data, such as messages, coming from Telepathy is to write an Observer, however sometimes you just want to get a feed of all of the text messages (e.g. so you can feed it to your keyboard's LCD or something).
The following is a pure D-Bus solution (although it includes telepathy.interfaces to cut down on typing). It listens to all Channel.Type.Text.Received signals, looks up the connection they came from and resolves the sender handle to a name. However note: it makes a lot more D-Bus calls than is required with Telepathy. Really you should cache the results for these handles and listen to the signals that tell you when that information has updated. If you were doing things properly, that's what you'd do.
import dbus, gobject
from dbus.mainloop.glib import DBusGMainLoop
from telepathy.interfaces import *
from datetime import datetime

dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()

def message(id, timestamp, sender, type, flags, text, path=None):
    # path is the object path of the channel, from this we can derive the
    # object path of the connection, and acquire a proxy to it
    service = '.'.join(path.split('/')[1:8])
    conn_path = '/' + service.replace('.', '/')
    conn = bus.get_object(service, conn_path)

    # request the alias and id of the sender handle
    d = dbus.Interface(conn, CONNECTION_INTERFACE_CONTACTS).GetContactAttributes(
        [sender], [CONNECTION, CONNECTION_INTERFACE_ALIASING], False)
    alias = d[sender].get(CONNECTION_INTERFACE_ALIASING + '/alias',
                          d[sender].get(CONNECTION + '/contact-id', "Unknown"))

    dt = datetime.fromtimestamp(timestamp)
    print "%s <%s> %s" % (dt.strftime('%H:%M'), alias, text)

# listen to all Channel.Type.Text.Received signals
bus.add_signal_receiver(message, dbus_interface=CHANNEL_TYPE_TEXT,
                        signal_name='Received', path_keyword='path')

loop = gobject.MainLoop()
loop.run()
Like I said, this is not efficient use of Telepathy. If it eats your D-Bus, don't blame me.
On the other hand, this has actually started a conversation about possible new convenience classes for telepathy-python. | https://blogs.gnome.org/danni/2009/12/29/a-hacky-way-of-monitoring-messages-in-telepathy/ | CC-MAIN-2017-17 | refinedweb | 330 | 52.76 |
KCC is a built-in process that runs on all domain controllers and generates the replication topology for the Active Directory forest. The KCC creates separate replication topologies depending on whether replication is occurring within a site (intrasite) or between sites (intersite). The KCC also dynamically adjusts the topology to accommodate new domain controllers, domain controllers moved to and from sites, changing costs and schedules, and domain controllers that are temporarily unavailable.
Group Types
* Security groups: Use security groups for granting permissions to gain access to resources. Sending an e-mail message to a group sends the message to all members of the group, so security groups also share the capabilities of distribution groups.
* Distribution groups: Distribution groups are used for sending e-mail messages to groups of users. You cannot grant permissions to distribution groups. Even though security groups have all the capabilities of distribution groups, distribution groups are still required because some applications can only read distribution groups.
Group Scopes
Group scope describes which types of users should be grouped together in a way that is easy to administer. Groups therefore play an important part in a domain. One group can be a member of other group(s), which is known as group nesting. One or more groups can be a member of any group in the entire domain(s) within a forest.
* Domain Local Group: Use this scope to grant permissions to domain resources that are located in the same domain in which you created the domain local group. Domain local groups can exist at all mixed, native and interim functional levels of domains and forests. Domain local group memberships are not limited: you can add user accounts, universal groups and global groups from any domain. Just remember that nesting cannot be done with a domain local group — a domain local group will not be a member of another domain local group or any other group in the same domain.
* Global Group: Users with a similar function can be grouped under global scope and given permission to access a resource (like a printer or a shared folder and files) available in the local domain or another domain in the same forest. In simple words, global groups can be used to grant permissions to resources located in any domain within a single forest, but their memberships are limited: user accounts and global groups can be added only from the domain in which the global group is created. Nesting is possible with global groups: you can add a global group into another global group from any domain. Finally, to provide permissions to domain-specific resources (like printers and published folders), global groups can be members of a domain local group.
Global groups exist at all mixed, native and interim functional levels of domains and forests.
* Universal Group Scope: These groups are typically used for e-mail distribution and can be granted access to resources in all trusted domains, as these groups can only be used as a security principal (security group type) in a Windows 2000 native or Windows Server 2003 domain functional level domain. Universal group memberships are not limited like global groups: all domain user accounts and groups can be members of a universal group. Universal groups can be nested under a global or domain local group in any domain.
Differential
A cumulative backup of all changes made after the last full backup. The advantage to this is the quicker recovery time, requiring only a full backup and the latest differential backup to restore the system. The disadvantage is that for each day elapsed since the last full backup, more data needs to be backed up, especially if a majority of the data has been changed.
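The restore logic described above can be sketched as a tiny selection function (the backup records here are invented for illustration):

```python
def restore_set(backups):
    """Given a chronologically ordered list of (day, kind) backups,
    return the minimal set needed for a full restore: the last full
    backup plus the most recent differential taken after it."""
    full_indexes = [i for i, (_, kind) in enumerate(backups) if kind == 'full']
    last_full = full_indexes[-1]
    diffs_after = [b for b in backups[last_full + 1:] if b[1] == 'diff']
    needed = [backups[last_full]]
    if diffs_after:
        # differentials are cumulative, so only the newest one is needed
        needed.append(diffs_after[-1])
    return needed

week = [('Sun', 'full'), ('Mon', 'diff'), ('Tue', 'diff'), ('Wed', 'diff')]
```

For the sample week, restoring on Thursday needs only Sunday's full backup and Wednesday's differential — exactly the two-item recovery described above.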
Common port numbers:
15 Netstat
21 FTP
23 Telnet
25 SMTP
42 WINS
53 DNS
67 BootP
68 DHCP
80 HTTP
88 Kerberos
101 HOSTNAME
110 POP3
119 NNTP
123 NTP (Network Time Protocol)
139 NetBIOS
161 SNMP
180 RIS
389 LDAP (Lightweight Directory Access Protocol)
443 HTTPS (HTTP over SSL/TLS)
520 RIP
79 FINGER
37 Time
3389 Terminal Services
443 SSL (https) (HTTP protocol over TLS/SSL)
220 IMAP3
3268 AD Global Catalog
3269 AD Global Catalog over SSL
500 Internet Key Exchange, IKE (IPSec) (UDP 500)

diskpart.exe — this command is used for disk management in Windows 2003.

nltest /dsgetdc:domainname — replacing domainname with the name of the domain that you are trying to log on to, this command verifies that a domain controller can be located. Nltest is included in Support Tools.
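For quick self-testing, the table above can be captured as a small lookup (an illustrative subset only; service names are abbreviated):

```python
# A subset of the well-known ports listed above
WELL_KNOWN_PORTS = {
    'FTP': 21, 'Telnet': 23, 'SMTP': 25, 'DNS': 53, 'DHCP': 68,
    'HTTP': 80, 'Kerberos': 88, 'POP3': 110, 'NNTP': 119, 'NTP': 123,
    'SNMP': 161, 'LDAP': 389, 'HTTPS': 443, 'Terminal Services': 3389,
    'Global Catalog': 3268,
}

def port_of(service):
    # Case-insensitive lookup of a service's well-known port
    for name, port in WELL_KNOWN_PORTS.items():
        if name.lower() == service.lower():
            return port
    return None
```

Looking up 'ldap' returns 389 and an unknown service returns None; extending the dictionary to the full table is mechanical.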
What are the icons that don't get a delete option on the Desktop (up to the 2000 O.S.)?
- My Computer
- My Network Places
- Recycle Bin
Note: In Windows 2003 you can delete My Computer and My Network Places, and you can also get them back: right-click on the Desktop → Properties → click on the Desktop tab → click on Customize Desktop → select the appropriate check boxes. Even in 2003 you cannot delete the Recycle Bin.
Note: You can delete anything (even the Recycle Bin) from the desktop by using registry settings in 2000/2003.
After creating the root zone, create another zone with the domain name: right-click on Forward Lookup Zones → New Zone → Active Directory Integrated (you can choose any one) → DNS Name [___] → Next → Finish
If you want to create an Active Directory integrated zone, the server must be a Domain Controller. If you want to create a primary DNS zone, you can create it on a Domain Controller or a member server, but if you create it on a member server you will not get the 4 options under the domain which are meant for Active Directory. You can create a secondary zone on a member server or on a Domain Controller; there is no difference between them.
What is BIND?
BIND (Berkeley Internet Name Domain) is the most widely used DNS server software on UNIX/Linux systems and serves as the de facto reference implementation of the DNS protocol.
What are the port numbers used for Kerberos, LDAP etc. with DNS/Active Directory?
Kerberos uses port 88, LDAP uses port 389 (636 over SSL), DNS uses port 53, and the Global Catalog uses port 3268 (3269 over SSL).
What is a zone?
A database of records is called a zone. Also called a zone of authority, it is a subset of the Domain Name System (DNS) namespace that is managed by a name server.
Or go to the Registry, search for LanmanNT, and change it to ServerNT.
You have to follow the same procedure as the primary DNS configuration, but at the time of selection, select Secondary zone instead of Primary zone. After that it asks for the primary DNS zone address; provide that address.
Then it asks for the primary DNS zone details; provide those details, then click on Finish.
Select any one and give the details of the secondary zone (only in the case of the second and third options). Click on Apply, then OK.
Note: On the Zone Transfers tab you can find another option, Notify — this automatically notifies secondary servers when the zone changes. Here also you can select the appropriate options.
Note: In a secondary zone you cannot modify any information; everyone has read-only permission. Whenever the primary DNS is down, click on the "Change" button on the General tab of the zone's Properties to change the zone to primary; then it acts as primary and you have write permission as well.
What are the default Refresh, Retry and Expire interval settings in the primary zone for a secondary zone?
The default settings (on the SOA record) are: Refresh interval — 15 minutes; Retry interval — 10 minutes; Expires after — 1 day; Minimum (default) TTL — 1 hour.
Suppose the secondary zone has expired; how do you solve the problem?
Go to the properties of the zone, click on the General tab, find the option called "Change", click on it, and select the appropriate option. Then click on OK.
Recursive query
The query that my computer sends to my DNS server, asking it to return a complete answer.
Iterative query
The queries that my DNS server sends to other DNS servers to find the IP address of a particular server on behalf of my computer.
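The interplay between the two query types can be sketched with a toy referral chain — the client's single recursive question is answered by the server walking a series of iterative referrals. All server names and the answer address below are invented for illustration (the answer uses a TEST-NET documentation address):

```python
# Each "server" either answers directly or refers us to the next server,
# mimicking the iterative queries a DNS server performs for its client.
REFERRALS = {
    'root':         {'com.': 'tld-com'},
    'tld-com':      {'example.com.': 'auth-example'},
    'auth-example': {'www.example.com.': '192.0.2.10'},  # final answer
}

def resolve_iteratively(name, server='root'):
    while True:
        zone = REFERRALS[server]
        if name in zone:
            return zone[name]          # authoritative answer found
        # otherwise follow a matching referral one level down
        for suffix, next_server in zone.items():
            if name.endswith(suffix):
                server = next_server
                break
        else:
            return None                # no referral: resolution fails
```

Resolving 'www.example.com.' walks root → tld-com → auth-example and returns the address; a name with no matching referral returns None.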
When you install a Windows 2000 DNS server, you immediately get all of the records of the root DNS servers. Every Windows 2000 DNS server installed on the Internet comes preconfigured with the addresses of the root DNS servers, so every single DNS server can reach the root.
DNS requirements:
First and foremost, it has to support SRV records (an SRV record identifies a particular service on a particular computer; in Windows 2000 we use SRV records to identify domain controllers, the Global Catalog, etc.) and DNS server addresses. Every single server can get to the root, so every DNS server on the Internet first contacts the root DNS servers for name resolution.
Where can you find the addresses of the root servers in the DNS server?
Open the DNS console → right-click on the server name → drag down to Properties → click on Root Hints. Here you can find the different root server addresses.
Note: When you install the DNS service on a Windows 2000 server (before you have configured anything on the DNS server), it starts functioning as a caching-only DNS server.
What is a caching-only DNS server?
A caching-only DNS server hosts no zones and is not authoritative for any domain; it simply resolves names on behalf of clients and caches the results, answering subsequent queries from its cache.
What is a forwarder?
(Open the DNS console → right-click on the server name → click on the Forwarders tab)
A forwarder is a server that has more access than the present DNS server. Our present DNS server may be located in an internal network and unable to resolve Internet names — it may be behind a firewall, or it may be using a proxy server or a NAT server to get to the Internet. In that case this server forwards the query to another DNS server that can resolve the Internet names.
What is DHCP?
Dynamic Host Configuration Protocol (DHCP) is a network protocol that enables a server to automatically assign an IP address to a computer from a defined range of numbers (i.e., a scope) configured for a given network.
2) Independently
Note: When you have installed DHCP, an icon will appear in Administrative Tools (DHCP).
DHCP
This server [________________] BROWSE
OK
DHCP Servername.domain.com [IP address]
Note: Sometimes the window appears automatically while creating the "Add Server" entry. In such cases check whether the IP address is correct; if it is wrong, delete it and recreate it. Now you have a DHCP server, and next you have to authorize the DHCP server to provide IP addresses to the clients.
Note: If it is not authorized, a red symbol (down red arrow) will appear; if you authorize it, a green up arrow will appear.
Click on Next.
Click on Next
What are the default, minimum and maximum lease durations?
By default any system gets a lease of 8 days to use an IP address.
Note: You can increase or decrease the lease duration. You can assign a minimum duration of 1 second and a maximum duration of 999 days, 23 hours and 59 minutes.
Note: A client attempts to renew its lease once 50% of the duration has elapsed; if it never renews, the IP address is released at the end of the lease.
Click Next. Now you will get a window asking whether you want to configure the options (DNS, WINS, Router etc.). You can configure the options now, or you can configure them after completion. Select one, then click Next.
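The renewal arithmetic can be checked with a quick sketch. This assumes the standard DHCP timers — clients attempt renewal at 50% of the lease (T1) and start rebinding to any server at 87.5% (T2):

```python
from datetime import timedelta

def lease_timers(lease):
    """Return (T1, T2): when a client first tries to renew (50%)
    and when it starts rebinding to any DHCP server (87.5%)."""
    t1 = lease / 2
    t2 = lease * 7 / 8
    return t1, t2

# the default 8-day lease described above
default_lease = timedelta(days=8)
t1, t2 = lease_timers(default_lease)
```

For the 8-day default, the client first tries to renew after 4 days and begins rebinding after 7 days; the 999-day/23-hour/59-minute figure is the UI maximum, not a protocol limit.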
Click Finish.
Note: If you selected "No" in the above window, you can configure the above things at any time, as shown below.
Note: You can reserve IP addresses for specific clients, or you can exclude IP addresses (from allocation) for future purposes.
Go to Client System
In that, select "Obtain an IP address automatically" and "Obtain DNS server address automatically". Click on "More" and delete the DNS suffix if anything is there.
Click OK
Note: The DHCP server assigns IP addresses to the clients. Apart from that, it also provides the DNS address, default gateway, WINS address and so on, which are configured in the DHCP server.
DHCP Discover: Whenever a client has to obtain an IP address from a DHCP server, it broadcasts a message called "DHCP Discover", which contains the destination address 255.255.255.255, the source IP address 0.0.0.0, and its MAC address.
DHCP Offer: The DHCP server on the network responds to the DHCP Discover by sending a "DHCP Offer" message to the client requesting an IP address.
DHCP Request: The client, after receiving the Offer message, sends a "DHCP Request" message asking the DHCP server to confirm the IP address it offered through the DHCP Offer message.
DHCP Acknowledge: The DHCP server responds to the "DHCP Request" message by sending an Acknowledge message, through which it confirms the IP address to the client machine.
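The four-message exchange above can be modeled as a toy handshake (addresses and the MAC are invented for illustration; a real server also records the lease duration):

```python
def dora_exchange(pool, client_mac):
    """Simulate the Discover/Offer/Request/Acknowledge handshake."""
    transcript = []
    # 1. client broadcasts from 0.0.0.0 to 255.255.255.255
    transcript.append(('client', 'DHCPDISCOVER', client_mac))
    # 2. server offers the first free address in its scope
    offered = pool.pop(0)
    transcript.append(('server', 'DHCPOFFER', offered))
    # 3. client asks the server to confirm that offer
    transcript.append(('client', 'DHCPREQUEST', offered))
    # 4. server acknowledges, completing the lease
    transcript.append(('server', 'DHCPACK', offered))
    return offered, transcript

pool = ['192.168.1.100', '192.168.1.101']
addr, msgs = dora_exchange(pool, '00:11:22:33:44:55')
```

After the exchange the client holds 192.168.1.100, the pool has shrunk by one, and the transcript shows the messages in the canonical D-O-R-A order.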
Note: You can also enable DHCP in a workgroup for dynamic allocation of IP addresses. Configure the server operating system in the workgroup as a DHCP server, then on the client, in TCP/IP properties, select "Obtain an IP address automatically". The client then gets an IP address from the DHCP server.
Note: You need not configure DNS or anything else.
Using APIPA
On occasion, a network PC boots up and finds that the DHCP server is not available. When this happens, the PC continues to poll for a DHCP server using different wait periods.
The Automatic Private IP Addressing (APIPA) service allows the DHCP client to automatically configure itself until the DHCP server is available and the client can be configured on the network. APIPA allows the DHCP client to assign itself an IP address in the range of 169.254.0.1 to 169.254.254.254 and a Class B subnet mask of 255.255.0.0. The address range that is used by APIPA is a Class B address that Microsoft has set aside for this purpose.
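The APIPA range is easy to test for, which is handy when troubleshooting a client that "has an address" but cannot reach anything. This sketch uses Python's standard ipaddress module and checks the full 169.254.0.0/16 network, a slight superset of the usable host range quoted above:

```python
import ipaddress

# the link-local block Microsoft set aside for APIPA
APIPA_NETWORK = ipaddress.ip_network('169.254.0.0/16')

def is_apipa(address):
    """True if the address is self-assigned, i.e. the client
    most likely could not reach a DHCP server at boot."""
    return ipaddress.ip_address(address) in APIPA_NETWORK
```

An address like 169.254.10.20 flags the machine as self-configured, while a normal scope address such as 192.168.1.10 does not.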
What is the difference between Windows 2000 Server and Windows 2000 Advanced Server / Datacenter Server?
In Windows 2000 Server we don't have clustering or Network Load Balancing, whereas in Windows 2000 Advanced Server and in Datacenter Server we have clustering and Network Load Balancing. In 2000 Advanced Server and Datacenter Server we also have support for more RAM and more processors.
What are the maximum configurations for the Windows 2000 family?
Windows 2000 Professional supports up to 2 processors and 4 GB of RAM; Server up to 4 processors and 4 GB; Advanced Server up to 8 processors and 8 GB; Datacenter Server up to 32 processors and 64 GB.
What are the differences between Windows 2000 Professional and the server versions?
In Professional we don't have fault tolerance (mirroring, RAID-5), whereas in all server versions we do.
In Professional we cannot load Active Directory, whereas in all server versions we can.
In Professional and 2000 Server we don't have clustering and Network Load Balancing, whereas in 2000 Advanced Server and in Datacenter Server we have clustering and NLB.
As you move from Server to Advanced Server, and from Advanced Server to Datacenter Server, you get support for more RAM and more processors.
What roles does a main Domain Controller (the first domain controller in the entire forest) have by default?
By default it gets 5 roles: • Schema Master • Domain Naming Master • PDC Emulator • Relative Identifier (RID) Master • Infrastructure Master (IM)
What roles does an additional Domain Controller have by default?
By default it does not get any role, but if you want to assign any role you can transfer it from the master.
What roles does a child main Domain Controller have by default?
By default it gets only three roles: • PDC Emulator • Relative Identifier (RID) Master • Infrastructure Master (IM)
What roles does a child additional Domain Controller have by default?
By default it won't get any role, but if you want to assign one you can transfer it from the main child domain controller.
What are the roles that must not be on the same Domain Controller?
Infrastructure Master and Global Catalog.
Note: If you have only one domain then you won't get any problem even if you have both of them on the same server. If you have two or more domains in a forest then they shouldn't be on the same server.
How do you check which servers the above roles have been assigned to?
Install Support Tools from the CD: Programs → Support Tools → Tools → command prompt (go to the command prompt in this way only). At the command prompt type "netdom query fsmo".
What is FSMO?
Flexible Single Master Operations.
Note: The above five roles are called FSMO roles.
What is a client?
A client is any device, such as a personal computer, printer or any other server, which requests services or resources from a server. The most common clients are workstations.
What is a server?
A server is a computer that provides network resources and services to workstations and other clients.
What is a forest?
A collection of one or more domain trees that do not form a contiguous namespace. Forests allow organizations to group divisions that operate independently but still need to communicate with one another.
All trees in a forest share common Schema and configuration partitions and a Global Catalog. All trees in a given forest trust each other with two-way transitive trust relations.
What is a Domain?
A group of computers that are part of a network and share a common directory and security policies. In Windows 2000 a domain is a security boundary, and permissions that are granted in one domain are not carried over to other domains.
What is a partition?
Disk partitioning is a way of dividing your physical disk so that each section functions as a separate unit. A partition divides a disk into sections that function as separate units and that can be formatted for use by a file system.
How many partitions can you create at maximum? (Of those, how many primary and how many extended?)
We can create a maximum of 4 partitions on a basic disk, of which at most 1 can be an extended partition. You can create 4 primary partitions if you do not have an extended partition.
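The basic-disk limits can be expressed as a tiny validation check (illustrative only):

```python
def valid_basic_disk(primary, extended):
    """A basic disk allows at most 4 partitions in total,
    of which at most 1 may be extended."""
    return extended <= 1 and primary + extended <= 4
```

So 4 primaries, or 3 primaries plus 1 extended, are both valid layouts, while 4 primaries plus an extended partition is not.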
What is a volume?
A disk volume is a way of dividing your physical disk so that each section functions as a separate unit.
Note: In Windows NT and Windows 2000, by default the system files are copied to the winnt directory; in Windows 2003, by default they are copied into the Windows directory.
What is BIOS?
A computer's basic input/output system (BIOS) is a set of software through which the operating system (or Setup) communicates with the computer's hardware devices.
Note: When you format the operating system partition with NTFS, Windows NT and Windows 2000 are the only operating systems that can read the data.
Note: The only reason to use FAT or FAT32 is for dual booting with versions of Windows earlier than Windows 2000.
What features do you get when you upgrade from Windows NT to Windows 2000?
Active Directory includes the following features:
* Management tools:
  - Microsoft Management Console
  - Plug and Play
  - Device Manager
  - Add/Remove Hardware wizard (in Control Panel)
  - Support for universal serial bus
  - New Backup utility
* Application services:
  - Win32 Driver Model
  - DirectX 5.0
  - Windows Script Host
* Security:
  - Encrypting File System
Note: For anything other than a situation with multiple operating systems, however, the recommended file system is NTFS.
NTFS
Some of the features you can use when you choose NTFS are:
* Active Directory, which you can use to view and control network resources easily.
* Domains, which are part of Active Directory, and which you can use to fine-tune security options while keeping administration simple. Domain controllers require NTFS.
* File encryption, which greatly enhances security.
* Permissions that can be set on individual files rather than just folders.
* Sparse files. These are very large files created by applications in such a way that only limited disk space is needed. That is, NTFS allocates disk space only to the portions of a file that are written to.
* Remote Storage, which provides an extension to your disk space by making removable media such as tapes more accessible.
* Recovery logging of disk activities, which helps you restore information quickly in the event of power failure or other system problems.
* Disk quotas, which you can use to monitor and control the amount of disk space used by individual users.
* Better scalability to large drives. The maximum drive size for NTFS is much greater than that for FAT, and as drive size increases, performance with NTFS doesn't degrade as it does with FAT.
Note: It is recommended that you format the partition with NTFS rather than converting from FAT or FAT32. Formatting a partition erases all data on the partition, but a partition that is formatted with NTFS rather than converted from FAT or FAT32 will have less fragmentation and better performance.
What options do you get when you are shutting down?
Log off, Restart, Shut down, Stand by, Hibernate, Disconnect.

Stand by: Turns off your monitor and hard disks, and your computer uses less power. A state in which your computer consumes less electric power when it is idle, but remains available for immediate use. Typically, you'd put your computer on stand by to save power instead of leaving it on for extended periods. In stand by mode, information in computer memory is not saved on your hard disk. If the computer loses power, the information in memory will be lost. This option appears only if your computer supports this feature and you have selected this option in Power Options. See Power Options overview in Help.

Hibernate: Turns off your monitor and hard disk, saves everything in memory on disk, and turns off your computer. When you restart your computer, your desktop is restored exactly as you left it. A state in which your computer saves any Windows settings that you changed, writes any information that is currently stored in memory to your hard disk, and turns off your computer. Unlike shutting down, when you restart your computer, your desktop is restored exactly as it was before hibernation. Hibernate appears only if your computer supports this feature and you have selected the Enable hibernate support option in Power Options. See Power Options overview in Help.
Disconnect: A state in which your Terminal Services session is disconnected, but remains active on the server. When you reconnect to Terminal Services, you are returned to the same session, and everything looks exactly as it did before you disconnected. Disconnect appears only if you are connected to a Windows 2000 Server running Terminal Services.

Shut down: A state in which your computer saves any Windows settings that you changed and writes any information that is currently stored in memory to your hard disk. This prepares your computer to be turned off.

Restart: A state in which your computer saves any Windows settings that you changed, writes any information that is currently stored in memory to your hard disk, and then restarts your computer.
Log off: A state in which your computer closes all your programs, disconnects your computer from the network, and prepares your computer to be used by someone else.

When do you use winnt, and when winnt32?
Use winnt32 on 32-bit Windows operating systems (like Windows 95/98, Windows NT, and Windows 2000); use winnt when installing from other operating systems.
How do you install the Windows 2000 deployment tools, such as the Setup Manager Wizard and the System Preparation tool?
To install the Windows 2000 setup tools, display the contents of the Deploy.cab file, which is located in the Support\Tools folder on the Windows 2000 CD-ROM. Select all the files you want to extract, right-click a selected file, and then select Extract from the menu. You will be prompted for a destination, the location and name of a folder, for the extracted files.
What is the Desktop?
The desktop, which is the screen that you see after you log on to Windows 2000, is one of the most important features on your computer. The desktop can contain shortcuts to your most frequently used programs, documents, and printers.
Suppose your CD is an AutoPlay CD. What key is used to stop the AutoPlay?
Hold down the Shift key while inserting the CD.

Drive letters:
Each workstation can assign up to 26 letters to regular drive mappings. Drive letters that are not used by local devices are available for network drives. Generally the drive letters A and B represent floppy disk drives and C represents the local hard disk.
What do you call the right hand side portion (i.e., where the clock andother icons exist) of task bar?System Tray or Notification area
2) Boot from the floppy, insert the CD, and install the OS.

3) Install over the network or install from the hard disk. For this you have to run the file winnt or winnt32.

Note: winnt is used when you are installing from an operating system other than Windows.

Note: WINS resolves NetBIOS names to IP addresses. WINS is used only when you need to access NetBIOS resources.

What is the location of the lmhosts file (LAN Manager Hosts file) in Windows 2000?
%SystemRoot%\System32\Drivers\Etc

What is the port used for Terminal Services?
3389
When a user logs on, the startup options (the notification area icons) will be loaded. How do you stop them?
When a user types the user name and password and presses Enter, immediately hold down the Shift key. Then the above items will not be loaded.
Note: Class A, Class B, Class C are used to assign IP addresses. Class D is usedfor Multicasting. Class E is reserved for the future (Experimental).
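The classful-addressing note above can be sketched in Python. This is a hypothetical helper (not part of any Windows tool), using the historical first-octet ranges; note that in practice 127.x.x.x is reserved for loopback within the Class A range.

```python
# Classful IP addressing, as described in the note above (historical; CIDR
# has since replaced strict classes). First octet decides the class.
def ip_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first <= 127:           # includes the reserved loopback range 127.x.x.x
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D (multicast)"
    return "E (experimental)"

for ip in ("10.0.0.1", "172.16.5.4", "192.168.1.1", "224.0.0.5", "240.0.0.1"):
    print(ip, "-> Class", ip_class(ip))
```

Running it prints Class A, B, C, D (multicast), and E (experimental) for the five sample addresses, matching the assignment/multicast/experimental split in the note.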
Note: If you want to restore a system state backup on a domain controller, you have to restart the computer in Directory Services Restore Mode, because you are restoring Active Directory, and Active Directory must not be active while you restore it. If you restart the computer in Directory Services Restore Mode, Active Directory is not active, so you can restore it.
You can restore Active Directory in two ways:
* Authoritative restore
* Non-authoritative restore
Group policies are applied in this order: Local policy, Site policy, Domain policy, OU policy, Sub-OU policy (if any are there).
Configuration partition
The configuration partition contains replication configuration information (and other information) for the forest.

Schema partition
The schema partition contains all object types and their attributes that can be created in Active Directory. This data is common to all domain controllers in the domain tree or forest, and is replicated by Active Directory to all the domain controllers in the forest.
2) Start -> Run -> type netdom query fsmo. The computer names listed there are domain controllers.

3) Search for the NTDS and Sysvol folders in the system directory; if they are there, then it is a domain controller.

6) In Windows 2000 you cannot change the name of a domain controller, so right-click My Computer -> Properties -> Network Identification: the Change button is grayed out.
Diagnostic utilities
a) PING b) FINGER c) HOSTNAME d) NSLOOKUP e) IPCONFIG f) NETSTAT g) NBTSTAT h) ROUTE i) TRACERT j) ARP

PING: Verifies that TCP/IP is configured and another host is available.

FINGER: Retrieves system information from a remote computer that supports TCP/IP finger services.

HOSTNAME: Displays the host name.

NSLOOKUP: Examines entries in the DNS database which pertain to a particular host or domain.

NETSTAT: Displays protocol statistics and the current state of TCP/IP connections.

NBTSTAT: Checks the state of current NetBIOS over TCP/IP connections, updates the LMHOSTS cache, or determines your registered name or scope ID.

ROUTE: Views or modifies the local routing table.

TRACERT: Verifies the route from the local host to a remote host.

ARP: Displays a cache of locally resolved IP-address-to-MAC-address mappings.
Note: The root domain has a null label and is not expressed in the FQDN.

How do you know whether port 3389 (Terminal Services) is working or not?
netstat -a (displays all connections and listening ports)
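The same check (is anything listening on a given TCP port?) can be done from code. A minimal Python sketch, using a hypothetical helper rather than any Windows tool, probes a port by attempting a TCP connection:

```python
import socket

def port_is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP service accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: start a throwaway listener on an OS-assigned port, then probe it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0 -> the OS picks a free port
server.listen(1)
port = server.getsockname()[1]

print(port_is_listening("127.0.0.1", port))   # listener is up
server.close()
print(port_is_listening("127.0.0.1", port))   # nothing listening any more
```

To probe Terminal Services on a real host you would call `port_is_listening(hostname, 3389)`.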
What is a host?
Any device on a TCP/IP network that has an IP address. Examples include servers, clients, network-interface print devices, and routers.

Note: The ports 0-1023 are called well-known ports and all other ports are called dynamic or private ports (i.e., 1024-65535).
Note: When you are formatting a disk, if you leave the block size at the default, Windows 2000/XP/2003 divides the partition into 4 KB blocks. When you create a file or folder, it allocates space to that file or folder in multiples of 4 KB. When you create a new file it first allocates 4 KB; after that 4 KB is filled up it allocates another 4 KB block, and so on until the disk space is used up.
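The 4 KB allocation rule above is just rounding up to whole clusters. A small Python sketch (illustrative only; it ignores NTFS details such as very small files stored directly inside the MFT):

```python
import math

CLUSTER = 4 * 1024  # the default 4 KB cluster size described above

def size_on_disk(file_size: int, cluster: int = CLUSTER) -> int:
    """Space actually allocated: file size rounded up to whole clusters."""
    if file_size == 0:
        return 0
    return math.ceil(file_size / cluster) * cluster

for size in (1, 4096, 4097, 10_000):
    print(size, "->", size_on_disk(size))
```

This shows why "size on disk" in Explorer is usually a little larger than "size": a 1-byte file still occupies one full 4096-byte cluster, and a 4097-byte file occupies two.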
Note: With Windows 2000 Advanced Server and Datacenter Server we can build an NLB cluster of 2 to 32 servers. Server clustering supports up to 2 nodes.
Note: With disk quotas we can track the usage of disk space for each user. We can limit each user to a certain amount of space.
What is latency?
The time required for all updates to be completed throughout all domain controllers in the network domain or forest.

What is convergence?
The state at which all domain controllers have the same replica contents of the Active Directory database.
What are the file names that we cannot create in the Windows operating system?
The file names that cannot be created in the Windows operating system are:
Con, Prn, Aux, Nul, Com1-Com9, Lpt1-Lpt9
Note: The file name clock$ cannot be created in DOS 6.22 or earlier versions of DOS.
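The reserved-name rule can be sketched in Python. This is a hypothetical checker, not a Windows API; it reflects the fact that the names are reserved regardless of case or extension (so con.txt is also invalid):

```python
# Reserved DOS device names from the list above.
RESERVED = {"CON", "PRN", "AUX", "NUL"} \
    | {f"COM{i}" for i in range(1, 10)} \
    | {f"LPT{i}" for i in range(1, 10)}

def is_reserved(filename: str) -> bool:
    """True if the name (ignoring case and extension) is a reserved device name."""
    stem = filename.split(".")[0].upper()
    return stem in RESERVED

print(is_reserved("con.txt"))     # True
print(is_reserved("report.txt"))  # False
```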
What is QoS?
QoS stands for Quality of Service. With QoS we can reserve bandwidth for certain applications.
What is NAT?
NAT stands for Network Address Translation. It is a device between the Internet (i.e., the public network) and our private network. On one NIC it has a valid Internet address; on the other NIC it has our private (internal) network address. NAT translates one valid public IP address to multiple internal private addresses.
We load the Windows 2000 RRAS (Routing and Remote Access Service) onto a Windows 2000 server and turn it into a router, then add the NAT protocol. From then on our internal clients send their traffic through this router to the Internet; as it passes through the NAT server, the server strips off the internal network IP address and substitutes a valid public IP address. The traffic goes out and communicates using that valid public IP address; when it comes back, the NAT server strips off the public IP address, replaces the private IP address, and sends the traffic back to that particular client.
From the client's perspective they don't know anything except that they are surfing the Internet.
Note: Windows 2000 NAT can act as a DHCP server, so it is possible to hand out IP addresses with our NAT server. When you are doing this, make sure that you don't have another DHCP server in your network. If you have few clients (5 or 6) then there is no harm assigning IP addresses through NAT, but if your network is big then it is best to use DHCP.

Note: A NAT server contains at least two NICs: one for the internal IP address and another for the external (public) IP.
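The address-swapping described above can be illustrated with a toy translation table. This is a deliberately simplified Python sketch (not Windows RRAS code); the public address and port range are made-up examples:

```python
# Toy sketch of NAT port translation: each internal (ip, port) flow is mapped
# to a unique port on the single public address, and back again on return.
import itertools

class Nat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._next_port = itertools.count(50000)  # assumed ephemeral range
        self.outbound = {}  # (private_ip, private_port) -> public_port
        self.inbound = {}   # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.outbound:
            pub = next(self._next_port)
            self.outbound[key] = pub
            self.inbound[pub] = key
        return (self.public_ip, self.outbound[key])

    def translate_in(self, public_port: int):
        return self.inbound[public_port]

nat = Nat("203.0.113.10")  # example public address (RFC 5737 test range)
src = nat.translate_out("192.168.0.5", 1234)
print(src)                       # outbound traffic now carries the public IP
print(nat.translate_in(src[1]))  # the reply is mapped back to the client
```

The round trip mirrors the note: outbound packets lose the private address, inbound replies regain it, and the client never sees the translation.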
What is a proxy?
A NAT server helps the client access the Internet, whereas a proxy server does everything for the client. When a request comes from the client, the proxy server surfs the Internet, caches the results to its local disk, and sends the result to the client.
With a proxy we get a performance improvement, because results are cached to the local hard disk. With a proxy we also get security, because only one system in the internal network communicates with the Internet. Rather than allowing clients to access the Internet directly, the proxy server does all the surfing for clients, caches it to its local disk, and gives it to the clients.
Note: An IP address is assigned to every device that you want to access on the network, and each has a unique IP address. A client, a server, every interface of a router, a printer: all devices on the network should have an IP address to communicate in the network.

Note: The tracert command traces the route (path) over which we are connecting. Pathping is a combination of tracert and ping; it displays the path and some other information.
Note: With RIP version 1 we cannot do CIDR/VLSM. To transfer the routing table to all routers, RIP version 1 uses broadcast. With RIP version 2 we can do CIDR; to transfer the routing table to all routers, RIP version 2 uses multicast. With version 2 we also have password authentication for routing-table transfers.
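The CIDR/VLSM idea (carving a network into subnets with non-classful mask lengths) can be illustrated with Python's standard ipaddress module; the 192.168.0.0/24 network here is just an example:

```python
# Sketch of CIDR subnetting with Python's ipaddress module.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")

# Split the /24 into four /26 subnets; VLSM would mix different prefix
# lengths, which classful RIP v1 cannot advertise but RIP v2 can.
subnets = list(net.subnets(new_prefix=26))
for s in subnets:
    print(s, "-", s.num_addresses, "addresses")
```

Each /26 carries 64 addresses (62 usable hosts); RIP v1 could only advertise the whole classful network, while RIP v2 carries the prefix length with each route.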
What is VPN?
VPN stands for Virtual Private Network. Using public media, we establish a private, secure connection. To communicate through a VPN we use PPTP (Point-to-Point Tunneling Protocol) or L2TP (Layer 2 Tunneling Protocol). In most cases we use L2TP because it is more secure. The main case for using PPTP is when we are trying to use a VPN through a NAT server; another reason is if you don't have Windows clients capable of establishing an L2TP VPN connection.
RADIUS
RADIUS stands for Remote Authentication Dial-In User Service. It is used to authenticate remote users. Instead of authenticating users at individual RAS servers, we pass the request to a central server (the RADIUS server) and let the authentication happen there. All RAS servers pass authentication requests to this central server, which authenticates users based on Active Directory. It also does reporting, so it handles both accounting and authentication. With RADIUS, authentication takes place at a central location, so there is no need to maintain a local database of users on each RAS server; whenever authentication is needed, the RAS server forwards the query to the RADIUS server.
Accounting means we keep track of who is connected, for how long, why they failed to connect, and so on; this information is all centralized.
By centralizing accounting and authentication we turn our RAS servers into dumb devices. So when a RAS server fails, there is no need to worry about the 100 or 1000 accounts created manually on the RAS server; all you need to do is swap out the device with another and configure it to pass authentication to the RADIUS server.
Note: Terminology-wise, the central server is the RADIUS server; the clients of RADIUS are the RAS servers.

Note: Put your RAS servers close to the clients. Put your RADIUS server close to the Active Directory database.
Note: If you run dcpromo on a member server it will become a domain controller; if you run dcpromo on a domain controller it will be demoted to a member server; and if you run dcpromo on the last domain controller it will become a standalone server.

Note: File size is always less than or equal to file size on disk, except when the file is compressed; a compressed file's size can be greater than its size on disk.

The data replicated between domain controllers is called directory data, also called a naming context. Once a domain controller has been established, only changes are replicated.

Note: Each domain controller keeps a list of other known domain controllers and the last USN received from each controller.

The DNS IP address and computer name are stored in Active Directory for Active Directory-integrated DNS zones and replicated to all local domain controllers. DNS information is not replicated to domain controllers outside the domain.
Note: If you want, you can change the port number, but generally we don't change it. If you have changed the port number, then when typing the URL you have to type the port number after the URL. If you haven't typed any port, by default it takes the port number as 80.
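The port-in-URL rule can be seen with Python's standard urllib.parse; the host names here are made-up examples:

```python
from urllib.parse import urlsplit

for url in ("http://intranet.example.com/", "http://intranet.example.com:8080/"):
    parts = urlsplit(url)
    # .port is None when the URL omits it; HTTP then defaults to 80.
    print(parts.hostname, parts.port or 80)
```

The first URL falls back to the default port 80, while the second explicitly carries 8080, matching the note above.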
What is NetBIOS?
NetBIOS stands for Network Basic Input Output System. It is a naming interface by which clients can access network resources. It manages data transfer between nodes on a network.

Note: Computer names are not the only names registered as NetBIOS names; a domain name can be registered as a NetBIOS name, and any service on the network can be registered as a NetBIOS name, for example the Messenger service.
Note: Communication in the network happens IP address to IP address, and ultimately MAC address to MAC address.

Note: A UNIX host does not have the ability to register itself in the WINS database. If a UNIX server is on the network and you need to resolve it, you must manually configure an entry for that UNIX server on the WINS server.
What is the location of the lmhosts file (LAN Manager Hosts file) in Windows 2000?
%SystemRoot%\System32\Drivers\Etc
Note: Windows 2000 doesn't use WINS for its naming structure; it uses DNS. The only time that you need WINS in a Windows 2000 environment is when you want to resolve NetBIOS-based resources such as an NT file server. In a native Windows 2000 environment there is no need to use WINS.

Note: You can configure as many WINS servers as you want on the network. It does not matter which client uses which WINS server, but all WINS servers should be configured to replicate their data with each other.
How do you configure WINS servers to replicate their database with other WINS servers on the network?
Open the WINS MMC -> right-click Replication Partners -> select New Replication Partner -> give the IP address of the other WINS server -> click OK.
First create a shared folder and put installation files on that shared folder.
What is the program that is used to create .msi files when .msi files are notavailable?Wininstall
Note: With Assign we can install a package in 3 ways, whereas with Publish we can install in 2 ways.

Note: With Assign you get more functionality than with Publish, so whenever Assign is possible, choose Assign.
Note: Only when you have a .msi file can you repair or upgrade that application. With .zap you cannot do either.
Note: For Disk Management in Windows 2003 you can use command line tooldiskpart.exe (New feature in Windows 2003). For more details typediskpart.exe at command prompt and then type “?”.
Note: By default, Search doesn't display hidden files. If you want to search hidden files as well, you can enable this by modifying the following key in the registry:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer
Here, find the hidden-files value, click on it, and change the value from 0 to 1.
Read: Users can see the names of files and subfolders in a folder and view folder attributes, ownership, and permissions. Users can open and view files, but they cannot change files or add new files.

List Folder Contents: Users can see the names of files and subfolders in the folder. However, users cannot open files to view their contents.

Read & Execute: Users have the same rights as those assigned through the Read permission, as well as the ability to traverse folders. Traverse-folder rights allow a user to reach files and folders located in subdirectories, even if the user does not have permission to access portions of the directory path.
How do you determine the operating system type that you are working on?
Right-click My Computer and select Properties; on the General tab you can see the operating system type and version.

ADSI Edit:
When you open ADSI Edit you can see 3 directory partitions, i.e., the domain partition, the configuration partition, and the schema partition. Under these you can see the CNs and distinguished names of different objects.
Note: By using cluster Administrator you can configure, control, manage andmonitor clusters.
Suppose you have deleted Active Directory Users and Computers from Administrative Tools; how do you restore it?
Start -> Programs -> right-click Administrative Tools -> select Open All Users -> right-click in the window -> New -> Shortcut -> click Browse -> My Computer -> C:\Windows\System32 -> select dsa.msc -> click OK -> give the name as Active Directory Users and Computers -> click OK.
Note: You can add all snap-ins to Administrative Tools this way.
Note: The same procedure applies for placing anything in the Start menu: just right-click the parent folder, select Open All Users, and create a shortcut there.
How can I quickly find all the listening or open ports on my computer?
Usually, if you want to see all the used and listening ports on your computer, you'd use the NETSTAT command.
Open a command prompt and type:
C:\WINDOWS>netstat -an |find /i "listening"
This command displays all listening ports.
C:\>netstat -an |find /i "listening" > c:\openports.txt
This command redirects the output to the file openports.txt on the C drive.
C:\>netstat -an |find /i "established"
This command is used to see which ports your computer is actually communicating on.
Note: Suppose you have some roles on a domain controller. Without transferring the roles to another domain controller, you demote the domain controller to a member server with the dcpromo command. Then what will happen?
When you demote a domain controller that holds roles using dcpromo, during the demotion the roles are transferred to the nearest domain controller.
How to change the priority for DNS SRV records in the registry?
To prevent clients from sending all requests to a single domain controller, the domain controllers are assigned a priority value. Clients always send requests to the domain controller that has the lowest priority value. If more than one domain controller has the same value, the clients randomly choose from the group of domain controllers with the same value. If no domain controllers with the lowest priority value are available, the clients send requests to the domain controller with the next highest priority. A domain controller's priority value is stored in the registry. When the domain controller starts, the Net Logon service registers the priority value along with the rest of its DNS information. When a client uses DNS to discover a domain controller, the priority for a given domain controller is returned to the client with the rest of the DNS information. The client uses the priority values to help determine which domain controller to send requests to.
The value is stored in the LdapSrvPriority registry entry. The default value is 0 and it can range from 0 through 65535.
Note: A lower value entered for LdapSrvPriority indicates a higher priority. A domain controller with an LdapSrvPriority setting of 100 has a lower priority than a domain controller with a setting of 10; therefore, clients attempt to use the domain controller with the setting of 10 first.
To change the priority for DNS SRV records in the registry:
Log on as a Domain Admin -> Start -> Run -> regedit -> HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters -> click Edit -> New -> DWORD Value -> for the new value name, type LdapSrvPriority -> press Enter -> double-click the value name you just typed to open the Edit DWORD Value dialog box -> enter a value from 0 through 65535 (the default value is 0) -> choose Decimal as the Base option -> click OK -> close the Registry Editor.
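As a sketch, the same registry value could be captured in a .reg file for import with regedit. The value 10 (decimal) here is an arbitrary example priority, not a recommendation; lower means higher priority:

```reg
Windows Registry Editor Version 5.00

; Example only: set LdapSrvPriority to 10 decimal (0x0000000a).
; Lower value = higher priority for this domain controller's SRV records.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters]
"LdapSrvPriority"=dword:0000000a
```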
What is the switch that is used to restart in Directory Services Restore Mode in the boot.ini file?
Use the following switch along with the path: /safeboot:dsrepair (this switch may be available in Windows 2003 only).
Note: If the functional level is Windows Server 2003, then you get all the features that are available with 2003. When Windows NT or Windows 2000 domain controllers are included in your domain or forest alongside domain controllers running Windows Server 2003, Active Directory features are limited.

Note: Once you raise the domain or forest functional level, you cannot revert back.
Advantages of different functional levels:
Whenever you are in Windows 2000 mixed mode, the advantage is that you can use Windows NT, 2000, and 2003 domain controllers. The limitations are:
* You cannot create universal groups.
* You cannot nest groups.
* You cannot convert groups (i.e., conversion between security groups and distribution groups).
* Some additional dial-in features are disabled.
* You cannot rename the domain controller.
* SID history is disabled.
What is teaming?
Teaming is the concept of combining two or more LAN cards for more speed. For n LAN cards there is only one IP address. By teaming you can increase speed; for example, if you team 5 LAN cards of 100 Mbps, your network speed is 500 Mbps.
Note: You can assign one IP address to n LAN cards, and at the same time you can assign n IP addresses to one LAN card.
ADMT 2.0 has many new features, such as a command-line interface and a better interface for working with Microsoft Exchange Server. ADMT also supports user-account password migration.

How do you restart Active Directory Domain Services? Take the following steps to restart Active Directory Domain Services:
Start the Services console through Start > Administrative Tools > Services.
Configuration partition: This partition stores the logical structure of the forest deployment. It includes the domain structure and replication topology. Changes made in this partition are replicated to all the domain controllers in all the domains in the forest.

Domain partition: This partition stores all the objects in a domain. Changes made in this partition are replicated to all the domain controllers within the domain.
What is a GPO?
A Group Policy Object (GPO) is a collection of group policy settings. It can be created using a Windows utility known as the Group Policy snap-in. A GPO affects the user and computer accounts located in sites, domains, and organizational units (OUs). The Windows 2000/2003 operating systems support two types of GPOs: local and non-local (Active Directory-based) GPOs.

Local GPOs
Local GPOs are used to control policies on a local server running Windows 2000/2003 Server. A local GPO is stored on each Windows 2000/2003 server. The local GPO affects only the computer on which it is stored. By default, only the Security Settings nodes are configured; the rest of the settings are either disabled or not enabled. The local GPO is stored in the %systemroot%\System32\GroupPolicy folder.

Non-local GPOs
Non-local GPOs are used to control policies on an Active Directory-based network. A Windows 2000/2003 server needs to be configured as a domain controller on the network to use a non-local GPO. Non-local GPOs must be linked to a site, domain, or organizational unit (OU) to apply group policies to user or computer objects. Non-local GPOs are stored in %systemroot%\SYSVOL\<domain name>\Policies\<GPO GUID>\Adm, where <GPO GUID> is the GPO's globally unique identifier. Two non-local GPOs are created by default when Active Directory is installed:
Default Domain Policy: This GPO is linked to the domain and it affects all usersand computers in the domain.
Default Domain Controllers Policy: This GPO is linked to the Domain ControllersOU and it affects all domain controllers placed in this OU.
What is the GPMC tool? The Group Policy Management Console (GPMC) is a tool for managing group policies in Windows Server 2003.
System Monitor can also be used to monitor the resource use of specificcomponents and program processes.
What is the SQL Server: General Statistics: User Connections counter?
The SQL Server: General Statistics: User Connections counter displays the number of user connections in SQL Server. Its maximum value is 255. An increase in the value of the counter causes performance problems and affects throughput. A database administrator should monitor this counter to resolve performance issues.

What is Simple Mail Transfer Protocol (SMTP)?
Simple Mail Transfer Protocol (SMTP) is a protocol used for sending e-mail messages between servers. It is mostly used to send messages from a mail client such as Microsoft Outlook to a mail server. Most of the e-mail systems that send mail over the Internet use SMTP to send messages from one server to another. Due to its limitations in queuing messages at the receiving end, it is generally used with either the POP3 or IMAP protocol, which enables a user to save and download messages from the server.
Failback When the failed node returns back to the network, other nodes take notice and the cluster begins to use the restored node again. This phenomenon is called failback.
• Server clusters • Network Load Balancing (NLB)
Server Clusters
In server clusters, all nodes are connected to a common data set, such as a storage area network. All nodes have access to the same application data. Any of these nodes can process a request from a client at any time. Nodes can be configured as either active or passive. Only an active node can process requests from clients. In the event of a failure of the active node, the passive node takes charge and becomes active; otherwise, the passive node remains idle.

Server clusters are created for running applications that have frequently changing data sets and long-running in-memory states. Applications such as database servers, e-mail and messaging servers, and file and print services can be included in server clusters.

A server cluster is treated as a single destination by a client. It has its own name and IP address. This address is different from the individual IP addresses of the servers in the cluster. Hence, when any server fails in the cluster, the passive server becomes active. Clients send their requests to the server cluster address, so this changeover does not affect the functionality of the cluster.
Windows Server 2003 supports eight nodes in a cluster. However, Windows 2000Server supports only two nodes in a cluster.
In an NLB cluster, all nodes are active and have separate, identical data sets. Multiple servers (or nodes) are used to distribute the load of processing data. Clients send requests to the cluster, and the clustering software distributes incoming client requests among the nodes. If a node fails, the clients' requests are served by the other nodes. Network Load Balancing is highly scalable. Both the Windows 2003 and Windows 2000 operating systems support NLB clusters of up to thirty-two nodes.

What is DNS namespace?
The DNS namespace is the hierarchical structure of the domain name tree. It is defined such that the names of all similar components must be similarly structured, yet individually identifiable. A full DNS name must point to a particular address. Consider the following image of the DNS namespace of the Internet:

salessrv1 and salessrv2 are host names of the hosts configured in the sales.ucertify.com domain. The fully qualified domain name (FQDN) of the host salessrv1 is salessrv1.sales.ucertify.com. No two hosts can have the same FQDN.
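The FQDN example above is just the host label joined to its domain labels with dots; a tiny Python sketch (hypothetical helper) makes that concrete:

```python
# Build an FQDN from a host label plus its domain labels, as in the
# salessrv1.sales.ucertify.com example above.
def fqdn(host: str, *domains: str) -> str:
    return ".".join((host, *domains))

print(fqdn("salessrv1", "sales", "ucertify.com"))
# salessrv1.sales.ucertify.com
```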
ADSI Edit consists of two files:
• ADSIEDIT.DLL
• ADSIEDIT.MSC
Regarding system requirements, a connection to an Active Directory environment and the Microsoft Management Console (MMC) are necessary.

What are group scopes? The scope of a group defines two characteristics: the possible membership of the group and the resources to which the group can be granted permissions.
Domain Local: Domain local groups are used to assign permissions to localresources such as files and printers. Members can come from any domain.
Global: Members of this group can access resources in any domain. Memberscan only come from the local domain.
Universal: Members can be added from any domain in the forest. Members canaccess resources from any domain. Universal groups are used for managing thesecurity across domains. Universal groups can also contain global groups.Universal groups are only available in the domains having functional levelWindows 2000 native or Windows Server 2003.
What is System File Checker utility? The System File Checker utility is used to verify the integrity of the operating system files, to restore them if they are corrupt, and to extract compressed files (such as drivers) from installation disks. It can also be used to backup the existing files before restoring the original files.
PATHPING: PATHPING is a command-line utility that pings each hop along theroute for a set period of time and shows the delay and packet loss along withthe tracing functionality of TRACERT, which helps determine a weak link in thepath.
• Memory: Pages/sec
• Memory: Available Bytes
• SQL Server: Buffer Manager: Buffer Cache Hit Ratio
• Physical Disk: Disk Reads/sec
• Physical Disk: Disk Writes/sec
• Physical Disk: % Disk Time
• Physical Disk: Avg. Disk Queue Length
• Physical Disk: % Free Space
• Logical Disk: % Free Space
• Processor: % Processor Time
• System: Processor Queue Length
• Network Interface: Bytes Received/sec
• Network Interface: Bytes Sent/sec
• Network Interface: Bytes/sec
• Network Interface: Output Queue Length
• SQL Server: General Statistics: User Connections
Tip for server roles. There are eight fixed server roles in SQL Server. These roles are as follows:
• sysadmin
• dbcreator
• bulkadmin
• diskadmin
• processadmin
• serveradmin
• setupadmin
• securityadmin
Network ProtocolsProtocol is a set of rules and conventions by which two computers passmessages across a network. Sets of standard protocols facilitate communicationbetween the computers in a network having different types of hardware andsoftware. Both the sender and the receiver computers must use exactly thesame set of protocols in order to communicate with each other. A protocol canlay down the rules for the message format, timing, sequencing, and errorhandling.
Protocol   Description
IP         Internet Protocol (IP) is a connectionless network-layer protocol that is the primary carrier of data on a TCP/IP network.
TCP        Transmission Control Protocol (TCP) is a reliable, connection-oriented protocol operating at the transport layer. This protocol can transmit large amounts of data. Application-layer protocols, such as HTTP and FTP, utilize the services of TCP to transfer files between clients and servers.
UDP        User Datagram Protocol (UDP) is a connectionless, unreliable transport-layer protocol. UDP is used primarily for brief exchanges of requests and replies.
Telnet     Telnet is a protocol that enables an Internet user to log onto and enter commands on a remote computer linked to the Internet, as if the user were using a text-based terminal directly attached to that computer.
FTP        File Transfer Protocol (FTP) is a primary protocol of the TCP/IP protocol suite, used to transfer text and binary files between computers over a TCP/IP network.
SMTP       Simple Mail Transfer Protocol (SMTP) is used for transferring or sending e-mail messages between servers.
POP3: Post Office Protocol version 3 (POP3) is a protocol used for retrieving e-mail messages. POP3 servers allow access to a single Inbox, in contrast to IMAP servers, which provide access to multiple server-side folders.
IMAP: Internet Message Access Protocol (IMAP) is a protocol for receiving e-mail messages. It allows an e-mail client to access and manipulate a remote e-mail file without downloading it to the local computer. It is used mainly by users who want to read their e-mails from remote locations.

PPTP: Point-to-Point Tunneling Protocol (PPTP) is an encryption protocol used to provide secure, low-cost remote access to corporate networks through public networks such as the Internet. Using PPTP, remote users can use PPP-enabled client computers to dial a local ISP and connect securely to the corporate network through the Internet.
Advantages of IMAP4 over the POP3 protocol are as follows:

• IMAP4 can be used to download only specific mails from the mail server, while POP3 downloads all the mails from the mail server at once. • IMAP4 can initially download only a part of the message (e.g., the header); depending on the user, the entire message can be downloaded afterwards. POP3, however, downloads the entire message at once. • IMAP4 initially only marks a message as deleted; the message is actually removed when the user logs off or sends the EXPUNGE command to the mail server. • IMAP4 supports server-side storage, so the location of the user is insignificant. POP3, however, uses a local client application to read the mails. • Since IMAP4 stores messages on the server side, the user does not have to worry about fault tolerance and system crashes. When the POP3 protocol is used, messages downloaded from the server are stored locally and can be lost if the local system crashes. • IMAP4 allows a user to create multiple mailboxes on multiple servers under the same user name. The user can personalize these mailboxes for receiving specific kinds of mails in each mailbox. POP3, however, allows only a single user account to be configured. • Changes made to a mail are propagated back to the IMAP4 server. This feature is not available under the POP3 protocol.
However, there are some disadvantages of IMAP4 over the POP3 protocol,which are as follows:
• If the connection with the mail server drops while reading a mail under IMAP4, it has to be re-established. POP3, on the other hand, downloads the entire mail at once, so a dropped connection does not interrupt reading. • The POP3 protocol is more widely supported by commercially available mail servers. • Since the mails in IMAP4 are stored on the server, storage space management is a primary concern on such mail servers.
There are two versions of IP addressing, the commonly used IPv4 and the latest version known as IPv6. They are discussed in detail in the following paragraphs.
IPv4
IP Address: In this version of IP addressing, an IP address is 32 bits in length and is divided into four 8-bit decimal values known as octets. Within each octet, the leftmost bit has the value 128, followed by 64, 32, 16, 8, 4, 2, and 1. Each octet can hold values from 0 to 255 because each bit can be either a 0 or a 1: if all the bits are 1, the value is 255; if all the bits are 0, the value is 0.
Subnet Mask A subnet mask determines which part of the IP address denotesthe network id and which part is the host id. It is also a 32-bit number, which isexpressed in decimal format. The subnet mask is assigned according to theclass of IP address used.
• In class A addresses, only the first octet is used to define the network id, and the rest are used for the host id. Class A has the address range 1 to 126, so only 126 networks are possible, each with up to 16,777,214 hosts. It uses the subnet mask 255.0.0.0. • In class B networks, the first two octets represent the network id and the rest the host id. Class B has the range 128-191 and can have 16,384 networks with 65,534 hosts each. The standard subnet mask assigned to these IP addresses is 255.255.0.0.
• In class C addresses, the first three octets are used to represent the network id. Class C has the range 192-223 and can have 2,097,152 networks with 254 hosts each. The subnet mask associated with it is 255.255.255.0.
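The network/host split described above can be sketched with Python's standard `ipaddress` module. The address below is an illustrative class C address, not one from the text:

```python
import ipaddress

# Derive the network ID and usable host count from an address plus its
# classful default mask (class C -> 255.255.255.0). Illustrative values.
iface = ipaddress.ip_interface("192.168.5.130/255.255.255.0")
net = iface.network

print(net.network_address)    # network ID: 192.168.5.0
print(net.num_addresses - 2)  # usable hosts: 254 (excludes network/broadcast)
```

Note that subtracting 2 accounts for the network and broadcast addresses, which is why a class C subnet holds 254 hosts rather than 256.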
IPv6
The current version of IP addressing (i.e., IPv4) has its limitations. With the fast-increasing number of networks and the expansion of the World Wide Web, the allotted IP addresses are running out fast, and the need for more network addresses has arisen. IPv6 can solve this problem, as it uses a 128-bit address that can produce far more IP addresses. These addresses are hexadecimal numbers made up of eight 16-bit groups. An example of an IPv6 address is 45CF:6D53:12CD:AFC7:E654:BB32:543C:0000.
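The eight-group, 128-bit structure can be verified with `ipaddress`. The final `:0000` group below is an assumption added to complete the text's seven-group example:

```python
import ipaddress

# Parse the (completed) IPv6 address from the text and inspect its structure.
addr = ipaddress.ip_address("45CF:6D53:12CD:AFC7:E654:BB32:543C:0000")

print(addr.version)        # 6
print(addr.exploded)       # full eight-group hexadecimal form
print(addr.max_prefixlen)  # 128 bits of address space
```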
Suppose that a company has been assigned the class C IP address 200.1.1.0, and the standard subnet mask is 255.255.255.0. This means that the network id will be 200.1.1 and the total number of hosts will be 254. The company has two departments: production and sales. Members of the production department do not need to access the computers of the sales department, so it is better to have separate networks for the two departments for better security and manageability. Through subnetting, bits from the host id portion can be used to create more networks, which will work as separate networks.
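The split described above (one /24 divided for two departments) can be sketched as:

```python
import ipaddress

# Borrow one host bit from 200.1.1.0/24 to create two subnets,
# one for production and one for sales, as in the example above.
network = ipaddress.ip_network("200.1.1.0/24")
production, sales = network.subnets(prefixlen_diff=1)

print(production)                    # 200.1.1.0/25
print(sales)                         # 200.1.1.128/25
print(production.num_addresses - 2)  # 126 usable hosts in each subnet
```

Borrowing one bit halves the host space: each department gets 126 usable hosts instead of the original 254.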
Some addresses from each of the classes A, B, and C have been reserved for use by private networks. The range for class A addresses is from 10.0.0.0 to 10.255.255.255, for class B addresses it is from 172.16.0.0 to 172.31.255.255, and for class C addresses it is from 192.168.0.0 to 192.168.255.255.
IP Addressing Methods:
Static Addressing: In static addressing, every computer is assigned an IP address manually. It is not preferred in large networks with many hosts, because the chance of assigning duplicate addresses is higher. This results in IP address conflicts and deterioration of speed. It is also time consuming, since every system is configured manually; if changes are to be made afterwards, doing so manually for every computer consumes a lot of time.
TCP/UDP Ports: The default TCP/UDP ports associated with TCP/IP protocols and applications are as follows:
Protocol        Port
HTTP            80
HTTPS           443
POP3            110
FTP (data)      20
FTP (control)   21
IMAP4           143
SMTP            25
NNTP            119
NTP             123
DNS             53
TFTP            69
Telnet          23
SSH             22
What are cluster configurations? Server clusters using the Cluster service can be set up as one of the following three cluster configurations: a single node cluster, a single quorum device cluster, or a majority node set cluster.
However, server clusters using the Cluster service are set up depending on thespecific needs for failovers, in which application services are moved to anothernode in the cluster.
What is N+I Hot Standby Server? N+I Hot Standby Server is one of the failover models. It is commonly referred to as an Active/Passive mode. In an active/passive mode, the active nodes handle all client requests, whereas the passive nodes monitor the active nodes. In N+I Hot Standby Server, N denotes the number of active nodes, and I refers to the number of passive nodes. This model has a drawback that the server resources remain idle for a long time and are utilized only when another server fails. However, it is the most scalable and reliable model.
In a large, complex network, the Active Directory service provides a single point of management for administrators by placing all the network resources in a single place. It allows administrators to effectively delegate administrative tasks as well as facilitate fast searching of network resources. It is easily scalable, i.e., administrators can add a large number of resources to it without additional administrative burden. This is accomplished by partitioning the directory database, distributing it across other domains, and establishing trust relationships, thereby providing users with the benefits of decentralization while maintaining centralized administration.
What are domain functional levels? The domain functional levels are the various states of a domain, which enable domain-wide Active Directory features within a network environment. Domain functional levels are the same as domain modes in Windows 2000. Windows supports four functional levels:
1. Windows 2000 Mixed: This is the default domain functional level. When a first domain controller is installed or upgraded to Windows 2003, the domain controller is configured to run in the Windows 2000 mixed functional level. In this mode, domain controllers running the following operating systems are supported:
   o Windows NT Server 4.0
   o Windows 2000 Server
   o Windows Server 2003
2. Windows 2000 Native: In this level, domain controllers running Windows 2000 and Windows 2003 can interact with each other. No domain controller running a pre-Windows 2000 version is supported in this functional level of the domain.
3. Windows Server 2003 Interim: This level supports domain controllers running Windows NT Server 4.0 and Windows Server 2003. It is used when upgrading directly from a Windows NT 4.0 domain.
4. Windows Server 2003: This is the highest functional level; only domain controllers running Windows Server 2003 are supported, and all Windows Server 2003 domain features become available.
Note: Windows Server 2003 interim functional level does not support domain controllers running Windows 2000.
What does the DCDIAG tool test? DCDIAG analyzes the state of domain controllers and reports on the following:
• Connectivity
• Replication
• Integrity of topology
• Permissions on directory partition heads
• Permissions of users
• Functionality of the domain controller locator
• Consistency among domain controllers in the site
• Verification of trusts
• Diagnosis of replication latencies
• Replication of trust objects
• Verification of File Replication service
• Verification of critical services

Note: DCDIAG is an analyzing tool, which is mostly used for reporting purposes. Although this tool allows specific tests to be run individually, it is not intended as a general toolbox of commands for performing specific tasks.
What are Windows 2003 system services? Windows Server 2003 comes with many system services that have different functionalities in the operating system. When Windows Server 2003 is first installed, the default system services are created and are configured to run when the system starts.
• Alerter
• Automatic Updates
• Cluster Service
• DHCP
• Distributed File System
• DNS Client service
• DNS Server service
• Event Log service
• Remote Installation
• Remote Procedure Call (RPC)
• Routing and Remote Access
What is a paging file? A paging file is a hidden file on the hard disk used by Windows operating systems to hold parts of programs and data that do not fit in the computer's memory. The paging file and the physical memory, or random access memory (RAM), comprise the virtual memory. Windows operating systems move data from the paging file to the memory as required and move data from the memory to the paging file to make room for new data. A paging file is also known as a swap file.
What is ADPREP tool? The ADPREP tool is used to prepare Windows 2000 domains and forests for an upgrade to Windows Server 2003. It extends the schema, updates default security descriptors of selected objects, and adds new directory objects as required by some applications.
Parameter      Description
/forestprep    Prepares a Windows 2000 forest for an upgrade to a Windows Server 2003 forest.
/domainprep    Prepares a Windows 2000 domain for an upgrade to a Windows Server 2003 domain.
/?             Displays help for the command.
Which files are included in the System State data? The System State data includes the registry, the COM+ Class Registration database, and the boot and system files. On a certificate server it also includes the Certificate Services database; on a domain controller, the Active Directory database and the SYSVOL directory; on a cluster node, the cluster service information; and on a Web server, the IIS metabase.

What is Performance Logs and Alerts? Performance Logs and Alerts is a tool that records performance counter information to log files over a period of time. The prime benefit of this tool is the ability to capture performance counter information for further analysis. Performance Logs and Alerts runs as a service and loads during computer startup. It does not require a user to log on to the computer.
Copy Backups A copy backup copies all selected files and folders. It neither uses nor clears the archive attribute of the files. It is generally not a part of a planned scheduled backup.
Daily Backups A daily backup backs up all selected files and folders that have changed during the day. It backs up data by using the modified date of the files. It neither uses nor clears the archive attribute of the files.
The most common backup solutions for the needs of different organizations combine normal, differential, and incremental backups.

Combination of Normal and Differential Backups: An administrator can use a combination of a normal backup and a differential backup to save time both in taking a backup and in restoring data. In this plan, a normal backup can be taken on Sunday, and differential backups can be taken Monday through Friday every night. If data becomes corrupt at any time, only the normal and the last differential backup need to be restored. Although this combination is easier and takes less time for restoration, it takes more time for backup if data changes frequently.
Sites are created to physically group computers and resources to optimize network traffic. Administrators can configure Active Directory access and replication to take advantage of the physical network by configuring sites. When a user logs on to the network, the authentication request searches for domain controllers in the same site as the user. A site prevents network traffic from traveling over slow wide area network (WAN) links.
Note: Windows Server 2003 supports a new type of directory partition named the application directory partition. This partition is available only to Windows 2003 domain controllers. Applications and services use this partition to store application-specific data.
For intrasite replication to take place, connection objects are required. Active Directory automatically creates and deletes connection objects as and when required. Connection objects can also be created manually to force replication.
What are Site Links? Site links are logical, transitive connections between two or more sites. For intersite replication to take place, site links must be configured. Once a site link has been configured, the knowledge consistency checker (KCC) automatically generates the replication topology by creating the appropriate connection objects. Site links are used to determine the paths between two sites. They must be created manually.
Site links are transitive in nature. For example, if Site 1 is linked with Site 2 and Site 2 is linked with Site 3, then Site 1 and Site 3 are linked transitively. Administrators can control the transitivity of the site link. By default, transitivity is enabled. Site link transitivity can be enabled or disabled through a bridge.
What is a Site Link Bridge? A site link bridge is created to build a transitive, logical link between two sites that do not have an explicit site link. A site link bridge is needed only when site link transitivity is disabled.
What is Site Link Cost? Site link cost is an attribute of a site link. Each site link is assigned a default cost of 100. The knowledge consistency checker (KCC) uses the site link cost to determine which site links should be preferred for replication. Remember that the lower the site link cost, the more preferred the link.
For example, an administrator has to configure the site link cost of the links between Site 1 and Site 2. There are two site links available, as shown in the image below: S1S2 is a T1 site link that uses T1 lines for replication, whereas S1S2DU uses a dial-up connection for replication. If the administrator requires that the KCC prefer the S1S2 site link to the S1S2DU site link for replication, he will have to configure the S1S2 link with a lower cost than that of the S1S2DU link. Any site link configured with a site link cost of one (1) will always get preference over other site links with a higher cost.
Which installation modes are available with ISA Server? The following modes are available as part of the ISA Server setup process: Firewall mode, Cache mode, and Integrated mode (which combines the firewall and cache functionality).
Where are the Windows NT Primary Domain Controller (PDC) and its Backup Domain Controller (BDC) in Server 2003? The Active Directory replaces them. Now all domain controllers share a multimaster peer-to-peer read and write relationship that hosts copies of the Active Directory.
How long does it take for security changes to be replicated among the domain controllers? Security-related modifications are replicated within a site immediately. These changes include account and individual user lockout policies, changes to password policies, changes to computer account passwords, and modifications to the Local Security Authority (LSA).
If I delete a user and then create a new account with the same username and password, would the SID and permissions stay the same? No. If you delete a user account and attempt to recreate it with the same user name and password, the SID will be different.

What do you do with secure sign-ons in an organization with many roaming users? The Credential Management feature of Windows Server 2003 provides a consistent single sign-on experience for users. This can be useful for roaming users who move between computer systems. The Credential Management feature provides a secure store of user credentials that includes passwords and X.509 certificates.
Anything special you should do when adding a user that has a Mac? "Save password as encrypted clear text" must be selected on the User Properties Account tab options, since Macs only store their passwords that way.
What remote access options does Windows Server 2003 support? Dial-in, VPN, dial-in with callback.
Where are the documents and settings for the roaming profile stored? All the documents and environmental settings for the roaming user are stored locally on the system, and, when the user logs off, all changes to the locally stored profile are copied to the shared server folder. Therefore, the first time a roaming user logs on to a new system, the logon process may take some time, depending on how large his profile folder is.
Where are the settings for all the users stored on a given machine? \Documents and Settings\All Users
What languages can you use for log-on scripts? JavaScript, VBScript, and DOS batch files (.com, .bat, or even .exe).
What is LSDOU? It is the group policy inheritance model, in which policies are applied in order to Local machines, Sites, Domains, and Organizational Units.
Why doesn’t LSDOU work under Windows NT? If the NTConfig.pol file exists, it has the highest priority among the numerous policies.
What are GPT and GPC? Group policy template and group policy container.

Where is GPT stored? %SystemRoot%\SYSVOL\sysvol\domainname\Policies\GUID
You change the group policies, and now the computer and usersettings are in conflict. Which one has the highest priority? Thecomputer settings take priority.
You need to automatically install an app, but MSI file is not available.What do you do? A .zap text file can be used to add applications usingthe Software Installer, rather than the Windows Installer.
You want to create a new group policy but do not wish to inherit.Make sure you check Block inheritance among the options whencreating the policy.
What is "tattooing" the Registry? The user can view and modify userpreferences that are not stored in maintained portions of the Registry. Ifthe group policy is removed or changed, the user preference will persistin the Registry.
How do FAT and NTFS differ in approach to user shares? They don’t,both have support for sharing.
I have a file to which the user has access, but he has no folder permission to read it. Can he access it? It is possible for a user to navigate to a file for which he does not have folder permission. This involves simply knowing the path of the file object. Even if the user can’t drill down the file/folder tree using My Computer, he can still gain access to the file using the Universal Naming Convention (UNC). The best way to start would be to type the full path of the file into the Run… window.
What problems can you have with DFS installed? Two users opening the redundant copies of the file at the same time, with no file-locking involved in DFS, changing the contents and then saving. Only one file will be propagated through DFS.
What hashing algorithms are used in Windows 2003 Server? RSA Data Security’s Message Digest 5 (MD5), which produces a 128-bit hash, and the Secure Hash Algorithm 1 (SHA-1), which produces a 160-bit hash.
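The quoted digest sizes can be checked with Python's standard `hashlib` module (the sample input is arbitrary; any bytes give digests of the same length):

```python
import hashlib

# Confirm the digest lengths quoted above: MD5 -> 128 bits, SHA-1 -> 160 bits.
data = b"password"  # sample input; digest size does not depend on the input

md5 = hashlib.md5(data)
sha1 = hashlib.sha1(data)

print(md5.digest_size * 8)   # 128
print(sha1.digest_size * 8)  # 160
```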
What is ARP Cache Poisoning? ARP stands for Address Resolution Protocol. Every computer in a LAN has two identifiers: an IP address and a MAC address. The IP address is either entered by the user or dynamically allocated by a server, but the MAC address is unique to each Ethernet card. For example, if you have two Ethernet cards, one wired and one for WiFi, you have two MAC addresses on your machine. The MAC address is a hardware code for your Ethernet card.

Communication between computers is done at the IP level, meaning that if you want to send a file to a computer, you need to know the other computer's IP. ARP is the protocol that matches every IP with a certain MAC address in an ARP table saved on the switch in your LAN. ARP cache poisoning is changing this ARP table on the switch.

In the normal case, when a machine tries to connect to another machine, the first machine consults the ARP table with the other machine's IP, the table provides the MAC address of that machine, and the communication starts. But if someone tampers with the table, the first machine looks up the IP and the ARP table provides a faulty MAC address belonging to a third machine that wants to intrude into your communication. This kind of attack is known as "Man in the Middle".
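The poisoning mechanism can be modeled as a toy Python dictionary standing in for the ARP table; all addresses below are made up, and this only illustrates the lookup redirection, not the protocol itself:

```python
# A toy model of an ARP table (IP -> MAC), showing how a poisoned entry
# silently redirects traffic to an attacker. All addresses are invented.
arp_table = {
    "192.168.1.10": "AA:AA:AA:AA:AA:10",  # the real peer's MAC
    "192.168.1.20": "AA:AA:AA:AA:AA:20",
}

def resolve(ip):
    """Look up the MAC address a frame will be delivered to."""
    return arp_table[ip]

# Normal case: frames for .10 go to its real MAC.
assert resolve("192.168.1.10") == "AA:AA:AA:AA:AA:10"

# Poisoning: the attacker overwrites the mapping with his own MAC.
attacker_mac = "EE:EE:EE:EE:EE:EE"
arp_table["192.168.1.10"] = attacker_mac

# The sender still addresses IP .10, but frames now reach the attacker,
# who can read and forward them: a man-in-the-middle position.
assert resolve("192.168.1.10") == attacker_mac
```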
When it's time to send a packet, your computer delivers a packet a) directly tothe destination computer or b) sends it to the router for ultimate delivery.
But how does your computer know whether the packet's destination is within its subnet? The answer is that your computer uses the subnet mask to determine the members of the subnet. If your computer's address and the destination computer's IP address are in the same subnet address range, then they can send packets directly to each other. If they're not in the same range, then they must send their data through a router for delivery. The chart below associates the number of IP addresses in a subnet with the subnet mask. For example, the subnet mask "255.255.255.0" represents 254 consecutive IP addresses.

Subnet mask        Usable IP addresses
255.255.255.0      254
255.255.255.128    126
255.255.255.192    62
255.255.255.224    30
255.255.255.240    14
255.255.255.248    6
255.255.255.252    2
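The same-subnet decision described above can be sketched in Python with the standard `ipaddress` module (the addresses are illustrative):

```python
import ipaddress

def same_subnet(ip_a, ip_b, mask):
    """Return True if both addresses fall inside the same subnet.

    This is the decision described above: deliver the packet directly
    if True; otherwise hand it to the router.
    """
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

print(same_subnet("192.168.1.5", "192.168.1.200", "255.255.255.0"))  # True
print(same_subnet("192.168.1.5", "192.168.2.7", "255.255.255.0"))    # False
```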
What is APIPA? When a client is unable to contact a DHCP server, Automatic Private IP Addressing (APIPA) will automatically configure the interface with an IP address from a specially reserved range. (This reserved IP address range goes from 169.254.0.0 to 169.254.255.255.)
What is an RFC? Name a few if possible (not necessarily the numbers, just the ideas behind them). A Request For Comments (RFC) document defines a protocol or policy used on the Internet. An RFC can be submitted by anyone. Eventually, if it gains enough interest, it may evolve into an Internet Standard. Each RFC is designated by an RFC number. Once published, an RFC never changes. Modifications to an original RFC are assigned a new RFC number.
What is RFC 1918? RFC 1918 is Address Allocation for Private Internets. The Internet Assigned Numbers Authority (IANA) has reserved the following three blocks of the IP address space for private internets:

10.0.0.0 – 10.255.255.255 (10/8 prefix)
172.16.0.0 – 172.31.255.255 (172.16/12 prefix)
192.168.0.0 – 192.168.255.255 (192.168/16 prefix)

The first block is referred to as the "24-bit block", the second as the "20-bit block", and the third as the "16-bit" block. Note that (in pre-CIDR notation) the first block is nothing but a single class A network number, while the second block is a set of 16 contiguous class B network numbers, and the third block is a set of 256 contiguous class C network numbers.
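Python's `ipaddress` module already knows these private ranges; note that `is_private` is a superset of the RFC 1918 test (it also flags loopback, link-local, and a few other reserved ranges):

```python
import ipaddress

# Check sample addresses from each RFC 1918 block, plus one public address.
results = {
    ip: ipaddress.ip_address(ip).is_private
    for ip in ["10.1.2.3", "172.16.0.9", "192.168.100.1", "8.8.8.8"]
}
print(results)  # the first three are private; 8.8.8.8 is not
```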
What is CIDR? Classless Inter-Domain Routing (CIDR) notation writes a network address followed by the number of network bits, for example:

192.30.250.00/18

The "192.30.250.00" is the network address itself and the "18" says that the first 18 bits are the network part of the address, leaving the last 14 bits for specific host addresses. CIDR lets one routing table entry represent an aggregation of networks that exist in the forward path and don't need to be specified on that particular gateway, much as the public telephone system uses area codes to channel calls toward a certain part of the network. This aggregation of networks in a single address is sometimes referred to as a supernet.

CIDR is supported by the Border Gateway Protocol, the prevailing exterior (interdomain) gateway protocol. (The older exterior or interdomain gateway protocols, Exterior Gateway Protocol and Routing Information Protocol, do not support CIDR.) CIDR is also supported by the OSPF interior or intradomain gateway protocol.
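The /18 arithmetic above (18 network bits, 14 host bits, so 2**14 addresses) can be checked directly; `strict=False` lets `ipaddress` find the enclosing /18 block for the example address:

```python
import ipaddress

# 18 network bits leave 14 host bits: the block spans 2**14 = 16384 addresses.
net = ipaddress.ip_network("192.30.250.0/18", strict=False)

print(net)                # 192.30.192.0/18 — the /18 block containing 192.30.250.0
print(net.num_addresses)  # 16384
```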
You have the following Network ID: 131.112.0.0. You need at least 500 hosts per network. How many networks can you create? What subnet mask will you use? Use the subnet mask 255.255.254.0 (/23). This leaves 9 host bits, giving 2^9 - 2 = 510 usable hosts per network (at least 500), and borrows 7 subnet bits from the class B default /16, so you can create 2^7 = 128 networks.
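The arithmetic can be verified with `ipaddress`:

```python
import ipaddress

# Class B network from the question, default /16.
base = ipaddress.ip_network("131.112.0.0/16")

# Split into /23 subnets: 9 host bits remain, 7 subnet bits are borrowed.
subnets = list(base.subnets(new_prefix=23))

print(len(subnets))                  # 128 networks
print(subnets[0].num_addresses - 2)  # 510 usable hosts each
print(subnets[0].netmask)            # 255.255.254.0
```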
You need to view network traffic. What will you use? Name a few tools. It depends on the type of traffic to monitor and the network design. Fluke Networks OptiView Network Analyzer is a good hardware option. On the software side: Wireshark, Sitrace, Iris Network Traffic Analyzer, Airsnare, Packetcapsa. Backtrack (a Linux live CD) has tons of different applications that you can use to monitor and view network traffic.
How do I know the path that a packet takes to the destination? Use the "tracert" command-line utility.

What is DHCP? What are the benefits and drawbacks of using it? DHCP (Dynamic Host Configuration Protocol) automatically assigns IP addresses and related TCP/IP configuration to clients from a centrally managed pool.
Benefits:
• Centralized management of IP configuration; no need to configure each client by hand
• Avoids the duplicate-address conflicts that manual entry can cause
• Clients that move between subnets are reconfigured automatically
Disadvantages:
• The DHCP server can be a single point of failure unless a second server or split scopes are configured
• Anyone with physical access to the network can plug in a machine and obtain valid IP configuration for the internal network
Your machine name does not change when you get a new IP address. The DNS(Domain Name System) name is associated with your IP address and thereforedoes change. This only presents a problem if other clients try to access yourmachine by its DNS name.
Describe the steps taken by the client and DHCP server in order to obtain an IP address. At least one DHCP server must exist on a network. Once the DHCP server software is installed, you create a DHCP scope, which is a pool of IP addresses that the server manages. When clients log on, they request an IP address from the server, and the server provides an IP address from its pool of available addresses. DHCP was originally defined in RFC 1531 (Dynamic Host Configuration Protocol, October 1993), but the most recent update is RFC 2131 (Dynamic Host Configuration Protocol, March 1997). The IETF Dynamic Host Configuration (dhc) Working Group is chartered to produce a protocol for automated allocation, configuration, and management of IP addresses and TCP/IP protocol stack parameters.
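A minimal sketch of the scope's role, assuming a simple first-free allocation policy; real DHCP performs the full Discover/Offer/Request/Ack exchange over UDP ports 67/68, which this model omits:

```python
import ipaddress

class Scope:
    """Toy DHCP scope: a pool of addresses leased out per client MAC."""

    def __init__(self, network):
        net = ipaddress.ip_network(network)
        self.free = [str(ip) for ip in net.hosts()]
        self.leases = {}  # MAC -> leased IP

    def request(self, mac):
        if mac in self.leases:      # a renewing client keeps its address
            return self.leases[mac]
        ip = self.free.pop(0)       # hand out the next free address
        self.leases[mac] = ip
        return ip

scope = Scope("192.168.0.0/29")     # tiny pool for illustration
print(scope.request("AA:BB:CC:DD:EE:01"))  # 192.168.0.1
print(scope.request("AA:BB:CC:DD:EE:02"))  # 192.168.0.2
print(scope.request("AA:BB:CC:DD:EE:01"))  # 192.168.0.1 again (renewal)
```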
What is the DHCPNACK and when do I get one? Name 2 scenarios. Recently I saw a lot of queries regarding when the Microsoft DHCP server issues a NAK to DHCP clients. For simplification purposes, I am listing the possible scenarios in which the server should NOT issue a NAK. This should give you a good understanding of DHCP NAK behavior.
The DHCP server will issue a NAK to the client ONLY IF it is sure that the client, "on the local subnet", is asking for an address that doesn't exist on that subnet.
1. Requested address from possibly the same subnet but not in the address pool of the server:
This can be the failover scenario in which two DHCP servers serve the same subnet, so that when one goes down, the other should not NAK clients that got an IP from the first server.
What ports are used by DHCP and the DHCP clients? Clients send requests from UDP port 68 to the server's UDP port 67; the server listens on UDP port 67 and replies to the client's UDP port 68.
• Installing DHCP
• Understanding the DHCP lease process
• Creating scopes, superscopes, and multicast scopes
• Configuring the lease duration
• Configuring optional IP parameters that can be assigned to DHCP clients
• Understanding how DHCP interacts with DNS
• Configuring DHCP for DNS integration
• Authorizing a DHCP server in Active Directory
• Managing a DHCP server
• Monitoring a DHCP server

Introduction
The TCP/IP protocol is an Active Directory operational requirement, which means that all computers on the network must be configured with TCP/IP; DHCP is the usual way to distribute that configuration automatically. DHCP does raise security issues, however, because anyone with physical access to the network can plug in a laptop and obtain IP information about the internal network.
In this chapter, you'll learn how to implement a DHCP server, including theinstallation process, authorization of the server, and the configuration of DHCPscopes. The chapter ends by looking at how to manage a DHCP server andmonitor its performance.
Describe the integration between DHCP and DNS. Traditionally, DNS and DHCP servers have been configured and managed one at a time. Similarly, changing authorization rights for a particular user on a group of devices has meant visiting each one and making configuration changes. DHCP integration with DNS allows the aggregation of these tasks across devices, enabling a company's network services to scale in step with the growth of network users, devices, and policies, while reducing administrative operations and costs.
This integration provides practical operational efficiencies that lower total costof ownership. Creating a DHCP network automatically creates an associatedDNS zone, for example, reducing the number of tasks required of networkadministrators. And integration of DNS and DHCP in the same database instanceprovides unmatched consistency between service and management views of IPaddress-centric network services data.
Windows Server 2003 DNS supports DHCP by means of the dynamic update of DNS zones. By integrating DHCP and DNS in a DNS deployment, you can provide your network resources with dynamic addressing information stored in DNS. To enable this integration, you can use the Windows Server 2003 DHCP service.

The dynamic update standard, specified in RFC 2136: Dynamic Updates in the Domain Name System (DNS UPDATE), automatically updates DNS records. Both Windows Server 2003 and Windows 2000 support dynamic update, and both clients and DHCP servers can send dynamic updates when their IP addresses change.

Dynamic update enables a DHCP server to register address (A) and pointer (PTR) resource records on behalf of a DHCP client by using DHCP Client FQDN option 81. Option 81 enables the DHCP client to provide its FQDN to the DHCP server, along with instructions describing how to process DNS dynamic updates on its behalf. The DHCP server can dynamically update DNS A and PTR records on behalf of DHCP clients that are not capable of sending option 81 to the DHCP server. You can also configure the DHCP server to discard client A and PTR records when the DHCP client lease is deleted. This reduces the time needed to manage these records manually and provides support for DHCP clients that cannot perform dynamic updates. In addition, dynamic update simplifies the setup of Active Directory by enabling domain controllers to dynamically register SRV resource records.

If the DHCP server is configured to perform DNS dynamic updates, it performs one of the following actions:
• The DHCP server updates resource records at the request of the client. The client requests the DHCP server to update the DNS PTR record on its behalf, and the client registers its own A record.
• The DHCP server updates the DNS A and PTR records regardless of whether the client requests this action or not.

By itself, dynamic update is not secure because any client can modify DNS records. To secure dynamic updates, you can use the secure dynamic update feature provided in Windows Server 2003. To delete outdated records, you can use the DNS server aging and scavenging feature.
What are User Classes and Vendor Classes in DHCP? Vendor classes identify the vendor and configuration of a DHCP client (for example, the Microsoft vendor classes that Windows clients send automatically), so the server can issue vendor-specific options. User classes are administrator-defined identifiers that clients present so the server can assign a different set of options (lease time, default gateway, and so on) to particular groups of machines.
ipconfig /setclassid "<Name of your network card>" <Name of the class you created on the DHCP server and want to join (the name is case sensitive)>
Eg:
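Assuming a user class named "Accounting" has been created on the DHCP server and the connection is named "Local Area Connection" (both names are illustrative):

```text
C:\> ipconfig /setclassid "Local Area Connection" Accounting
C:\> ipconfig /showclassid "Local Area Connection"
```

The /showclassid switch lists the DHCP class IDs the adapter is allowed to use, so you can confirm the class you just set.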
Apple OS X 10.* Server supports BootP (albeit renamed NetBoot). The facility allows the admin to maintain a selected set of configurations as boot images and then assign sets of client systems to share (or boot from) that image. For example, the Accounting, Management, and Engineering departments have elements in common, but which can be unique from other departments. Performing upgrades and maintenance on three images is far more productive than working on all client systems individually.

Startup is obviously network intensive, and beyond 40-50 clients, the admin needs to carefully subnet the infrastructure, use gigabit switches, and host the images local to the clients to avoid saturating the network. This will expand the number of BootP servers and multiply the number of images, but the productivity of 1 BootP server per 50 clients is undeniable :)
DNS zones – describe the differences between the 4 types. A DNS zone is the actual file which contains all the records for a specific domain.

i) Forward Lookup Zone: - This zone is responsible for resolving host names to IP addresses.

ii) Reverse Lookup Zone: - This zone is responsible for resolving IP addresses back to host names.

iii) Stub Zone: - A stub zone is a read-only copy of a primary zone, but it contains only 3 record types: the SOA of the primary zone, the NS records, and the Host (A) records for those name servers.
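In zone-file notation, a stub zone for example.com would hold just these kinds of entries (names, addresses, and timer values are illustrative):

```text
example.com.      IN  SOA  ns1.example.com. hostmaster.example.com. (
                           2003120101  ; serial
                           3600 600 86400 3600 )
example.com.      IN  NS   ns1.example.com.
ns1.example.com.  IN  A    192.0.2.53
```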
Authoritative Name Server [NS] Record: - A zone should contain one NS record for each of its own DNS servers (primary and secondary). This is mostly used for zone transfer purposes (notify). These NS records have the same name as the zone in which they are located.

If you host Web sites on this server and have a standalone DNS server acting as a primary (master) name server for your sites, you may want to set up your control panel's DNS server to function as a secondary (slave) name server:
To make the control panel's DNS server act as a secondary name server:
Go to Domains > domain name > DNS Settings (in the Web Site group).
Click Add.
Repeat steps from 1 to 5 for each Web site that needs to have a secondary name server on this machine.
To make the control panel's DNS server act as a primary for a zone:
Click Switch DNS Service Mode. The original resource records for the zone will be restored.

If you host Web sites on this server and rely entirely on other machines to perform the Domain Name Service for your sites (there are two external name servers - a primary and a secondary), switch off the control panel's DNS service for each site served by external name servers.

To switch off the control panel's DNS service for a site served by an external name server:

Click Switch Off the DNS Service in the Tools group. Turning the DNS service off for the zone will refresh the screen, so that only a list of name servers remains.

Note: The listed name server records have no effect on the system. They are only presented on the screen as clickable links to give you a chance to validate the configuration of the zone maintained on the external authoritative name servers.

Repeat the steps from 1 to 3 to switch off the local domain name service for each site served by external name servers.

Add to the list the entries pointing to the appropriate name servers that are authoritative for the zone: click Add, specify a name server, and click OK. Repeat this for each name server you would like to test.

Click the records that you have just created. Parallels Plesk Panel will retrieve the zone file from a remote name server and check the resource records to make sure that the domain's resources are properly resolved.
Describe the importance of DNS to AD. When you install Active Directory on a server, you promote the server to the role of a domain controller for a specified domain. When completing this process, you are prompted to specify a DNS domain name for the Active Directory domain which you are joining and for which you are promoting the server. If during this process a DNS server authoritative for the domain that you specified either cannot be located on the network or does not support the DNS dynamic update protocol, you are prompted with the option to install a DNS server. This option is provided because a DNS server is required to locate this server or other domain controllers for members of an Active Directory domain.

What does "Disable Recursion" in DNS mean? In the Windows 2000/2003 DNS console (dnsmgmt.msc), under a server's Properties -> Forwarders tab is the setting Do not use recursion for this domain. On the Advanced tab you will find the confusingly similar option Disable recursion (also disables forwarders).

Recursion refers to the action of a DNS server querying additional DNS servers (e.g. local ISP DNS or the root DNS servers) to resolve queries that it cannot resolve from its own database. So what is the difference between these settings?
The DNS server will attempt to resolve the name locally, then will forward requests to any DNS servers specified as forwarders. If Do not use recursion for this domain is set, the server will not perform any further recursion of its own when the forwarders fail to answer. If Disable recursion (also disables forwarders) is set, the server will attempt to resolve a query from its own database only. It will not query any additional servers.
If neither of these options is set, the server will attempt to resolve queries normally:
... the local database is queried
... if an entry is not found, the request is passed to any forwarders that are set
... if no forwarders are set, the server will query servers on the Root Hints tab to resolve queries beginning at the root domains.
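The lookup order described above can be sketched in Python (a simplified model of the decision order, not actual DNS server code):

```python
def resolve(name, local_db, forwarders=None, root_hints=None):
    """Model the order a DNS server tries sources: local data first,
    then any configured forwarders, then the root hints.
    Each source is a dict mapping names to addresses."""
    if name in local_db:
        return local_db[name], "local"
    for source, label in ((forwarders, "forwarder"), (root_hints, "root hint")):
        if source and name in source:
            return source[name], label
    return None, "unresolved"

local = {"intranet.example.com": "10.0.0.5"}
fwd = {"www.example.org": "192.0.2.80"}
print(resolve("intranet.example.com", local, fwd))  # local data wins
print(resolve("www.example.org", local, fwd))       # falls through to a forwarder
```

With "Disable recursion" set, only the first step (the local database) would run.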
What could cause the Forwarders and Root Hints to be grayed out? Win2K configured your DNS server as a private root server.

Single-label DNS domain names bring several limitations:
• Client computers and domain controllers may require additional configuration to dynamically register DNS records in single-label DNS zones.
• Client computers and domain controllers may require additional configuration to resolve DNS queries in single-label DNS zones.
• By default, Windows Server 2003-based domain members, Windows XP-based domain members, and Windows 2000-based domain members do not perform dynamic updates to single-label DNS zones.
• Some server-based applications are incompatible with single-label domain names. Application support may not exist in the initial release of an application, or support may be dropped in a future release. For example, Microsoft Exchange Server 2007 is not supported in environments in which single-label DNS is used.
• Some server-based applications are incompatible with the domain rename feature that is supported in Windows Server 2003 domain controllers and in Windows Server 2008 domain controllers. These incompatibilities either block or complicate the use of the domain rename feature when you try to rename a single-label DNS name to a fully qualified domain name.
What is the "in-addr.arpa" zone used for? When creating DNS records for your hosts, A records make sense. After all, how can the world find your mail server unless the IP address of that server is associated with its hostname within a DNS database? However, PTR records aren't as easily understood. If you already have a zone file, why does there have to be a separate in-addr.arpa zone containing PTR records matching your A records? And who should be making those PTR records - you or your provider? Let's start by defining in-addr.arpa.

.arpa is actually a TLD like .com or .org. The name of the TLD comes from Address and Routing Parameter Area and it has been designated by the IANA to be used exclusively for Internet infrastructure purposes. In other words, it is an important zone and an integral part of the inner workings of DNS. The RFC for DNS (RFC 1035) has an entire section on the in-addr.arpa domain. The first two paragraphs in that section state the purpose of the domain: "The Internet uses a special domain to support gateway location and Internet address to host mapping. Other classes may employ a similar strategy in other domains. The intent of this domain is to provide a guaranteed method to perform host address to host name mapping, and to facilitate queries to locate all gateways on a particular network in the Internet. Note that both of these services are similar to functions that could be performed by inverse queries; the difference is that this part of the domain name space is structured according to address, and hence can guarantee that the appropriate data can be located without an exhaustive search of the domain space."

In other words, this zone provides a database of all allocated networks and the DNS-reachable hosts within those networks. If your assigned network does not appear in this zone, it appears to be unallocated. And if your hosts don't have a PTR record in this database, they appear to be unreachable through DNS.

Assuming an A record exists for a host, a missing PTR record may or may not impact the DNS reachability of that host, depending upon the applications running on that host. For example, a mail server will definitely be impacted as PTR records are used in mail header checks and by most anti-SPAM mechanisms. Depending upon your web server configuration, it may also depend upon an existing PTR record. This is why the DNS RFCs recommend that every A record has an associated PTR record. But who should make and host those PTR records? Twenty years ago when you could buy a full Class C network address (i.e. 254 host addresses) the answer was easy: you. Remember, the in-addr.arpa zone is concerned with delegated network addresses. In other words, the owner of the network address is authoritative (i.e. responsible) for the host PTR records associated with that network address space. If you only own one or two host addresses within a network address space, the provider you purchased those addresses from needs to host your PTR records as the provider is the owner of (i.e. authoritative for) the network address. Things are a bit more interesting if you have been delegated a CIDR block of addresses. The in-addr.arpa zone assumes a classful addressing scheme where a Class A address is one octet (or /8), a Class B is 2 octets (or /16) and a Class C is 3 octets (or /24). CIDR allows for delegating address space outside of these boundaries - say a /19 or a /28. RFC 2317 provides a best current practice for maintaining in-addr.arpa with these types of network allocations.

Here is a summary regarding PTR records:
• Don't wait until users complain about DNS unreachability - be proactive and ensure there is an associated PTR record for every A record.
• If your provider hosts your A records, they should also host your PTR records.
• If you only have one or two assigned IP addresses, your provider should host your PTR records as they are authoritative for the network those hosts belong to.
• If you own an entire network address (e.g. a Class C address ending in 0), you are responsible for hosting your PTR records.
• If you are configuring an internal DNS server within the private address ranges (e.g. 10.0.0.0 or 192.168.0.0), you are responsible for your own internal PTR records.
• Remember: the key to PTR hosting is knowing who is authoritative for the network address for your domain. When in doubt, it probably is not you.
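The address-to-name structure of in-addr.arpa is easy to illustrate in Python: the octets of the IPv4 address are reversed and the in-addr.arpa suffix is appended:

```python
def reverse_pointer(ip):
    """Build the in-addr.arpa name used to look up the PTR record
    for an IPv4 address, e.g. 192.0.2.10 -> 10.2.0.192.in-addr.arpa."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_pointer("192.0.2.10"))  # 10.2.0.192.in-addr.arpa
```

Python's standard ipaddress module exposes the same mapping as ip_address(...).reverse_pointer.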
When you install Active Directory on a member server, the member server is promoted to a domain controller. Active Directory uses DNS as the location mechanism for domain controllers, enabling computers on the network to obtain IP addresses of domain controllers.

During the installation of Active Directory, the service (SRV) and address (A) resource records are dynamically registered in DNS, which are necessary for the successful functionality of the domain controller locator (Locator) mechanism.

To find domain controllers in a domain or forest, a client queries DNS for the SRV and A DNS resource records of the domain controller, which provide the client with the names and IP addresses of the domain controllers. In this context, the SRV and A resource records are referred to as Locator DNS resource records.

When adding a domain controller to a forest, you are updating a DNS zone hosted on a DNS server with the Locator DNS resource records and identifying the domain controller. For this reason, the DNS zone must allow dynamic updates (RFC 2136) and the DNS server hosting that zone must support the SRV resource records (RFC 2782) to advertise the Active Directory directory service. For more information about RFCs, see DNS RFCs.

If the DNS server hosting the authoritative DNS zone is not a server running Windows 2000 or Windows Server 2003, contact your DNS administrator to determine if the DNS server supports the required standards. If the server does not support the required standards, or the authoritative DNS zone cannot be configured to allow dynamic updates, then modification is required to your existing DNS infrastructure.

For more information, see Checklist: Verifying DNS before installing Active Directory and Using the Active Directory Installation Wizard.

Important

• The DNS server used to support Active Directory must support SRV resource records for the Locator mechanism to function. For more information, see Managing resource records.

After installing Active Directory, these records can be found on the domain controller in the following location: systemroot\System32\Config\Netlogon.dns
How do you manually create SRV records in DNS? On a Windows server: open the DNS console, right-click on the zone you want to add the SRV record to, choose "Other New Records", and select Service Location (SRV); then fill in the service, protocol, priority, weight, port, and target host.
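In zone-file notation, an SRV record carries a priority, a weight, a port, and a target host. The record below (values illustrative) is the kind of entry Netlogon registers for the LDAP service on a domain controller, and its fields roughly correspond to the boxes in the New Resource Record dialog:

```text
_ldap._tcp.example.com.  600  IN  SRV  0 100 389  dc1.example.com.
;                                      |  |   |   target host offering the service
;                                      |  |   port (389 = LDAP)
;                                      |  weight (load balancing among equal priorities)
;                                      priority (lower is preferred)
```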
What are the benefits of using Windows 2003 DNS when using AD-integrated zones?

Advantages:
• The zone data is stored in Active Directory and replicated automatically with AD replication, so no separate zone-transfer topology is needed.
• Multimaster update: any domain controller running DNS can accept dynamic updates for the zone, not just a single primary server.
• Secure dynamic updates can be enforced, so only authorized computers can register or modify their own records.
• Fault tolerance: the zone survives the failure of any single DNS server.
You installed a new AD domain and the new (and first) DC has not registered its SRV records in DNS. Name a few possible causes. The machine's DNS client settings may not point at the correct DNS server; the DNS service may not be running; or the zone may not be configured to accept dynamic updates, in which case the Netlogon service cannot register the records.
What are the benefits and scenarios of using Stub zones? One of the new features introduced in the Windows Server 2003-based implementation of DNS are stub zones. Their main purpose is to provide name resolution in domains for which a local DNS server is not authoritative. The stub zone contains only a few records:
- a Start of Authority (SOA) record pointing to a remote DNS server that is considered to be the best source of information about the target DNS domain,
- one or more Name Server (NS) records (including the entry associated with the SOA record), which are authoritative for the DNS domain represented by the stub zone,
- corresponding A records for each of the NS entries (providing IP addresses of the servers).
While you can also provide name resolution for a remote domain by creating a secondary zone (which was a common approach in the Windows 2000 DNS implementation) or a delegation (when dealing with a contiguous namespace), such an approach forces periodic zone transfers, which are not needed when stub zones are used. The necessity to traverse the network in order to obtain individual records hosted on the remote name servers is mitigated to some extent by the caching process, which keeps them on the local server for the duration of their Time-to-Live (TTL) parameter. In addition, records residing in a stub zone are periodically validated and refreshed in order to avoid lame delegations.
What are the benefits and scenarios of using Conditional Forwarding? Conditional forwarding lets you send DNS queries for specific domain names to designated servers instead of the default forwarders. This speeds up name resolution between known namespaces (for example, two internal forests, or a partner company's network) and avoids sending queries for those domains out to the Internet.
• Start
• Run
• Type "cmd" and press enter
• In the command window type "ipconfig /flushdns"
  A. If done correctly it should say "Successfully flushed the DNS Resolver Cache."
  B. If you receive the error "Could not flush the DNS Resolver Cache: Function failed during execution.", follow Microsoft KB article 919746 to enable the cache. The cache will be empty, however this will allow a successful cache flush in future.
What is the 224.0.1.24 address used for? It is the WINS server group address, used to support autodiscovery and dynamic configuration of replication for WINS servers. For more information, see WINS replication overview.

What is WINS and when do we use it? WINS is the Windows Internet Name Service, which resolves NetBIOS (computer) names to IP addresses. It is proprietary to Windows and is typically used on a LAN.

A push partner is a WINS server that sends a message to its pull partners, notifying them that it has new WINS database entries. When a WINS server's pull partner responds to the message with a replication request, the WINS server sends (pushes) copies of its new WINS database entries (also known as replicas) to the requesting pull partner.

A pull partner is a WINS server that pulls WINS database entries from its push partners by requesting any new WINS database entries that the push partners have. The pull partner requests the new WINS database entries that have a higher version number than the last entry the pull partner received during the most recent replication.
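The version-number rule can be sketched in Python: the pull partner asks only for records whose version number is higher than the last one it received (a simplified model, not WINS code):

```python
def pull_new_records(push_db, last_version):
    """Return the records a pull partner would request: those whose
    version number exceeds the highest version received last cycle."""
    return {name: (addr, ver)
            for name, (addr, ver) in push_db.items()
            if ver > last_version}

push = {"HOST1": ("10.0.0.1", 5), "HOST2": ("10.0.0.2", 9)}
print(pull_new_records(push, last_version=5))  # only HOST2 is newer
```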
Simple deletion removes the records that are selected in the WINS console only from the local WINS server you are currently managing. If the WINS records deleted in this way exist in WINS data replicated to other WINS servers on your network, these additional records are not fully removed. Also, records that are simply deleted on only one server can reappear after replication between the WINS server where simple deletion was used and any of its replication partners.

Tombstoned deletion, by contrast, marks the selected records as tombstoned on the local server instead of removing them outright. When the tombstoned records are replicated, the tombstone status is updated and applied by other WINS servers that store replicated copies of these records. Each replicating WINS server then updates and tombstones its own copies of the records, so they are eventually removed consistently across all servers.
Name the NetBIOS names you might expect from a Windows 2003 DC that is registered in WINS. Typical registrations include <computer name>[00h] (Workstation service), <computer name>[20h] (Server service), <domain name>[00h] (domain name), <domain name>[1Ch] (domain controllers group), and <domain name>[1Bh] (domain master browser).
Routers can have many different types of connectors; from Ethernet, Fast Ethernet, and Token Ring to Serial and ISDN ports. Some of the available configurable items are logical addresses (IP, IPX), media types, bandwidth, and administrative commands. Interfaces are configured in interface mode, which you get to from global configuration mode after logging in.

Depending on the port you're using, you might have to press enter to get the prompt to appear (console port). The first prompt will look like Routername>. The greater-than sign at the prompt tells you that you are in user mode. In user mode you can only view limited statistics of the router. To change configurations you first need to enter privileged EXEC mode. This is done by typing enable at the Routername> prompt; the prompt then changes to Routername#. This mode supports testing commands, debugging commands, and commands to manage the router configuration files. To go back to user mode, type disable at the Routername# prompt. If you want to leave completely, type logout at the user mode prompt. You can also exit from the router while in privileged mode by typing exit or logout at the Routername# prompt.

Enter global configuration mode from privileged mode by typing configure terminal (or conf t for short). The prompt will change to Routername(config)#. Changes made in this mode change the running-config file in DRAM. Use configure memory to change the startup-config in NVRAM. Using configure network allows you to change the configuration file on a TFTP server. If you change the memory or network config files, the router has to put them into memory (DRAM) in order to work with them, so this will change your router's current running-config file.

Interfaces mode
While in global configuration mode you can make changes to individual interfaces with the command Routername(config)#interface ethernet 0 (or Routername(config)#int e0 for short); this enters the interface configuration mode for Ethernet port 0 and changes the prompt to look like Routername(config-if)#.

Bringing Up Interfaces
If an interface is shown administratively down when the show interface command is given in privileged EXEC mode, use the command no shutdown to enable the interface while in interface configuration mode.

Setting IP Addresses
You can add another IP address to an interface with the secondary command. The syntax is the same as setting an IP address except you add secondary to the end of it. Using secondary addresses allows you to specify 2 IP addresses for 1 interface. Use subinterfaces instead, since they allow for more than 2 IP addresses on an interface, and secondaries will probably be replaced soon.
Interface Problems
When using the command show interface [type #], interface problems can be seen and appropriate action taken.

Message: Ethernet0 is up, line protocol is up
Solution: None needed; the interface is working properly.

Message: Ethernet0 is up, line protocol is down
Solution: Clocking or framing problem; check the clock rate and encapsulation type on both routers.

Message: Ethernet0 is down, line protocol is down
Solution: Cable or interface problem; check the interfaces on both ends to ensure they aren't shut down.

Message: Ethernet0 is administratively down, line protocol is down
Solution: The interface has been shut down; use the no shutdown command in the interface's configuration mode.

Serial Interfaces
The serial interface is usually attached to a line that is attached to a CSU/DSU that provides clocking rates for the line. However, if two routers are connected together, one of the serial interfaces must act as the DCE device and provide clocking. The DCE end of the cable is the side of the cable that has a female connector where it connects to the other cable. The clocking rate on the DCE device is set in interface configuration mode with the commands:
Router3(config)#int s0
Router3(config-if)#clock rate ?
Bandwidth
Cisco routers ship with T1 (1.544 Mbps) bandwidth rates on their serial interfaces. Some routing protocols use the bandwidth of links to determine the best route. The bandwidth setting is irrelevant with RIP routing. Bandwidth is set with the bandwidth command and ranges from 1 - 10000000 kilobits per second.
Router3(config)#int s0
Router3(config-if)#bandwidth ?
  <1-10000000>  Bandwidth in kilobits
Router3(config-if)#bandwidth 10000000

Saving Changes
Any time you make changes and want them saved over the next reboot, you need to copy the running-config to the startup-config in NVRAM. Use the command:
Router3#copy running-config startup-config

Show Controllers
Tells you information about the physical interface itself; it also gives you the cable type and whether it is a DTE or DCE interface. Syntax is:
Router_2#show controllers s 1
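Putting the commands above together, a minimal serial-interface configuration session might look like this (the address, clock rate, and bandwidth values are illustrative):

```text
Router3# configure terminal
Router3(config)# interface serial 0
Router3(config-if)# ip address 192.0.2.1 255.255.255.0
Router3(config-if)# clock rate 64000      ! only on the DCE end
Router3(config-if)# bandwidth 1544
Router3(config-if)# no shutdown
Router3(config-if)# end
Router3# copy running-config startup-config
```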
What is the real difference between NAT and PAT? NAT is a feature of a router that will translate IP addresses. When a packet comes in, it will be rewritten in order to forward it to a host that is not the IP destination. A router will keep track of this translation, and when the host sends a reply, it will translate back the other way.

PAT translates ports, as the name implies, and likewise, NAT translates addresses. Sometimes PAT is also called Overloaded NAT.
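The distinction can be sketched in Python: with PAT (overloaded NAT), many inside address:port pairs share one public address and are told apart by their translated source port. This is a simplified model, not router code:

```python
class PatTable:
    """Minimal model of Port Address Translation: many inside hosts
    share one public address, distinguished by translated source port."""

    def __init__(self, public_ip, first_port=49152):
        self.public_ip = public_ip
        self.next_port = first_port
        self.map = {}

    def translate(self, inside_ip, inside_port):
        # Reuse the existing mapping so reply traffic can be matched back.
        key = (inside_ip, inside_port)
        if key not in self.map:
            self.map[key] = (self.public_ip, self.next_port)
            self.next_port += 1
        return self.map[key]

pat = PatTable("203.0.113.1")
print(pat.translate("10.0.0.5", 1025))  # first mapping gets the first port
print(pat.translate("10.0.0.6", 1025))  # same public IP, different port
```

Plain NAT would instead map each inside address to its own public address, with no port rewriting.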
How do you configure NAT on Windows 2003? To configure the Routing and Remote Access and the Network Address Translation components, your computer must have at least two network interfaces: one connected to the Internet and the other connected to the internal network. You must also configure the network translation computer to use Transmission Control Protocol/Internet Protocol (TCP/IP).

Use the following data to configure the TCP/IP address of the network adapter that connects to the internal network:
Click Start, point to All Programs, point to Administrative Tools, and then click Routing and Remote Access.

Right-click your server, and then click Configure and Enable Routing and Remote Access.

In the Routing and Remote Access Setup Wizard, click Next, click Network address translation (NAT), and then click Next.

Click Use this public interface to connect to the Internet, and then click the network adapter that is connected to the Internet. At this stage you have the option to reduce the risk of unauthorized access to your network. To do so, click to select the Enable security on the selected interface by setting up Basic Firewall check box.

Examine the selected options in the Summary box, and then click Finish.

In the NAT/Basic Firewall Properties dialog box, click the Address Assignment tab.

Click Exclude.

In the Exclude Reserved Addresses dialog box, click Add, type the IP address, and then click OK.

Click OK.

Click Start, point to All Programs, point to Administrative Tools, and then click Routing and Remote Access. Right-click NAT/Basic Firewall, and then click Properties.

In the NAT/Basic Firewall Properties dialog box, click the Name Resolution tab.

Click to select the Clients using Domain Name System (DNS) check box. If you use a demand-dial interface to connect to an external DNS server, click to select the Connect to the public network when a name needs to be resolved check box, and then click the appropriate dial-up interface in the list.
How do you allow inbound traffic for specific hosts on Windows 2003 NAT? In the NAT/Basic Firewall properties of the public interface, use the Services and Ports tab to map an inbound service (a public port) to the private address of the internal host that should receive the traffic. Separately, you can use the Windows Server 2003 implementation of IPSec to compensate for the limited protections provided by applications for network traffic, or as a network-layer foundation of a defense-in-depth strategy. Do not use IPSec as a replacement for other user and application security controls, because it cannot protect against attacks from within established and trusted communication paths. Your authentication strategy must be well defined and implemented for the potential security provided by IPSec to be realized, because authentication verifies the identity and trust of the computer at the other end of the connection.
What is VPN? What types of VPN does Windows 2000 and beyond work with natively? The virtual private network (VPN) technology included in Windows Server 2003 helps enable cost-effective, secure remote access to private networks. VPN allows administrators to take advantage of the Internet to help provide the functionality and security of private WAN connections at a lower cost. In Windows Server 2003, VPN is enabled using the Routing and Remote Access service. VPN is part of a comprehensive network access solution that includes support for authentication and authorization services, and advanced network security technologies. Natively, Windows 2000 and later support two VPN tunneling protocols: PPTP and L2TP/IPSec.

There are two main strategies that help provide secure connectivity between private networks and enable network access for remote users.
Using VPN, administrators can connect remote or mobile workers (VPN clients) to private networks. Remote users can work as if their computers are physically connected to the network. To accomplish this, VPN clients can use a Connection Manager profile to initiate a connection to a VPN server. The VPN server can communicate with an Internet Authentication Service (IAS) server to authenticate and authorize a user session and maintain the connection until it is terminated by the VPN client or by the VPN server. All services typically available to a LAN-connected client (including file and print sharing, Web server access, and messaging) are enabled by VPN.

VPN clients can use standard tools to access resources. For example, clients can use Windows Explorer to make drive connections and to connect to printers. Connections are persistent: users do not need to reconnect to network resources during their VPN sessions. Because drive letters and universal naming convention (UNC) names are fully supported by VPN, most commercial and custom applications work without modification.

VPN Scenarios
Virtual private networks are point-to-point connections across a private or public network such as the Internet. A VPN client uses special TCP/IP-based protocols, called tunneling protocols, to make a virtual call to a virtual port on a VPN server. In a typical VPN deployment, a client initiates a virtual point-to-point connection to a remote access server over the Internet. The remote access server answers the call, authenticates the caller, and transfers data between the VPN client and the organization's private network.
A VPN Connection
Site-to-site VPN
Site-to-Site VPN
Site-to-site VPN connections (also known as router-to-router VPN connections) enable organizations to have routed connections between separate offices or with other organizations over a public network while helping to maintain secure communications. A routed VPN connection across the Internet logically operates as a dedicated WAN link. When networks are connected over the Internet, as shown in the following figure, a router forwards packets to another router across a VPN connection. To the routers, the VPN connection operates as a data-link layer link.
Encapsulation
VPN technology provides a way of encapsulating private data with a header that allows the data to traverse the network.

Authentication
There are three types of authentication for VPN connections: user-level authentication by using PPP authentication; computer-level authentication by using Internet Key Exchange (IKE), for L2TP/IPSec connections; and data origin authentication and data integrity, also provided for L2TP/IPSec connections.
Data Encryption
Data can be encrypted for protection between the endpoints of the VPN connection. Data encryption should always be used for VPN connections where private data is sent across a public network such as the Internet. Data that is not encrypted is vulnerable to unauthorized interception. For VPN connections, Routing and Remote Access uses Microsoft Point-to-Point Encryption (MPPE) with PPTP and IPSec encryption with L2TP.

The virtual interfaces of the VPN client and the VPN server must be assigned IP addresses. The assignment of these addresses is done by the VPN server. By default, the VPN server obtains IP addresses for itself and VPN clients using the Dynamic Host Configuration Protocol (DHCP). Otherwise, a static pool of IP addresses can be configured to define one or more address ranges, with each range defined by an IP network ID and a subnet mask or start and end IP addresses.

Name server assignment - the assignment of Domain Name System (DNS) and Windows Internet Name Service (WINS) servers to the VPN connection - also occurs during the process of establishing the VPN connection.

Tunneling Overview
Tunneling is a method of using a network infrastructure to transfer data for one network over another network. The data (or payload) to be transferred can be the frames (or packets) of another protocol. Instead of sending a frame as it is produced by the originating node, the tunneling protocol encapsulates the frame in an additional header. The additional header provides routing information so that the encapsulated payload can traverse the intermediate network.

The encapsulated packets are then routed between tunnel endpoints over the network. The logical path through which the encapsulated packets travel through the network is called a tunnel. After the encapsulated frames reach their destination on the network, the frame is de-encapsulated (the header is removed) and the payload is forwarded to its final destination. Tunneling includes this entire process (encapsulation, transmission, and de-encapsulation of packets).
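The encapsulate/transmit/de-encapsulate cycle can be modelled in Python as a toy example; real tunneling protocols such as PPTP and L2TP add binary protocol headers rather than dictionaries:

```python
def encapsulate(payload, tunnel_src, tunnel_dst):
    """Wrap an inner packet in an outer header that carries the
    tunnel endpoint addresses used to route it across the network."""
    return {"outer_src": tunnel_src, "outer_dst": tunnel_dst, "payload": payload}

def decapsulate(packet):
    """Strip the outer header and recover the original payload."""
    return packet["payload"]

inner = {"src": "10.0.0.5", "dst": "10.0.1.9", "data": "hello"}
tunneled = encapsulate(inner, "203.0.113.1", "198.51.100.7")
assert decapsulate(tunneled) == inner  # the payload survives the tunnel unchanged
```

Intermediate routers see only the outer addresses; the private inner addresses travel untouched inside the payload.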
Tunneling
Tunneling ProtocolsTunneling enables the encapsulation of a packet from one type of protocolwithin the datagram of a different protocol. For example, VPN uses PPTP toencapsulate IP packets over a public network such as the Internet. A VPNsolution based on either PPTP or L2TP can be configured.
PPTP and L2TP depend heavily on the features originally specified for PPP. PPPwas designed to send data across dial-up or dedicated point-to-pointconnections. For IP, PPP encapsulates IP packets within PPP frames and thentransmits the encapsulated PPP-packets across a point-to-point link. PPP wasoriginally defined as the protocol to use between a dial-up client and a networkaccess server (NAS).PPTPPPTP allows multiprotocol traffic to be encrypted and then encapsulated in anIP header to be sent across an organization’s IP network or a public IP networksuch as the Internet. PPTP encapsulates Point-to-Point Protocol (PPP) frames inIP datagrams for transmission over the network. PPTP can be used for remoteaccess and site-to-site VPN connections. PPTP is documented in RFC 2637 in theIETF RFC Database.
PPTP uses a TCP connection for tunnel management and a modified version ofGeneric Routing Encapsulation (GRE) to encapsulate PPP frames for tunneleddata. The payloads of the encapsulated PPP frames can be encrypted,compressed, or both. The following figure shows the structure of a PPTP packetcontaining an IP datagram.
When using the Internet as the public network for VPN, the PPTP server is aPPTP-enabled VPN server with one interface on the Internet and a secondinterface on the intranet.
L2TP
L2TP allows multiprotocol traffic to be encrypted and then sent over any medium that supports point-to-point datagram delivery, such as IP, X.25, frame relay, or asynchronous transfer mode (ATM). L2TP is a combination of PPTP and Layer 2 Forwarding (L2F), a technology developed by Cisco Systems, Inc. L2TP represents the best features of PPTP and L2F. L2TP encapsulates PPP frames to be sent over IP, X.25, frame relay, or ATM networks. When configured to use IP as its datagram transport, L2TP can be used as a tunneling protocol over the Internet. L2TP is documented in RFC 2661 in the IETF RFC Database.
L2TP over IP networks uses User Datagram Protocol (UDP) and a series of L2TP messages for tunnel management. L2TP also uses UDP to send L2TP-encapsulated PPP frames as tunneled data. The payloads of encapsulated PPP frames can be encrypted, compressed, or both, although the Microsoft implementation of L2TP does not use MPPE to encrypt the PPP payload. The following figure shows the structure of an L2TP packet containing an IP datagram.
Default Routing
The preferred method for directing packets to a remote network is to create a default route on the remote access client that directs packets to the remote network (the default configuration for VPN remote access clients). Any packet that is not intended for the neighboring LAN segment is sent to the remote network. When a connection is made, the remote access client, by default, adds a default route to its routing table and increases the metric of the existing default route to ensure that the newest default route is used. The newest default route points to the new connection, which ensures that any packets that are not addressed to the local LAN segment are sent to the remote network.

Under this configuration, when a VPN client connects and creates a new default route, Internet sites that have been accessible are no longer accessible (unless Internet access is available through the organization's intranet). This poses no problem for remote VPN clients that require access only to the organization's network. However, it is not acceptable for remote clients that need access to the Internet while they are connected to the organization's network.
Split Tunneling
Split tunneling enables remote access VPN clients to route corporate-based traffic over the VPN connection while sending Internet-based traffic using the user's local Internet connection. This prevents the use of corporate bandwidth for access to Internet sites.
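The routing decision a split-tunnel client makes can be sketched with the standard-library ipaddress module. The corporate prefixes below are assumed examples, not values from this document.

```python
from ipaddress import ip_address, ip_network

# Assumed example: corporate networks reachable only through the tunnel.
CORP_PREFIXES = [ip_network("10.0.0.0/8"), ip_network("172.16.0.0/12")]

def next_hop(dst: str) -> str:
    """Return which path a split-tunnel client would use for dst."""
    addr = ip_address(dst)
    if any(addr in net for net in CORP_PREFIXES):
        return "vpn-tunnel"      # corporate-bound traffic goes over the VPN
    return "local-internet"      # everything else uses the local connection

assert next_hop("10.1.2.3") == "vpn-tunnel"
assert next_hop("93.184.216.34") == "local-internet"
```

A full (non-split) tunnel is the degenerate case where the corporate prefix list is replaced by a default route, sending all traffic to the tunnel.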
With the advent of the Internet, packets can now be routed between routers that are connected to the Internet across a virtual connection that emulates the properties of a dedicated, private, point-to-point connection. This type of connection is known as a site-to-site VPN connection. Site-to-site VPN connections can be used to replace expensive long-haul WAN links with short-haul WAN links to a local Internet service provider (ISP).
To facilitate routing between the sites, each VPN server and the routing infrastructure of its connected site must have a set of routes that represent the address space of the other site. These routes can be added manually, or routing protocols can be used to automatically add and maintain a set of routes.
1.1.1.1.6 RIP
RIP is designed for exchanging routing information within a small to medium-size network. RIP routers dynamically exchange routing table entries.
The Windows Server 2003 implementation of RIP has the following features:
The ability to select which RIP version to run on each interface for incoming and outgoing packets.
1.1.1.1.7 OSPF
OSPF is designed for exchanging routing information within a large or very large network. Instead of exchanging routing table entries like RIP routers, OSPF routers maintain a map of the network that is updated after any change to the network topology. This map, called the link state database, is synchronized between all the OSPF routers and is used to compute the routes in the routing table. Neighboring OSPF routers form an adjacency, which is a logical relationship between routers to synchronize the link state database.
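The route computation OSPF performs over the link state database is a shortest-path-first (Dijkstra) calculation. A minimal sketch over an assumed toy topology (router names and link costs are made up):

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's shortest-path-first over a link state database.
    graph maps router -> {neighbor: link cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Assumed toy topology: three routers, arbitrary costs.
lsdb = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R3": 2},
    "R3": {"R1": 5, "R2": 2},
}
# R1 reaches R2 more cheaply via R3 (5 + 2) than directly (10).
assert shortest_paths(lsdb, "R1") == {"R1": 0, "R2": 7, "R3": 5}
```

Because every router holds the same synchronized link state database, each one can run this calculation independently and still arrive at consistent routes.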
For inbound traffic, when the tunneled data is decrypted by the VPN server, it is forwarded to the firewall. Through the use of its filters, the firewall allows the traffic to be forwarded to intranet resources. Because the only traffic that crosses the VPN server is generated by authenticated VPN clients, in this scenario, firewall filtering can be used to prevent VPN users from accessing specific intranet resources. Because Internet traffic allowed on the intranet must pass through the VPN server, this approach also prevents the sharing of FTP or Web intranet resources with non-VPN Internet users.
• Connection Manager
• DHCP
• EAP-RADIUS
• IAS
• Name Server Assignment (DNS and WINS)
• NAT
Connection Manager
Connection Manager is a service profile that can be used to provide customized remote access to a network through a VPN connection. The advanced features of Connection Manager are a superset of basic dial-up networking. Connection Manager provides support for local and remote connections by using a network of points of presence (POPs), such as those available worldwide through ISPs. Windows Server 2003 includes a set of tools that enable a network manager to deliver pre-configured connections to network users. These tools are:
CPS
Connection Point Services (CPS) automatically distributes and updates custom phone books. These phone books contain one or more Point of Presence (POP) entries, with each POP supplying a telephone number that provides dial-up access to an Internet access point for VPN connections. The phone books give users complete POP information, so when they travel they can connect to different Internet POPs rather than being restricted to a single POP.
Without the ability to update phone books (a task CPS handles automatically), users would have to contact their organization's technical support staff to be informed of changes in POP information and to reconfigure their client-dialer software. CPS has two components:
DHCP
For both PPTP and L2TP connections, the data being tunneled is a PPP frame. A PPP connection must be established before data can be sent. The VPN server must have IP addresses available in order to assign them to a VPN server's virtual interface and to VPN clients during the IP Control Protocol (IPCP) negotiation phase that is part of the process of establishing a PPP connection. The IP address assigned to a VPN client is also assigned to the virtual interface of that VPN client.
For Windows Server 2003-based VPN servers, the IP addresses assigned to VPN clients are obtained through DHCP by default. A static IP address pool can also be configured. DHCP is also used by remote access VPN clients to obtain additional configuration settings after the PPP connection is established.
EAP-RADIUS
EAP-RADIUS is the passing of EAP messages of any EAP type by an authenticator to a Remote Authentication Dial-In User Service (RADIUS) server for authentication. For example, for a remote access server that is configured for RADIUS authentication, the EAP messages sent between the remote access client and remote access server are encapsulated and formatted as RADIUS messages between the remote access server (the authenticator) and the RADIUS server (the authentication server).
IAS
The VPN server can be configured to use either Windows or RADIUS as an authentication provider. If Windows is selected as the authentication provider, the user credentials sent by users attempting VPN connections are authenticated using typical Windows authentication mechanisms, and the connection attempt is authorized using local remote access policies.
RADIUS can respond to authentication requests based on its own user account database, or it can be a front end to another database server, such as a Structured Query Language (SQL) server or a Windows domain controller (DC). The DC can be located on the same computer as the RADIUS server, or elsewhere. In addition, a RADIUS proxy can be used to forward requests to a remote RADIUS server.
Name Server Assignment (DNS and WINS)
The VPN server must be configured with DNS and WINS server addresses to assign to the VPN client during IPCP negotiation. For NetBIOS name resolution, you do not have to use WINS and can instead enable the NetBIOS over TCP/IP (NetBT) proxy on the VPN server.
NAT
A network address translator (NAT) translates the IP addresses and Transmission Control Protocol/User Datagram Protocol (TCP/UDP) port numbers of packets that are forwarded between a private network and the Internet. The NAT on the private network can also provide IP address configuration information to the other computers on the private network.
PPTP-based VPN clients can be located behind a NAT if the NAT includes an editor that can translate PPTP packets. PPTP-based VPN servers can be located behind a NAT if the NAT is configured with static mappings for PPTP traffic. If L2TP/IPSec-based VPN clients or servers are positioned behind a NAT, both client and server must support IPSec NAT traversal (NAT-T).
What are Conditions and Profile in RRAS Policies? Remote access policies are an ordered set of rules that define whether remote access connection attempts are authorized or rejected. Each rule includes one or more conditions (which identify the criteria), a set of profile settings (to be applied to the connection attempt), and a permission setting (grant or deny) for remote access. This can be compared to the brain of the doorkeeper (the VPN server) that controls entry to your network from outside. A remote access policy decides who can access which resources, from where, and using which tunnel settings, so configuring a proper set of policies is important.
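First-match evaluation of an ordered policy set can be sketched as follows. The condition keys, group names, and policy names are invented for illustration; they are not real RRAS attribute names.

```python
# Toy first-match evaluation of ordered remote access policies.
# Condition keys and policy names are invented for illustration.
policies = [
    {"name": "Block contractors",
     "conditions": {"group": "Contractors"}, "permission": "deny"},
    {"name": "Allow staff L2TP",
     "conditions": {"group": "Staff", "tunnel": "L2TP"}, "permission": "grant"},
]

def evaluate(attempt: dict) -> str:
    """Return grant/deny from the first policy whose conditions all match."""
    for policy in policies:
        if all(attempt.get(k) == v for k, v in policy["conditions"].items()):
            return policy["permission"]
    return "deny"  # no matching policy: the connection attempt is rejected

assert evaluate({"group": "Staff", "tunnel": "L2TP"}) == "grant"
assert evaluate({"group": "Contractors", "tunnel": "L2TP"}) == "deny"
```

The key property mirrored here is that policy order matters: the first rule whose conditions all match decides the outcome, and an attempt matching no rule is rejected.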
How does SSL work? Secure Sockets Layer uses a cryptographic system that encrypts data with two keys: a public key known to everyone and a private key known only to the recipient of the message.
When an SSL digital certificate is installed on a web site, users can see a padlock icon in the bottom area of the browser. When an Extended Validation certificate is installed on a web site, users with the latest versions of Firefox, Internet Explorer, or Opera will see a green address bar in the URL area of the browser.
How does IPSec work? IPSec is an Internet Engineering Task Force (IETF) standard suite of protocols that provides data authentication, integrity, and confidentiality as data is transferred between communication points across IP networks. IPSec provides data security at the IP packet level. A packet is a data bundle that is organized for transmission across a network, and it includes a header and payload (the data in the packet). IPSec emerged as a viable network security standard because enterprises wanted to ensure that data could be securely transmitted over the Internet. IPSec protects against possible security exposures by protecting data while in transit.
How do I deploy IPSec for a large number of computers? Use server and domain isolation, with IPsec policies deployed through Group Policy.
Forward secrecy has been used as a synonym for perfect forward secrecy [1], since the term perfect has been controversial in this context. However, at least one reference [2] distinguishes perfect forward secrecy from forward secrecy with the additional property that an agreed key will not be compromised even if agreed keys derived from the same long-term keying material in a subsequent run are compromised.
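Forward secrecy comes from deriving each session key from ephemeral values that are discarded after use. A toy finite-field Diffie-Hellman sketch makes the mechanism concrete; the tiny parameters are chosen only for illustration and are far too small for real use.

```python
import secrets

# Toy parameters: 2**32 - 5 is prime, but real deployments use large
# safe primes or elliptic-curve groups. Illustration only.
P = 4294967291
G = 5

def ephemeral_keypair():
    """Generate a fresh (private, public) pair for one session only."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each side generates ephemeral values per session...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...and both derive the same session key from the public exchange.
key_a = pow(b_pub, a_priv, P)
key_b = pow(a_pub, b_priv, P)
assert key_a == key_b

# Forward secrecy: once a_priv and b_priv are discarded, a later
# compromise of long-term keys reveals nothing about this session's
# key; the next session uses fresh ephemerals and an unrelated key.
```

In real protocols (e.g. TLS with ephemeral Diffie-Hellman cipher suites), the long-term keys only authenticate the exchange; the session secret lives and dies with the ephemeral values.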
How do I monitor IPSec? To test IPSec policies, use IPSec Monitor. IPSec Monitor (Ipsecmon.exe) provides information about which IPSec policy is active and whether a secure channel between computers is established.
What can you do with NETSH? Netsh is a command-line scripting utility that allows you to, either locally or remotely, display, modify, or script the network configuration of a computer that is currently running.
To view help for a command, type the command, followed by a space, and then type ?
* Restrict visibility – Users can view only the objects for which they have access.
The DNS system is, in fact, its own network. If one DNS server doesn't know how to translate a particular domain name, it asks another one, and so on, until the correct IP address is returned.
1. In the DHCP console, right-click the server you want to back up, and then click Backup.
2. In the Browse For Folder dialog box, select the folder that will contain the backup DHCP database, and then click OK.
Explain APIPA. A Windows-based computer that is configured to use DHCP can automatically assign itself an Internet Protocol (IP) address if a DHCP server is not available or does not exist. The Internet Assigned Numbers Authority (IANA) has reserved 169.254.0.0-169.254.255.255 for Automatic Private IP Addressing (APIPA).
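Spotting an APIPA address (and therefore, usually, a failed DHCP lease) is a one-line check with the standard-library ipaddress module:

```python
from ipaddress import ip_address, ip_network

APIPA_RANGE = ip_network("169.254.0.0/16")  # the IANA-reserved APIPA block

def is_apipa(addr: str) -> bool:
    """True if addr falls in the self-assigned APIPA range, which
    usually indicates the client could not reach a DHCP server."""
    return ip_address(addr) in APIPA_RANGE

assert is_apipa("169.254.10.20")
assert not is_apipa("192.168.1.10")
```

A machine reporting a 169.254.x.x address in `ipconfig` output is a common first clue when troubleshooting DHCP problems.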
What is the default time for the group policy refresh interval? The default refresh interval for policies is 90 minutes. The default refresh interval for domain controllers is 5 minutes. The refresh interval may be changed in the Group Policy object.
You can use one of three methods to restore Active Directory from backup media: primary restore, normal restore (i.e., non-authoritative), and authoritative restore.

Primary Restore: This method rebuilds the first domain controller in a domain when there is no other way to rebuild the domain. Perform a primary restore only when all the domain controllers in the domain are lost and you want to rebuild the domain from the backup. Members of the Administrators group can perform the primary restore on the local computer. On a domain controller, only members of the Domain Admins group can perform this restore.
Normal Restore: This method reinstates the Active Directory data to the state before the backup, and then updates the data through the normal replication process. Perform a normal restore for a single domain controller to a previously known good state.
How do you change the DS Restore admin password? Microsoft Windows 2000 uses the Setpwd utility to reset the DS Restore Mode password. In Microsoft Windows Server 2003, that functionality has been integrated into the NTDSUTIL tool. Note that you cannot use this procedure if the target server is running in DSRM.
How can you forcibly remove AD from a server? In Run, use the command: dcpromo /forceremoval
What is the SYSVOL folder? The sysvol folder stores the server's copy of the domain's public files. The contents of the sysvol folder, such as group policies and logon scripts, are replicated to all domain controllers in the domain. The sysvol folder must be located on an NTFS volume.
What problems arise if the DNS server fails? If your DNS server fails, you can't resolve host names, and you can't resolve domain controller IP addresses.
How can you restrict running certain applications on a machine? The Group Policy Object Editor and the Software Restriction Policies extension of Group Policy Object Editor are used to restrict running certain applications on a machine. For Windows XP computers that are not participating in a domain, you can use the Local Security Settings snap-in to access Software Restriction Policies.
How will you map a folder through AD? Navigate to the domain user's properties and give the path on the Profile tab in the format \\servername\sharename.

Explain Quotas. Disk quota is a feature of NTFS that helps to restrict or manage disk usage by ordinary users. It can be implemented on a per-user, per-volume basis. By default it is disabled, and administrative privilege is required to enable it. In Windows Server 2003 we can control quotas only at the drive level, but in Windows Server 2008 we can establish quotas at the folder level.
* Normal Backup: This is the default backup, in which all files are backed up even if they were backed up before.
* Incremental Backup: In this type of backup, only the files that haven't been backed up since the last backup are backed up.
* Differential Backup: This backup is similar to incremental backup because it does not back up files already covered by a normal backup, but it differs from incremental backup because it backs up the same differentially changed files again at the next differential backup.
* Copy Backup: This type of backup is used during system state backup and ASR backup. It is used in special conditions only.
* Daily Backup: This type of backup covers only those files that are created on that particular day.
* System State Backup: This type of backup covers the boot files, the COM+ class registration database, and the registry. On a server it also backs up AD.
* ASR Backup: This type of backup covers the entire boot partition, including the OS and user data. This should be the last troubleshooting method used to recover an OS from disaster.
Specify the port numbers for AD, DNS, DHCP, HTTP, HTTPS, SMTP, POP3 & FTP.
AD uses LDAP on port 389 and RPC on port 135; DNS – 53; DHCP – 67/68; HTTP – 80; HTTPS – 443; SMTP – 25; POP3 – 110; FTP – 20/21.
What connector type would you use to connect to the Internet, and what are the two methods of sending mail over that connector? The SMTP connector: forward to a smart host, or use DNS to route to each address.
What are the standard port numbers for SMTP, POP3, IMAP4, RPC, LDAP andGlobal Catalog?
- 25 SMTP
- 110 POP3
- 143 IMAP4
- 135 RPC
- 389 LDAP
- 636 LDAP (SSL)
- 3268 Global Catalog
- 465 SMTP/SSL
- 993 IMAP4/SSL
- 563 NNTP/SSL
- 53 DNS
- 80 HTTP
- 88 Kerberos
- 119 NNTP
ASP.NET
What is the use of NNTP with Exchange? This protocol is used for newsgroups in Exchange.
Disaster Recovery Plan? Ans: Deals with the restoration of a computer system, with all attendant software and connections, to full functionality under a variety of damaging or interfering external conditions.
What would a rise in remote queue length generally indicate? This means mail is not being sent to other servers. This can be explained by outages or performance issues with the network or remote servers.
What would a rise in the Local Delivery queue generally mean? This indicates a performance issue or outage on the local server. Reasons could be slowness in consulting AD, or slowness in handing messages off to local delivery or SMTP delivery. It could also be databases being dismounted or a lack of disk space.

What are the disadvantages of circular logging? In the event of a corrupt database, data can only be restored to the last backup.
What is the maximum storage capacity for the Exchange Standard version? What would you do if it reaches maximum capacity? 16 GB. Once the store dismounts at the 16 GB limit, the only way to mount it again is to use the 17 GB registry setting, and even this is a temporary solution. If you apply Exchange 2003 SP2 to your Standard Edition server, the database size limit is initially increased to 18 GB. While you can go on to change this figure to a value up to 75 GB, it's important to note that 18 GB is the default setting:

HKLM\System\CurrentControlSet\Services\MSExchangeIS\{server name}\Private-{GUID}

It therefore follows that for registry settings that relate to making changes on a public store, you'll need to work in the following registry key:
HKLM\System\CurrentControlSet\Services\MSExchangeIS\{server name}\Public-{GUID}
Under the relevant database key, create the following registry information: Value type: REG_DWORD
Set the value data to be the maximum size in gigabytes that the database is allowed to grow to. For the Standard Edition of Exchange, you can enter numbers between 1 and 75. For the Enterprise Edition, you can enter numbers between 1 and 8000. Yes, that's right, between 1 GB and 8000 GB, or 8 TB. Therefore, even if you are running the Enterprise Edition of Exchange, you can still enforce overall database size limits of, say, 150 GB if you so desire.
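Assembled as a .reg fragment, the setting might look like the sketch below. The value name shown is the one documented for Exchange 2003 SP2; verify it against your service pack's documentation before importing anything.

```
Windows Registry Editor Version 5.00

; Path shown for a private store; substitute your server name and GUID.
; Value name per the Exchange 2003 SP2 documentation -- verify for your SP.
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\{server name}\Private-{GUID}]
"Database Size Limit in Gb"=dword:00000012
; 0x12 = 18 GB, the post-SP2 default for Standard Edition
```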
The non-text elements will be encoded by the sender of the message and decoded by the message recipient. Coding of non-ASCII characters is often based on "quoted printable" coding, with binary data typically using Base64 coding.
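Both encodings mentioned are available in the Python standard library, which makes the round trip easy to demonstrate:

```python
import base64
import quopri

# Quoted-printable keeps mostly-ASCII text readable, escaping other bytes.
text = "café".encode("utf-8")
qp = quopri.encodestring(text)
assert b"=C3=A9" in qp               # the non-ASCII bytes are escaped
assert quopri.decodestring(qp) == text

# Base64 turns arbitrary binary data into a safe ASCII alphabet.
binary = bytes([0, 127, 255])
b64 = base64.b64encode(binary)
assert b64 == b"AH//"
assert base64.b64decode(b64) == binary
```

Quoted-printable is a good fit when most of the content is ASCII (the encoded form stays human-readable), while Base64 expands data by a fixed ~33% and is preferred for genuinely binary attachments.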
List the services of Exchange Server 2003. There are several services involved with Exchange Server, and stopping different services will accomplish different things. The services are interdependent, so when you stop or start various services you may see a message about having to stop dependent services. If you do stop dependent services, don't forget to restart them again when you restart the service that you began with.
To shut down Exchange completely on a given machine, you need to stop all of the following services:
Microsoft Exchange Routing Engine (RESvc): This service is used for routing and topology information for routing SMTP-based messages. This service is started by default.

Microsoft Exchange System Attendant (MSExchangeSA): This service handles various cleanup and monitoring functions. One of the most important functions of the System Attendant is the Recipient Update Service (RUS), which is responsible for mapping attributes in Active Directory to the Exchange subsystem and enforcing recipient policies. When you create a mailbox for a user, you simply set some attributes on a user object. The RUS takes that information and does all of the work in the background with Exchange to really make the mailbox. If you mailbox-enable or mail-enable objects and they don't seem to work, the RUS is one of the first places you will look for an issue. If you need to enable diagnostics for the RUS, the parameters are maintained in a separate service registry entry called MSExchangeAL. This isn't a real service; it is simply the supplied location to modify RUS functionality. This service is started by default.
How can you recover a deleted mailbox? In Exchange, if you delete a mailbox, it is disconnected for a default period of 30 days (the mailbox retention period), and you can reconnect it at any point during that time. Deleting a mailbox does not mean that it is permanently deleted (or purged) from the information store database right away, only that it is flagged for deletion. At the end of the mailbox retention period, the mailbox is permanently deleted from the database. You can also permanently delete the mailbox by choosing to purge it at any time.
This also means that if you mistakenly delete a mail-enabled user account, you can recreate that user object, and then reconnect that mailbox during the mailbox retention period.
Configure the deleted mailbox retention period at the mailbox store object level.
The mailbox is now flagged for deletion and will be permanently deleted at the end of the mailbox retention period unless you recover it.
1. In Exchange System Manager, locate the mailbox store that contains the disconnected mailbox.
4. Right-click the disconnected mailbox, click Reconnect, and then select the appropriate user from the dialog box that appears.
5. Click OK.
Note: Only one user may be connected to a mailbox, because all globally unique identifiers (GUIDs) are required to be unique across an entire forest.
1. In Active Directory Users and Computers, create a new user object. When you create the new user object, click to clear the Create an Exchange Mailbox check box.
2. On the Limits tab, change the Keep deleted mailboxes for (days) defaultsetting of 30 to the number of days you want.
3. Click OK.
If you have deleted a user and then recreated the same user, how will you give access to the previous mailbox? Reconnect the deleted user's mailbox to the recreated user, provided the recreated user doesn't have a mailbox.
Which protocol is used for Public Folders? NNTP (Network News Transfer Protocol). Both NNTP and IMAP help clients access public folders, but it is actually SMTP that sends the mail across to the public folder.
IIS
Automatic Process Recycling: IIS 6.0 automatically stops and restarts faulty Web sites and applications based on a flexible set of criteria, including CPU utilization and memory consumption, while queuing requests.
Edit-While-Running
Difference between PDC & BDC: The PDC contains a writable copy of the SAM database, whereas a BDC contains a read-only copy of the SAM database. It is not possible to reset a password or create objects without the PDC in Windows NT.
What is DNS & WINS? DNS is the Domain Name System, which resolves host names to IP addresses using fully qualified domain names; it is the Internet standard for name resolution. WINS is the Windows Internet Name Service, which resolves NetBIOS computer names to IP addresses.
What is the process of DHCP for getting the IP address to the client? The client and server exchange four messages, often abbreviated DORA: the client broadcasts a DHCPDISCOVER, the server replies with a DHCPOFFER containing an available address, the client broadcasts a DHCPREQUEST for the offered address, and the server confirms the lease with a DHCPACK.
What are the port numbers for FTP, Telnet, HTTP, DNS? FTP – 21, Telnet – 23, HTTP – 80, DNS – 53, Kerberos – 88, LDAP – 389.
What are the database files used for Active Directory? The key AD database files—edb.log, ntds.dit, res1.log, res2.log, and edb.chk—all reside in %systemroot%\ntds on a domain controller (DC) by default. During AD installation, Dcpromo lets you specify alternative locations for these log files and the database file NTDS.DIT.
What is the use of Terminal Services? Terminal Services can be used in Remote Administration mode to administer a server remotely, or in Application Server mode to run an application on one server so that users can log in to that server to use the application.
How to monitor replication? We can use the Replmon tool from the Support Tools.
• Normal Backup
• Incremental Backup
• Differential Backup
• Daily Backup
• Copy Backup
1. Configuration partition
2. Schema partition
3. Domain partition
4. Application partition (only in Windows 2003, not available in Windows 2000)
What are the port numbers for Kerberos, LDAP and Global Catalog? Kerberos– 88, LDAP – 389, Global Catalog – 3268
What are the problems that generally come across with DHCP? The scope is full, with no IP addresses available for new machines; scope options are not configured properly (e.g., the default gateway); incorrect creation of scopes; and so on.
What is TTL & how to set TTL time in DNS? TTL (Time to Live) is the setting that controls the amount of time a record should remain in cache after name resolution has happened. We can set the TTL in the SOA (Start of Authority) record of DNS.
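In a standard zone file, the SOA record carries these intervals in seconds. A sketch of what the record looks like (names and values here are illustrative, not taken from any real zone):

```
example.com.  IN  SOA  ns1.example.com. hostmaster.example.com. (
                  2024010101  ; serial
                  3600        ; refresh - how often secondaries check for updates
                  600         ; retry - wait after a failed refresh
                  604800      ; expire - secondary stops answering after this
                  86400 )     ; minimum / negative-caching TTL
```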
What is RIS and what are its requirements? RIS is the Remote Installation Service, which is used to install an operating system remotely.
Client requirements
Software Requirements
The following network services must be active on the RIS server or on another server in the network:
• Domain Name System (DNS service)
• Dynamic Host Configuration Protocol (DHCP)
• Active Directory directory service
What are FSMO Roles? The flexible single master operation (FSMO) roles are:
Domain Naming Master and Schema Master are forest-level roles. PDC Emulator, Infrastructure Master, and RID Master are domain-level roles. The first server in the forest performs all 5 roles by default. Later we can transfer the roles.
PDC Emulator: The server performing this role acts as a PDC in a mixed-mode domain to synchronize directory information between Windows 2000 DCs and Windows NT BDCs. The server performing this role will contain the latest password information. This role is also responsible for time synchronization in the forest.

Infrastructure Master: This role is responsible for managing group membership information in the domain. It is responsible for updating the DN when the name or location of an object is modified.
RID Master: The server performing this role provides pools of RIDs to the other domain controllers in the domain. A full SID is the combination of the domain SID and a RID (SID = domain SID + RID), where the domain SID is the security identifier common to all objects in the domain and the RID is a relative identifier unique to each object.
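The domain-SID-plus-RID composition can be sketched in a few lines. The domain SID below is a made-up example in the standard S-1-5-21-… format; the well-known RIDs 500 (built-in Administrator) and 512 (Domain Admins) are real.

```python
# Made-up example domain SID in the usual S-1-5-21-x-y-z format.
DOMAIN_SID = "S-1-5-21-1004336348-1177238915-682003330"

def build_sid(rid: int) -> str:
    """Append an object's RID to the domain SID to form its full SID."""
    return f"{DOMAIN_SID}-{rid}"

# Well-known RIDs: 500 = built-in Administrator, 512 = Domain Admins.
assert build_sid(500) == DOMAIN_SID + "-500"
assert build_sid(500) != build_sid(512)  # the RID makes each SID unique
```

This is why the RID Master matters: every DC issuing new objects must draw from a RID pool that no other DC is using, or two objects could end up with identical SIDs.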
Through the MMC, we can configure the Domain Naming Master role through Active Directory Domains and Trusts; we can configure the Schema Master role through the Active Directory Schema snap-in; and the other three roles we can configure through Active Directory Users and Computers.
How to deploy patches, and what software is used for this process? Using a SUS (Software Update Services) server we can deploy patches to all clients in the network. We need to configure the option called "Synchronize with Microsoft software update server" and schedule a time to synchronize on the server. We then need to approve new updates based on requirements, and the approved updates will be deployed to the clients. We can configure the clients by changing the registry manually or through Group Policy by adding the WUAU administrative template to a group policy.
NLB (Network Load Balancing) cluster for balancing load between servers. This cluster will not provide any high availability. Usually preferable at edge servers like web or proxy.
Quorum: Shared storage needs to be provided for all servers; it keeps information about the clustered application and session state and is useful in a failover situation. This is very important: if the quorum disk fails, the entire cluster fails.
Is it possible to rename the domain name & how? In Windows 2000 it is not possible. In Windows 2003 it is possible, using the domain rename tools (rendom.exe).

What is an SOA record? SOA is the Start of Authority record, the first record in DNS, which controls the startup behavior of DNS. We can configure the TTL, refresh, and retry intervals in this record.
What is a Stub zone and what is the use of it? Stub zones are a new feature of DNS in Windows Server 2003 that can be used to streamline name resolution, especially in a split namespace scenario. They also help reduce the amount of DNS traffic on your network, making DNS more efficient, especially over slow WAN links.
What is ASR (Automated System Recovery) and how to implement it? ASR is a two-part system; it includes ASR backup and ASR restore. The ASR Wizard, located in Backup, does the backup portion. The wizard backs up the system state, system services, and all the disks that are associated with the operating system components. ASR also creates a file that contains information about the backup, the disk configurations (including basic and dynamic volumes), and how to perform a restore.
You can access the restore portion by pressing F2 when prompted in the text-mode portion of setup. ASR reads the disk configurations from the file that it creates. It restores all the disk signatures, volumes, and partitions on (at a minimum) the disks that you need to start the computer. ASR will try to restore all the disk configurations, but under some circumstances it might not be able to. ASR then installs a simple installation of Windows and automatically starts a restoration using the backup created by the ASR Wizard.
What are the different levels at which we can apply Group Policy? We can apply group policy at the site level, domain level, and OU level.
What is Domain Policy, Domain Controller Policy, Local Policy and Group Policy? Domain policy applies to all computers in the domain, because by default it is associated with the domain GPO, whereas domain controller policy is applied only on domain controllers; by default, the domain controller security policy is associated with the domain controller GPO. Local policy is applied to that particular machine only and affects that computer only.
What is the use of the SYSVOL folder? Policies and scripts saved in the SYSVOL folder will be replicated to all domain controllers in the domain. FRS (File Replication Service) is responsible for replicating all policies and scripts.
What is folder redirection? Folder Redirection is a user group policy. Once you create the group policy and link it to the appropriate folder object, an administrator can designate which folders to redirect and where. To do this, the administrator needs to navigate to the following location in the Group Policy Object:
In the Properties of the folder, you can choose Basic or Advanced folder redirection, and you can designate the server file system path to which the folder should be redirected.

The %USERNAME% variable may be used as part of the redirection path, thus allowing the system to dynamically create a newly redirected folder for each user to whom the policy object applies.
Features of Windows 2003
Internet Information Services 6.0 (not installed by default): Highly secured and locked down by default, with a new architectural model that includes features such as process isolation and a metabase stored in XML format.
Saved Queries: Active Directory Users and Computers now includes a new node named Saved Queries, which allows an administrator to create a number of predefined queries that are saved for future access.
Group Policy Management Console (GPMC) is a new tool for managing Group Policy in Windows Server 2003. While Group Policy-related elements have typically been found across a range of tools, such as Active Directory Users and Computers, the Group Policy MMC snap-in, and others, GPMC acts as a single consolidated environment for carrying out Group Policy-related tasks.
RSoP tool, the administrator could generate a query that would process all theapplicable Group Policy settings for that user for the local computer or anothercomputer on the network. After processing the query, RSoP would present theexact Group Policy settings that apply to that user, as well as the source GroupPolicy object that was responsible for the setting.
Distributed File System: DFS is enhanced for Windows Server 2003, EnterpriseEdition and Windows Server, Datacenter Edition by allowing multiple DFS rootson a single server. You can use this feature to host multiple DFS roots on asingle server, reducing administrative and hardware costs of managing multiplenamespaces and multiple replicated namespaces.
Improvements in clustering: in Datacenter Edition, the maximum supported cluster size has been increased from 4 nodes in Windows 2000 to 8 nodes in Windows Server 2003. In Enterprise Edition, the maximum supported cluster size has been increased from 2 nodes in Windows 2000 Advanced Server to 8 nodes in Windows Server 2003.
Server clusters are fully supported on computers running the 64-bit versions of Windows Server 2003. Windows Server 2003 supports Encrypting File System (EFS) on clustered (shared) disks.
Internet Connection Firewall (ICF): ICF, designed for use in a small business, provides basic protection on computers directly connected to the Internet or on local area network (LAN) segments. ICF is available for LAN, dial-up, VPN, or PPPoE connections. ICF integrates with ICS or with the Routing and Remote Access service.
Open file backup: the Backup utility included with Windows Server 2003 now supports open file backup. In Windows 2000, files had to be closed before initiating backup operations. Backup now uses shadow copies to ensure that any open files being accessed by users are also backed up. (You may need to modify some registry keys.)
Stub zones: introduced in Windows 2003 DNS. A stub zone is like a secondary zone in that it obtains its resource records from other name servers (one or more master name servers). A stub zone is also read-only like a secondary zone, so administrators cannot manually add, remove, or modify resource records on it. However, while secondary zones contain copies of all the resource records in the corresponding zone on the master name server, stub zones contain only three kinds of resource records:
a. A copy of the SOA record for the zone.
b. Copies of NS records for all name servers authoritative for the zone.
c. Copies of (glue) A records for all name servers authoritative for the zone.
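As an illustration, the entire contents of a stub zone for a hypothetical domain example.com (all names and addresses below are made up) would be just these few records:

```
example.com.      IN  SOA  ns1.example.com. hostmaster.example.com. (
                           2024010101 3600 600 86400 3600 )
example.com.      IN  NS   ns1.example.com.
example.com.      IN  NS   ns2.example.com.
ns1.example.com.  IN  A    192.0.2.1
ns2.example.com.  IN  A    192.0.2.2
```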
That's it: no CNAME records, MX records, SRV records, or A records for other hosts in the zone. So while a secondary zone can be quite large for a big company's network, a stub zone is always very small, just a few records. This means replicating zone information from master to stub zone adds almost no DNS traffic to your network, as the records for name servers rarely change unless you decommission an old name server or deploy a new one.
Difference between NT & 2000
In Windows NT only the PDC has a writable copy of the SAM database; each BDC has a read-only copy. In Windows 2000 both the DC and any additional DC have a writable copy of the database.
Windows NT does not support the FAT32 file system; Windows 2000 does. The default authentication protocol in NT is NTLM (NT LAN Manager); in Windows 2000 the default authentication protocol is Kerberos v5.
Difference between PDC & BDC: the PDC contains a writable copy of the SAM database, whereas a BDC contains a read-only copy. It is not possible to reset a password without the PDC in Windows NT, but both can participate in user authentication. If the PDC fails, you have to manually promote a BDC to PDC from Server Manager.
What is DNS & WINS? DNS is the Domain Name System, used to resolve host names to IP addresses and also IP addresses to host names. It uses fully qualified domain names and is an Internet standard; a fully qualified domain name can be up to 255 characters long.
If a DHCP server is not available, what happens to the client? The first time a client tries to get an IP address and no DHCP server is found, the client assigns itself an address from the APIPA (Automatic Private IP Addressing) range, 169.254.0.0/16. If the client already has an IP address with a lease duration, it uses that IP until the lease expires.
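A quick way to recognize a self-assigned APIPA address is to test whether it falls in the 169.254.0.0/16 link-local block. A minimal sketch in Python:

```python
import ipaddress

# APIPA (Automatic Private IP Addressing) hands out addresses from the
# link-local block 169.254.0.0/16 when no DHCP server answers.
APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def is_apipa(addr: str) -> bool:
    """Return True if addr falls in the APIPA range."""
    return ipaddress.ip_address(addr) in APIPA_NET

print(is_apipa("169.254.10.20"))  # True: self-assigned, DHCP unreachable
print(is_apipa("192.168.1.50"))   # False: a normally leased address
```

Seeing a 169.254.x.x address on a client is therefore a standard sign that it failed to reach a DHCP server.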
Windows Server 2003 Active Directory supports the following types of trust relationships. Tree-root trust: tree-root trust relationships are automatically established when you add a new tree root domain to an existing forest. This trust relationship is transitive and two-way.
By default, implicit two-way transitive trust relationships are established between all domains in a Windows 2000/2003 forest.
NACK: if the client's requested IP address is no longer valid after the server's offer, the server sends a negative acknowledgement (DHCPNAK).
A volume is a storage unit made from free space on one or more disks. It can be formatted with a file system and assigned a drive letter. Volumes on dynamic disks can have any of the following layouts: simple, spanned, mirrored, striped, or RAID-5.
A simple volume uses free space from a single disk. It can be a single region on a disk or consist of multiple concatenated regions. A simple volume can be extended within the same disk or onto additional disks. If a simple volume is extended across multiple disks, it becomes a spanned volume.
A spanned volume is created from free disk space that is linked together from multiple disks. You can extend a spanned volume onto a maximum of 32 disks. A spanned volume cannot be mirrored and is not fault-tolerant.
The system volume contains the hardware-specific files that are needed to load Windows (for example, Ntldr, Boot.ini, and Ntdetect.com). The system volume can be, but does not have to be, the same as the boot volume.
The boot volume contains the Windows operating system files, which are located in the %Systemroot% and %Systemroot%\System32 folders.
RAID 0 – Striping
Can the GC server and Infrastructure Master be placed on a single server? If not, explain why. No. The Infrastructure Master updates cross-domain object references by comparing its data with a Global Catalog; if it is placed on a GC it never finds outdated references, so those updates are never replicated to the other DCs in the domain.
What is the size of the log files created before updating into ntds.dit, and how many are there? Three log files: Edb.log, Res1.log and Res2.log, each initially 10 MB.
What does SYSVOL contain? The SYSVOL folder contains the public information of the domain and the information for replication, e.g. Group Policy objects and scripts can be found in this directory.
What are the port numbers for SMTP, Kerberos, LDAP, and the GC server? SMTP 25, Kerberos 88, GC 3268, LDAP 389.
What are the new features in Windows 2003 related to ADS, replication, and trust? ADS: groups can now contain more than 5,000 members.
What are the different types of Terminal Services? Remote Administration mode and Application Server mode.
What is meant by root DNS servers? Public DNS servers hosted on the Internet that form the root of the DNS namespace.
How do down-level clients register their names with the DNS server? Enable WINS integration with DNS.
What is RSoP? RSoP (Resultant Set of Policy) is the resultant set of policy applied to an object via Group Policy.
What is the default lease period for a DHCP server? 8 days by default.
The Windows 2000 boot sequence:
• The Windows 2000 loader switches the processor to the 32-bit flat memory model.
• The Windows 2000 loader starts a mini-file system.
• The Windows 2000 loader reads the BOOT.INI file and displays the operating system selections (boot loader menu).
• The Windows 2000 loader loads the operating system selected by the user. If Windows 2000 is selected, NTLDR runs NTDETECT.COM. For other operating systems, NTLDR loads BOOTSECT.DOS and gives it control.
• NTDETECT.COM scans the hardware installed in the computer, and reports the list to NTLDR for inclusion in the Registry under the HKEY_LOCAL_MACHINE\HARDWARE hive.
• NTLDR then loads NTOSKRNL.EXE and gives it the hardware information collected by NTDETECT.COM. Windows NT enters the Windows load phases.
What are WINS hybrid & mixed modes? Systems that are configured to use WINS are normally configured as hybrid (H-node) clients, meaning they attempt to resolve NetBIOS names via a WINS server and then try a broadcast (B-node) if WINS is unsuccessful. Most systems can be configured to resolve NetBIOS names in one of four modes: B-node (broadcast only), P-node (peer-to-peer, WINS only), M-node (mixed: broadcast first, then WINS), and H-node (hybrid: WINS first, then broadcast).
What is a disk quota? A disk quota specifies limits on disk space usage.
What are the port numbers for SMTP, Kerberos, LDAP, and the GC server? SMTP 25, Kerberos 88, GC 3268, LDAP 389.
What are some of the new tools and features provided by Windows Server 2008? Windows Server 2008 provides a desktop environment similar to Microsoft Windows Vista and includes tools also found in Vista, such as the new Backup snap-in and the BitLocker drive encryption feature. Windows Server 2008 also provides the new IIS 7 web server and the Windows Deployment Services.
What are the different editions of Windows Server 2008? The entry-level version of Windows Server 2008 is the Standard Edition. The Enterprise Edition provides a platform for large enterprise-wide networks. The Datacenter Edition provides support for unlimited Hyper-V virtualization and advanced clustering services. The Web Edition is a scaled-down version of Windows Server 2008 intended for use as a dedicated web server. The Standard, Enterprise, and Datacenter Editions can be purchased with or without the Hyper-V virtualization technology.
How do you configure and manage a Windows Server 2008 core installation? This stripped-down version of Windows Server 2008 is managed from the command line.
Which Control Panel tool enables you to automate the running of server utilities and other applications? The Task Scheduler enables you to schedule the launching of tools such as Windows Backup and Disk Defragmenter.
What are some of the items that can be accessed via the System Properties dialog box? You can access virtual memory settings and the Device Manager via the System Properties dialog box.
Which Windows Server utility provides a common interface for tools and utilities and provides access to server roles, services, and monitoring and drive utilities? The Server Manager provides both the interface and access to a large number of the utilities and tools that you will use as you manage your Windows server.
When a child domain is created in the domain tree, what type of trust relationship exists between the new child domain and the tree's root domain? Child domains and the root domain of a tree are assigned transitive trusts. This means that the root domain and child domain trust each other and allow resources in any domain in the tree to be accessed by users in any domain in the tree.
What are some of the other roles that a server running Windows Server 2008 could fill on the network? A server running Windows Server 2008 can be configured as a domain controller, a file server, a print server, a web server, or an application server. Windows servers can also have roles and features that provide services such as DNS, DHCP, and Routing and Remote Access.
Which Windows Server 2008 tools make it easy to manage and configure a server's roles and features? The Server Manager window enables you to view the roles and features installed on a server and also to quickly access the tools used to manage these various roles and features. The Server Manager can be used to add and remove roles and features as needed.
What utility is provided by Windows Server 2008 for managing disk drives, partitions, and volumes? The Disk Manager provides all the tools for formatting, creating, and managing drive volumes and partitions.
What is the difference between a basic and dynamic drive in the Windows Server 2008 environment? A basic disk embraces the MS-DOS disk structure; a basic disk can be divided into partitions (simple volumes). Dynamic disks consist of a single partition that can be divided into any number of volumes. Dynamic disks also support Windows Server 2008 RAID implementations.
What is the most foolproof strategy for protecting data on the network? A regular backup of network data provides the best method of protecting yourself from data loss.
What protocol stack is installed by default when you install Windows Server 2008 on a network server? TCP/IP (v4 and v6) is the default protocol for Windows Server 2008. It is required for Active Directory implementations and provides for connectivity on heterogeneous networks.
What term is used to refer to the first domain created in a new Active Directory tree? The first domain created in a tree is referred to as the root domain. Child domains created in the tree share the same namespace as the root domain.
What are some of the tools used to manage Active Directory objects in a Windows Server 2008 domain? When Active Directory is installed on a server (making it a domain controller), a set of Active Directory snap-ins is provided. The Active Directory Users and Computers snap-in is used to manage Active Directory objects such as user accounts, computers, and groups. The Active Directory Domains and Trusts snap-in enables you to manage the trusts that are defined between domains. The Active Directory Sites and Services snap-in provides for the management of domain sites and subnets.
What type of group is not available in a domain that is running at the mixed-mode functional level? Universal groups are not available in a mixed-mode domain. The functional level must be raised to Windows 2003 or Windows 2008 to make these groups available.
Can servers running Windows Server 2008 provide services to clients when they are not part of a domain? Servers running Windows Server 2008 can be configured to participate in a workgroup. The server can provide some services to the workgroup peers but does not provide the security and management tools provided to domain controllers.
What does the use of Group Policy provide you as a network administrator? Group Policy provides a method of controlling user and computer configuration settings for Active Directory containers such as sites, domains, and OUs. GPOs are linked to a particular container, and then individual policies and administrative templates are enabled to control the environment for the users or computers within that particular container.
How can you make sure that network clients have the most recent Windows updates installed and have other important security features such as the Windows Firewall enabled before they can gain full network access? You can configure a Network Policy Server (a service available in the Network Policy and Access Services role). The Network Policy Server can be configured to compare desktop client settings with health validators to determine the level of network access afforded to the client.
What types of zones would you want to create on your DNS server so that both queries to resolve hostnames to IP addresses and queries to resolve IP addresses to hostnames are handled successfully? You would create both a forward lookup zone and a reverse lookup zone on your Windows Server 2008 DNS server.
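Reverse lookup zones answer address-to-name queries using PTR records under the special in-addr.arpa domain (ip6.arpa for IPv6), where the address octets appear in reverse order. Python's standard ipaddress module can show the naming convention directly; the address below is an example:

```python
import ipaddress

# Forward lookup: name -> address (A record).
# Reverse lookup: address -> name (PTR record) under in-addr.arpa,
# with the octets reversed.
addr = ipaddress.ip_address("192.0.2.53")
print(addr.reverse_pointer)  # 53.2.0.192.in-addr.arpa
```

So a query for the name of 192.0.2.53 is really a PTR query for 53.2.0.192.in-addr.arpa, which the reverse lookup zone answers.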
What tool enables you to manage your Windows Server 2008 DNS server? The DNS snap-in enables you to add or remove zones and to view the records in your DNS zones. You can also use the snap-in to create records such as a DNS resource record.
How is the range of IP addresses defined for a Windows Server 2008 DHCP server? The IP addresses supplied by the DHCP server are held in a scope. A scope that contains more than one subnet of IP addresses is called a superscope. IP addresses in a scope that you do not want to lease can be included in an exclusion range.
How can you configure the DHCP server so that it provides certain devices with the same IP address each time the address is renewed? You can create a reservation for the device (or create reservations for a number of devices). To create a reservation, you need to know the MAC hardware address of the device. You can use the ipconfig or nbtstat command-line utilities to determine the MAC address for a network device such as a computer or printer.
To negate rogue DHCP servers running within a domain, what is required for your DHCP server to function? The DHCP server must be authorized in Active Directory before it can function in the domain.
What is DHCP? DHCP stands for Dynamic Host Configuration Protocol, a communications protocol that lets network administrators centrally manage and automate the assignment of Internet Protocol (IP) addresses in an organization's network. DHCP assigns IP addresses to computers and other devices that are enabled as DHCP clients. Deploying DHCP servers on the network automatically provides computers and other TCP/IP-based network devices with valid IP addresses and the additional configuration parameters these devices need, called DHCP options, which allow them to connect to other network resources, such as DNS servers, WINS servers and routers. A client configured for DHCP sends out a broadcast request asking for an address; the DHCP server then issues a "lease" and assigns it to that client. The time period of a valid lease can be specified on the server. DHCP reduces the amount of time required to configure clients and allows one to move a computer to various networks and have it configured with the appropriate IP address, gateway and subnet mask.
At what layer of the OSI model does DHCP function? DHCP is an application-layer protocol; it is carried over UDP (transport layer) and IP (network layer).
What is DORA? The chosen DHCP server sends the lease information (the IP address, and potentially a subnet mask, DNS server, WINS server, WINS node type, domain name, and default gateway) to the workstation in a message called the DHCPACK (data communications jargon for acknowledgement). You can remember the four parts of a DHCP exchange by the mnemonic DORA: Discover, Offer, Request, and ACK.
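The DORA sequence can be sketched as a tiny simulation; this is a toy model of the message flow only, not a real DHCP implementation, and the address used is an example:

```python
def dora_exchange(offered_addresses):
    """Simulate the four-message DHCP handshake; return (messages, lease)."""
    messages = ["DHCPDISCOVER"]         # client broadcasts, looking for servers
    if not offered_addresses:
        return messages, None           # no server answered; client falls back to APIPA
    messages.append("DHCPOFFER")        # a server offers an address
    chosen = offered_addresses[0]       # client picks the first offer it received
    messages.append("DHCPREQUEST")      # client broadcasts its choice
    messages.append("DHCPACK")          # chosen server confirms the lease
    return messages, chosen

msgs, lease = dora_exchange(["192.168.1.100"])
print(msgs)   # ['DHCPDISCOVER', 'DHCPOFFER', 'DHCPREQUEST', 'DHCPACK']
print(lease)  # 192.168.1.100
```

Note that the REQUEST is also broadcast, so any other servers that made offers know their addresses were declined.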
There are certain situations, however, when you might want to lengthen this lease period to several weeks or months or even longer. These situations include (a) when you have a stable network where computers are neither joined, removed, nor relocated; (b) when you have a large pool of available IP addresses to lease from; or (c) when your network is almost saturated with very little available bandwidth and you want to reduce DHCP traffic to increase available bandwidth (not by much, but sometimes every little bit helps).
What TCP/IP port numbers are used by the DHCP service? DHCP uses the same two IANA-assigned ports as BOOTP: 67/udp for the server side, and 68/udp for the client side.
What is a VLAN? A virtual LAN, commonly known as a VLAN, is a method of creating independent logical networks within a physical network. A VLAN consists of a network of computers that behave as if connected to the same wire, even though they may actually be physically connected to different segments of a LAN. Network administrators configure VLANs through software rather than hardware, which makes them extremely flexible.
Option classes: there are two option class types, User Class and Vendor Class. User Classes assign DHCP options to a group of clients that require similar configuration; Vendor Classes typically assign vendor-specific options to clients that share a common vendor type. For example, with Vendor Classes you can assign all Dell computers DHCP options that are common to those machines. The purpose of option classes is to group DHCP options for similar clients within a DHCP scope.
What is multicast? A range of class D addresses from 224.0.0.0 to 239.255.255.255 that can be assigned to computers when they ask for them. A multicast group is assigned one IP address. Multicasting can be used to send messages to a group of computers at the same time with only one copy of the message. The Multicast Address Dynamic Client Allocation Protocol (MADCAP) is used to request a multicast address from a DHCP server.
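The class D range above is exactly the 224.0.0.0/4 block, and Python's standard ipaddress module already knows it; a quick check:

```python
import ipaddress

# Class D multicast spans 224.0.0.0 through 239.255.255.255 (224.0.0.0/4).
print(ipaddress.ip_address("239.255.1.1").is_multicast)  # True
print(ipaddress.ip_address("192.0.2.1").is_multicast)    # False
```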
What is WSUS? WSUS (Windows Server Update Services, formerly Software Update Services) is Microsoft's software update server, designed to automate the process of distributing Windows operating system patches. It works by controlling the Automatic Updates applet already present on all Windows machines. Instead of many machines all going to Microsoft's website to download updates, the WSUS server downloads all updates to a locally managed server, and workstations then look there for updates.
What is DNS? DNS stands for Domain Name System, which provides name resolution for TCP/IP networks. It is a distributed database with a hierarchical structure which ensures that each hostname is unique across a local and wide area network. DNS is the name resolution system of the Internet. Using DNS allows clients to resolve names of hosts to IP addresses so that communication can take place. DNS is the foundation upon which Active Directory is built.
What is WINS? WINS (Windows Internet Naming Service) resolves Windows network computer names (also known as NetBIOS names) to IP addresses, allowing Windows computers on a network to easily find and communicate with each other.
What port number is used by WINS services? 137 (the NetBIOS name service port).
What is a firewall? What are the essential settings used in a firewall? A firewall is a system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in hardware, software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.
There are several types of firewall techniques; the basic ones are given below:
· Packet filter: looks at each packet entering or leaving the network and accepts or rejects it based on user-defined rules. Packet filtering is fairly effective and transparent to users, but it is difficult to configure. In addition, it is susceptible to IP spoofing.
· Application gateway: applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose performance degradation.
· Circuit-level gateway: applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.
· Proxy server: intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.
What is a VPN? A VPN provides extremely secure connections between private networks linked through the Internet. It allows remote computers to act as though they were on the same secure, local network.
What is an object? Active Directory objects are the entities that make up a network. An object is a distinct, named set of attributes that represents something concrete, such as a user, a printer, or an application. For example, when we create a user object, Active Directory assigns it a globally unique identifier (GUID), and we provide values for attributes such as the user's given name, surname, logon identifier, and so on.
What is the schema? The schema defines the types of objects and the attributes that each object has. The schema is what defines a user account, for example. A user account must have a name, a password, and a unique SID. A user account can also have many additional attributes, such as location, address, phone number, e-mail addresses, Terminal Services profiles, and so on.
What is LDAP? LDAP (Lightweight Directory Access Protocol) is a networking protocol for querying and modifying directory services, running over TCP/IP. The TCP port for LDAP is 389, and the current protocol version is LDAPv3.
What are groups? Groups are Active Directory (or local computer) objects that can contain users, contacts, computers, and other groups. In Windows 2003, groups are created in domains using the Active Directory Users and Computers tool. You can create groups in the root domain, in any other domain in the forest, in any organizational unit, or in any container-class object (such as the default Users container). Like user and computer accounts, groups are security principals; they are directory objects to which SIDs are assigned at creation.
What is the difference between FAT, FAT32 & NTFS?
Following are Microsoft's Windows glossary definitions for each of the three file systems:
1. File Allocation Table (FAT): a file system used by MS-DOS and other Windows-based operating systems to organize and manage files. The file allocation table (FAT) is a data structure that Windows creates when you format a volume by using the FAT or FAT32 file systems. Windows stores information about each file in the FAT so that it can retrieve the file later.
2. FAT32: a derivative of the File Allocation Table (FAT) file system. FAT32 supports smaller cluster sizes and larger volumes than FAT, which results in more efficient space allocation on FAT32 volumes.
3. NTFS: an advanced file system that provides performance, security, reliability, and advanced features that are not found in any version of FAT. For example, NTFS guarantees volume consistency by using standard transaction logging and recovery techniques. If a system fails, NTFS uses its log file and checkpoint information to restore the consistency of the file system. In Windows 2000 and Windows XP, NTFS also provides advanced features such as file and folder permissions, encryption, disk quotas, and compression.
NTFS file system features:
1. Disk quotas give you the ability to monitor and control the amount of disk space used by each user.
2. Using NTFS, you can keep access control on files and folders and support limited accounts. In FAT and FAT32, all files and folders are accessible by all users no matter what their account type is.
3. Domains can be used to tweak security options while keeping administration simple.
4. Compression available in NTFS enables you to compress files, folders, or whole drives when you're running out of disk space.
5. Removable media (such as tapes) are made more accessible through the Remote Storage feature.
6. Recovery logging helps you restore information quickly if power failures or other system problems occur.
We can convert an existing volume to NTFS in two ways:
1. Back up all your data before formatting: you want to start with a 'clean' drive but can't afford losing your precious files? Very simple: back up your files to an external hard drive or a partition other than the one you want to convert, or burn the data onto CDs. After you're done you can format the drive with NTFS.
2. Use the convert command from the command prompt: this way, you don't need to back up; all files are preserved as they are. However, a backup is still recommended. You don't know what might go wrong, and besides, what would you lose by backing up? When converting to NTFS using convert.exe, the process usually goes smoothly.
IMPORTANT NOTE: this is a one-way conversion. Once you've converted to NTFS, you can't go back to FAT or FAT32 unless you format the drive.
1. Open a command prompt: Start | All Programs | Accessories | Command Prompt, or Start | Run | type "cmd" (without quotes) | OK.
2. Type "convert <drive letter>: /fs:ntfs" and press Enter. For example, type "convert C: /fs:ntfs" (without quotes) if you want to convert drive C.
3. If you're asked whether you want to dismount the drive, agree.
What is a backup? Copying files to a second medium (a disk or tape) as a precaution in case the first medium fails.
What is a cluster? A cluster is a group of independent computers that work together to run a common set of applications and provide the image of a single system to the client and application. The computers are physically connected by cables and programmatically connected by cluster software. These connections allow computers to use problem-solving features such as failover in server clusters and load balancing in Network Load Balancing (NLB) clusters.
What is RAID? RAID (Redundant Array of Independent Disks) is a collection of disk drives that offers increased performance and fault tolerance. There are a number of different RAID levels; the three most commonly used are 0, 1, and 5. Level 0: striping without parity (spreading out blocks of each file across multiple disks). Level 1: disk mirroring or duplexing. Level 5: block-level striping with distributed parity.
What is RAID 0? RAID level 0 is not redundant, and hence does not truly fit the "RAID" acronym. In level 0, data is split across drives, resulting in higher data throughput. Since no redundant information is stored, performance is very good, but the failure of any disk in the array results in data loss. This level is commonly referred to as striping.
What is RAID 1? RAID level 1 provides redundancy by writing all data to two or more drives. The performance of a level 1 array tends to be faster on reads and slower on writes compared to a single drive, but if either drive fails, no data is lost. This is a good entry-level redundant system, since only two drives are required; however, since one drive is used to store a duplicate of the data, the cost per megabyte is high. This level is commonly referred to as mirroring.
What is RAID 5? RAID level 5 is similar to level 4, but distributes parity among the drives. This can speed small writes in multiprocessing systems, since the parity disk does not become a bottleneck. Because parity data must be skipped on each drive during reads, however, the performance for reads tends to be considerably lower than for a level 4 array. The cost per megabyte is the same as for level 4.
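The parity that RAID 5 distributes is a simple XOR across the data blocks of a stripe: if any single block is lost, XOR-ing the surviving blocks (including the parity block) rebuilds it. A minimal sketch of the idea with three toy data blocks:

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe's worth of data blocks
p = parity(data)                     # the parity block stored on some drive

# Simulate losing data[1]: rebuild it from the parity and the survivors.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # True
```

Real RAID 5 rotates which drive holds the parity block from stripe to stripe, which is why no single drive becomes a write bottleneck the way the dedicated parity disk in RAID 4 does.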
What is IP? The Internet Protocol (IP) is a data-oriented protocol used for communicating data across a packet-switched internetwork. IP is a network-layer protocol in the Internet protocol suite and is encapsulated in a data-link-layer protocol (e.g., Ethernet).
What is TCP? Transmission Control Protocol (pronounced as separate letters). TCP is one of the main protocols in TCP/IP networks. Whereas the IP protocol deals only with packets, TCP enables two hosts to establish a connection and exchange streams of data. TCP guarantees delivery of data and also guarantees that packets will be delivered in the same order in which they were sent.
What is UDP? UDP is a connectionless protocol that, like TCP, runs on top of IP networks. Unlike TCP, UDP provides very few error-recovery services, offering instead a direct way to send and receive datagrams over an IP network. It is used primarily for broadcasting messages over a network.
Can we assign static and dynamic IP addresses using a command-prompt utility? Yes, through the netsh command. For example, netsh interface ip set address "Local Area Connection" static 192.168.0.10 255.255.255.0 192.168.0.1 sets a static address, and netsh interface ip set address "Local Area Connection" dhcp switches back to DHCP (the connection name and addresses shown are examples).
What is a gateway? A gateway is either hardware or software that acts as a bridge between two networks so that data can be transferred between a number of computers.
What is the difference between Windows NT, Windows 2000 & Windows 2003?
The major differences between NT, 2000 & 2003 are as follows:
1) Windows NT Server has the PDC/BDC concept; there is no such concept in 2000.
2) In NT Server the SAM database is read/write on the PDC and read-only on BDCs, but in a 2000 domain every domain controller has a read/write copy of the database.
3) A 2000 server can at any moment become a domain controller or a member server simply by running dcpromo, but in NT you have to reinstall the operating system.
A) In 2000 we cannot rename a domain, whereas in 2003 we can.
B) 2000 supports up to 8 processors and 64 GB RAM (in 2000 Advanced Server), whereas 2003 supports up to 64 processors and a maximum of 512 GB RAM.
C) 2000 supports IIS 5.0; 2003 supports IIS 6.0.
D) 2000 doesn't support .NET, whereas 2003 supports the Microsoft .NET Framework 2.0.
E) 2000 has Server and Advanced Server editions, whereas 2003 has Standard, Enterprise, Datacenter and Web Server editions.
F) 2000 doesn't have any 64-bit server operating system, whereas 2003 has 64-bit server editions (Windows Server 2003 x64 Standard and Enterprise Edition).
G) 2000 has a basic concept of DFS (Distributed File System) with defined roots, whereas 2003 has enhanced DFS support with multiple roots.
H) In 2000 administering complex networks is comparatively difficult, whereas 2003 offers easier administration of all networks, including complex ones.
I) In 2000 we can create 1 million users; in 2003 we can create 1 billion users.
J) 2003 has the Volume Shadow Copy Service, which is used to create hard disk snapshots for disaster recovery; 2000 doesn't have this service.
K) In 2000 we don't have end-user policy management, whereas in 2003 end-user policy management is done in the GPMC (Group Policy Management Console).
L) In 2000 we have cross-domain trust relationships; in 2003 we also have cross-forest trust relationships.
M) 2000 supports 4-node clustering; 2003 supports 8-node clustering.
N) 2003 has a larger HCL (Hardware Compatibility List) issued by Microsoft.
O) The version of 2000 is Windows NT 5.0 and of 2003 is Windows NT 5.2.
P) 2003 has a service called ADFS (Active Directory Federation Services), which is used to communicate between branches with safe authentication.
Q) In 2003 there is improved storage management using the File Server Resource Manager (FSRM) service.
R) 2003 has Windows SharePoint Services, an integrated portfolio of collaboration and communication services designed to connect people, information, processes, and systems both within and beyond the organizational firewall.
S) 2003 has improved print management compared to 2000 Server.
T) 2003 has telnet sessions available.
U) 2000 supports IPv4, whereas 2003 supports IPv4 and IPv6.
Windows 2003 also supports shadow copies, a new tool to recover files. Windows 2003 Server includes the IIS server, which, on top of better file system management, is a big advantage. In 2003 Server you can change the domain name at any time without rebuilding the domain, whereas in 2000 you have to rebuild the entire domain to change the domain name. Windows 2000 supports a maximum of 10 users accessing a shared folder at a time over the network, but Windows 2003 has no such limitation.
What is a domain?
A collection of computer, user, and group objects defined by the administrator. These objects share a common directory database, security policies, and security relationships with other domains.

What is a forest?
One or more Active Directory domains that share the same class and attribute definitions (schema), site and replication information (configuration), and forest-wide search capabilities (global catalog). Domains in the same forest are linked with two-way, transitive trust relationships.
What is a site?
One or more well-connected (highly reliable and fast) TCP/IP subnets. A site allows administrators to configure Active Directory access and replication topology to take advantage of the physical network.
Why should you strive to create only one forest for your organization?
Using more than one forest requires administrators to maintain multiple schemas, configuration containers, global catalogs, and trusts, and requires users to take complex steps to use the directory.

Why should you try to minimize the number of domains in your organization?
Adding domains to the forest increases management and hardware costs.
Why should you define the forest root domain with caution?
Define your forest root domain with caution, because once you have named the forest root domain you cannot change it without renaming and reworking the entire Active Directory tree.
Which tool helps assign roles to a server, including the role of domain controller?
The Configure Your Server Wizard.
What are the reasons to create more than one child domain under a dedicated root domain?
The reasons to create more than one child domain under the dedicated root are to meet required security policy settings, which are linked to domains; to meet special administrative requirements, such as legal or privacy concerns; to optimize replication traffic; to retain Windows NT domains; and to establish a distinct namespace.
For best performance and fault tolerance, where should you store the database and log files?
For best performance and fault tolerance, it is recommended that you place the database and the log file on separate hard disks that are NTFS drives, although NTFS is not required.
What is the function of the shared system volume folder and where is the default storage location of the folder?
The shared system volume folder stores public files that must be replicated to other domain controllers, such as logon scripts and some of the GPOs, for both the current domain and the enterprise. The default location for the shared system volume folder is %Systemroot%\Sysvol. The shared system volume folder must be placed on an NTFS drive.
What command must you use to install Active Directory using the Active Directory Installation Wizard?
Use the Dcpromo command to install Active Directory using the Active Directory Installation Wizard.
What items are installed when you use the Active Directory Installation Wizard to install Active Directory?
The Active Directory Installation Wizard installs Active Directory, creates the full domain name, assigns the NetBIOS name for the domain, sets the Active Directory database and log folder location, sets the shared system volume folder location, and installs DNS and a preferred DNS server if you requested DNS installation.
Explain the two ways you can use an answer file to install Active Directory.
An answer file that is used to install Windows Server 2003 can also include the installation of Active Directory. Or, you can create an answer file that installs only Active Directory and is run after Windows Server 2003 Setup is complete and you have logged on to the system.
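For illustration only, an Active Directory answer file is a plain-text file with a [DCInstall] section. The sketch below shows what such a file might look like for the first domain controller of a new forest; the domain names and password are placeholders, and the exact value names can vary between Windows versions, so treat this as an assumption-laden example rather than a definitive template:

```
[DCInstall]
; Promote this server as the first DC of a new forest (placeholder values)
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=example.local
DomainNetBiosName=EXAMPLE
DatabasePath=C:\Windows\NTDS
LogPath=C:\Windows\NTDS
SYSVOLPath=C:\Windows\SYSVOL
SafeModeAdminPassword=P@ssw0rd
RebootOnCompletion=Yes
```

Such a file would be passed to the wizard with a command along the lines of dcpromo /answer:C:\dcinstall.txt after Setup completes.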
What command must you use to install Active Directory using the network or backup media?
Use the Dcpromo /adv command to install Active Directory using the network or backup media.

Which of the following commands is used to demote a domain controller?
a. Dcdemote
b. Dcinstall
c. Dcpromo
d. Dcremove
The correct answer is c. You use the Dcpromo command to demote a domain controller.
After Active Directory has been installed, how can you verify the domain configuration?
You can verify the domain configuration in three steps by using the Active Directory Users and Computers console. First, verify that your domain is correctly named by finding it in the console tree. Second, double-click the domain, click the Domain Controllers container, and verify that your domain controller appears and is correctly named in the details pane. Third, double-click the server and verify that all information is correct on the tabs in the Properties dialog box for the server.
After Active Directory has been installed, how can you verify the DNS configuration?
You can verify DNS configuration by viewing the set of default SRV resource records on the DNS server in the DNS console.
After Active Directory has been installed, how can you verify DNS integration with Active Directory?
You can verify DNS integration by viewing the Type setting and the Dynamic Updates setting in the General tab in the Properties dialog box for the DNS zone, and the Load Zone Data On Startup setting in the Advanced tab in the Properties dialog box for the DNS server.
After Active Directory has been installed, how can you verify installation of the shared system volume?
You can verify installation of the shared system volume by opening %Systemroot%\Sysvol (or the location you specified during Active Directory installation) and verifying that the Sysvol folder contains a shared Sysvol folder, and that the shared Sysvol folder contains a folder for the domain, which contains a shared Scripts folder and a Policies folder.
How can you fix data left behind after an unsuccessful removal of Active Directory?
First, you must remove the orphaned metadata (NTDS Settings objects) using Ntdsutil. Then you must remove the domain controller object in the Active Directory Sites And Services console. You can safely delete the domain controller object only after all services have been removed and no child objects exist.
What is the purpose of the Active Directory Domains And Trusts console?
The Active Directory Domains And Trusts console provides the interface to manage domains and to manage trust relationships between forests and domains.
What is the purpose of the Active Directory Sites And Services console?
The Active Directory Sites And Services console contains information about the physical structure of your network.
What is the purpose of the Active Directory Users And Computers console?
The Active Directory Users And Computers console allows you to add, modify, delete, and organize Windows Server 2003 user accounts, computer accounts, security and distribution groups, and published resources in your organization's directory. It also allows you to manage domain controllers and OUs.
Can you restrict who can gain access to a completed backup file or tape? If so, how?
You can restrict who can gain access to a completed backup file or tape by selecting the Replace The Data On The Media With This Backup option and the Allow Only The Owner And The Administrator Access To The Backup Data And To Any Backups Appended To This Medium option on the Backup Options page in the Backup Or Restore Wizard.
When you specify the items you want to back up in the Backup Or Restore Wizard, which of the following should you select to successfully back up Active Directory data?
a. System state data
b. Shared system volume folder
c. Database and log files
d. Registry
The correct answer is a. When you specify the items you want to back up in the Backup Or Restore Wizard, you must specify system state data to successfully back up Active Directory data.
Which of the following Ntdsutil command parameters should you use if you want to restore the entire directory?
a. Restore database
b. Restore subtree
c. Database restore
d. Subtree restore
The correct answer is a. Database restore and subtree restore are not Ntdsutil command parameters. Restore subtree is used to restore a portion or a subtree of the directory.
Why would you need to create additional trees in your Active Directory forest?
You might need to define more than one tree if your organization has more than one DNS name.
Under what domain and forest functional levels can you rename or restructure domains in a forest?
You can rename or restructure the domains in a forest only if all domain controllers in the forest are running Windows Server 2003, all domain functional levels in the forest have been raised to Windows Server 2003, and the forest functional level has been raised to Windows Server 2003.
Under what domain functional level can you rename a domain controller?
You can rename a domain controller only if the domain functionality of the domain to which the domain controller is joined is set to Windows Server 2003.
What preliminary tasks must you complete before you can create a forest trust?
Before you can create a forest trust, you must:
1. Configure a DNS root server that is authoritative over both forest DNS servers that you want to form a trust with, or configure a DNS forwarder on both of the DNS servers that are authoritative for the trusting forests.
2. Ensure that the forest functionality for both forests is Windows Server 2003.
Which of the following trust types are created implicitly? Choose all that apply.
a. Tree-root
b. Parent-child
c. Shortcut
d. Realm
e. External
f. Forest
The correct answers are a and b. Shortcut, realm, external, and forest trusts must all be created manually (explicitly).
What site is created automatically in the Sites container when you install Active Directory on the first domain controller in a domain?
The Default-First-Site-Name site.
How many subnets must each site have? To how many sites can a subnet be assigned?
Each site must have at least one subnet, but a subnet can be assigned to only one site.
You specified a preferred bridgehead server for your network. It fails and there are no other preferred bridgehead servers available. What is the result?
If no other preferred bridgehead servers are specified or available, replication does not occur to that site even if there are servers that could act as bridgehead servers.
You have a high-speed T1 link and a dial-up network connection in case the T1 link is unavailable. You assign the T1 link to have a cost of 100. What cost value should you assign to the dial-up link?
a. 0
b. 50
c. 100
d. 150
The correct answer is d. Higher costs are used for slow links (the dial-up connection), and lower costs are used for fast links (the T1 connection). Because Active Directory always chooses the connection on a per-cost basis, the less expensive connection (T1) is used as long as it is available.
For optimum network response time, how many domain controllers in each site should you designate as a global catalog server?
For optimum network response time and application availability, designate at least one domain controller in each site as the global catalog server.
The universal group membership caching feature is set for which of the following?
a. Forest
b. Domain
c. Site
d. Domain controller
The correct answer is c. The universal group membership caching feature must be set for each site and requires a domain controller running a Windows Server 2003 operating system.
Which of the following tools can you use to delete an application directory partition? (Choose all that apply.)
a. Ntdsutil command-line tool
b. Application-specific tools from the application vendor
c. Active Directory Installation Wizard
d. Active Directory Domains And Trusts console
e. Active Directory Sites And Services console
The correct answers are a, b, and c. To delete the application directory partition, you can use the Active Directory Installation Wizard to remove all application directory partition replicas from the domain controller, the tools provided with the application, or the Ntdsutil command-line tool.
You received Event ID 1265 with the error "DNS Lookup Failure." What are some actions you might take to remedy the error? (Choose all that apply.)
a. Manually force replication.
b. Reset the domain controller's account password on the PDC emulator master.
c. Check the domain controller's CNAME record.
d. Make sure "Bridge All Site Links" is set correctly.
e. Check the domain controller's A record.
The correct answers are c and e. This message is often the result of DNS configuration problems. Each domain controller must register its CNAME record for the DsaGuid._msdcs.Forestname, and each domain controller must register its A record in the appropriate zone. So, by checking the domain controller's CNAME and A records, you may be able to fix the problem.
What action must you take to be able to view the Security tab in the Properties dialog box for an OU?
You must select Advanced Features from the View menu on the Active Directory Users And Computers console.
How does the icon used for an OU differ from the icon used for a container?
The icon used for an OU is a folder with a book. The icon used for a container is a plain folder.
What are the three ways to move Active Directory objects between OUs?
There are three ways to move Active Directory objects between OUs:
■ Use drag and drop
■ Use the Move option on the Active Directory Users And Computers console
■ Use the Dsmove command
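As an illustrative sketch of the third option, the Dsmove command takes the distinguished name of the object to move and, with -newparent, the distinguished name of the destination OU. The names below are placeholders, not values from this document:

```
rem Move a user from the Sales OU to the Marketing OU (placeholder DNs)
dsmove "CN=Jane Doe,OU=Sales,DC=example,DC=com" -newparent "OU=Marketing,DC=example,DC=com"
```

Dsmove can also rename an object in place with the -newname switch instead of moving it.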
What is authentication?
The process by which the system validates the user's logon information. A user's name and password are compared against the list of authorized users. If the system detects a match, access is granted to the extent specified in the permissions list for that user.
What is the purpose of the Guest account? What is the default condition of the Guest account?
The purpose of the built-in Guest account is to provide users who do not have an account in the domain with the ability to log on and gain access to resources. By default, the Guest account does not require a password (the password can be blank) and is disabled. You should enable the Guest account only in low-security networks and always assign it a password.
Why should you always require new users to change their passwords the first time that they log on?
Requiring new users to change their passwords means that only they know the password, which makes the system more secure.
From which tab on a user's Properties dialog box can you set logon hours?
a. General tab
b. Account tab
c. Profile tab
d. Security tab
The correct answer is b. You set logon hours by clicking the Logon Hours button on the Account tab in a user's Properties dialog box.
How can you ensure that a user has a centrally located home folder?
First, create a shared folder on a network server that will contain the user's home folder. Second, in the Profiles tab in the Properties dialog box for the user, provide a path to the shared folder on the server. The next time that the user logs on, the home folder is available from the My Computer window.
Why would you rename a user account, and what is the advantage of doing so?
Rename a user account if you want a new user to have all of the properties of a former user, including permissions, desktop settings, and group membership. The advantage of renaming an account is that you do not have to rebuild all of the properties as you do for a new user account.
Why would you disable a user account, and what is the advantage of doing so?
Disable a user account when a user does not need an account for an extended period but will need it again. The advantage of disabling a user account is that when the user returns, you can enable the account so that the user can log on to the network again without your having to build a new account.
How is a disabled user account designated in the Active Directory Users And Computers console?
A disabled user account is designated by a red "X".
Why should you select the User Must Change Password At Next Logon check box when you reset a user's password?
Select User Must Change Password At Next Logon to force the user to change his or her password the next time he or she logs on. This way, only the user knows the password.
When should you use security groups rather than distribution groups?
Use security groups to assign permissions. Use distribution groups when the only function of the group is not security related, such as an e-mail distribution list. You cannot use distribution groups to assign permissions.
What strategy should you apply when you use domain and local groups?
Place user accounts into global groups, place global groups into domain local groups, and then assign permissions to the domain local group.

Why is replication an issue with universal groups?
Universal groups and their members are listed in the global catalog. Therefore, when membership of any universal group changes, the changes must be replicated to every global catalog in the forest, unless the forest functional level is set to Windows Server 2003.
In what domain functional level is changing the group scope allowed? What scope changes are permitted in this domain functional level?
You can change the scope of groups in domains with the domain functional level set to Windows 2000 native or Windows Server 2003. The following scope changes are permitted:
■ Global to universal, as long as the group is not a member of another group having global scope
■ Domain local to universal, as long as the group being converted does not have another group with a domain local scope as its member
■ Universal to global, as long as the group being converted does not have another universal group as its member
■ Universal to domain local
The name you select for a group must be unique to which of the following Active Directory components?
a. Forest
b. Tree
c. Domain
d. Site
e. OU
The correct answer is c. The name you select for a group must be unique to the domain in which the group is created.
What is delegation?
An assignment of administrative responsibility that allows users without administrative credentials to complete specific administrative tasks or to manage specific directory objects. Responsibility is assigned through membership in a security group, the Delegation Of Control Wizard, or Group Policy settings.
What is a permission?
A rule associated with an object to regulate which users can gain access to the object and in what manner. Permissions are assigned or denied by the object's owner.
Which Dsquery command should you use to find users in the directory who have been inactive for two weeks?
Dsquery user -inactive 2
Which Dsquery command should you use to find computers in the directory that have been disabled?
Dsquery computer -disabled
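A few more Dsquery examples, sketched for illustration (the OU and domain names are placeholders; these commands assume a domain-joined Windows machine with the directory admin tools installed):

```
rem Users whose name starts with "jo", searching only a specific OU
dsquery user "OU=Sales,DC=example,DC=com" -name jo*

rem All domain controllers in the forest
dsquery server -forest

rem Pipe query results into Dsget to read attributes of the matches
dsquery user -inactive 4 | dsget user -samid -disabled
```

The last line shows a common pattern: Dsquery finds the distinguished names, and Dsget, Dsmod, or Dsmove consumes them from the pipeline.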
The permissions check boxes for a security principal are shaded. What does this indicate?
If a permission is inherited, its check boxes (located in the Security tab in the Properties dialog box for an object, and in the Permission Entry dialog box for an object) are shaded. However, shaded special permissions check boxes do not indicate inherited permissions. These shaded check boxes merely indicate that a special permission exists.
How can you remove permissions you set by using the Delegation Of Control Wizard?
Although the Delegation Of Control Wizard can be used to grant administrative permissions to containers and the objects within them, it cannot be used to remove those privileges. If you need to remove permissions, you must do so manually in the Security tab in the Properties dialog box for the container and in the Advanced Security Settings dialog box for the container.
For which of the following Active Directory objects can you delegate administrative control by using the Delegation Of Control Wizard? (Choose all that apply.)
a. Folder
b. User
c. Group
d. Site
e. OU
f. Domain
g. Shared folder
The correct answers are a, d, e, and f. Folders, sites, OUs, and domains are all objects for which administrative control can be delegated by using the Delegation Of Control Wizard.
What is a GPO?
A GPO is a Group Policy Object. Group Policy configuration settings are contained within a GPO. Each computer running Windows Server 2003 has one local GPO and can, in addition, be subject to any number of nonlocal (Active Directory-based) GPOs.
What are the two types of Group Policy settings and how are they used?
The two types of Group Policy settings are computer configuration settings and user configuration settings. Computer configuration settings are used to set group policies applied to computers, regardless of who logs on to them, and are applied when the operating system initializes. User configuration settings are used to set group policies applied to users, regardless of which computer the user logs on to, and are applied when users log on to the computer.
If you want to create a GPO for a site, what administrative tool should you use?
Use the Active Directory Sites And Services console to create a GPO for a site.
Besides Read permission, what permission must you assign to allow a user or administrator to see the settings in a GPO?
Write permission. A user or administrator who has Read access but not Write access to a GPO cannot use the Group Policy Object Editor to see the settings that it contains.
What is the difference between removing a GPO link and deleting a GPO?
When you remove a GPO link to a site, domain, or OU, the GPO still remains in Active Directory. When you delete a GPO, the GPO is removed from Active Directory, and any sites, domains, or OUs to which it is linked are no longer affected by it.
You want to deflect all Group Policy settings that reach the North OU from all of the OU's parent objects. Which exception should you use?
Use the Block Policy Inheritance exception to deflect all Group Policy settings from the parent objects of a site, domain, or OU. Block Policy Inheritance can be applied directly only to a site, domain, or OU, not to a GPO or a GPO link.
You want to ensure that none of the South OU Desktop settings applied to the South OU can be overridden. Which exception should you use?
Use the No Override exception to ensure that none of a GPO's settings can be overridden by any other GPO during the processing of group policies. No Override can be applied directly only to a GPO link.
What is SharePoint?
A centralized location for key folders on a server or servers, which provides users with an access point for storing and finding information, and administrators with an access point for managing information.
What are the three tools available for generating RSoP queries?
Windows Server 2003 provides three tools for generating RSoP queries: the Resultant Set Of Policy Wizard, the Gpresult command-line tool, and the Advanced System Information - Policy tool.
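For instance, the Gpresult command-line tool can report the RSoP for a given user and computer. The sketch below uses placeholder machine and user names; switch availability can differ slightly across Windows versions:

```
rem Verbose RSoP for the currently logged-on user on the local computer
gpresult /v

rem RSoP for a specific user on a remote computer, limited to user settings
gpresult /s server01 /user jsmith /scope user
```

The /scope switch restricts output to either the user or the computer half of the policy results.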
What is the difference between saving an RSoP query and saving RSoP query data?
By saving an RSoP query, you can reuse it for processing another RSoP query later. By saving RSoP query data, you can revisit the RSoP as it appeared for a particular query when the query was created.
Which RSoP query generating tool provides RSoP query results on a console similar to a Group Policy Object Editor console?
a. Resultant Set Of Policy Wizard
b. Group Policy Wizard
c. Gpupdate command-line tool
d. Gpresult command-line tool
e. Advanced System Information - Policy tool
f. Advanced System Information - Services tool
The correct answer is a. The Resultant Set Of Policy Wizard provides RSoP query results on a console similar to a Group Policy Object Editor console. There is no Group Policy Wizard. Gpupdate and Gpresult are command-line tools. The Advanced System Information tools provide results in an HTML report that appears in the Help And Support Center window.
In which Event Viewer log can you find Group Policy failure and warning messages? What type of event log records should you look for?
You can find Group Policy failure and warning messages in the application event log. Event log records with the Userenv source pertain to Group Policy events.
What diagnostic log file can you generate to record detailed information about Group Policy processing, and in what location is this file generated?
You can generate a diagnostic log that records detailed information about Group Policy processing to a log file named Userenv.log in the hidden folder %Systemroot%\Debug\Usermode.
Which of the following actions should you take if you attempt to open a Group Policy Object Editor console for an OU GPO and you receive the message Failed To Open The Group Policy Object?
a. Check your permissions for the GPO.
b. Check network connectivity.
c. Check that the OU exists.
d. Check that No Override is set for the GPO.
e. Check that Block Policy Inheritance is set for the GPO.
The correct answer is b. The message Failed To Open The Group Policy Object indicates a networking problem, specifically a problem with the Domain Name System (DNS) configuration.
Which of the following actions should you take if you attempt to edit a GPO and you receive the message Missing Active Directory Container?
a. Check your permissions for the GPO.
b. Check network connectivity.
c. Check that the OU exists.
d. Check that No Override is set for the GPO.
e. Check that Block Policy Inheritance is set for the GPO.
The correct answer is c. The message Missing Active Directory Container is caused by Group Policy attempting to link a GPO to an OU that it cannot find. The OU might have been deleted, or it might have been created on another domain controller but not yet replicated to the domain controller that you are using.
What is Assign?
To deploy a program to members of a group where acceptance of the program is mandatory.
What is Publish?
To deploy a program to members of a group where acceptance of the program is at the discretion of the user.
What are the hardware requirements for deploying software by using Group Policy?
To deploy software by using Group Policy, an organization must be running Windows 2000 Server or later, with Active Directory and Group Policy on the server, and Windows 2000 Professional or later on the client computers.

Describe the tools provided for software deployment.
The Software Installation extension in the Group Policy Object Editor console on the server is used by administrators to manage software. Add Or Remove Programs in Control Panel is used by users to manage software on their own computers.
Which of the following file extensions allows you to deploy software using the Software Installation extension? (Choose two.)
a. .mst
b. .msi
c. .zap
d. .zip
e. .msp
f. .aas
The correct answers are b and c. Files with the extension .msi are either native Windows Installer packages or repackaged Windows Installer packages, while files with the extension .zap are application files. Files with the extensions .mst and .msp are modifications and do not allow you to deploy software on their own. Files with the extension .aas are application assignment scripts, which contain instructions associated with the assignment or publication of a package.
You want to ensure that all users of the KC23 workstation can run FrontPage 2000. What action should you perform?
a. Assign the application to the computer.
b. Assign the application to users.
c. Publish the application to the computer.
d. Publish the application to users.
The correct answer is a. Assigning the application to the KC23 workstation is the only way to ensure that all users of the workstation can run FrontPage 2000.
Attributes for which logs are defined in the Event Log security area?
The Event Log security area defines attributes related to the application, security, and system event logs in the Event Viewer console.
In which of the following security areas would you find the settings for determining which security events are logged in the security log on the computer?
a. Event Log
b. Account Policies
c. Local Policies
d. Restricted Groups
The correct answer is c. You determine which security events are logged in the security log on the computer in the Audit Policy settings in the Local Policies security area.
In which of the following file formats can you archive a security log? Choose three.
a. .txt
b. .doc
c. .rtf
d. .bmp
e. .evt
f. .csv
g. .crv
The correct answers are a, e, and f. Logs can be saved in text (*.txt), event log (*.evt), or comma-delimited (*.csv) file format.
In which of the following archived file formats can you reopen the file in the Event Viewer console?
a. .txt
b. .doc
c. .rtf
d. .bmp
e. .evt
f. .csv
g. .crv
The correct answer is e. If you archive a log in log-file (*.evt) format, you can reopen it in the Event Viewer console.
You filtered a security log to display only the events with Event ID 576. Then you archived this log. What information is saved?
a. The entire log is saved
b. The filtered log is saved
c. The entire log and the filtered log are each saved separately
d. No log is saved
The correct answer is a. When you archive a log, the entire log is saved, regardless of filtering options.
In the security analysis results, which icon represents a difference from the database configuration?
a. A red X
b. A red exclamation point
c. A green check mark
d. A black question mark
The correct answer is a. A red X indicates a difference from the database configuration.
In which locations can you view performance data logged in a counter log?
You can view logged counter data using System Monitor, or export the data to a file for analysis and report generation.
What registry subkey contains the entries for which you can increase the logging level to retrieve more detailed information in the directory service log?
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics
Why should you leave logging levels set to 0 unless you are investigating a problem?
You should leave logging levels set to 0 unless you are investigating a problem, because increasing the logging level increases the detail and the number of messages emitted and can degrade server performance.
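As a sketch, raising and then resetting one of these logging levels can be done with reg add. The value name below ("1 Knowledge Consistency Checker") is one of the entries typically found under this subkey, but treat the exact names and the chosen level as assumptions to verify on your own system:

```
rem Raise the Knowledge Consistency Checker logging level to 3 (more verbose)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics" /v "1 Knowledge Consistency Checker" /t REG_DWORD /d 3 /f

rem Set it back to 0 when finished troubleshooting
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics" /v "1 Knowledge Consistency Checker" /t REG_DWORD /d 0 /f
```

Higher values (up to 5) produce progressively more detail in the directory service log, which is why 0 is the recommended steady state.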
What are the four steps in the process of analyzing and interpreting performance-monitoring results?
The four steps are (1) establish a baseline, (2) analyze performance-monitoring results, (3) plan and implement changes to meet the baseline, and (4) repeat steps 2 and 3 until performance is optimized.
Installation Facts

Active Directory requires the following:
■ TCP/IP running on the servers and clients.
■ A DNS server with SRV record support.
■ Windows 2000 or 2003 operating systems.

After installing Windows 2003, you can install Active Directory using the Dcpromo command.

Members of the Domain Admins group can add domain controllers to a domain.

Members of the Enterprise Admins group can perform administrative tasks across the entire network, including:
■ Change the Active Directory forest configuration by adding/removing domains. (New domains are created when the first domain controller is installed. Domains are removed when the last domain controller is uninstalled.)
■ Add/remove sites.
■ Change the distribution of subnets or servers in a site.
■ Change site link configuration.
You should know the following facts about Active Directory advanced installations:
■ Installing from a replica media set will create the initial Active Directory database from a backup copy and then replicate in any changes made since the backup. This prevents much of the replication traffic that is normally created on a network when a server is promoted to a domain controller.
■ To rename domain controllers, the domain functional level must be at least Windows 2003 (this means all domain controllers must be running Windows 2003).
Installation Tools
You can use the following tools to troubleshoot an Active Directory installation:
Directory Services log: Use Event Viewer to examine the log. The log lists informational, warning, and error events.
Netdiag: Run from the command line. Tests for domain controller connectivity (in some cases, it can make repairs).
DCDiag: Analyzes domain controller states and tests different functional levels of Active Directory.
Dcpromo log files: Located in the %Systemroot%\Debug folder. Dcpromoui gives a detailed progress report of Active Directory installation and removal. Dcpromos is created when a Windows 3.x or NT 4 domain controller is promoted.
Ntdsutil: Can remove orphaned data or a domain controller object from Active Directory.
You can also check the following settings to begin troubleshooting an Active Directory installation:
• Make sure the DNS name is properly registered.
• Check the spelling in the configuration settings.
• PING the computer to verify connectivity.
• Verify the domain name to which you are authenticating.
• Verify that the username and password are correct.
• Verify the DNS settings.
Microsoft gives the following as the best-practice procedure for restoring Active Directory from backup media:
1. Reboot into Active Directory Restore Mode. Log in using the password you specified during setup (not a domain account).
2. Restore the System State data from backup to its original and to an alternate location.
3. Run Ntdsutil to mark the entire Active Directory database (if you're restoring the entire database) or specific Active Directory objects (if you're only restoring selected Active Directory objects) as authoritative.
4. Reboot normally.
5. Restore Sysvol contents by copying the Sysvol directory from the alternate location to the original location to overwrite the existing Sysvol directory (if you're restoring the entire database). Or, copy the policy folders (identified by GUID) from the alternate location to the original location to overwrite the existing policy folders.
Security Facts
• A security principal is an account holder who has a security identifier (SID).
• The Active Directory Migration Tool allows you to move objects between domains.
• Objects moved to a new domain get a new SID.
• The Active Directory Migration Tool creates a SID history.
• The SID history allows an object moved to a new domain to keep its original SID.
You should know the following information pertaining to identifiers:
GUID: Globally Unique Identifier. A 128-bit number guaranteed to be unique across the network, assigned to objects when they are created. An object's GUID never changes (even if the object is renamed or moved).
SID: Security Identifier. A unique number assigned when an account is created. Every account is given a unique SID. The system uses the SID to track the account rather than the account's user or group name. A deleted account that is recreated will be given a different SID. The SID is composed of the domain SID and a unique RID.
RID: Relative Identifier. Unique among all the SIDs in a domain. Handed out by the RID master.
Group Facts
Active Directory defines three scopes that describe the domains on the network from which you can assign members to the group, where the group's permissions are valid, and which groups you can nest:
Global groups: Used to group users from the local domain. Typically, you assign users who perform similar job functions to a global group. A global group can contain user and computer accounts and global groups from the domain in which the global group resides. Global groups can be used to grant permissions to resources in any domain in the forest.
Domain local groups: Used to grant access to resources in the local domain. They have open membership, so they may contain user and computer accounts, universal groups, and global groups from any domain in the forest. A domain local group can also contain other domain local groups from its own domain. Domain local groups can be used to grant permissions to resources in the domain in which the domain local group resides.
Universal groups: Used to grant access to resources in any domain in the forest. They may contain user and computer accounts, global groups, and other universal groups from any domain in the forest.
Trust Types
The following table shows the types of trusts you can create in Active Directory:
Tree root: Automatically established between two trees in the same forest. Trusts are transitive and two-way.
Parent/child: Automatically created between child and parent domains. Trusts are transitive and two-way.
Shortcut: Manually created between two domains in the same forest. Trusts are transitive, and can be either one-way or two-way. Create a shortcut trust to reduce the amount of Kerberos traffic on the network due to authentication.
External: Manually created between domains in different forests. Typically used to create trusts between Active Directory and NT 4.0 domains. Trusts are not transitive, and can be either one-way or two-way.
Forest root: Manually created between the root domains of two forests. Transitive within the two forests. Can be either one-way or two-way.
Realm: Manually created between Active Directory and non-Windows Kerberos realms.
• Trusts have a direction that indicates which way trust flows in the relationship. The direction of the arrow identifies the direction of trust. For example, if Domain A trusts Domain B, the arrow points from Domain A to Domain B. Domain A is the trusting domain, and Domain B is the trusted domain.
• Resource access is granted opposite to the direction of trust. For example, if Domain A trusts Domain B, users in Domain B have access to resources in Domain A (remember that users in the trusted domain have access to resources in the trusting domain).
• A two-way trust is the same as two one-way trusts in opposite directions.
Functional Level Types
The table below shows the domain functional levels. At the 2000 Mixed functional level, domain controllers may run NT, 2000, or 2003. The following features are available in 2000 Mixed:
• Universal groups are available for distribution groups.
• Group nesting is available for distribution groups.
To manually refresh Group Policy settings, use the Gpupdate command with the following switches:
(no switch): Refreshes both user- and computer-related group policy.
/target:user: Refreshes user-related group policy.
/target:computer: Refreshes computer-related group policy.
Editing GPO Facts
• Group Policy Object Editor has two nodes:
o Computer Configuration, to set Group Policies for computers.
o User Configuration, to set Group Policies for users.
• You can extend each node's capabilities by using snap-ins.
• Use an Administrative Template file (.adm) to extend the registry settings available in the Group Policy Editor.
• Use the Software settings to automate installation, update, repair, and removal of software for users or computers.
• The Windows settings automate tasks that occur during startup, shutdown, logon, or logoff.
• Security settings allow administrators to set security levels assigned to a local or non-local GPO.
Block Inheritance
You can prevent Active Directory child objects from inheriting GPOs that are linked to parent objects. To block GPO inheritance:
1. Click the Group Policy tab for the domain or OU for which you want to block GPO inheritance.
2. Select the Block Policy inheritance check box.
WMI FilteringYou should know the following facts about WMI filtering: You can use WMI queries to filter the scope of GPOs. WMI filtering is similar to using security groups to filter the scope of GPOs. WMI queries are written in WMI query language (WQL).
Loopback Processing
By default, Group Policy applies Computer Configuration GPOs during startup and User Configuration GPOs during logon. User Configuration settings take precedence in the event of a conflict. You can control how Group Policy is applied by enabling loopback processing. Following are some circumstances when you might use loopback processing:
• If you want Computer Configuration settings to take precedence over User Configuration settings.
• If you want to prevent User Configuration settings from being applied.
• If you want to apply User Configuration settings for the computer, regardless of the location of the user account in Active Directory.
RSoP
RSoP (Resultant Set of Policy) is the accumulated result of the group policies applied to a user or computer. You should know the following facts about RSoP:
• The RSoP wizard reports on how GPO settings affect users and computers. The wizard runs in two modes: logging and planning.
• The RSoP wizard logging mode reports on existing group policies applied to computers or users.
• The RSoP wizard planning mode simulates the effects policies would have if applied to computers or users.
RSoP Access
You can access the Resultant Set of Policy (RSoP) wizard in various ways. Here are some common ways:
• Install the RSoP wizard as an MMC snap-in.
• Use the Start > Run sequence and run Rsop.msc.
• Select an object in Active Directory Users and Computers and select Resultant Set of Policy (in planning or logging mode) from the All Tasks menu.
Delegation Facts
You should know the following facts about delegating control of group policies:
• Decentralized administrative delegation means that administration is delegated to OU-level administrators. In decentralized administrative delegation, assign full-control permission for GPOs to the OU administrators.
• Centralized administrators delegate full-control permissions only to top-level OU administrators. Those administrators are responsible for everything downward.
• In task-based delegation, administration of specific group policies is assigned to administrators who handle specific tasks. For example, security administrators would get full control of security GPOs, and application administrators would get full control of application GPOs.
Logon Facts
You should know the following facts about managing logon:
• Password policies are only effective in GPOs applied to the domain.
• To create different password policies, you must create additional domains.
• Each forest has a single alternate user principal name (UPN) suffix list that you can edit from the properties of the Active Directory Domains and Trusts node. After adding an alternate UPN suffix, you can configure all user accounts to use the same UPN suffix, thus simplifying logon for users in all domains in the forest.
You should be familiar with the following password and account lockout policy settings:
Enforce password history: Keeps a history of user passwords (up to 24) so that users cannot reuse passwords.
Minimum password length: Configures how many characters a valid password must have.
Minimum password age: Forces the user to use the new password for a length of time you determine before changing it again.
Password must meet complexity requirements: Requires that user passwords cannot contain the user name, the user's real name, the company name, or a complete dictionary word. The password must also contain multiple types of characters, such as uppercase and lowercase letters, numbers, and symbols.
Maximum password age: Forces the user to change passwords at a time interval you determine.
Account lockout threshold: Configures how many incorrect passwords can be entered before the account is locked out.
Account lockout duration: Identifies how long an account will stay locked out once it has been locked. A value of 0 indicates that an administrator must manually unlock the account; any other number indicates the number of minutes before the account will be automatically unlocked.
Reset account lockout counter after: Specifies the length of time that must pass after a failed login attempt before the counter resets to zero.
Automatic Certificate Enrollment Facts
You should know the following facts about using Group Policy to configure automatic certificate enrollment:
• Before you can add an automatic certificate request, you must have certificate templates configured on your system. Run Certtmpl.msc to install the certificate templates.
• For a completely automatic certificate installation, set the Request Handling options of the certificate template to enroll the subject without requiring any user input.
• Without that Request Handling option selected, the user will be prompted for input during the certificate enrollment phase.
• An icon will also appear on the taskbar, which users can click to start the enrollment process.
Replication Facts
You should know the following facts about replication:
• Active Directory automatically decides which servers are the bridgehead servers (generally, the first domain controller in the site).
• To force a specific server to be the bridgehead server, you must manually configure it as the bridgehead server. To designate a preferred bridgehead server, edit the server object properties in Active Directory Sites and Services.
• Replication between sites occurs only between the bridgehead servers.
• To have different replication settings for different WAN links, you need to configure multiple site links. For complete flexibility, you should create a site link for each network connection between sites.
• The default link cost is 100. A higher cost makes a link less desirable. To force traffic over one link, set a lower cost; for example, set a lower cost for high-speed links to force traffic over them, and a higher cost for dial-up links that are used as backup links.
• Costs are additive when multiple links are required between sites.
• Use SMTP replication for high-latency links where RPC replication would probably fail.
• What is LDAP? The Lightweight Directory Access Protocol (LDAP) is a directory service protocol that runs directly over the TCP/IP stack.
• Where is the AD database held? What other folders are related to AD? The AD database is saved in %systemroot%\NTDS. You can see other files in this folder as well. The main files controlling the AD structure are ntds.dit, edb.log, res1.log, res2.log, and edb.chk.
• What is the SYSVOL folder? All Active Directory database security-related information is stored in the SYSVOL folder, and it is only created on an NTFS partition. The SYSVOL folder on a Windows domain controller is used to replicate file-based data among domain controllers.
Application directory partitions are usually created by the applications that will use them to store and replicate data. For testing and troubleshooting purposes, members of the Enterprise Admins group can manually create or manage application directory partitions using the Ntdsutil command-line tool.
• How can you forcibly remove AD from a server, and what do you do later?
Demote the server using dcpromo /forceremoval, then remove the metadata from Active Directory using ntdsutil. There is no way to get user passwords from AD that I am aware of, but you should still be able to change them.
Another way:
1. Restart the DC in DSRM mode.
2. Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ProductOptions
3. In the right pane, double-click ProductType.
4. Type ServerNT in the Value data box, and then click OK.
5. Restart the server in normal mode.
• What tool would I use to try to grab security-related packets from the wire? Use a packet sniffer; a good one is Ethereal (now known as Wireshark). You can also use sniffer-detecting tools to help stop snoops on your own network.
QFE 265089 (included in Windows 2000 SP2 and later) is required to prevent potential domain controller corruption. For more information about preparing your forest and domain, see KB article Q331161.
[User Action]
If ALL your existing Windows 2000 domain controllers meet this requirement, type C and then press ENTER to continue. Otherwise, type any other key and press ENTER to quit.
C
Opened Connection to SAVDALDC01
SSPI Bind succeeded
Current Schema Version is 30
Upgrading schema to version 31
Connecting to "SAVDALDC01"
Logging in as current user using SSPI
Importing directory from file "C:\WINDOWS\system32\sch31.ldf"
Loading entries...
139 entries modified successfully.
The command has completed successfully
Adprep successfully updated the forest-wide information.
• How would you find all users that have not logged on since last month?
Using only native commands, JSILLD.bat produces a sorted/formatted report of users who have not logged on since YYYYMMDD. The report is sorted by user name and lists the user's full name and last logon date. The syntax for using JSILLD.bat is:
JSILLD \Folder\OutputFile.Ext YYYYMMDD [/N]
where:
• YYYYMMDD will report all users who have not logged on since this date.
• /N is an optional parameter that will bypass users who have never logged on.
JSILLD.bat contains:
@echo off
setlocal
if {%2}=={} goto syntax
if "%3"=="" goto begin
if /i "%3"=="/n" goto begin
:syntax
@echo Syntax: JSILLD File yyyymmdd [/N]
endlocal
goto :EOF
:begin
if /i "%2"=="/n" goto syntax
set dte=%2
set XX=%dte:~0,4%
if "%XX%" LSS "1993" goto syntax
set XX=%dte:~4,2%
if "%XX%" LSS "01" goto syntax
if "%XX%" GTR "12" goto syntax
set XX=%dte:~6,2%
if "%XX%" LSS "01" goto syntax
if "%XX%" GTR "31" goto syntax
set never=X
if /i "%3"=="/n" set never=/n
set file=%1
if exist %file% del /q %file%
for /f "Skip=4 Tokens=*" %%i in ('net user /domain^|findstr /v /c:"----"^|findstr /v /i /c:"The command completed"') do call :parse "%%i"
endlocal
goto :EOF
:parse
set str=#%1#
set str=%str:#"=%
set str=%str:"#=%
set substr=%str:~0,25%#
set substr=%substr: =%
set substr=%substr: #=%
set substr=%substr:#=%
if "%substr%"=="" goto :EOF
for /f "Skip=1 Tokens=*" %%i in ('net user "%substr%" /domain') do call :parse1 "%%i"
set substr=%str:~25,25%#
set substr=%substr: =%
set substr=%substr: #=%
set substr=%substr:#=%
if "%substr%"=="" goto :EOF
for /f "Skip=1 Tokens=*" %%i in ('net user "%substr%" /domain') do call :parse1 "%%i"
set substr=%str:~50,25%#
set substr=%substr: =%
set substr=%substr: #=%
set substr=%substr:#=%
if "%substr%"=="" goto :EOF
for /f "Skip=1 Tokens=*" %%i in ('net user "%substr%" /domain') do call :parse1 "%%i"
goto :EOF
:parse1
set ustr=%1
if %ustr%=="The command completed successfully." goto :EOF
set ustr=%ustr:"=%
if /i "%ustr:~0,9%"=="Full Name" set fullname=%ustr:~29,99%
if /i not "%ustr:~0,10%"=="Last logon" goto :EOF
set txt=%ustr:~29,99%
for /f "Tokens=1,2,3 Delims=/ " %%i in ('@echo %txt%') do set MM=%%i&set DD=%%j&set YY=%%k
if /i "%MM%"=="Never" goto tstnvr
goto year
:tstnvr
if /i "%never%"=="/n" goto :EOF
goto report
:year
if "%YY%" GTR "1000" goto mmm
if "%YY%" GTR "92" goto Y19
set /a YY=100%YY%%%100
set /a YY=%YY% + 2000
goto mmm
:Y19
set YY=19%YY%
:mmm
set /a XX=100%MM%%%100
if %XX% LSS 10 set MM=0%XX%
set /a XX=100%DD%%%100
if %XX% LSS 10 set DD=0%XX%
set YMD=%YY%%MM%%DD%
if "%YMD%" GEQ "%dte%" goto :EOF
:report
set fullname=%fullname% #
set fullname=%fullname:~0,35%
set substr=%substr% #
set substr=%substr:~0,30%
@echo %substr% %fullname% %txt% >> %file%
The best way to learn about this DS family is to log on at a domain controller and experiment from the command line. I have prepared examples of the two most common programs; try some sample commands for DSadd.
Two most useful tools: DSQuery and DSGet
DSQuery and DSGet remind me of UNIX commands in that they operate at the command line, use powerful verbs, and produce plenty of action. One prerequisite for getting the most from this DS family is a working knowledge of LDAP.
If you need to query users or computers from a range of OUs and then return information (for example, office, department, or manager), then DSQuery and DSGet would be your tools of choice. Moreover, you can export the information into a text file.
Ldifde
The LDAP Data Interchange Format (LDIF) is a draft Internet standard for a file format that may be used for performing batch operations against directories that conform to the LDAP standards. LDIF can be used to export and import data, allowing batch operations such as add, create, and modify to be performed against Active Directory. A utility program called LDIFDE is included in Windows 2000 to support batch operations based on the LDIF file format standard. This article is designed to help you better understand how the LDIFDE utility can be used to migrate directories.
Csvde
Imports and exports data from Active Directory Domain Services (AD DS) using files that store data in the comma-separated value (CSV) format. You can also support batch operations based on the CSV file format standard.
The source .csv file can come from an Exchange Server directory export. However, because of the difference in attribute mappings between the Exchange Server directory and Active Directory, you must make some modifications to the .csv file. For example, a directory export from Exchange Server has a column named "obj-class" that you must rename to "objectClass." You must also rename "Display Name" to "displayName."
• What are the FSMO roles? Who has them by default? What happens when each one fails?
FSMO stands for Flexible Single Master Operation.
It has 5 roles:
• Schema Master
• Domain Naming Master
• RID Master
• PDC Emulator
• Infrastructure Master
• I want to look at the RID allocation table for a DC. What do I do? Install the Support Tools from the OS installation disk (Disk => support => tools => suptools.msi), then run dcdiag /test:ridmanager /v to display RID allocation information.
If the domain controller that is the Schema Master FSMO role holder is temporarily unavailable, DO NOT seize the Schema Master role. If you are going to seize the Schema Master, you must permanently disconnect the current Schema Master from the network. If you seize the Schema Master role, the boot drive on the original Schema Master must be completely reformatted and the operating system must be cleanly installed, if you intend to return this computer to the network.
NOTE: The boot partition contains the system files (\System32). The system partition is the partition that contains the startup files: NTDetect.com, NTLDR, Boot.ini, and possibly Ntbootdd.sys.
The Active Directory Installation Wizard (Dcpromo.exe) assigns all 5 FSMO roles to the first domain controller in the forest root domain. The first domain controller in each new child or tree domain is assigned the three domain-wide roles. Domain controllers continue to own FSMO roles until they are reassigned by using one of the following methods:
• An administrator reassigns the role by using a GUI administrative tool.
• An administrator reassigns the role by using the ntdsutil /roles command.
A domain controller whose FSMO roles have been seized should not be permitted to communicate with existing domain controllers in the forest. In this scenario, you should either format the hard disk and reinstall the operating system on such domain controllers, or forcibly demote such domain controllers on a private network and then remove their metadata on a surviving domain controller in the forest by using the ntdsutil /metadata cleanup command. The risk of introducing a former FSMO role holder whose role has been seized into the forest is that the original role holder may continue to operate as before until it inbound-replicates knowledge of the role seizure. Known risks of two domain controllers owning the same FSMO roles include creating security principals that have overlapping RID pools, and other problems.
Transfer FSMO roles
Notes
• Under typical conditions, all five roles must be assigned to "live" domain controllers in the forest. If a domain controller that owns a FSMO role is taken out of service before its roles are transferred, you must seize all roles to an appropriate and healthy domain controller. We recommend that you only seize all roles when the other domain controller is not returning to the domain. If it is possible, fix the broken domain controller that is assigned the FSMO roles. You should determine which roles are to be on which remaining domain controllers so that all five roles are assigned to a single domain controller. For more information about FSMO role placement, see Microsoft Knowledge Base article 223346, "FSMO placement and optimization on Windows 2000 domain controllers."
• If the domain controller that formerly held any FSMO role is not present in the domain and has had its roles seized by using the steps in this article, remove it from Active Directory by following the procedure outlined in Microsoft Knowledge Base article 216498, "How to remove data in Active Directory after an unsuccessful domain controller demotion."
• Removing domain controller metadata with the Windows 2000 version or the Windows Server 2003 build 3790 version of the ntdsutil /metadata cleanup command does not relocate FSMO roles that are assigned to live domain controllers. The Windows Server 2003 Service Pack 1 (SP1) version of the Ntdsutil utility automates this task and removes additional elements of domain controller metadata.
• Some customers prefer not to restore system state backups of FSMO role holders in case the role has been reassigned since the backup was made.
• Do not put the Infrastructure Master role on the same domain controller as the global catalog server.
If the Infrastructure Master runs on a global catalog server, it stops updating object information because it does not contain any references to objects that it does not hold. This is because a global catalog server holds a partial replica of every object in the forest.
Method 1
If Windows 2000 Service Pack 2 or later is installed on your computer, you can use the Setpwd.exe utility to change the SAM-based Administrator password. To do this:
setpwd /s:servername
4. When you are prompted to type the password for the Directory Service Restore Mode Administrator account, type the new password that you want to use.
NOTE: If you make a mistake, repeat these steps to run setpwd again.
Method 2
• What are GPOs? Group Policy gives you administrative control over users and computers in your network. By using Group Policy, you can define the state of a user's work environment once, and then rely on Windows Server 2003 to continually enforce the Group Policy settings that you apply across an entire organization or to specific groups of users and computers.
• Group Policy advantages:
o You can assign Group Policy in domains, sites, and organizational units.
o All users and computers are affected by Group Policy settings in the domain, site, and organizational unit.
o No one in the network has rights to change the settings of Group Policy; by default only the administrator has full privilege to change them, so it is very secure.
o Policy settings can be removed, and the changes can be rewritten.
• Where GPOs store Group Policy information: Group Policy objects store their Group Policy information in two locations: the Group Policy Container (in Active Directory) and the Group Policy Template (in the SYSVOL folder).
• Managing GPOs: To avoid conflicts in replication, consider the selection of domain controller, especially because the GPO data resides in both the SYSVOL folder and Active Directory. Active Directory uses two independent replication techniques to replicate GPO data among all domain controllers in the domain. Whether one administrator's changes overwrite those made by another administrator depends on the replication latency. By default, the Group Policy Management Console uses the PDC Emulator so that all administrators work on the same domain controller.
• WMI filter: WMI filters are used to narrow the scope of GPOs based on attributes of the user or computer. In this way, you can extend the GPO filtering capabilities beyond the security-group filtering mechanisms that were previously available.
• A WMI filter can be linked to a GPO. When you apply a GPO to the destination computer, Active Directory evaluates the filter on the destination computer. A WMI filter consists of queries that Active Directory evaluates against the WMI repository of the destination computer. If the set of queries is false, Active Directory does not apply the GPO; if the set of queries is true, Active Directory applies the GPO. You write the query by using the WMI Query Language (WQL), a language similar to SQL, for querying the WMI repository.
• Also consider how you will implement Group Policy for the organization. Be sure to consider the delegation of authority, separation of administrative duties, central versus decentralized administration, and design flexibility so that your plan will provide for ease of use as well as administration.
• Planning GPOs Create GPOs in way that provides for the simplest and most manageable design -- one in which you can use inheritance and multiple links.
2. Site: Any GPOs that have been linked to the site that the computer belongs to are processed next. Processing is in the order specified by the administrator on the Linked Group Policy Objects tab for the site in the Group Policy Management Console (GPMC). The GPO with the lowest link order is processed last, and therefore has the highest precedence.
With all of these benefits, there are still negatives in using the GPMC alone. Granted, the GPMC is needed and should be used by everyone for what it is ideal for. However, it does fall a bit short when you want to protect the GPOs from the following:
• Role-based delegation of GPO management
• Being edited in production, potentially causing damage to desktops and servers
• Forgetting to back up a GPO after it has been modified
• Change management of each modification to every GPO
• What are the GPC and the GPT? Where can I find them? The GPC (Group Policy Container) is the portion of a GPO stored in Active Directory. The GPT (Group Policy Template) is the file-system portion, stored on domain controllers in the SYSVOL share under the Policies folder; each GPO's container and template are named by the GPO's GUID.
You can block policy inheritance for a domain or organizational unit. Using block inheritance prevents GPOs linked to higher sites, domains, or organizational units from being automatically inherited by the child level. By default, children inherit all GPOs from the parent, but it is sometimes useful to block inheritance. For example, if you want to apply a single set of policies to an entire domain except for one organizational unit, you can link the required GPOs at the domain level (from which all organizational units inherit policies by default), and then block inheritance only on the organizational unit to which the policies should not be applied.
• How can you determine what GPO was and was not applied for a user? Name a few ways to do that. Use the Group Policy Management Console, created by Microsoft for that very purpose; it allows you to run simulated policies on computers or users to determine what policies are enforced. You can also run the gpresult command-line tool or the Resultant Set of Policy (RSoP) snap-in on the client.
• A user claims he did not receive a GPO, yet his user and computer accounts are in the right OU, and everyone else there gets the GPO. What will you look for? Here the interviewer wants to hear troubleshooting steps: Which GPOs are applying, and are they applying to all users and computers? Which GPOs are implemented on the OU? Also check the security-group filtering and any WMI filtering on the GPO, and run RSoP or gpresult against the affected user and computer.
• What is a subnet mask? A subnet mask separates the IP address into the network and host addresses.
• What is ARP? Address Resolution Protocol, a network-layer protocol used to convert an IP address into a physical address (called a DLC address), such as an Ethernet address.
• What is ARP Cache Poisoning? ARP cache poisoning, also known as ARP spoofing, is the process of falsifying the source Media Access Control (MAC) addresses of packets being sent on an Ethernet network.
• What is CIDR? CIDR (Classless Inter-Domain Routing) is a method of IP addressing and routing that replaces the classful /8, /16, and /24 networks with variable-length prefixes, written as address/prefix-length (for example, 131.112.0.0/23).
• In Internet Protocol terminology, a private network is typically a network that uses private IP address space, following the standards set by RFC 1918 and RFC 4193. These addresses are common in home and office local area networks (LANs), as globally routable addresses are scarce, expensive to obtain, or their use is not necessary. Private IP address spaces were originally defined in efforts to delay IPv4 address exhaustion, but they are also a feature of the next-generation Internet Protocol, IPv6.
• You have the following network ID: 131.112.0.0. You need at least 500 hosts per network. How many networks can you create? What subnet mask will you use? To divide it into the maximum number of subnets containing at least 500 hosts each, use a /23 subnet mask (255.255.254.0). This provides 128 networks of 510 hosts each. If you used a /24 mask, you would be limited to 254 hosts; similarly, a /22 mask would be wasteful, allowing 1022 hosts.
• You need to view at network traffic. What will you use? Name a few tools? winshark or tcp dump you can use Network Monitor. You can also use Etheral
• What is DHCP? What are the benefits and drawbacks of using it? DHCP, Dynamic Host Configuration Protocol. | https://it.scribd.com/document/40088540/Windows-Q-A-Final | CC-MAIN-2020-10 | refinedweb | 39,607 | 55.44 |
#ifndef _PWD_H #define _PWD_H /* Trying to declare uid_t is a mess. We tried #include <sys/types.h>, which only worked on VAX (VMS 6.2, I think), and we tried defining it here which only worked on alpha, I think. In any event, the VMS C library's concept of uid_t is fundamentally broken anyway (getuid() returns only the group part of the UIC), so we are better off with higher-level hooks like get_homedir and SYSTEM_GETCALLER. */ #define pid_t int struct passwd { char *pw_name; }; struct passwd *getpwuid(/* really uid_t, but see above about declaring it */); char *getlogin(void); #else #endif /* _PWD_H */ | http://opensource.apple.com/source/cvs_wrapped/cvs_wrapped-15/cvs_wrapped/vms/pwd.h | CC-MAIN-2014-15 | refinedweb | 101 | 63.9 |
Hello everyone! I am having a bit of an odd problem here.
So, on my Visual Studio project, I decided to use a precompiled header. This didn't seem to cause a noticable problem up until today.
I have a small segment of code that is supposed to remove one element of a vector based on it's contents:
using namespace std; . . . vector<int> v; v.push_back(4); v.push_back(5); v.push_back(6); v.push_back(7); v.erase(std::remove(v.begin(), v.end(), 6), v.end());
This USUALLY works. However, Visual Studio kept griping at me about how the synax is incorrect. So, I right-clicked on the remove function and found it's definition inside stdio.h:
_CRTIMP int __cdecl remove(_In_z_ const char * _Filename);
Hmm. Go figure. That seems to be a different remove function all together! This has stunted my progress, as I can't seem to find any help online for this.
Does anyone know how to fix this issue? Am I making a noob mistake?
I would greatly appreciate any comments.
-Kaleb | https://www.daniweb.com/programming/software-development/threads/428867/std-remove-has-differing-syntax-in-visual-studio | CC-MAIN-2021-17 | refinedweb | 179 | 69.38 |
A small C++ Tip. Return values from Constructors
You can’t return anything from a constructor. One way is to use exceptions but those can bring their own issues. Google’s C++ guidelines actually stipulate no exceptions and I’ve heard it from others as well that they prefer not to use them. Some people avoid it by having a mostly empty constructor and then an init() method to do the real initialisation and return a success/fail state.
But there is a simple trick that you can use, just return the state as a reference. Here’s an example. If you want to return more values, use a struct.
#include <iostream> using namespace std; class Simple { public: Simple(bool& failure) { failure = true; } ~Simple() { } }; int main() { bool fail_flag = false; Simple f(fail_flag); if (fail_flag) cout << "failed " << endl; else cout << "Success" << endl; } | https://learncgames.com/tag/references/ | CC-MAIN-2020-50 | refinedweb | 141 | 73.27 |
As usual while waiting for the next release - don't forget to check the nightly builds in the forum.
I've just commited Revision 15 of the Interpreted Langs plugins:
sorry about the current state of that plugin -- you should just remove py_embedder.cpp, py_embedder.h, libpython and any python related vars from the project [...]
BTW: You can use a branch for this experimental stuff thus trunk would always be a "usable" version. SVN can do so much more that "just trunk". ;-)With regards, Morten.
latest build (rev 31), which includes a file explorer here:
#include <sdk.h>#include <cbplugin.h> // for "class cbPlugin"
I guess it's rev 35 already...?! ;-)
After these changes I was able to compile... now I'm testing... ;-)
rev 44 win32 build | http://forums.codeblocks.org/index.php?topic=8416.0;prev_next=next | CC-MAIN-2019-51 | refinedweb | 127 | 77.53 |
Put MediaWiki to Work for You 171
NewsForge (Also owned by VA) is running a short writeup on how to put MediaWiki to work for your organization. The writeup includes several addition tools that could be helpful in rounding out the overall package. From the article: "?"
Does it come with Admin tools? (Score:4, Funny)
Re:Does it come with Admin tools? (Score:2)
Crap (Score:5, Insightful)
Slashvertisement. (Score:3, Funny)
as part of a sales pitch.
Re:Crap (Score:5, Insightful)
That said, your average business person stops reading the moment they get to "Next, find the LocalSettings.php file in your wiki directory. Add the following lines: $wgGroupPermissions['*']['createaccount'] = false;..." A better way to word this would have been "Now go find those tech guys you keep in the basement and tell them you want a Wiki."
Just a thought.
Re:Crap (Score:2, Insightful)
The issue the other person seems to have isn't that this article exists, but rather that it was posted here (which you agreed to in the next paragraph). This is quite simply a bizarre article for Slashdot -- it's superficial, ther
Keep reading, fm6, how this is a big deal. (Score:2)
From the article:.
Let me put that into perspective. One of the main features selling M$ Word is easy to use vers
Re:Keep reading, fm6, how this is a big deal. (Score:2)
A collaborative editor might be a better replacement than a wiki. With a collaborative editor two or more people can have the same document open on different computers at the same time. When someone types words into the doucment they appear on other peoples screens in real time. With a wiki y
Re:Keep reading, fm6, how this is a big deal. (Score:2)
It's not both "much better" and "free." Either they have to give up some functionality, or they have to re-learn the wiki, and at the least yo
Re:Crap (Score:2)
Wait...
Wiki works (Score:3, Insightful)
However, the information does need to be organized, otherwise you can only really put info into it and nobody will ever find it. Luckily Sinorca4moin provides a wiki editable navigation menu, that allows you to put some minimal organization on top of your wiki.
This has allowed me to migrate the Kernelnewbies [kernelnewbies.org] site to a wiki. Now it gets regular updates again...
Re:Wiki works (Score:3, Insightful)
Re:Wiki works (Score:2)
For internal "corporate" documentation you need to be able to have projects, programs, systems, servers, etc all in a pretty deeply nested structure and each with it's own set of people responsible and with the ability to delegate permissions. Needless to say it needs to be universally searchable.
Oh and one more thing, it needs to be
Re:Wiki works (Score:3, Informative)
Because writing an entirely new content management system is actually easier than figuring out how to use the mess that is Zope/Plone.
Re:Wiki works (Score:2)
Re:Crap (Score:2)
Re:Crap (Score:2)
Re:Crap (Score:3, Informative)
You may personally dislike MediaWiki and Wikimedia, and that's fine, but it's no substitute for facts.
Re:Crap (Score:2)
Then I read the article and decided it would be not a good idea to do that.
Absolutely useless article, imo.
Re:Crap (Score:2)
Re:Crap (Score:2)
Because it involves learning a new skillset (Score:5, Insightful)
Re:Because it involves learning a new skillset (Score:3, Funny)
Learning MediaWiki vs. learning Word or OOo? (Score:2)
Is learning MediaWiki really any harder than learning Microsoft Office Word or OpenOffice.org Writer?
Re:Learning MediaWiki vs. learning Word or OOo? (Score:2)
Someone needs to bolt one of these AJAX Web 2.0 Beta platform browser-hosted swishy word processors to the editing stage.
/ On the subject of Office files bei
Re:Because it involves learning a new skillset (Score:3, Insightful)
Managing a wiki isn't so hard, you just look at the RSS feed of changes on a daily basis and if there's a mistake, it's trivial to revert it.
Cheers,
Ben
I concur! (Score:4, Insightful)
Wiki (Score:3, Funny)
-Grey [wellingtongrey.net]
Obligatory: (Score:2)
Re:Wiki (Score:2)
Infinite monkies... Infinite typewriters... Hamelet? Oh never mind.
Re:Wiki (Score:3, Funny)
I can seen this now.. (Score:5, Funny)
I can already see the productivity peaking =) (Score:2, Troll)
I can see how helpful it can be for a company.
Seriously though, a friend of mine actually did install a wiki under the same premises of the article. They are having lots of fun with it, but it hardly helped their workplace.
I welcome Wikis to my organization (Score:2, Interesting)
Re:I welcome Wikis to my organization (Score:3, Interesting)
We've grown from being ~20 employees about a year ago to being just shy of 50 now (not counting external consultants).
We do spend face-to-face time on education, but the company wiki contains lots of information as well; and we really like the people who browse through it and ask questions regarding the material.
It's definitely a timesaver.
Nothing New (Score:2)
Its cool that this idea is put out, but I don't understand why this is such a big deal. It was on newsforge, linked from ITMJ, slashdot too? Yippy?
Re:Nothing New (Score:2)
worked for me (Score:5, Interesting)
Re:worked for me (Score:2)
*Yes, it would be easier to have it all stay in a designated library room, but then reality hits.
Re:worked for me (Score:2)
Semantic MediaWiki (Score:3, Informative)
Extensibility of MediaWiki (Score:4, Informative)
Don't expect to be able to extend or modify it easily. I've come to the conclusion that it would be easier to reimplement it than to modify it.
Re:Extensibility of MediaWiki (Score:4, Insightful)
However, you're correct. If you plan to change the look or behaviour of it, you are truly out of luck due to the MediaWiki codebase mess.
Re:Extensibility of MediaWiki (Score:2)
In action in our tech department... (Score:2, Insightful)
Re:In action in our tech department... (Score:2)
Re:In action in our tech department... (Score:2)
Re:In action in our tech department... (Score:2)
WTF? Give them their own damn password.
Man, security at your place of work must be really lax.
Re:In action in our tech department... (Score:2)
If you mean sharing servers passwords and what not, we store public PGP keys on the wiki and any passwords sent are sent by email encrypt
Re:In action in our tech department... (Score:2)
Uh, no. You have to be pretty high up on the geek scale to use a version control system. Unless you were forced into learning it while doing a computer science degree it's a pretty formidable thing.
"What makes version control systems (VCS) so great is this: lots of people can take your code, make little branches, and fiddle around with it in a distributed fashion. Then at some point, you get to merge it all back together in such a way that the VCS will seamlessly delete all the wrong bit
What's going on here...? (Score:4, Insightful)
I'm not a MW guru, but does the article's idea of <PHP> tag really do what I think it does?
As in "raw code in a a place where people can edit it?"
Doesn't matter they are trying to limit the wiki's edit access only to registered users - this is wrong.
Ugh. You know, one of the reasons why I like MediaWiki is that it does well with separating the page code from the HTML. And now these people want to sprinkle random PHP crap in the pages again. Argh.
And as an additional bonus, you get to store your mysql_connect() parameters to the page source. Whee. Realllly smart.
Somebody please submit this to TheDailyWTF...
The real way to do this is to write a MediaWiki extension, of course (look at ParseFunctions for an example of something simple), which is then accessed through the usual hooks, like {{foo:...}}, but don't ask me, I don't know that much about MW's internal structure. I just know bad ideas when I see them. =)
Re:What's going on here...? (Score:2)
It uses the extension mechanism of parser hooks, it just uses it in the wrong place, Setup.php, w
Re:What's going on here...? (Score:2)
The {{foo}} (variables) syntax is possible to extend, but does not, as far as I have been able to tell, support parameters. So {{foo:...}} won't work. See the UserNameMagic extension for how to add a Med
Re: What's going on here...? (Score:2)
This article offers the insane learning curve of giving a really broad overview of why you'd use a wiki, then discussing how to develop it. Typically, the builtin configuration pages or existing plugins would go in between those.
How to compare Wikis (Score:5, Informative)
Re:How to compare Wikis (Score:2)
A couple months ago at my workplace, I proposed using a wiki in my group because we were having a lot of problems organizing our information and keeping it updated in a timely manner. My proposal was well received by the group and we went with Twiki , because it had already been setup by the IT department. Personally, I would not recommend twiki to anyone, because the syntax is horrible and the interface is very unintuitive (among a host of other downsi
Re:How to compare Wikis (Score:2)
Re:How to compare Wikis (Score:3, Interesting)
Company wikis (Score:4, Interesting)
Sure, user education would help here, but there is only so much one can do... especially in a company of 30,000+ users.
While wikis certainly lower the bar for producing web content, there really needs to be some sort of way to prevent users from doing things that they don't particularly realize are (overall) harmful. Or at least much better training tools.
Wikis are evil (Score:5, Interesting)
Stable versions (Score:2)
Your "knowledge base" is your web site or documentation section.
And your web site or documentation section is a copy of wiki page versions chosen by your program's release crew. Release engineers search for the version of your page that most closely matches the behavior of the release branch and then copy that version to the publicly viewable site that uses the same wiki engine.
Re:Wikis are evil (Score:2, Insightful)
It's a well known fact in large organisations that knowledge management is a bastard. All those people trying to figure out how best to share information, some of them trying out wikis, and it turns out all they have to do is design a website and maintain a documentation section! They're going to feel pretty foolish when they realise it was so simple!
Seriously, get over yourself. There will be many cases where the employees of a company are capable of providing more content than those responsible for the w
Re:Wikis are evil (Score:2)
They work well when people want to share (Score:4, Interesting)
I've put into it design documentation, instructions for accessing our other services (e.g. Subversion repositories), troubleshooting tips, sequence diagrams of various race conditions, you name it. I try to periodically dump everything in my notes directory into the wiki. The effort of cleaning it up means I'll understand it later, having it on the wiki server means it's backed up regularly, and as a bonus, other people see it and don't need to ask me as many questions, so I can spend more time developing. And it gives people a way to still get answers when I'm off bicycling through Africa.
But collaboration technology like MediaWiki or bugzilla only works when people use it. There are always some people who won't play with others. If I put information on the wiki, they'll come bug me for it anyway. If I tell them it's on the wiki, they still won't read it. If I give them information verbally and specifically ask them to put it on the wiki, they won't do it. And then they wonder why I ignore their emails...
Re:They work well when people want to share (Score:2, Interesting)
Recently our IT people installed MediaWiki and I have been entering every bit of information I have to look up from other sources whilst trying to maintain a consistent structure.
I've talked a few other people into using it, but takeup is very slow even though I can s
Installed one last week! (Score:2)
Mixed results with our intranet wiki (Score:5, Interesting)
The wiki has succeeded in a couple of notable areas. The photo directory page is critical for learning new faces on a rapidly growing staff. Another page has completely replaced sticky-notes that were formerly used to coordinate certain tasks among staff and interns. The IT department has a lot of miscellaneous documentation pages. A few other pages serve the function of an electronic bulletin board for staff scattered across two buildings.
Management was very concerned at first that staff would abuse the wiki, either by wasting time posting trivia or by outright vandalism. Neither fear has materialized.
The biggest failure of the wiki is the number of abandoned pages. They don't do any harm, but about a third of pages are derelict, with old information that the author obviously lost interest in maintaining. Having a wiki editor might solve that problem, but in practice it doesn't rise to the level.
Re:Mixed results with our intranet wiki (Score:2)
We've been doing this for about a year (Score:2, Insightful)
It's a huge improvement on any previous method we've used to organise our documentation - mostly FAQs, instructions, process documentation, links to external resources, screenshots, all sorts. Apart from backups (VBSCript to take a MYSQL dump and copy the images directory), I use HTTrack to take a 1 link de
Does it come with a wafer? (Score:3, Interesting)
Re:Does it come with a wafer? (Score:2)
Did that at my company... (Score:3, Interesting)
The situation we used to work in was that we had a lot of customer information that changed quickly, a group of engineers who worked disparate hours (there was supposed to be someone available between 7AM and Midnight) and documentation that was scattered all over. We had a central repository for documentation, but it was the pits. You could only search on key words or categories, check-out and check-in procedures were laborious, if not counter-productive, and everything had to go through an approval cycle. Finally (and that, combined with the fact the repository was unsearchable, was kinda the nail in its coffin), reviews were partially based on how many entries you'd submit. The end result was an essentially unsearchable repository was filled to the bring with duplicate entries and outdated stuff.
Fed up with that, we created a Wiki on the side project. Initially I filled it myself with random things that I found useful. Then other people started using it. It wasn't perfect, but it was loads better than what we had - we could actually find information! Outdated stuff could be updated. People didn't have to call others at all hours of the night for server information anymore. And best of all, new hires could be pointed to it, and they could find useful starting information.
To give you an idea of how successful it was, it was initially completely disallowed by management, as it was creating a duplicate information store. The desktop server on which it was stored was yanked. But it stuck around, because people actually used it. Now, the entire group uses it for storing training, server, contact or any other information that a lot of people need and that changes often. Contrary to the commercial data storage software, it helps us do our job more efficiently.
Wikis are undeniably useful and loads better than anything else out there - if you make sure that the information you try to make accessible falls in the following categories:
- lots of different people can use it
- changes often
- lots of people can contribute to it
Oh, and it also helps if people aren't dicks, to use Wikipedia's rule.
PSU (Score:2)
However, you really need to work with people who already are used to collaborating i
Coursebook replacement (Score:2, Interesting)
We have started using a wiki to cooperatively develop materials for this course. We hope that it will ev
It works, and it needs FCKEditor (Score:2)
We did find FCKEditor [fckeditor.net] but that doesn't come built-in and support is beta. Mention that to the system admins, and they'll refuse to install it. Once that
Lifehacker had something about this... (Score:2)
I tried this, and it worked quite well.
Which Wiki? (Score:2, Interesting)
(Semantic) MediaWiki on XAMPPLITE is easy (Score:2, Informative)
To start hacking on the awesome Semantic MediaWiki extensions [ontoworld.org], I downloaded XAMPPLITE (MySQL, PHP, Apache, and phpMyAdmin all nicely bundled for Windows) and the MediaWiki source. I had it up and running on Windows XP in 10 minutes!
For PHP development, I downloaded Eclipse and the PHPEclipse extension. I already had Cygwin and Vim installed, but I don't think you need them.
I've also used TWiki at work. The benefit of MediaWiki is the users' familiarity with Wikipedia.
Semantic MediaWiki adds attribut
using it here (Score:2)
Some useful add-ons we've used are: [wikimedia.org] (a patch to have restrictions of namespaces to certain groups) [wikimedia.org]
MediaWiki seems a strange choice for corporate use (Score:4, Insightful)
Now, I may be wrong, (and I welcome corrections if so), but from what I gathered, MediaWiki has poor-to-nonexistent support for advanced granularity of permissions. Essentially, everything is editable by everyone. Beyond that, there is a very simple level of control inasmuch as admins can lock a page and whatnot. But setting up a system whereby users come out of AD/LDAP and can edit (or not) different areas corresponding to their department/group, or setting up workflow systems where (for example) anyone can edit but it must be approved by a departmental admin (who can act as admin within their department's pages, but not elsewhere) before showing up... It didn't look as if any of this was possible.
Furthermore, I was told there's no point even asking for it. Because such things don't gel with the Wikipedia philosophy, the people spending their time coding MediaWiki simply aren't interested in implementing them. (Don't get me wrong, I'm not whinging about this - naturally they should devote their time to features which actually suit their demands, not somebody else's).
So it seems to me very odd to promote MediaWiki for the corporation, when other systems have much more sophisticated ACL-type features, granular permissions, and so on.
(PS. FWIW, we eventually settled on Plone. Plone does have a Wiki plugin so if we ever do use Wiki's I guess we'll use that. But I'm still evaluating which Wiki system to use for a separate project, outside work, but which still requires more advanced editing permission granularity. DokuWiki seemed the best fit, with the one problem that it uses flat files for storage, and our sysamin would prefer a db backend as they have a dedicated db box, so it'd be quicker. WikiMatrix narrowed it down to ErfurtWiki, Midgard Wiki, miniWiki, PhpWiki, TikiWiki, WackoWiki and Wiclear: out of these, I didn't like the look of phpWiki for some reason I can't remember right now, and I've never even heard of the others. If anyone has any experience with any of these systems, please do share
:) )
Re:MediaWiki seems a strange choice for corporate (Score:2)
Re:MediaWiki seems a strange choice for corporate (Score:2)
Very true, point taken.
In fact I did make that point in my doc. Something along the lines "al
Re:MediaWiki seems a strange choice for corporate (Score:2)
For this reason, we used the Perspective Wiki -- IIS based, uses integrated AD authentication. Probably not the greatest Wiki software, but in my opinion any system that requires another password is a system that people aren't going to want to use.
Wikis in Enterprises (Score:2, Informative)
Hopefully this information will be translated to english in the next 1-3 days.
unparalleled success (Score:2)
TWiki should be better for a corporate environment (Score:2, Interesting)
I've heard that a while ago, some folks inside Intel set up a mediawiki site for internal documentation, and when the lawyers heard about it, they had the project shut down. There was too much liability I gather.
Anyway, I've set up a TWiki installation at my work three years ago now and a
No standardized markup (Score:2)
Hopefully GUI editors will minimize this problem.
tiddlywiki (Score:2)
Eventually I found tiddlywiki [tiddlywiki.com].
Pros:
* no httpd required
* all information stored in a single html file (including the wiki code itself!)
* has tags and a search function
* monstrously quick and easy to set up.
Cons:
* haven't found any
Wikiphilia (Score:2)
I wonder if it's related to Morgellons [morgellons.org]?
Just don't use it as a doge for competence. (Score:2)
Probably written by someone who hasn't thought about it or tried it. In my experience Wiki's have the same fault of most other "management" software. Unless people must use it, it's a waste of time.
It's not typically the software's fault, it's the people. Managers are lazy/busy and resistant to change. Frankly most of them don't have the skill to organize a wiki
Re:Great! I can update the Overtime Policy. (edit) (Score:2)
NEW EDIT - by Employer
Overtime need not be paid out at all, with the employer providing no benefits.
Re:Great! I can update the Overtime Policy. (edit) (Score:2)
Re:Great! I can update the Overtime Policy. (edit) (Score:2)
Stable versions (Score:2)
Many would prefer a system in which info passes through several hierarchies before being published
Then let the hierarchy approve specific versions of each article that are copied to the world-visible web server. Remember that MediaWiki stores all previous versions of each article.
Re:PBH? (Score:2)
Re:Wiki and Work (Score:2)
Re:VA? (Score:2)
Slashdot is a part of/owned by OSTG, which is owned by VA Software. They used to state that this or that site was a part of the OSTG, but now, it seems that the VA (which, IIRC, stands for 'Value Added') brand has been brought back from the dead.
Re:Needs better Oracle support (Score:2)
If your organization is interested in sponsoring maintenance for Oracle support or in maintaining it yourself, please let us know. Thanks.
Re:Trying to setup a Wiki for this now! (Score:2)
Backup the db - if you're using mySQL, google for "phpmyadmin", install it and you'll find an easy way.
Re:Trying to setup a Wiki for this now! (Score:2)
For html - that depends. I don't know if there's any viable import facility; you may have to do it by hand. You'll want to strip some or all tags from your html files, maybe replace them with wiki tags for formatting. But remember that the actual data lives in a database, so with some programming you can load your data into the wiki's database directly, though this may involve modifying a | https://slashdot.org/story/06/05/21/1515232/put-mediawiki-to-work-for-you | CC-MAIN-2018-09 | refinedweb | 4,018 | 62.17 |
Hello all,
I'd first like to say that I'm pretty much a beginner in C, so this and other problems I post may seem trivial to many of you. All I can ask for is patience and forgiveness :)
Now, on to my problem. For some time now, I have been bugged with the issue of making a "really dynamic array". To be more specific, an array whose size would be defined by the length of the string. If the array content where to be, say "Hello World", the array would be of size [12]. If the content of the array were to be "C", the array would be size [2]. You can always just define a big enough array (something like ar_word[500]; ) but thats just way too...inelegant. Anyway, I had an idea solving this problem that pretty much worked out:
Code:
#include <stdio.h>
#include <string.h>
int main()
{
int x = 0; /*Will be the size of the array*/
char word[x]; /*The actual array*/
while ((word[x] = getchar()) != '\n') { /* User presses a key, the corresponding character is then loaded into word[x]. AFTER that the program determines whether or not "Enter" was pressed*/
char word[++x]; /*x is enlarged by one, thus also the array size*/
}
word[x]='\0'; /*When the loop ends, the last "column" of word has the value of "\n". That is replaced with the value of "\0"*/
printf("%s",word);
return 0;
}
The problem with this code is that the limit is 32. Once you go above 32 characters, the array size goes back to zero (I've strlen'd the array and then compared it to the value of x. While x is 33 or more, the length of "word" is zero). I guess this has something to do with the 32bit environment I'm working in, but then again I'm not sure.
My question: does anyone know how I can get around this? Either that, or does anyone know another whay of making a "really dynamic array"?
Thanks in advance for all help and advice. | http://cboard.cprogramming.com/c-programming/100489-little-array-difficulty-printable-thread.html | CC-MAIN-2014-35 | refinedweb | 347 | 79.3 |
On Thu, 09 Feb 2006, Peter N. Lundblad wrote:
> On Wed, 8 Feb 2006, Daniel Rall wrote:
>
> > On Wed, 08 Feb 2006, Peter N. Lundblad wrote:
> >
> > > On Tue, 7 Feb 2006, Justin Erenkrantz wrote:
> > I wouldn't mind both being supported. Peter's nitpick is important
> > because the tools/dev/contribulyze.py script that Karl wrote depends
> > upon an exact match:
> >
> > field_re = re.compile('^(Patch|Review|Suggested|Found) by:\s+(.*)')
> >
> +1. Here, consistency isn't worth the noise. And, well, how many scripts
> depend on this? :-)
I've attached a patch which allows for both "Reviewed by" and "Review
by", effectively making the former an alias for the latter.
This implementation has the side-effect of generating HTML with log
messages which don't exactly match what's in the repository, because
canonical names (e.g. "Review by") are used in place of provided names
(e.g. "Reviewed by").
[[[
Accept "Reviewed by" as an alias for "Review by".
* tools/dev/contribulyze.py
(Field.__init__): Accept an optional ALIAS argument which creates an
instance field of the same name.
(field_aliases): A dict of alias names to field names
(e.g. "Reviewed" to "Review").
(graze): When instantiating and using Field objects, take field
aliases into account.
]]]
Index: tools/dev/contribulyze.py
===================================================================
--- tools/dev/contribulyze.py (revision 18403)
+++ tools/dev/contribulyze.py (working copy)
@@ -385,9 +385,11 @@
class Field:
"""One field in one log message."""
- def __init__(self, name):
+ def __init__(self, name, alias = None):
# The name of this field (e.g., "Patch", "Review", etc).
self.name = name
+ # An alias for the name of this field (e.g., "Reviewed").
+ self.alias = alias
# A list of contributor objects, in the order in which they were
# encountered in the field.
self.contributors = [ ]
@@ -477,7 +479,8 @@
log_separator = '-' * 72 + '\n'
log_header_re = re.compile\
('^(r[0-9]+) \| ([^|]+) \| ([^|]+) \| ([0-9]+)[^0-9]')
-field_re = re.compile('^(Patch|Review|Suggested|Found) by:\s+(.*)')
+field_re = re.compile('^(Patch|Review(ed)?|Suggested|Found) by:\s*(.*)')
+field_aliases = { 'Reviewed' : 'Review' }
parenthetical_aside_re = re.compile('^\(.*\)\s*$')
def graze(input):
@@ -520,10 +523,14 @@
# We're on the first line of a field. Parse the field.
while m:
if not field:
- field = Field(m.group(1))
+ ident = m.group(1)
+ if field_aliases.has_key(ident):
+ field = Field(field_aliases[ident], ident)
+ else:
+ field = Field(ident)
# Each line begins either with "WORD by:", or with whitespace.
in_field_re = re.compile('^('
- + field.name
+ + (field.alias or field.name)
+ ' by:\s+|\s+)(\S.*)+')
m = in_field_re.match(line)
user, real, email = Contributor.parse(m.group(2))
This is an archived mail posted to the Subversion Dev
mailing list. | http://svn.haxx.se/dev/archive-2006-02/0571.shtml | CC-MAIN-2015-18 | refinedweb | 425 | 61.22 |
This article is meant for programmers who are only getting started with the Visual Studio environment and trying to compile their C++ projects under it. Everything can look strange and complicated in an unfamiliar environment, and novices are especially irritated by the stdafx.h file, which causes strange errors during compilation. Pretty often it all ends with them diligently turning precompiled headers off in every project. We've written this article to help Visual Studio newcomers figure it all out.
The purpose of precompiled headers is to speed up compilation. The benefit can be seen even with a project of just a few dozen files, and using such heavy libraries as boost makes the improvement all the more noticeable. We don't know how exactly it is all implemented in Visual C++, but we know that, for instance, you can store text already split into lexemes. This will speed up the compilation process even more.
How precompiled headers work
A file containing precompiled headers has the ".pch" extension. The file name usually coincides with the project name, but you can naturally change this and any other names used in the project compilation settings. A typical "stdafx.h" may look as follows (the fragment below also suppresses a number of warnings so that the included headers stay clean under /Wall):
#include "VivaCore/VivaPortSupport.h" //For /Wall #pragma warning(push) #pragma warning(disable : 4820) #pragma warning(disable : 4619) #pragma warning(disable : 4548) #pragma warning(disable : 4668) #pragma warning(disable : 4365) #pragma warning(disable : 4710) #pragma warning(disable : 4371) #pragma warning(disable : 4826) #pragma warning(disable : 4061) #pragma warning(disable : 4640) #include <stdio.h> #include <string> #include <vector> #include <iostream> #include <fstream> #include <algorithm> #include <set> #include <map> #include <list> #include <deque> #include <memory> #pragma warning(pop) //For /Wall:
Suppose a project consists of three files that include the following headers:

- File A: <vector>, <string>
- File B: <vector>, <algorithm>
- File C: <string>, <algorithm>

You can move all three headers (<vector>, <string>, and <algorithm>) into a common "stdafx.h" and precompile it. Each file then pulls in one header it does not strictly need, but the gain from precompiled headers outweighs the losses on syntax analysis of those additional code fragments. And that's just the start. We now need to add #include "stdafx.h" into each file.
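A quick way to convince yourself that the shared header covers all three files is to exercise the union of those includes in one small standard-C++ program; the function below is illustrative and not part of any project discussed here:

```cpp
// The union of headers that files A, B and C need: exactly what the
// common stdafx.h would contain (plus <cassert> for checking the demo).
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Uses all three headers at once: a vector of strings, sorted.
std::string firstAlphabetically(std::vector<std::string> names) {
    std::sort(names.begin(), names.end());
    return names.front();
}
```

Compiled on its own, this translation unit needs nothing beyond the standard library, which is exactly the property a good common header should preserve.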
The “stdafx.h” header must be the very first one to be included into the *.c/*.cpp file. This is obligatory! Otherwise you are guaranteed to get compilation errors.
It really makes sense, if you think about it, as otherwise the entire mechanism of "precompiled headers" gets broken. We believe this to be one of the reasons why "stdafx.h" must be included in the first place. Perhaps there are some other reasons too.
Life hack.
However, there is an easier way to handle precompiled headers. This method is not a universal one, but it did help me in many cases.
Instead of manually adding #include "stdafx.h" into every file, you can use the Forced Include File option: open the project properties, go to C/C++ | Advanced | Forced Include File, and specify "stdafx.h" there. The compiler will then include it automatically at the beginning of every compiled file.
What to include into stdafx.h

Include only those headers that never change, or change very rarely. If a header changes once a month, that's too frequent. In most cases, it takes more than one try to settle on a good set of headers. Also include only the headers which you use really often. Including <set> won't make sense if you need it in just a couple of files. Instead, simply include this file where needed.
Several precompiled headers
Why would we need several precompiled headers in one project? Well, it's a pretty rare situation indeed. A couple of examples: a project that mixes C and C++ files, or parts of a project compiled with different sets of compiler switches. It should all be done very carefully, of course, but there's nothing especially difficult about it: create a separate header/source pair for each group of files and reference the right one in their compilation settings.

Typical errors when using precompiled headers

Fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add '#include "stdafx.h"' to your source?

Fatal error C1853: 'project.pch' precompiled header file is from a previous version of the compiler, or the precompiled header is C++ and you are using it from C (or vice versa)

You must have done something wrong. For example, the line #include "stdafx.h" is not the first one in the file.
Take a look at this example:
int A = 10;

#include "stdafx.h"

int _tmain(int argc, _TCHAR* argv[])
{
  return A;
}

This code will not compile: the compiler ignores everything located before the line #include "stdafx.h", so the variable A stays undeclared. The correct version is:
#include "stdafx.h" int A = 10; int _tmain(int argc, _TCHAR* argv[]) { return A; }
One more example:
#include "my.h" #include "stdafx.h"
The contents of the file "my.h" won't be used. As a result, you won't be able to use the functions declared in this file. Such behavior confuses programmers, and the recipe against it is simple: write #include "stdafx.h" at the very beginning of the file ALL THE TIME. Well, you can leave comments before #include "stdafx.h"; they don't take part in compilation anyway.
Another way is to use Forced Included File. See the section “Life hack” above.
The entire project keeps completely recompiling when using precompiled headers
You have added into stdafx.h a file that you keep regularly editing. Or you could have included an auto-generated file by mistake.
Closely examine the contents of the “stdafx.h” file: it must contain only headers that never or very rarely change. Keep in mind that while certain included files do not change themselves, they may contain references to other *.h files that do.
Something strange is going on

Very rarely, the precompiled header file may become corrupted and cause inexplicable compilation errors; you should be aware of it. Personally I have faced this issue only 2 or 3 times during the many years of my career. It can be solved by a full project rebuild.
By the way, you can download PVS-Studio and check your program for errors.
Conclusion
As you can see, working with precompiled headers is pretty easy. Programmers that try to use them and constantly face “compiler’s numerous bugs”, just don’t understand the working principles behind this mechanism. We hope this article has helped you to overcome that misunderstanding.
Precompiled headers are a very useful option, which allow you to significantly enhance project compilation speed.
11 thoughts on “StdAfx.h”
Thanks for this insight. I never really learned to use these, but I think I'll go play around with it now. One drawback I can see already, though. If it doesn't take as long to compile, there will be fewer sword fights.
Great article. I would also mention that precompiled headers can be auto-generated, like with this tool I wrote. Microsoft also released a tool some time ago to do that, but unfortunately it was retired from the Visual Studio gallery shortly after being published.
Thanks for the idea about precompiled headers – we’ll check out this tool.
I inherited a solution that has several stdafx.h in various projects, so depending on what project you're currently in, you reference a different stdafx.h. This drives me nuts because I want to import msxml by issuing the statement "#import msxml…" in the top project "stdafx.h", but it's not being recognized in the project that uses xml since it's referencing a totally different stdafx.h. Is there a way we can force all projects to use only one common stdafx.h?
In fact, all this ‘magic’ with the precompiled headers occurs at the level of translation units with the help of the combination of flags /Yu”path\to\stdafx.h” and /Fp”path\to\pch_file.pch”. You can generate the precompiled header in one of the projects in the solution using the combination of flags /Yc”path\to\stdafx.h” and /Fp”path\to\pch_file.pch” (but you should remember that this project must be compiled the first). Then reuse it in all other projects using flags /Yu”path\to\stdafx.h” and /Fp”path\to\pch_file.pch”.
In addition you can find more information in my reply for Mr.P.
Hope this will help you.
Thanks for your great explanation. Still it does not solve the problem I'm having currently. I was using precompiled headers for many years in VS2010 but had to upgrade to VS2017 (C++ 11, yeah!). Now my precompiled header gets deleted whenever I compile a project that is supposed to use it.
This is bothering me for some time now, Google does not yield any help at all so far. Do you have any ideas? =)
Best,
Mr.P
In fact, you need to check 2 combinations of flags on your translation units. When working, stdafx.cpp uses a combination of flags /Yc”path\to\stdafx.h” and /Fp”path\to\pch_file.pch”. Each translation unit that enables the precompiled header, must be compiled with combination of flags /Yu”path\to\stdafx.h” and /Fp”path\to\pch_file.pch”. Path in flags must match. Proceeding from this, try to do the following:
1) Set the flag/Yc”path\to\stdafx.h” to the “stdafx.cpp” file by one of the following ways:
-Right click on stdafx.cpp -> Properties -> C/C++ -> Command Line -> write it in the Additional Options box
-Right click stdafx.cpp -> Properties -> C/C++ -> Precompiled Headers -> set “Precompiled Header” in “Create (/Yc)”, write the path to “stdafx.h” in “Precompiled Header File”
2) The flag /Fp”path\to\pch_file.pch” is set. This will generate a precompiled header file. Set it by two ways:
-Right click on stdafx.cpp -> Properties -> C/C++ -> Command Line -> write it in the Additional Options box
-Either right click on stdafx.cpp -> Properties -> C/C++ -> Precompiled Headers -> in “Precompiled Header Output File” specify the path where to save the precompiled header
3) For each translation unit that connects the preprocessed file, set the flag /Yu”path\to\stdafx.h” and /Fp”path\to\pch_file.pch”. Do similarly to the steps 1 and 2.
Hope this will help you.
It is better to use $(ProjectDir)StdAfx.h;%(ForcedIncludeFiles)
This allows to compile file in other folder without problems.
Thank you so much for this! I both solved my issue and understand the use of pre-compiled headers 🙂
I have annoying situation with StdAfx.h
If my project are like this
Project/
StdAfx.h
StdAfx.cpp
run.cpp
…
folder/
second.cpp
If I add #include “StdAfx.h” in all cpp files. second.cpp says it can’t find StdAfx.h file and here are bunch of identifier xxx is undefined. But project compiles. If I change #include “../StdAfx.h” all classes and functions are recognized. But I can’t compile project.
fatal error C1010: unexpected end of file while looking for precompiled header. Did you forget to add ‘#include “stdafx.h”‘ to your source?
So, if I include right path to stdafx.h file, I can’t compile project. If I include wrong path to stdafx.h, project compiles, but I get bunch of identifier xxx is undefined. It is very, very annoying.
You can find information in our other article, namely in this section: | https://hownot2code.com/2016/08/16/stdafx-h/ | CC-MAIN-2021-39 | refinedweb | 1,643 | 68.87 |
To plot half or quarter polar plots in Matplotlib, we can take the following steps −
Set the figure size and adjust the padding between and around the subplots.
Create a new figure or activate an existing figure using figure() method.
Add an axes to the figure as part of a subplot arrangement.
For half or quarter polar plots, use set_thetamax() method.
To display the figure, use show() method.
from matplotlib import pyplot as plt plt.rcParams["figure.figsize"] = [7.50, 3.50] plt.rcParams["figure.autolayout"] = True fig = plt.figure() ax = fig.add_subplot(111, projection="polar") max_theta = 90 ax.set_thetamax(max_theta) plt.show() | https://www.tutorialspoint.com/how-to-plot-half-or-quarter-polar-plots-in-matplotlib | CC-MAIN-2022-21 | refinedweb | 103 | 53.07 |
(For more resources on this topic, see here.)
Understanding Vaadin
In order to understand Vaadin, we should first understand what is its goal regarding the development of web applications.
Vaadin's philosophy
Classical HTML over HTTP application frameworks are coupled to the inherent request/response nature of the HTTP protocol. This simple process translates as follows:
- The client makes a request to access an URL.
- The code located on the server parses request parameters (optional).
- The server writes the response stream accordingly.
- The response is sent to the client.
All major frameworks (and most minor ones, by the way) do not question this model: Struts, Spring MVC, Ruby on Rails, and others, completely adhere to this approach and are built upon this request/response way of looking at things. It is no mystery that HTML/HTTP application developers tend to comprehend applications through a page-flow filter.
On the contrary, traditional client-server application developers think in components and data binding because it is the most natural way for them to design applications (for example, a select-box of countries or a name text field).
A few recent web frameworks, such as JSF, tried to cross the bridge between components and page-flow, with limited success. The developer handles components, but they are displayed on a page, not a window, and he/she still has to manage the flow from one page to another.
The Play Framework takes a radical stance on the page-flow subject, stating that the Servlet API is a useless abstraction over the request/response model, and sticks even closer to it. Vaadin's philosophy is two-fold:
- It lets developers design applications through components and data bindings
- It isolates developers as much as possible from the request/response model in order to think in screens and not in windows
This philosophy lets developers design their applications the way it was before the web revolution. In fact, fat client developers can learn Vaadin in a few hours and start creating applications in no time.
The downside is that developers, who learned their craft with the thin client and have no prior experience of fat client development, will have a hard time understanding Vaadin as they are inclined to think in page-flow. However, they will be more productive in the long run.
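The component-and-event mindset described above can be stripped down to a few lines of plain Java. This is an illustration of the programming style only, not actual Vaadin code (Vaadin's own components live in the com.vaadin.ui package), and every name below is invented for the sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A server-side "button" in miniature: state plus click listeners,
// and not a line of HTML or JavaScript in sight.
class Button {
    private final String caption;
    private final List<Consumer<Button>> listeners = new ArrayList<>();

    Button(String caption) { this.caption = caption; }

    String getCaption() { return caption; }

    void addClickListener(Consumer<Button> listener) {
        listeners.add(listener);
    }

    // In a real Vaadin application the click arrives from the browser
    // over HTTP; here we simply fire it directly.
    void click() {
        for (Consumer<Button> l : listeners) {
            l.accept(this);
        }
    }
}
```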
Vaadin's architecture
In order to achieve its goal, Vaadin uses an original architecture. The first fact of interest is that it is comprised of both a server and a client side.
- The client side manages thin rendering and user interactions in the browser
- The server side handles events coming from the client and sends changes made to the user interface to the client
- Communication between both tiers is done over the HTTP protocol.
We will have a look at each of these tiers.
Client server communication
Messages in Vaadin use three layers: HTTP, JSON, and UIDL. The former two are completely un-related to the Vaadin framework and are supported by independent third parties; UIDL is internal.
HTTP protocol
Using the HTTP protocol with Vaadin has the following two main advantages:
- There is no need to install anything on the client, as browsers handle HTTP (and HTTPS for that matter) natively.
- Firewalls that let pass the HTTP traffic (a likely occurrence) will let Vaadin applications function normally.
JSON message format
Vaadin messages between the client and the server use JavaScript Objects Notation (JSON). JSON is an alternative to XML that has the following several differences:
- First of all, the JSON syntax is lighter than the XML syntax. XML has both a start and an end tag, whereas JSON uses a name coupled with an opening brace and a closing brace. For example, the following two code snippets convey the same information, but the first requires 78 characters and the second only 63:

<person>
  <firstName>John</firstName>
  <lastName>Doe</lastName>
</person>

{"person": {
  "firstName": "John",
  "lastName": "Doe"
}}

For a more in-depth comparison of JSON and XML, refer to the following URL:
The difference varies from message to message, but on average it is about 40%. It is a real asset only for big messages, and if you add server GZIP compression, the size difference starts to disappear. The reduced size is no disadvantage though.
- Finally, XML designers go to great lengths to differentiate between child tags and attributes, the former being more readable to humans and the latter to machines. JSON message design is much simpler, as JSON has no attributes.
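The size claim is easy to check mechanically. The little program below counts the characters of the two compact, whitespace-free representations of the person message; for this toy example the saving is about 30%, and it grows with messages whose tag names repeat more often:

```java
class MessageSize {

    // The same person, serialized compactly in both notations.
    static final String XML =
        "<person><firstName>John</firstName><lastName>Doe</lastName></person>";
    static final String JSON =
        "{\"person\":{\"firstName\":\"John\",\"lastName\":\"Doe\"}}";

    // Percentage of characters saved by the JSON form (integer arithmetic).
    static int savingPercent() {
        return 100 - (JSON.length() * 100 / XML.length());
    }

    public static void main(String[] args) {
        System.out.println(XML.length() + " vs " + JSON.length()
            + " characters: " + savingPercent() + "% saved");
    }
}
```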
UIDL "schema"
The last stack that is added to JSON and HTTP is the User Interface Definition Language (UIDL). UIDL describes complex user interfaces with JSON syntax.
The good news about these technologies is that Vaadin developers won't be exposed to them.
The client part
The client tier is a very important tier in web applications as it is the one with which the end user directly interacts.
In this endeavor, Vaadin uses the excellent Google Web Toolkit (GWT) framework. In the GWT development, there are the following mandatory steps:
- The code is developed in Java.
- Then, the GWT compiler transforms the Java code in JavaScript.
- Finally, the generated JavaScript is bundled with the default HTML and CSS files, which can be modified as a web application.
Although novel and unique, this approach provides interesting key features that catch the interest of end users, developers, and system administrators alike:
- Disconnected capability, in conjunction with HTML 5 client-side data stores
- Displaying applications on small form factors, such as those of handheld devices
- Development only with the Java language
- Excellent scalability, as most of the code is executed on the client side, thus freeing the server side from additional computation
On the other hand, there is no such thing as a free lunch! There are definitely disadvantages in using GWT, such as the following:
- The whole coding/compilation/deployment process adds a degree of complexity to the standard Java web application development.
- Although a Google GWT plugin is available for Eclipse and NetBeans, IDEs do not provide standard GWT development support. Using GWT development mode directly or through one such plugin is really necessary, because without it, developing is much slower and debugging almost impossible.
For more information about GWT dev mode, please refer to the following URL:
- There is a consensus in the community that GWT has a higher learning curve than most classic web application frameworks; although the same can be said for others, such as JSF.
- If the custom JavaScript is necessary, then you have to bind it in Java with the help of a stack named JavaScript Native Interface (JSNI), which is both counter-intuitive and complex.
- With pure GWT, developers have to write the server-side code themselves (if there is any).
- Finally, if ever everything is done on the client side, it poses a great security risk. Even with obfuscated code, the business logic is still completely open for inspection from hackers.
Vaadin uses GWT features extensively and tries to downplay its disadvantages as much as possible. This is all possible because of the Vaadin server part.
The server part
Vaadin's server-side code plays a crucial role in the framework.
The biggest difference in Vaadin compared to GWT is that developers do not code the client side, but instead code the server side that generates the former. In particular, in GWT applications, the browser loads static resources (the HTML and associated JavaScript), whereas in Vaadin, the browser accesses the servlet that serves those same resources from a JAR (or the WEB-INF folder).
The good thing is that it completely shields the developer from the client-code, so he/she cannot make unwanted changes. It may be also seen as a disadvantage, as it makes the developer unable to change the generated JavaScript before deployment.
It is possible to add custom JavaScript, although it is rarely necessary.
In Vaadin, you code only the server part!
There are two important tradeoffs that Vaadin makes in order achieve this:
- As opposed to GWT, the user interface related code runs on the server, meaning Vaadin applications are not as scalable as pure GWT ones. This should not be a problem in most applications, but if it is, you should probably reserve Vaadin for some less intensive part of the application, stick to pure GWT, or switch to an entirely different technology.
While Vaadin applications are not as scalable as applications architecture around a pure JavaScript frontend and a SOA backend, a study found that a single Amazon EC2 instance could handle more than 10,000 concurrent users per minute, which is much more than your average application. The complete results can be found at the following URL:
- Second, each user interaction creates an event from the browser to the server. This can lead to changes in the user interface's model in memory and in turn, propagate modifications to the JavaScript UI on the client. The consequence is that Vaadin applications simply cannot run while being disconnected from the server! If your requirements include the offline mode, then forget Vaadin.
Terminal and adapter
As in any low-coupled architecture, not all Vaadin framework server classes converse with the client side. In fact, this is the responsibility of one simple interface:
com.vaadin.terminal.Terminal.
In turn, this interface is used by a part of the framework aptly named as the Terminal Adapter, for it is designed around the Gang of Four Adapter () pattern.
This design allows for the client and server code to be completely independent of each other, so that one can be changed without changing the other. Another benefit of the Terminal Adapter is that you could have, for example, other implementations for things such as Swing applications. Yet, the only terminal implementation provided by the current Vaadin implementation is the web browser, namely com.vaadin.terminal.gwt.server.WebBrowser.
However, this does not mean that it will always be the case in the future. If you are interested, then browse the Vaadin add-ons directory regularly to check for other implementations, or as an alternative, create your own!
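Reduced to its pattern skeleton, the arrangement looks roughly like the sketch below. Everything except the two type names Terminal and WebBrowser is illustrative: the real interface carries more methods than shown, the method bodies are invented, and the hypothetical Swing terminal only exists to show where an alternative implementation would plug in:

```java
// Pattern skeleton of the Terminal abstraction; bodies are illustrative.
interface Terminal {
    // One of the queries a terminal answers for the framework.
    String getDefaultTheme();
}

// The only terminal shipped with the current Vaadin: the web browser.
class WebBrowser implements Terminal {
    @Override
    public String getDefaultTheme() {
        return "reindeer"; // assumed theme name, for illustration only
    }
}

// A hypothetical Swing terminal would plug in the same way.
class SwingTerminal implements Terminal {
    @Override
    public String getDefaultTheme() {
        return "system";
    }
}
```

Because the rest of the framework talks only to the interface, swapping implementations does not ripple through the server-side code.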
Client server synchronization
The biggest challenge when representing the same model on two heterogeneous tiers is synchronization between each tier. An update on one tier should be reflected on the other or at least fail gracefully if this synchronization is not possible (an unlikely occurrence considering the modern day infrastructure).
Vaadin's answer to this problem is a synchronization key generated by the server and passed on to the client on each request. The next request should send it back to the server or else the latter will restart the current session's application.
This may be the cause of the infamous and sometimes frustrating "Out of Sync" error, so keep that in mind.
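The mechanism can be sketched in a few lines of plain Java. The class below illustrates the idea only; its name and structure are invented for the sketch and are not Vaadin's internal API:

```java
import java.util.UUID;

// Illustrative sketch of the per-request synchronization key.
class SyncGuard {
    private String expectedKey = UUID.randomUUID().toString();

    // The key embedded in the response; the next request must echo it.
    String currentKey() {
        return expectedKey;
    }

    // Validates the key the client sent back. On a mismatch the server
    // restarts the session's application: the "Out of Sync" case.
    boolean validate(String clientKey) {
        if (!expectedKey.equals(clientKey)) {
            return false; // application would be restarted here
        }
        expectedKey = UUID.randomUUID().toString(); // fresh key per request
        return true;
    }
}
```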
Deploying a Vaadin application
Now, we will see how we can put what we have learned to good use.
Vaadin applications are primarily web applications and they follow all specifications of Web Archive artifacts (WARs). As such, there is nothing special about deploying Vaadin web applications. Readers who are familiar with the WAR deployment process will feel right at home!
WAR deployment is dependent on the specific application server (or servlet/JSP container).
Inside the IDE
Creating an IDE-managed server
Although it is possible to export our project as a WAR file and deploy it on the available servlet container, the best choice is to use a server managed by the IDE. It will let us transparently debug our Vaadin application code.
The steps are very similar to what we did with the mock servlet container.
Selecting the tab
First of all, if the Server tab is not visible, then go to the menu Window | Open perspective | Other... and later choose Java EE.
Creating a server
In order to be as simple as possible, we will use Tomcat. Tomcat is a servlet container, as opposed to a full-fledged application server, and only implements the servlet specifications, not the full Java EE stack. However, what it does, it does it so well that Tomcat was the servlet API reference implementation.
Right-click on the Server tab and select New | Server. Open Apache and select Tomcat 6.0 Server. Keep both Server's host name and Server name values and click on Next.
For running Vaadin applications, Tomcat 6.0 is more than enough as compared to Tomcat 7.x. In fact, it can be downloaded and installed with just a push button. If you want to use Tomcat 7.x, then the process is similar, but you will have to download it separately out of Eclipse. Download it from the following URL:
However, beware that the first stable version of the 7.x branch is 7.0.6.
Now, the following two options are possible:
- If you don't have Tomcat 6 installed, then click on Download and install. Accept the license agreement and then select the directory where you want to install it as shown in the following screenshot:
- If you already do, just point to its root location in the Tomcat installation directory field.
By default, you should see a warning message telling you that Tomcat needs a Java Development Kit (JDK) and not a Java Runtime Environment (JRE).
A JRE is a subset of the JDK as it lacks the javac compiler tool (along with some other tools such as javap, a decompiler tool). As JSPs are compiled into servlets at runtime and most regular web applications make heavy use of them, it is a standard to choose a JDK to run Tomcat.
The good thing about Vaadin is that it does not use JSP, so we can simply ignore the warning.
Click on the Finish button.
Verifying the installation
At the end of the wizard, there should be a new Tomcat 6.0 server visible under the Servers tab, as shown in the following screenshot. Of course, if you chose another version or another server altogether, that will be the version or server displayed.
Adding the application
As Vaadin applications are web applications, there is no special deployment process.
Right-click on the newly created server and click on the Add and Remove menu entry. A pop-up window opens. On the left side, there is the list of available web application projects that are valid candidates to be deployed on your newly created server. On the right side, there is the list of currently deployed web applications.
Select MyFirstVaadinApp project and click on the Add button. Then, click on Finish.
The application should now be visible under the server.
Launching the server
Select the server and right-click on it. Select the Debug menu entry. Alternatively, you can:
- Click on the Debug button (the one with the little bug) on the Server tab header
- Press Ctrl + Alt + d
Each IDE has its own menus, buttons, and shortcuts. Know them and you will enjoy a huge boost in productivity.
The Console tab should display a log similar to the following:
12 janv. 2011 21:14:36 org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal
performance in production environments was not found on the java.library.path:
...
12 janv. 2011 21:14:36 org.apache.tomcat.util.digester.SetPropertiesRule begin
ATTENTION: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting
property 'source' to 'org.eclipse.jst.jee.server:MyFirstVaadinApp' did not find a
matching property.
12 janv. 2011 21:14:36 org.apache.coyote.http11.Http11Protocol init
INFO: Initialisation de Coyote HTTP/1.1 sur http-8080
12 janv. 2011 21:14:36 org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 484 ms
12 janv. 2011 21:14:36 org.apache.catalina.core.StandardService start
INFO: Démarrage du service Catalina
12 janv. 2011 21:14:36 org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.26
12 janv. 2011 21:14:37 org.apache.coyote.http11.Http11Protocol start
INFO: Démarrage de Coyote HTTP/1.1 sur http-8080
12 janv. 2011 21:14:37 org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
12 janv. 2011 21:14:37 org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/21 config=null
12 janv. 2011 21:14:37 org.apache.catalina.startup.Catalina start
INFO: Server startup in 497 ms
This means Tomcat started normally.
Outside the IDE
In order to deploy the application outside the IDE, we should first have a deployment unit.
Creating the WAR
For a servlet container, such as Tomcat, the deployment unit is a Web Archive, better known as a WAR.
Right-click on the project and select the Export menu | WAR file. In the opening pop up, just update the location of the exported file: choose the webapps directory where we installed Tomcat and name it myfirstvaadinapp.war.
Launching the server
Open a prompt command. Change the directory to the bin subdirectory of the location where we installed Tomcat and run the startup script.
Troubleshooting
If you have installed Tomcat for the first time, then chances are that the following message will be displayed:
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variables is needed to run this program
In this case, set the JAVA_HOME variable to the directory where Java is installed on your system (and not its bin subdirectory!).
The log produced should be very similar to the one displayed by running Tomcat inside the IDE (as shown in preceding section), apart from the fact that Apache Portable Runtime will be available on the classpath and that does not change a thing from the Vaadin point of view.
Using Vaadin applications
Vaadin being a web framework, its output can be displayed inside a browser.
Browsing Vaadin
Whatever way you choose to run our previously created Vaadin project, in order to use it, we just have to open our favorite browser and navigate to http://localhost:8080/myfirstvaadinapp/vaadin. Two things should happen:
- First, a simple page should be displayed with the message Hello Vaadin user
- Second, the log should output that Vaadin has started
==============================================================
Vaadin is running in DEBUG MODE.
Add productionMode=true to web.xml to disable debug features.
To show debug window, add ?debug to your application URL.
==============================================================
Troubleshooting
In case nothing shows up on the browser screen and after some initial delay an error pop up opens with the following message:
Failed to load the widgetset: /myfirstvaadinapp/VAADIN/widgetsets/com.vaadin.terminal.gwt.DefaultWidgetSet/com.vaadin.terminal.gwt.DefaultWidgetSet.nocache.js?1295099659815
Be sure to add the /VAADIN/* mapping to the web.xml in the section Declare the servlet mapping and redeploy the application.
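For reference, the relevant entries typically follow the sketch below. The servlet's logical name here is the one the project wizard generated ("My First Vaadin Application"), and the second mapping is the one that serves the static widgetset and theme resources when the servlet itself is not mapped to /*; adjust both to your own web.xml:

```xml
<servlet-mapping>
  <servlet-name>My First Vaadin Application</servlet-name>
  <url-pattern>/vaadin/*</url-pattern>
</servlet-mapping>
<!-- Serves static resources (widgetsets, themes) under /VAADIN/* -->
<servlet-mapping>
  <servlet-name>My First Vaadin Application</servlet-name>
  <url-pattern>/VAADIN/*</url-pattern>
</servlet-mapping>
```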
Out-of-the-box helpers
Before going further, there are two things of interest to know, which are precious when developing Vaadin web applications.
The debug mode
Component layout in real-world business-use cases can be a complex thing to say the least. In particular, requirements about fixed and relative positioning are a real nightmare when one goes beyond Hello world applications, as they induce nested layouts and component combinations at some point.
Given the generated code approach, when the code does not produce exactly what is expected on the client side, it may be very difficult to analyze the cause of the problem. Luckily, Vaadin designers have been confronted with them early on and are well aware of this problem.
As such, Vaadin provides an interesting built-in debugging feature: if you are ever faced with such a display problem, just append ?debug to your application URL. This will instantly display a neat window that gives a simplified tree view of your components/layouts with additional information such as the component class, the internal Vaadin id, the caption, and the width.
Just be aware that this window is not native (it is just an artifact created with the client-side JavaScript). It can be moved with its title bar, which is invaluable if you need to have a look at what is underneath it. Likewise, it can be resized by pressing the Shift key while the cursor is over the debug window.
Although it considerably decreases the debugging time during the development phase, such a feature has no added value when in production. It can even be seen as a security risk as the debug windows displays information about the internal state of Vaadin's component tree.
Vaadin provides the means to disable this feature. In order to do so, just add the following snippet to your WEB-INF/web.xml:
<context-param> <description>Vaadin production mode</description> <param-name>productionMode</param-name> <param-value>true</param-value> </context-param>
Now, if you try the debug trick, nothing will happen.
Production mode is NOT default
As such, it is a good idea to always set the productionMode context parameter from the start of the project, even if you set it to false. Your build process would then set it to true for release versions. This is much better than forgetting it altogether and having to redeploy the webapp when it becomes apparent.
Restart the application, not the server
We have seen in the Vaadin's architecture section that Vaadin's user interface model is sent to the client through UIDL/JSON messages over HTTP. The whole load is sent at the first request/response sequence, when the Application instance is initialized; additional sequences send only DOM updates.
Yet, important changes to the component tree happen often during the development process. As Vaadin stores the UI state in memory on the server side, refreshing the browser does not display such changes.
Of course, restarting the server discards all the states in memory and remedies the problem, but this operation is not free for heavyweight application servers. Although recent releases of application servers emphasize better startup time, it is a time waste; even more so if you need to restart 10 or 15 times per hour, which is not an unlikely frequency at the start of a new application development.
As for the debugging feature, Vaadin provides the means to reload the computed UI through an URL parameter: change your server-side code, wait for the changes to take effect on the server, just append ?restartApplication, and watch the magic happen. Alternatively, if you are already in the debug mode, there is a button labeled Restart app that has the same effect.
Increase performance
You should remove the restartApplication URL parameter as soon as it is not needed anymore. Otherwise, you will re-run the whole initialization/send UI process each time your refresh the browser, which is not welcome.
(For more resources on this topic, see here.)
Behind the surface
Wow, in just a few steps, we created a brand new application! Granted, it does not do much, to say the least. Yet, those simple actions are fundamental to the comprehension of more advanced concepts. So, let's catch our breath and see what really happened under the hood.
Stream redirection to Vaadin servlet
The URL can be decomposed in the following three parts, each part being handled by a more specific part:
- is the concatenation of the protocol, the domain, and the port. This URL is handled by the Tomcat server we installed and started previously, whether inside the IDE or normally.
- /myfirstvaadinapp is the context root and references the project we created before. Thus, Tomcat redirects the request to be handled by the webapp.
- In turn, /vaadin is the servlet mapping the Vaadin plugin added to the web deployment descriptor when the project was created. The servlet mapping uses the Vaadin servlet, which is known under the logical name My First Vaadin Application. The latter references the com.vaadin.terminal.gwt. server.ApplicationServlet class.
Vaadin request handling
As you can see, there is nothing magical in the whole process: the URL we browsed was translated as a request that is being handled by the ApplicationServlet.service() method, just like any Java EE compliant servlet would do.
To be exact, the service() method is not coded in ApplicationServlet directly, but in its super class, com.vaadin.terminal.gwt.server.AbstractApplicationServlet.
Vaadin's servlet directly overrides service() instead of the whole group of doXXX() methods (such as doGet() and doPost()). This means that Vaadin is agnostic regarding the HTTP method you use. Purists and REST programmers will probably be horrified at this mere thought, but please remember that we are not manipulating HTTP verbs in request/response sequences and instead using an application.
The following are the rough steps on how the Vaadin servlet services the request response model:
- Finds out which application instance this request is related to; this means either create or locate the instance. It delegates to the effective implementation of the Vaadin servlet. For example, when using Spring or CDI, the code will locate the Spring/CDI bean.
- If the application is not running, Vaadin launches it. For detailed explanations on the application concept, see the next section.
- Locates the current window and delegates it to the request handling.
- If the need be:
- Stops the application
- Or sends initial HTML/JS/CSS to the client that will interact with the server
The initial load time
Be wary of this last step when creating your own applications: an initial screen that is too big in size will generate an important latency followed by a strange update of your client screen. This is generally not wanted:either try to decrease your initial page complexity or use a change manager that will mitigate the user's feelings about it.
What does an application do?
In Vaadin, an application represents the sum of all components organized in windows, layouts, and having a theme applied. The central class representing an application is the com.vaadin.Application class.
Application responsibilities
Application responsibilities include the following:
- Managing windows: adding and removing them for the windows registry. Windows are first-level container components, and as such, are of utmost importance to all Vaadin applications.
- Callbacks during the lifecycle of the application: Two such hooks are possible: before starting and after stopping. For example, the following initialization actions are possible:
- Making a connection to a database
- Reading properties file
These callbacks can replace the Java EE standard —javax.servlet.Servlet-ContextListener— which does the same, in a more Vaadin-oriented way.
- Setting themes: Vaadin, being an abstraction layer over HTML/JS/CSS, lets you manage your CSS as a single bundle named a theme. As such, you can change the whole look and feel of your applications by a single server-side command.
Two themes are provided out-of-the-box by Vaadin (reindeer and runo). You can also tweak them and reference them under a new theme name or create entirely new themes from scratch. Readers interested into going further on this road can find documentation at the following link:
Application configuration
In our first project, having a look at the web deployment descriptor, notice there is an application servlet parameter configured for the Vaadin servlet:
<servlet> <servlet-name>VaadinApplication</servlet-name> <servlet-class> com.vaadin.terminal.gwt.server.ApplicationServlet </servlet-class> <init-param> <param-name>application</param-name> <param-value> com.packt.learnvaadin.MyApplication </param-value> </init-param> </servlet>
As such, there can be only a single Vaadin application configured for each Vaadin servlet.
Application and session
The most important fact about the Application class is that one instance of it is created the first time a user session requests the Vaadin servlet; this instance is stored in the HttpSession related to the session from then on.
In reality, the Application instances are not stored directly in HttpSession, but within a com.vaadin.terminal.gwt.server.WebApplicationContext instance that is stored in the session. There is a 1-n relationship from WebApplicationContext to Application meaning there is a possibility that more than one Application could relate to the same session. You should keep in mind that each session stores one and only one application object for each configured Vaadin servlet.
Vaadin's object model encompasses Application, Window, and AbstractComponent as shown in the following diagram:
Out of memory
Storing the UI state in the session has a major consequence. Great care must be taken in evaluating the number of users and the average load of each user session because the session is more loaded than in traditional Java EE web applications, thus greater is the risk of java.lang.OutOfMemoryError.
Scratching the surface
Having said all that, it is time to have a look at the both the source code that was created by the Vaadin plugin and the code that it generated and pushed on the client.
The source code
The source code was taken care of by Vaadin plugin:
import com.vaadin.Application; import com.vaadin.ui.*; public class HelloWorldApp extends Application public void init() { Window mainWindow = new Window("Hello World Application"); Label label = new Label("Greetings, Vaadin user!"); mainWindow.addComponent(label); setMainWindow(mainWindow); }
Though having no prior experience in Vaadin and only armed with some basic concepts, we can guess what the class does. That is the strength of Vaadin, compared to competitor frameworks, it is self-evident!
- At line 1 of the init() method, we create a new window with a title. Windows are first-class components in Vaadin as they can be the top-most elements.
- At line 2, we create a new label. Labels are used to display static messages. They are often found in web forms as description for fields. Our label has a specific text. Notice it is displayed in the final screen.
- At line 3, we add the label to the window. Even when you have no prior experience with component-based development (whether thin or fat client based), it is clear that the label will be displayed in the window.
- Finally, at line 4, we set the window we created as the main window of the screen, displaying it as a root component. We can check that the window's very title takes place in the HTML <title> element. With most browsers, it is also shown in the browser window title.
The generated code
In your favorite browser, right-clicking and selecting the menu that shows the source will only display JavaScript—gibberish to the inexperienced eye.
In fact, as the UI is generated with GWT, we don't see anything interesting in the HTML source—only the referenced JavaScript and a single
<noscript> tag that handles the case where our browser is not JavaScript-enabled (an unlikely occurrence in our time, to say the least).
There is a consensus on the Web that AJAX-powered web applications should degrade gracefully, meaning that if the user deactivates JavaScript, applications should still run, albeit with less user-friendliness. Although a very good practice, most of the time, JavaScript applications will not run at all in this case. GWT and thus Vaadin are no exceptions in this matter.
Of much more interest is the generated HTML/JS/CSS. In order to display it, we will need Google Chrome, Firefox with the Firebug plugin, or an equivalent feature in another browser.
More precisely, locate the following snippet:
<div class="v-app v-theme-reindeer v-app-HelloWorldApp" id="myfirstvaadinappvaadin-627683907"> <div class="v-view" tabindex="1" style=""> <div style="position: absolute; display: none;" class="v-loading-indicator"></div> <div style="overflow: hidden; width: 1680px; height: 54px;" class="v-verticallayout"> <div style="overflow: hidden; margin: 18px; width: 1644px; height: 18px;"> <div style="height: 18px; width: 1644px; overflow: hidden; padding-left: 0px; padding-top: 0px;"> <div style="float: left; margin-left: 0px;"> <div class="v-label" style="width: 1644px;"> Greetings, Vaadin user!</div> </div> </div> <div style="width: 0px; height: 0px; clear: both; overflow: hidden;"></div> </div> </div> </div> </div>
Things of interest
First of all, notice that although only a simple message is displayed on the user screen, Vaadin has created an entire DOM tree filled with
- The class v-view denotes a window
- The class v-label indicates a label
- The class v-verticallayout represents a vertical layout
Although we didn't code a vertical layout per se, standard components cannot be the first-level children of windows. By default, the Window class uses a vertical layout and components added to the window are in fact added to the latter.
Moreover, the vertical layout is not the only child div of the view: there is another one that has v-loading-indicator as a class. You probably did not notice as the UI is very simple, but if you refresh the browser window, there should be a circular loading indicator displayed at the left of the page just before the UI finishes loading:
Be aware that it gives no indication on the progress whatsoever, but it at least lets users know that the application is not ready to be used. Such indicators are added, for free, each time a window instance is set as the main window of an application.
Summary
We saw a few things of importance in this article.
First, we had an overview of the Vaadin's philosophy. Vaadin creates an abstraction over the classic request/response sequence in order for developers to think in "applications" and no more in "pages".
In order to do that, the Vaadin architecture has three main components:
- The client side: that is JavaScript upon the Google Web Toolkit.
- The server side that generates the client code. One concept of note on the server side is the terminal one: the terminal is in charge of abstracting over the client side. Should the need arise; we could create an abstraction that is not web-oriented.
- Communications between the client and the server are implemented with JSON/UIDL messages over the HTTP protocol.
There is nothing special with Vaadin applications, they are simple web archives and are deployed as such. Two tools bundled with Vaadin will prove useful during the development:
- The debug window that is very convenient when debugging display and layout-related bugs
- How to restart the framework without necessarily restarting the application server
Finally, we somewhat scratched the surface of how it all works and most notably:
- The handling of an HTTP request by Vaadin
- The notion of application in the framework
- The code, both the source generated by the plugin and the HTML structure generated by the former
This article concludes the introduction to Vaadin. It is a big step on the path to learning Vaadin.
- Vaadin Portlets in Liferay User Interface Development [Article]
- Spring Roo 1.1: Working with Roo-generated Web Applications [Article]
- Getting Started with Ext GWT [Article] | https://www.packtpub.com/books/content/creating-basic-vaadin-project | CC-MAIN-2015-35 | refinedweb | 5,785 | 52.49 |
Hi all,
I just encountered the same problem. Fortunately there's an easy solution:
Enter the base configuration in the normal way (no enhancement, no customising).
When adding your own adaptations, you just need to specify a namespace e.g. 'Z'. Then you should not be asked for an object key.
Roger
Also see SAP Note 1899263
Hi Gautham,
Thanks for your information.
Regards,
M.Ramana Murthy.
Hi All,
I was also facing same problem while adding a new UIBB in the least and every time i got a pop screen for providing the Object Access Key.But i got another solution to come out of this problem,for that You can enhance the your existing UI and add your own UIBB in the list.
Thanks and Regards,
T.Biren Kumar Patro
Add comment | https://answers.sap.com/questions/9540675/index.html | CC-MAIN-2019-18 | refinedweb | 134 | 73.98 |
MBSINIT(3) BSD Programmer's Manual MBSINIT(3)
mbsinit - determines whether the state object is in initial state
#include <wchar.h> int mbsinit(const mbstate_t *ps);
mbsinit() determines whether the state object pointed to by ps is in ini- tial conversion state, or not. ps may be a NULL pointer. In this case, mbsinit() will always return non-zero.
mbsinit() returns: 0 The current state is not in initial state. non-zero The current state is in initial state or ps is a NULL pointer.
No errors are defined.
The mbsinit() conforms to ISO/IEC 9899/AMD1:1995 ("ISO C90, Amendment. | http://mirbsd.mirsolutions.de/htman/sparc/man3/mbsinit.htm | crawl-003 | refinedweb | 101 | 68.36 |
China
has announced plans to cut the amount of land used for corn cultivation by 1.3
million ha over the next two years, a move that could have far-reaching effects
for global grain markets, according to analysts CCM.
Though
the full details of the new policy - named the Structure Adjustment Plan on Corn Planting Area in the ‘Sickle-Shaped
Region’ (2016-2020) - have not yet been published, a statement released by
China’s Ministry of Agriculture (MOA) on September 2 confirmed that the policy
will include targets to reduce the total area of land used for corn cultivation
by 666,666 ha by 2016 and 1.3 million ha by 2017.
A
large amount of land will also be converted from producing grain corn to
growing silage corn.
“The
new policy is aimed at dealing with high corn inventory levels and soil
eutrophication”, said Yu Xinrong, China’s Deputy Minister of Agriculture.
A
full policy document will be published ‘soon’, according to the statement.
Corn cultivation in China, 2014
Source: CCM
Cutting corn
The
target to reduce the amount of land being used for corn production in Northeast
China - often referred to as the ‘sickle-shaped region’ due to the region’s
appearance on maps of China - by 1.3 million ha over the next two years is
intended to deal with the side effects created by China’s ‘temporary purchase
and storage policy’ for grain, in which the government agrees to purchase a set
amount of corn from farmers each year at a fixed price even if the market price
drops below that figure.
The
policy, which has been in place since 2008, has gone far beyond its original
aim of shoring up China’s food security and has led to the country’s corn
output ballooning each year as farmers cash in on the generous price offered by
the government.
By
2014, China’s corn output had risen over 30% from 2008 levels and the total
area of land used for growing corn was 21% higher than in 2008, according to
data from China’s National Bureau of Statistics.
This
has created a number of problems, as Mai Xiaoying, editor of Corn Products China News, explains:
“The
government is finding it impossible to sell this extra corn, partly because
output has risen quickly, and partly because the ‘purchase and storage policy’
has kept corn prices artificially high,” said Mai.
“In
2014/15, the government purchased 83 million tonnes of corn, but between May
2014 and August 2015 it only managed to sell around 30 million tonnes. As a
result, the country’s corn inventory keeps growing. China is now storing over
110 million tonnes of corn, which is completely unsustainable.”
The
removal of 666,666 ha of land from corn cultivation next year will reduce corn
output by around 5 million tonnes, which should reduce the pressure on storage
facilities somewhat.
From grain corn to
silage corn
Another
key goal outlined in MOA’s statement is to increase agricultural efficiency by
cutting down the amount of land used for producing grain corn - corn that can
be used for food production - and convert this land to producing silage corn -
corn used only to produce animal feed.
Only
4.3% of the total corn produced in China last year was silage corn, according
to MOA, but about 60% of Chinese corn is used to produce feed. As a result, a
huge amount of grain corn, which is more expensive and produces lower yields
than silage corn, is needlessly being used as raw material for feed.
“Silage
corn enjoys high yields and is easy to grow, compared to edible corn. Thus,
this move will help reduce production costs for feed manufacturers by better
utilizing the land, increasing the corn yield and changing the supply patterns
of raw materials for feed in China”, stated Mai.
This
is also likely to have a knock-on effect on global grain markets as corn prices
in China become more affordable for feed manufacturers.
In
recent years, high domestic corn prices have forced Chinese feed manufacturers,
many of whom are unable to import corn cheaply due to restrictive import quotas
that favor state-owned companies, to ramp up imports of substitute grains such
as DDGS, barley and sorghum.
Sorghum
imports, for example, have skyrocketed from just 84,000 tonnes in 2011 to a
forecasted 10 million tonnes in 2015, encouraging growers in the US and
Australia to increase production significantly.
However,
Mai expects the Structure Adjustment Plan
to make imported grain less attractive to China’s feed industry:
“The
corn supply will not be influenced [by the Structure
Adjustment Plan] because both the reduced volume and the planting area for
silage corn are small. However, the corn price will decline because of the
reduced production costs.
“The
price performance of substitutes for corn will also fall. The growth of the
import volume of substitutes will probably slow significantly,” Mai added.
However,
some industry insiders have questioned whether the Structure Adjustment Plan will be effective due to the lack of
adequate incentives for farmers to switch from growing grain corn.
“The
adjustment will encounter major challenges if China only relies on policy
guidance or regulations without implementing financial incentives”, said Jing
Qin, spokesperson for Inner Mongolia Jinmu Prataculture Co. Ltd. “The
government should increase subsidies for planting forage grass, coarse cereals
and soybeans, or reduce the purchase and storage price of grain corn. These
incentives are necessary to encourage farmers to alter their planting
structures.”
CCM
will publish a more detailed analysis of the likely effect of the Structural Adjustment Plan once the full
policy document is published in Corn
Products China News, our newsletter providing breaking news, market
data and expert commentary on China’s corn products market.
More information on
China’s corn market:
Market Data - regular
updates on the price, production, consumption, manufacturing costs and leading
producers of corn products in China. Find
out more...
Trade Analysis -
detailed data on the import/export of corn products in China, including
detailed data on Chinese suppliers and traders and what prices they are
offering. Find
out more... | http://eshare.cnchemicals.com/publishing/home/2015/09/21/2008/china-announces-major-shift-in-grain-policy-in-bid-to-reduce-huge-corn-stockpiles.html | CC-MAIN-2017-43 | refinedweb | 1,023 | 52.63 |
If you have not already heard .Ionic 3 is already released which comes with Angular 4 support and many features .So in this post I'll show you quickly how you can easily upgrade to Ionic 3 and Angular 4 .
Don't worry ,Ionic 3 is not a completely new rewritten framework as it was the case with Ionic 2 .The reason behind this version of Ionic 3 is the compatibility with Angular 4 which introduces new features ,newest version of TypeScript but most importantly ,Angular 4 produces faster and smaller apps .You can read more about this on Angular 4 official blog
Ionic 3 has also introduced some changes in project structure but they are optional .
Ionic 3 has a lot of great features :
- The lazy loading of modules .
- The support of async/await .
- IonicPage decorator for setting up deep links ,etc.
How to upgrade to Ionic 3 ?
Upgrading your project to Ionic 3 is actually very easy .All you need to do is updating your package.json and execute npm install
So go ahead and open package.json of your project and copy this
"dependencies": { "@angular/common": "4.0.0", "@angular/compiler": "4.0.0", "@angular/compiler-cli": "4.0.0", "@angular/core": "4.0.0", "@angular/forms": "4.0.0", "@angular/http": "4.0.0", "@angular/platform-browser": "4.0.0", "@angular/platform-browser-dynamic": "4.0.0", "@ionic-native/core": "3.4.2", "@ionic-native/splash-screen": "3.4.2", "" }
Remove the old node_modules then execute
npm install
Wait until the new dependencies are installed and then continue with these changes which are mainly related to Angular 4.
Open app/app.module.ts then import the BrowserModule from @angular/platform-browser
import { BrowserModule } from '@angular/platform-browser';
Next you need to add it to the imports array
imports: [ BrowserModule, /* ...*/ ],
In case your are using the HTTP module in your application ,you need to import the HttpModule in app/app.module.ts: and add it to imports array
imports: [ BrowserModule, HttpModule, IonicModule.forRoot(MyApp) ],
Don't forget to also install Ionic native via npm with
npm install --save @ionic-native/core
Now you need to check you have Ionic 3 successfully installed so go ahead inside your project folder and run
ionic info Your system information: Cordova CLI: 6.5.0 Ionic Framework Version: 3.0.1 Ionic CLI Version: 2.2.2 Ionic App Lib Version: 2.2.1 Ionic App Scripts Version: 1.3.0 ios-deploy version: Not installed ios-sim version: Not installed OS: Linux 4.2 Node Version: v7.9.0 Xcode version: Not installed
Upgrading to Ionic 3.0.1
Ionic 3.0.1 is released and has 3 minor updates .To upgrade your project to this version ,follow these steps
First make sure you have upgraded your project to Ionic 3.0.0 then open package.json and simply change ionic-angular from 3.0.0 to 3.0.1
"ionic-angular" : "3.0.1"
Then open up your terminal and enter
npm install
You can use ionic info to check if Ionic framework is upgraded to the latest version successfully .
Conclusion
So that is all you need to upgrade your project to use Ionic 3 instead of Ionic 2 .See your on the next<< | https://www.techiediaries.com/ionic-3-angular-4-upgrade-guide/ | CC-MAIN-2017-43 | refinedweb | 545 | 61.12 |
Hi again !I've tried that :in function "mympu_open", with MPU_DEBUG activated, I have suppressed the line"if (ret) return 10+ret;"Then, the program ouputs that :"Firmware compare failed addr 00 0 112 0 0 0 0 36 0 0 33 55 0 118 7 0 1 4 40 1 "Probably an interesting information... but difficult to understand for me !Regards,Sylvain
#define MPU9250
Hi Gregory,Great library, that is what I was looking for. Got it up and running on Nano + GY-521. And even connected it to famous teapot demo. I have a few questions.1. Why you produced your own version mpu.cpp? Can I use inv_mpu.cpp?2.What should I do to change FSR from 2000 to 250 ? (as far as I understand it is much more than just simple #define FSR 250 in mpu.cpp)
Hi Greg,I have debugged this issue on the 9250 a bit and it seems to be related to the programming the DMP firmware to the 9250 board.I have logged the bytes written and bytes read back and when the issue happens it seems to be when the bytes read back does not match the bytes written. Furthermore, it seems that the bytes read back is always the same as the previous written buffer.It seems the something in either the i2c state machine or the memory bank/write machine on the mpu gets stuck.I have also added some retry mechanism to try to reprogram the bank if at first it failed. It improved a bit but it still not very stable.[...]
Hi,I'm actualy trying to use your library but i've encountered some errors.I first downloaded the .zip with all the files. I extracted it in my project folder and opened it in Arduino IDE. It didn'T build at first because of the #define i had to put in inv_mpu.cpp.Once i've put my "#define MPU6050" i tried to build but got some more errors:C:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp: In static member function 'static int8_t I2Cdev::readBytes(uint8_t, uint8_t, uint8_t, uint8_t*, uint16_t)':C:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp:276: error: 'BUFFER_LENGTH' was not declared in this scopeC:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp:277: error: 'Wire' was not declared in this scopeC:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp: In static member function 'static int8_t I2Cdev::readWords(uint8_t, uint8_t, uint8_t, uint16_t*, uint16_t)':C:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp:414: error: 'BUFFER_LENGTH' was not declared in this scopeC:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp:415: error: 'Wire' was not declared in this scopeC:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp: In static member function 'static bool I2Cdev::writeBytes(uint8_t, uint8_t, uint8_t, uint8_t*)':C:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp:598: error: 'Wire' was not declared in this scopeC:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp: In static member function 'static bool I2Cdev::writeWords(uint8_t, uint8_t, uint8_t, uint16_t*)':C:\Program Files (x86)\Arduino\libraries\I2Cdev\I2Cdev.cpp:653: error: 'Wire' was not declared in this scopeThese are quite simple errors but i can't find the origine.Wire is declared a bit higher in the code:#elif I2CDEV_IMPLEMENTATION == I2CDEV_BUILTIN_NBWIRE #ifdef I2CDEV_IMPLEMENTATION_WARNINGS #warning Using I2CDEV_BUILTIN_NBWIRE implementation may adversely affect interrupt detection. 
#warning This I2Cdev implementation does not support: #warning - Repeated starts conditions #endif // NBWire implementation based heavily on code by Gene Knight <Gene@Telobot.com> // Originally posted on the Arduino forum at // Originally offered to the i2cdevlib project at TwoWire Wire;#endifAt fisrt I tought that it was because it was in a if condition and the compiler just ignored it, but even if i put just after the if condition, the compiler says:I2Cdev.cpp:90: error: 'TwoWire' does not name a typeI know for sure that all the code in the if condition is not executed because i wrote some random code and it didn't flaged it.Once there, i don't realy know what else to do. For the BUFFER_LENGTH error i can't find the problem either. This constant is used in other functions but the readBytes and readWords don't recognize the BUFFER_LENGTH. Since i didn't wrote anything in the code, i trust that it should be working fine but it won't on my computer.If you have anything to help me it would be realy awesome !Thanks in advance,Marvin. | https://forum.arduino.cc/index.php?topic=242335.msg1923384 | CC-MAIN-2019-43 | refinedweb | 747 | 56.35 |
Code... (Score:4, Informative)
Re: (Score:3)
It really does lower one's opinion of the author. If I read TFAs, I wouldn't read this one.
Re: (Score:3, Insightful)
Re: (Score:3)
Not to mention the fact that the author has erased history (well, the summary implies the author has erased history - I haven't read TFA) because the Cray 1 had a vector processing unit and a specially designed compiler to make use of it, and the compiler was for Fortran. This was in 1978 when C++ didn't even exist.
Re:Code... (Score:4, Informative)
actually, "codes" is common usage amongst researchers and has been since at least the 1970s.
most of them are not programmers or geeks or computer scientists, they're researchers or academics or post-grad students who happen to do a little programming or simply use someone else's "codes".
it used to make me cringe every time i heard it when working with academics and researchers and on HPC clusters, but then i got used to it and stopped caring.
and, really, they're not interested in a lecture or why it's a dumb usage of the word. they've got stuff they want to get done ("codes to run") and don't give a damn.
Author here. (Score:4, Informative)
Re: (Score:2)
I guess I was mistaken in assuming the computer-science way of thinking about the concept of "code" (vs. "program", "algorithm", or "function", which are definitely discrete, countable things) should be extended to other fields.
As a side note, although it is true that my nickname was originally a misspelling of "gigahertz", I chose it while I was young, and as a non-native English speaker, my knowledge of the language was lacking. I have been perfectly aware of that fact for a long time, but I chose to maintain it.
Re: (Score:2)
"I guess I was mistaken in assuming the computer-science way of thinking about the concept of "code" (vs. "program", "algorithm", or "function", which are definitely discrete, countable things) should extended to other fields."
You were not mistaken, the world just grows more poorly educated. Yesterday's illiteracy is today's literacy. Making "code" synonymous with "program" and subsequently requiring a plural form serves no purpose, sounds stupid, and is stupid.
No doubt others wish to "commentate" on the
Re: (Score:3)
I'd suggest you don't be so pious. I'm for protecting the language as much as anyone else but ultimately it evolves. I don't think this is really about the people employing numerical techniques in science becoming "more poorly educated"; I think it's about your field branching out and attracting new jargon and new uses for the old jargon. It's just what happens.
As it happens, I've spent close to ten years in academia where we build "codes" (typically in Fortran -- 90 or more recent if you were lucky; 77 if you weren't).
Re: (Score:2)
Re: (Score:2)
Calling corn 'wheat' by default is a good way of putting it - it was what I was trying to get across in more words. "Corn" is ultimately related to "kernel" and does certainly come from "seed", "grain". But the language evolved so that now it's a catch-all for grains, and wheat by default across most of Europe. Of course, I want the kids off my lawn and the language to stop evolving
;)
Re: (Score:2)
I had some common taters for dinner.
Re: (Score:2)
You were not mistaken, the world just grows more poorly educated. Yesterday's illiteracy is today's literacy.
Forsooth, good sirrah, thou hast spake sagely, and shewn thyself more wise that thy wyrd wurds should haue me think.
Re: (Score:2)
Re: (Score:2)
never worked in the field of high performance numerical methods?
Re: (Score:2)
Obviously not. Even after reading the other replies, and realizing that I was partially wrong, I still can't help but feel that it sounds wrong.
The concept of code, to me, is a collection of statements, expressions, functions, packages, etc. Like a bowl of rice, it makes no sense to count the grains individually. I can accept that other people understand it differently, but it's not easy not to feel that they are doing it wrong...
Re: (Score:2).
Re: (Score:2)
True, it's not OO - though you can force it to act almost as if it is - but not everything has to be forced into OO.
That's an almost meaningless statement, unless you define what is "OO" (ten programmers will give you twelve different definitions for that).
Also, the modern replacement for Fortran ought to be Fortress. (Or, more properly, a language using similar algebraic techniques to improve the HPC programming experience, seeing as the original Fortress effort has wrapped up.)
Re: (Score:2)
"That's an almost meaningless statement, unless you define what is "OO" (ten programmers will give you twelve different definitions for that)."
Very true. In this exact context I'm meaning an object containing functions that are contained within its namespace. There's almost certainly actually a way to do that in modern Fortran but I'm not aware of it, and instead I use modules as a rough analogue of a class and select the functions from them that I want to use in a particular method. (I can also get around
Re: (Score:2)
To use the rice analogy, you would say "bowls of rice", not "rices".
What, you mean like that ignorant error we all make of counting fish, cows etc?
Re: (Score:2)
it is correct though
similar to how moneys and monies are both correct plurals of money, even though in America people use money to refer to both singular and plural but they should recheck the dictionary.
codes is interchangeable as well [thefreedictionary.com]
Re: (Score:3)
Re: (Score:2)
...like rice, is not countable. At least not since I learned the word.
It's a shiboleth of physicists and lousy journalists.
Re: (Score:2)
"Secret codes" or "cipher codes" are countable. So are "weather simulation codes".
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Code is a mass noun, and it's number is indeterminate, neither singular nor plural.
Codes code as peoples people (Score:3)
I, too, work in HPC computing, and while I found "codes" very jarring to begin with, I've learned to live with it. I am not sure the "code" vs. "codes" issue it is more grammatically problematic than "people" vs. "peoples". A people (countable) is made up of people (uncountable). Similarly "a code" (countable, but nonstandard) is made up of code (uncountable). Personally I would use "a program" or "a library" instead of "a code", though.
Another related issue is whether "data" is countable or not. I'm used t
Re: (Score:2)
Re: (Score:2)
Very limited scope (Score:3)'.
By the way, regarding the use of the word 'codes': I don't think English is the first language of this developer. Cut some slack.
Re: (Score:3)
Re: (Score:2)
In regard to:
You really think t
Re: (Score:2)
You really think that sentence was written by a person with a tenuous grasp of the English language. Seriously?
Tenuous grasp, no; but non_native != tenuous_grasp. The blog's at a German university. The choice of "marry" over "combine" is slightly unusual, as the idea of choice falling on something. It's very, very good. But it's still most likely not his first language, so pedantic polemics are uncalled for.
Re: (Score:2)
Very limited indeed (Score:5, Informative)
I took a look at TFA and followed up by reading the description of LibGeoDecomp:'.
Correct. We didn't try to come up with a solution for every (Fortran) program in the world. Because that would either take forever or the solution would suck in the end. Instead we tried to build something which is applicable to a certain class of applications which is important to us. So, what's in this class of iterative algorithms which can be limited to neighborhood access only?
It's interesting that almost(!) all computer simulation codes fall in one of the categories above. And supercomputers are chiefly used for simulations.
By the way, regarding the use of the word 'codes': I don't think English is the first language of this developer. Cut some slack.
Thanks
:-) You're correct, I'm from Germany. I learned my English in zeh interwebs.
How do I put this... (Score:2)
I think I speak for many geeks when I say....
KHHHAAAAAAAAAAAAAANNNNNN!!!!
That is all.
bigger problems (Score:2)
Seems to me that there are bigger problems when porting Fortran code to C++, like lack of a multidimensional array type in C++, lack of all the other Fortran libraries, and the fact that Fortran code usually still seems to give faster executables than comparable C++ code on numerical applications.
Re: (Score:2)
fortran code usually still seems to give faster executables than comparable C++ code on numerical applications
I don't think this is true anymore. C++ is pretty much the only language that has BLAS libraries that can actually beat the fortran ones. The latest C++ template libraries are using SSE/etc vector intrinsics and are capable of meeting if not exceeding the fortran performance for many applications.
But, if you have a bunch of code in fortran, its probably not worth the trouble to convert it.
Re: (Score:2)
Lots of existing code is
Re: (Score:2)
Why would any Fortran compiler be using a slower BLAS implementation than the C compiler?
Hand-tuned C code is "capable of meeting if not exceeding the fortran performance for many applications", but that doesn't make C a good numerical programming lang
The trick is to avoid solving the bigger problems (Score:2)
We're using Boost Multi-array [boost.org] as a multi-dimensional array, so that's not really a problem. And since we call back the original Fortran code users are still free to use their original libraries (some restrictions apply -- not all of these libraries will be able to handle the scale of current supercomputers).
Regarding the speed issue: yeah, that's nonsense today [ieee.org]. It all boils down writing C++ in a way that the compiler can understand the code well enough to vectorize it.
Re: (Score:2)
You never want a compiler to vectorize code. You want interfaces to vectoring hardware that you use to vectorize operations on your data. Just like you don't want compilers to provide multidimensional arrays - memory isn't multidimensional, so there's no natural layout. Instead you implement the arrays you need - even if they look the same the complexity contract and implementation is completely different for statically dimensioned (e.g. template params in C++) vs dynamically dimensioned (can be resized)
Re: (Score:2)
I most certainly do.
There is a natural layout that handles 99% of all numerical needs. Numerical programmers understand it, and so do compilers.
You listed a bunch of exceptional cases that should indeed be handled by libraries. But not to support common cases well because of exceptional cases i
Re: (Score:2)
Boost Multi-array doesn't support most modern Fortran array features, so it's useless for porting modern Fortran code to C++: you end up having to rewrite most of the code from scratch.
That just shows that with enough effort, you can create efficient special purpose libraries in C++; of course you can. The question is whether straightforward, boring numerical code compiles
FUD (Score:2)
Re:FUD (Score:4, Informative)
You claim to be writing high performance code and you don't understand the difference between Boost multi-array and Fortran arrays? I'm sorry, but if you do any kind of high performance computing, you should at least have a decent understanding of one of the major tools used for it, namely modern Fortran. Once you do, you can then make an informed choice, instead of behaving like an immature language zealot.
Here are two places you should start looking: [wikipedia.org] [wikipedia.org]
(The Fortran code on libdecomp.org is cringe-inducing and inefficient.)
And, FWIW, I'm primarily a C++ programmer, because that's what the market demands, not a Fortran programmer, but at least I know my tools and their limitations.
If you use C, assembly, or Java "correctly", you can usually match Fortran code. That is entirely not the point.
QED (Score:2)
So you said Fortran codes we faster than C++ codes and now that's not the point any longer as they really aren't? Great, thanks!
The links you provided show that Fortran has some convenience functions for selecting parts of arrays and applying arithmetics to them. What I didn't see is anything you can't so with Boost Multi-Array and Boost SIMD [meetingcpp.com].
Re: (Score:2)
I said no such thing; that doesn't make any sense. What I said is:
Boost Multi-array doesn't support most modern Fortran array features, so it's useless for porting modern Fortran code to C++: you end up having to rewrite most of the code from scratch.
and
That just shows that with enough effort, you can create efficient special purpose libraries in C++; of course you can. The question is whether straightforward, boring numerical code compiles into fast executa
Re: (Score:2)
Sorry, I got overexcited and did see something in your post that apparently wasn't there.
And yet I don't buy into this "OMG, C++ is either clumsy or slow compared to Fortran" FUD (I hope I'm paraphrasing it correctly this time). For a certain (perhaps smallish) domain LibGeoDecomp is such a library which makes it easy to write short, yet (nearly) optimal code with C++.
I don't doubt though that there are use cases where it's hard to come up with a good C++ solution while Fortran would outperform it in both,
Re: (Score:2)
Its code not codes FFS (Score:2)
Re: (Score:2)
Unlike Ru
Re: (Score:3)
I have had conversations with some of my friends who work on the peta-scale clusters and thought much the same as you. But, it turns out, when you're working with that level of system, you're probably addressing some small part of a much, much larger problem that has been largely solved. The existing code that performs 99.9% of your task is written in Fortran and actively developed by a very successful team of researchers. Attempting to rewr
Re: (Score:2)
Thing about those old systems, they typically weren't written by dumbasses. Most of my career has been following along behind dumbasses cleaning up at them. It's lucrative work, and I'm never hurting for something to do. Every so often I happen upon a system that was actually written by engineers and it's usually
Re: (Score:3)
Re: (Score:3)
That's because you aren't doing development on computationally expensive simulation codes that run on supercomputers. Because then you would use FORTRAN. C++ is such a memory hog, and the memory overhead scales with the number of processors. In FORTRAN, you only allocate what you need to use, and that's important when working with large arrays. Java and Ruby are out of the question.
FORTRAN is not obsolete, because there are currently no other languages that can fill the role. When running simulations that t
Re: (Score:2)
C++ is such a memory hog, and the memory overhead scales with the number of processors.
What on earth are you talking about?
Re: (Score:2)
Re: (Score:2)
Re: Its code not codes FFS (Score:4, Informative)
Please don't learn FORTRAN, learn Fortran instead. (For the pedantic, all caps is F77. Normal caps is F90 and later.)
Fortran works fine with MPI (Score:5, Informative)
...and has done for years.
We write a scientific code for solving quantum mechanics for solids and use both OpenMP and MPI in hybrid. Typically we run it on a few hundred processors across a cluster. A colleague extended our code to run on 260 000 cores sustaining 1.2 petaflops and won a supercomputer prize for this. All in Fortran -- and this is not unusual.
Fortran gets a lot of bad press, but when you have a set of highly complex equations that you have to codify, it's a good friend. The main reason is that (when well written) it's very easy to read. It also has lot's of libraries, it's damn fast, the numerics are great and the parallelism is all worked out. The bad press is largely due to the earlier versions of Fortran (66 and 77), which were limited and clunky.
In short, the MPI parallelism in Fortran90 is mature and used extensively for scientific codes.
Agreed. (Score:2)
Re: (Score:2).
Did you read TFA? (Score:2)
Re: (Score:2)
And so on the basis of one example you're willing to take their word that changing languages doesn't require re-debugging the entire program?
My, my, but you are naive, aren't you?
Re: (Score:2)
Even when you change compilers but keep the same source code you have to redebug complex FORTRAN code, due to idiosyncracies in implementations over the years.
Re: (Score:2)
Sometimes you get new bugs even with the same compiler, just because you changed optimization flags for the build.
No, I'm taking MY word for it. :-) (Score:2)
Sorry, I should probably have added a disclaimer that I'm involved in the development of the library as my signature apparently doesn't make it obvious enough: I'm the project lead.
So far we've built about a dozen application with LibGeoDecomp, including porting a dozen large scientific codes towards it. You're right that porting a code usually involves debugging. But that's inevitable when parallelizing a previously sequential code anyway. We don't claim to do magic, we just have some cool tricks up our s
Targeted at Managers (Score:2)
Someone has some legacy Fortran code and a task of modifying it. There are two approaches: Port it or work on the existing source. Porting it allows for hiring from a very large (but shallow*) pool of programmers familiar with 'current' languages like C++. Working with the existing code means having to locate resources in a much smaller market. The former are cheap. The latter much more expensive. What to do?
*Good programmers can probably pick up a book and teach themselves Fortran pretty easily. But eve
Recent experience with old code (Score:2)
Reminds me of a recent experience writing a new system to replace a legacy system.
A key part of one of the homegrown network protocols was a CRC. This sounds OK, but the implementation was wrong. I spent a lot of time trying to reverse-engineer just what the original engineers had implemented. The fact that it was written in ADSP2181 assembler didn't help. It had never been an issue before because both ends of the link used the same wrong implementation, so the errors cancelled out.
I ended up writing an.
In my personal experience... (Score:3)
In my personal experience...
Most of the physics code in FORTRAN that I've dealt with are things like relativistically invariant P-P and N-P particle collision simulations in order to test models based on the simultaneous solution to 12 or more Feynman-Dyson diagrams. It's what was used to predict the energy range for the W particle, and again for the Higgs Boson, and do it rather reliably.
The most important part of this code was reproducibility of results, so even though we were running Monte Carlo simulations of collisions, and then post-constraining the resulting pair productions by the angles and momentum division between the resulting particles, the random number stream had to be reproducible. So the major constraint here was that for a reproducible random stream of numbers, you had to start with the same algorithm and seed, and the number generation had to occur linearly - i.e. it was impossible to functionally decompose the random number stream to multiple nodes, unless you generated and stored a random number stream sufficient to generate the necessary number of conforming events to get a statistically valid sample size.
So, it was linear there, and it was linear in several of the sets of matrix math as it was run through the diagrams to filter out pair non-conforming pair production events.
So we had about 7 linearity choke-points, one of which could probably be worked around by pre-generating a massive number of PRNG output far in excess of what would be eventually needed, and 6 of which could not.
The "add a bunch of PCs together and call it a supercomputer" approach to HPC only works on highly parallelizable problems, and given that we've had that particular capability for decades, the most interesting unsolved problems these days are not subject to parallel decomposition (at least not without some corresponding breakthroughs in mathematics).
I converted a crap-load of FORTRAN code to C in order to be able to optimize it for Weitek vector processors plugged into Sun hardware, including the entire Berkeley Physics package, since that got us a better vector processor than was on the Cray and CDC hardware at Los Alamos where the code was running previously, but adding a bunch of machines together would not have improved the calculation times.
Frankly, it seems to me that the available HPC hardware being inherently massively parallel has had a profound effect on constraining the problems we try to solve, and that there are huge, unexplored areas that are unexplored for what amounts to the equivalent of someone looking for their contact lens under the streetlight, rather than in the alley where they lost it, "because the light's better".
HPC is just a niche market, too (Score:2)
You're right: the current compute architectures we see in HPC are geared at data parallel problems of massive size. Clock speeds are stagnating, sometimes even stepping down (e.g. NVIDIA Kepler has its cores actually clocked slower that Fermi with its hot clock for the shaders). Your description sounds like you'd benefit from a singular core which is tuned for single thread performance (e.g. with really big caches, a large out of order execution window) and runs at 5-10 GHz (which might require liquid nitro
Re: (Score:2)
See here for a parallel way to deal with your random number generation problem: [osu.edu]
Thanks; read the paper; it presents three methods, 2 of which are unsuitable for parallel decomposition to an arbitrary number of CPUs (the Mersenne Twistor is not suitable to thread level decomp.), and one of which where you have to really carefully define you m(i). Changing algorithms isn't really an option, unless you are willing to rerun all of your historical computations, since unles you use the same PRNG, there is no guarantee of precise reproducibility, which is one of the issues here.
I think it'd
But aside from that, Mrs. Lincoln? (Score:2)
That's not what I call "limited". More like a rewrite, or at least a salvage operation.
Re:It never ceases to amaze me (Score:4, Insightful): (Score:2)
A pedantic note - it hasn't been called "FORTRAN" since Fortran 90 was introduced. Otherwise it's nice to see people defending it for scientific applications
:)
Re:It never ceases to amaze me (Score:4, Insightful)
Re: (Score:2)
And the funny thing is, game developers who are designing game engines in C++ for multi-core systems also use scripting languages like LUA at the high level. Then you have so many ways of doing parallel processing of arrays in C++; STL vectors (foreach), Intel TBB, Intel ABB, Boost, pthreads, and many others. I can't imagine what it would be like trying to bolt together a dozen or more different utility libraries each using their own favorite blend of parallel processing API's.
I guess Fortran is like the Py
Re:It never ceases to amaze me (Score:5, Funny)
Not at all. It might be a bit more monocultured than, say, C++ but there are still more than enough ways to skin the same cat that you end up with a ton of cat parts and a mass of confusion.
Re: (Score:2)
Re: (Score:2)
Very true. For my own comfort I tended to stick within pure Fortran programs - I'm still more comfortable in Fortran than in C or C++ - but there were things that were better to do elsewhere, particularly where I had access to a library that I much preferred (say, in C, which happened quite a bit) to what I easily had available in Fortran. Sure, I could have gone hunting but it was a lot easier to build a trivial wrapper around the C library and just call that. I can't actually remember why I didn't like th
Re: (Score:3)
In Fortran you don't. Fortran has the mathematically expected parallel constructions built into the language, and the compiler directives commonly used before things are entirely in the language were reasonably standard.
I think Fortran is very good for quantitative programming and I regret that in my commercial enterprise it is essenti
Re: (Score:2)
Re: (Score:2)
Yes you can, since Fortran 2003.
Re: (Score:2)
There is nothing you can do in fortran that can't be done better in C++
Yeah? Well there's nothing in C++ that can't be done better in assembler.
Old and kludgy makes it harder to port. (Score:3)
Not only does it cost a LOT to port this stuff and risk errors in doing so, but the cruftier it is the harder (and more expensive and error-prone) it is to port it.
If, instead, you can get the new machines to run the old code, why port it? Decades of Moore's Law made the performance improve by orders of magnitude, and the behavior is otherwise unchanged.
If you have an application where most of the work is done in a library that is largely parallelizable, and with a few tiny tweaks you can plug in a modern
Re: (Score:2)
"Eventually somebody will teach the computers to convert the Fortran to a readable and easily understandable modern language - while both keeping the behavior identical and highlighting likely bugs and opportunities for refactoring."
That language will likely be Fortran 2008 or 2015...
Re: (Score:2)
Re: (Score:2)
I've long been tempted to get a
.net compiler for Fortran. That would make it *really* easy to build some ugly Java.
Re: (Score:2)
please go back to your java and c++ world and leave Fortran alone - you don't know the first thing about Fortran or Fortran compilers.
Re: (Score:3)
more than adequate, Fortran is still the most optimizable language for high performance numeric computation, moreso than C and derived languages
Re: (Score:2)
Fortran is the most used and therefore the biggest target for continued improvement. Saying it is the "most optimizable" means nothing. As a tools company you aren't going to focus on what none of your customers do.
x86 is the most modern high-performance instruction set by your reasoning. Sometimes alternatives are just not sufficiently compelling, that doesn't mean they are inferior.
Re: (Score:2)
Re: (Score:2)
Efficient software is more than good assembly (Score:2) | http://developers.slashdot.org/story/13/09/21/165211/a-c-library-that-brings-legacy-fortran-codes-to-supercomputers?sdsrc=prev | CC-MAIN-2014-52 | refinedweb | 4,645 | 60.65 |
How to learn Quartz?
How to learn Quartz? Hi,
I have to make a program in Java is scheduling account processing Job after an interval of 20 minutes.
Tell me the tutorial.
Thanks
Hi,
Please check the tutorial Hello World Quartz
Form Processing Problem
Form Processing Problem I am trying to create a Circular Page. This is the Code where the circular is updated and asks for Circular Reference number... the index of file
pos = file.indexOf("filename
applications, mobile applications, batch processing applications. Java is used
index
Mysql Date Index
Mysql Date Index
Mysql Date Index is used to create a index on specified table. Indexes...
combination of columns in a database table. An Index is a database structure
which
Processing XML with Java
;
}
Processing XML with Java
XML is cross-platform software, hardware.... DOM
Parser is slow and consume a lot memory than SAX parser. DOM is language... stand for "Java API for XML Processing".
It is used for processing
data processing is
data processing is data processing is
Data processing is associated with commercial work. Data processing is also referred as Information System. Data processing is basically used for analyzing, processing
Quartz framework tutorial
Quartz framework tutorial Hi,
I am trying to create the scheduler application. Can any one provide me the url of Quartz framework tutorials.
Thanks
Hi,
Check the examples at Quartz Tutorial page.
Thanks
SQL function extremely slow from 5.5
SQL function extremely slow from 5.5 SQL function extremely slow from 5.5
Spring JdbcTemplate and ResultSet is too slow..
Spring JdbcTemplate and ResultSet is too slow.. Hi..
I am using spring jdbctemplate to fetch the records from oracle database.
But it is taking too long time to execute a simple select query.
The query which is having 400....
What is Index?
What is Index? What is Index
Quartz Tutorial
Quartz Tutorial
In this Quartz Tutorial you will how to use Quartz Job scheduler in your java
applications. Quartz Job scheduler is so flexible that it can be used with your
image Processing
image Processing BCIF Image Compresssion Algorithm s alossless image Compression algorithm ,Pleas Help Me weather it can support only 24 bit bmp images
Image processing
Audio Processing
image Processing
index of javaprogram
index of javaprogram what is the step of learning java. i am not asking syllabus am i am asking the step of program to teach a pesonal student.
To learn java, please visit the following link:
Java Tutorial
Steps for the payment gateway processing?
Steps for the payment gateway processing? Steps for the payment gateway processing
Search index
Quartz Tutorial
Q - Java Terms
Q - Java Terms
Java Quartz Framework
Quartz is an open source job... to processing. Queues
provide additional insertion, extraction, and inspection
Drop Index
Drop Index
Drop Index is used to remove one or more indexes from the current database.
Understand with Example
The Tutorial illustrate an example from Drop Index
The product of data processing is
The product of data processing is The product of data processing is
1. Data
2. Information
3. Software
4. Computer
5. All of the above
Answer: 3. Software
Abort JSP processing
Abort JSP processing Can I just abort processing a JSP?
Yes. You can put a return statement to abort JSP processing
Speech Processing - Java Beginners
Speech Processing I want to implement Speech Processing in Java. How can I do It? Please Answer
text processing program
text processing program how can i compare letter by letter?
what i was trying to do is a program that can define what is root word,prefix and suffix.
plz help
clustered and a non-clustered index?
clustered and a non-clustered index? What is the difference between clustered and a non-clustered index
Quartz trigger dropping automatically - Java Beginners
Quartz trigger dropping automatically In our application we....
As per our understanding quartz trigger will not get dropped off....
Thanks
Mysql Btree Index
Mysql Btree Index Mysql BTree Index
Which tree is implemented to btree index?
(Binary tree or Bplus tree or Bminus tree
image processing in java
image processing in java public class testing
{
static BufferedImage image;
public static void main(String[] args) throws IOException {
ArrayList<Integer> l1=new ArrayList<Integer>();
ArrayList<Integer>
z-index always on top
z-index always on top Hi,
How to make my div always on top using the z-index property?
Thanks
Hi,
You can use the following code:
.mydiv
{
z-index:9999;
}
Thanks
How do i slow down the game (othello game)??
How do i slow down the game (othello game)?? Hello,
I built an othello game that can be played by 2 humans , human and computer and two computers.
Everything works correctly but the problem is that when 2 computers play
array, index, string
array, index, string how can i make dictionary using array...please help
$ processing
JavaScript array processing
By processing array we simply mean that processing....
In this example of processing JavaScript array we have
three array objects emp1, emp2
checking index in prepared statement
checking index in prepared statement If we write as follows:
String query = "insert into st_details values(?,?,?)";
PreparedStatement ps = con.prepareStatement(query);
then after query has been prepared, can we check the index
jaav image processing
Job scheduling with Quartz - Java Server Faces Questions
Job scheduling with Quartz I have an JSF application deployed... to database. It works fine but when the Quartz scheduler fires a job it accquires... while initialization or while calling the job. Hi,How you
JavaScript array index of
JavaScript array index of
In this Tutorial we want to describe that makes you to easy to understand
JavaScript array index of. We are using JavaScript... line.
1)index of( ) - This return the position of the value that is hold
image processing matching an 2 image
image processing matching an 2 image source code for dense and sparse matching feature in lbp
The CPU(Central processing unit)consist of
The CPU(Central processing unit)consist of The CPU(Central processing unit)consist of
1. Input, Output, and Processing
2. Control Unit, Primary storage, and Secondary storage
3. Control unit, arithmetic logic unit
Java arraylist index() Function
Java arrayList has index for each added element. This index starts from 0.
arrayList values can be retrieved by the get(index) method.
Example of Java Arraylist Index() Function
import
Parallel Processing & Multitasking
Parallel Processing & Multitasking
Multitasking & Multithreading
Multitasking allow to execute more than one tasks at the same time, a task being a program
Processing stored procedure in MySQL - SQL
Processing stored procedure in MySQL Dear All java with mysql developer.....
i need to convert my SQL procedure to MySQL database... here is my Stored procedure in SQL....
create or replace procedure INVNAMES (bno 3.2 Asynchronous Request Processing
In this section, you will learn about Asynchronous Request Processing in Spring MVC 3.2.
Compute the image processing - Java Beginners
including index in java regular expression
including index in java regular expression Hi,
I am using java regular expression to merge using underscore consecutive capatalized words e.g., "New York" (after merging "New_York") or words that has accented characters
Shopping Cart Index Page
Introduction to Quartz Scheduler
Introduction to Quartz Scheduler
Introduction to Quartz Scheduler
This introductory section will inform you the fundamentals of Quartz like its definition, its uses and its
What is Parallel Processing?
What is Parallel Processing
Parallel processing is a process that allows the processing of multiple
program instructions and operations simultaneously... time.
The concept of parallel processing has been taken from the human brain
Form processing using Bean
Form processing using Bean
In this section, we will create a JSP form using bean ,which will use a class
file for processing. The standard way of handling forms in JSP is to define a
"bean". This is not a full Java
JDBC Batch Processing Example
JDBC Batch Processing Example:
Batch processing mechanism provides a way... with a call to the database. By using
batch processing you can reduce the extra..., you can learn about Batch processing and these few methods.
The addBatch
Foreach loop with negative index in velocity
Foreach loop with negative index
in velocity
... with negative index in velocity.
The method used in this example... have
used foreach loop with negative index..
Java APIs for XML Processing (JAXP)
Java APIs for XML Processing (JAXP)
JAXP (Java APIs for XML Processing) enables... kind of
processing instead it provides a mechanism to obtain parsed XML documents
Java APIs for XML Processing (JAXP)
Java APIs for XML Processing (JAXP)
JAXP (Java APIs for XML Processing) enables... kind of
processing instead it provides a mechanism to obtain parsed XML documents
Why is processing a sorted array faster than an unsorted array?
Why is processing a sorted array faster than an unsorted array? Why is processing a sorted array faster than an unsorted array
Bubble Sort in Java
index. Then it compares next pair of elements and repeats the same process. When it reaches the last two pair and puts the larger value at higher index... are needed.
Bubble Sort is a slow and lengthy way to sort elements.
Example
Insert a Processing Instruction and a Comment Node
Insert a Processing Instruction and a Comment Node
... a Processing Node
and Comment Node in a DOM document. JAXP (Java API for XML Processing) is an
interface which provides parsing of xml documents. Here the Document
Write a byte into byte buffer at given index.
In this tutorial, we...; index.
ByteBuffer API:
The java.nio.ByteBuffer class extends... ByteBuffer
putChar(int index, char value)
The putChar(..) method writes
wrowe@apache.org writes:
> Index: sa_common.c
> ===================================================================
> RCS file: /home/cvs/apr/network_io/unix/sa_common.c,v
> retrieving revision 1.64
> retrieving revision 1.65
> diff -u -r1.64 -r1.65
> --- sa_common.c 14 Oct 2002 00:03:46 -0000 1.64
> +++ sa_common.c 15 Oct 2002 04:10:31 -0000 1.65
> @@ -474,7 +474,20 @@
> #ifdef WIN32
> return apr_get_netos_error();
> #else
> - return (h_errno + APR_OS_START_SYSERR);
> + switch (h_errno) {
> +#ifdef NETDB_INTERNAL
> + NETDB_INTERNAL:
> + return APR_FROM_OS_ERROR(errno);
NETDB_INTERNAL is not defined in some places where the others are
defined
> + HOST_NOT_FOUND:
> + return APR_EHOSTUNREACH;
-1... EHOSTUNREACH is not the same as HOST_NOT_FOUND... this
is making it hard for somebody to know what error code the resolver
actually set... they have to go dig through the APR source code
instead of extracting HOST_NOT_FOUND from the APR error code
The app should get the real error code set by the resolver, encoded
properly so as not to be confused with errno or anything else.
Obviously, if the resolver says to look in errno then errno should be
returned.
> + NO_DATA:
> + return APR_E;
even if I didn't want NO_DATA, what is APR_E?
> + NO_RECOVERY:
> + return APR_EHOSTUNREACH;
not the same
> + TRY_AGAIN:
> + return APR_EAGAIN;
not the same
by the way, two of the h_errno constants often have the same numeric
value and can't be used like this in a switch()...
> + }
> + /* return (h_errno + APR_OS_START_SYSERR); */
what gets returned if the switch() didn't find a match?
--
Jeff Trawick | trawick@attglobal.net
Born in Roswell... married an alien... | http://mail-archives.apache.org/mod_mbox/apr-dev/200210.mbox/%3Cm3of9was7b.fsf@rdu74-177-063.nc.rr.com%3E | CC-MAIN-2016-40 | refinedweb | 244 | 59.4 |
Migrating Spring Data Neo4j 3.x to 4.0 - Graph Entities
In this installment, we get to the heart of dealing with graphs in Spring Data Neo4j: Graph entities, namely, nodes and relationships (as Neo4j refers to them).
"Finally," you faithful readers are sighing, "the stuff I'm really interested in! Right up there with 'free drinks' and Netflix."
Fair point.
The good news is that these entities are largely the same in SDN 4.0 as they were in previous versions; however, there are a few differences that we will highlight and navigate—nothing this crowd can't handle!
The largest similarity is that you still annotate your POJOs that represent nodes and relationships.
Just like previous versions of SDN, relationship POJOs are only necessary when you need to store properties of a relationship, so if you just need a plain relationship with no properties, simply annotating the appropriate properties in your node POJO will suffice!
Let's have a closer look at each entity type.
I Nodes What You Are Thinking
Von Neumann's Store is nothing without its customers! So let's have a peek at some important parts of the Customer POJO, shall we?
@NodeEntity
public class Customer {

    @GraphId
    private Long id;

    private String username;
    private String firstName;
    private String lastName;
    private String email;

    @Relationship(type = "FRIEND", direction = Relationship.UNDIRECTED)
    Set<Customer> friends;

    @Relationship(type = "PURCHASED", direction = Relationship.OUTGOING)
    Iterable<Purchase> purchases;

    @Relationship(type = "HAS_ADDRESS", direction = Relationship.OUTGOING)
    Address address;

    ...
}
A lot of this should look very familiar.
@NodeEntity is just the same as before. When Customer is persisted to the graph, a :Customer label is automatically created and used. Previously, if we wanted to use a different label, we would have had to have used the @TypeAlias annotation—not so anymore! @NodeEntity has available an attribute called "label" that allows you to set a different label, e.g. @NodeEntity(label = "LifeBloodOfBusiness").
@GraphId is needed just as before, and it must be of type Long; however, if you have a property called "id" of type Long, the @GraphId annotation is not required as SDN will automatically use the field for its purposes. It's worth noting that the type must be the non-primitive Long. And, as an important reminder, remember that 0 is a valid value for a Neo4j node ID.
Other primitive (both single and array-based) and string fields are automatically persisted into the node.
What might jump out at you is the use of @Relationship. Let's talk about this a bit more.
Relationships Are Now a Lot Less Work
Spring Data Neo4j won't judge you!
@Relationship has replaced both @RelatedTo and @RelatedToVia. Recall that the former was used to relate a node to another node (or group of nodes) and that the latter was used to designate a relationship entity (recall what relationship entities are used for as stated above) to use for a specific relationship.
These two annotations have both been replaced by @Relationship. Simpler is better! SDN is smart enough to know what you mean based on the type of the field. All that is required is a "type" and a "direction".
Also notice that Relationship.* (where "*" can be one of INCOMING, OUTGOING, or UNDIRECTED) has replaced Direction.* when specifying the relationship's direction. It is important to note that while the use of UNDIRECTED simply ignores the relationship direction actually stored in the graph, the persisted relationship does not, in fact, lack direction.
"But what if you have relationships that are of the same name but point to different node types?" the astute reader (that is, you) is asking. In previous versions of SDN, we had to specify an "enforceTargetType" attribute into the annotation. Not so anymore! SDN is smart enough to know what you mean, and so this kind of situation is made easier to deal with.
(It is worth noting that the targets must be of differing types.)
Another quick reminder: If you are relating a group of nodes to your node POJO and need the group to be modifiable, you would generally use a Set<T> (or some other Collection-based container); otherwise, you should use Iterable<T> for read-only fields. Also, be careful if order is important: a Set cannot be relied upon for order, while an Iterable can.
Phew, okay, let's now have a look at relationship entities!
A Graph Without Relationships Is Like a Day Without Sunshine!
Relationships form the heart of any graph—they are what give the graph meaning.
SHOW ME THE MONEY! ...no, wait...
(Courtesy: indatimes.in and TriStar Pictures)
Now, ordinarily, we don't need relationship POJOs if we are persisting relationships that have no properties; however, when we do need to persist properties on a relationship, we require the use of a relationship entity.
Let's have a look at one!
@RelationshipEntity(type = "PURCHASED")
public class Purchase {

    @GraphId
    private Long id;

    @StartNode
    private Customer customer;

    @EndNode
    private Stock item;

    private int quantity = 0;

    ...
}
Again, a lot of similarities exist here with the previous version of SDN:
- @RelationshipEntity operates just like it used to, with the "type" attribute being required (after all, what good is a relationship without a type/name?)
- @GraphId is used in exactly the same way as for node entities, with exactly the same caveats and semantics.
- @StartNode and @EndNode work just as before and are required, specifying the "source" of the relationship and the "destination" of the relationship.
So, as you can see, there really is no appreciable difference between older versions of SDN and SDN 4.0 when it comes to relationships. Awesome!
Conclusion
In this post we have seen the differences and uses of the annotations that are at the heart of Spring Data Neo4j. Node entities and relationship entities are both crucial to persisting and retrieving our graphs, and make our lives significantly easier when dealing with such an integration.
The good news is that the use of these entities, while similar and familar, have been improved to help speed up productivity and avoid some pitfalls that used to exist.
In the next post, we will have a quick look at indices in Spring Data Neo4j and wrap up this series. I know, I know—I'm just as sad as you are, but, all good things must come to an end. Or not. Hopefully not.
See you in the next post! }} | https://dzone.com/articles/do-not-publish-migrating-spring-data-neo4j-3x-to-4 | CC-MAIN-2019-09 | refinedweb | 1,132 | 55.44 |
First, a disclaimer. Naming things in the world of programming is always a challenge. Naming this blog post was also difficult. There are all sorts of implications that come up when you claim something is "functional" or that something is a "pattern". I don't claim to be an expert on either of these topics, but what I want to describe is a pattern that I've seen develop in my code lately and it involves functions, or anonymous functions to be more precise. So please forgive me if I don't hold to all the constraints of both of these loaded terms.
A pattern
The pattern that I've seen lately is that I need to accomplish a myriad of steps, all in sequence, and I need to only proceed to the next step if my current step succeeds. This is common in the world of Rails controllers. For example:
def update
  @order = Order.find params[:id]
  if @order.update_attributes(params[:order])
    @order.calculate_tax!
    @order.calculate_shipping!
    @order.send_invoice! if @order.complete?
    flash[:notice] = "Order saved"
    redirect_to :index
  else
    render :edit
  end
end
What I'm really trying to accomplish here is that I want to perform the following steps:
- Find my order
- Update the attributes of my order
- Calculate the tax
- Calculate the shipping
- Send the invoice, but only if the order is complete
- Redirect back to the index page.
There are a number of ways to accomplish this set of steps. There's the option above but now my controller is doing way more than it should and testing this is going to get ugly. In the past, I may have created a callback in my order model. Something like
after_save :calculate_tax_and_shipping and
after_save :send_invoice, if: :complete?. The trouble with this approach is that now anytime my order is updated these steps also occur. There may be many instances where I want to update my order and what I'm updating has nothing to do with calculating totals. This is particularly problematic when these calculations take a lot of processing and have a lot of dependencies on other models.
Another approach may be to move some of my steps into the controller before and after filters (now before_action and after_action in Rails 4). This approach is even worse because I've spread my order-specific steps to a layer of my application that should only be responsible for routing user interaction to the business logic of my application. This makes maintaining this application more difficult and debugging a nightmare.
The approach I prefer is to hand off the processing of the order to a class that has the responsibility of processing the user’s interaction with the model, in this case, the order. Let's take a look at how my controller action may look with this approach.
def update
  handler = OrderControllerHandler.new(params)
  handler.execute!
  if handler.order_saved?
    redirect_to :index
  else
    @order = handler.order
    render :edit
  end
end
OK, now that I have my controller setup so that it’s only handling routing, as it should, how do I implement this
OrderControllerHandler class? Let’s walk through this:
class OrderControllerHandler
  attr_reader :order

  def initialize(params)
    @params = params
    @order = nil # a null object would be better!
    @order_saved = false
  end

  def execute!
  end

  def order_saved?
    @order_saved
  end
end
We now have the skeleton of our class setup and all we need to do is proceed with the implementation. Here’s where we can bust out our TDD chops and get to work. In the interest of brevity, I’ll leave out the tests, but I want to make the point that this approach makes testing so much easier. We now have a specific object to test without messing with all the intricacies of the controller. We can test the controller to route correctly on the
order_saved? condition which can be safely mocked. We can also test the processing of our order in a more safe and isolated context. Ok, enough about testing, let’s proceed with the implementation. First, the execute method:
def execute!
  lookup_order
  update_order
  calculate_tax
  calculate_shipping
  send_invoice
end
Looks good right? Now we just need to create a method for each of these statements. Note, I’m not adding responsibility to my handler. For example, I’m not actually calculating the tax here. I’m just going to tell the order to calculate the tax, or even better, tell a
TaxCalculator to calculate the tax for my order. The purpose of the handler class is to orchestrate the running of these different steps, not to actually perform the work. So, in the private section of my class, I may have some methods that look like this:
private

def lookup_order
  @order = Order.find(@params[:id])
end

def update_order
  @order_saved = @order.update_attributes(@params[:order])
end

def calculate_tax
  TaxCalculator.calculate(@order)
end

... etc, you get the idea
Getting function(al)
So far, so good. But we have a problem here. What do we do if the lookup up of the order fails? I wouldn’t want to proceed to update the order in that case. Here’s where a little bit of functional programming can help us out (previous disclaimers apply). Let’s take another shot at our
execute! method again and this time, we'll wrap each step in an anonymous function, aka a stabby lambda:
def execute!
  steps = [
    ->{ lookup_order },
    ->{ update_order },
    ->{ calculate_tax },
    ->{ calculate_shipping },
    ->{ send_invoice! },
  ]

  steps.each { |step| break unless step.call }
end
What does this little refactor do for us? Well, it makes each step conditional on the return status of the previous step. Now we will only proceed through the steps when they complete successfully. But now each of our steps needs to return either true or false. To pretty this up and add some more meaning, we can do something like this:
private

def stop; false; end
def proceed; true; end

def lookup_order
  @order = Order.find(@params[:id])
  @order ? proceed : stop
end
Now each of my step methods has a nice clean way to show that I should either proceed or stop execution that reads well and is clear on its intent.
We can continue to improve this by catching some errors along the way so that we can report back what went wrong if there was a problem.
attr_reader :order, :errors

def initialize(params)
  @params = params
  @order = nil # a null object would be better!
  @order_saved = false
  @errors = []
end

...

private

def proceed; true; end

def stop(message="")
  @errors << message if message.present?
  false
end

def invalid(message)
  @errors << message
  proceed
end

def lookup_order
  @order = Order.find(@params[:id])
  @order ? proceed : stop("Order could not be found.")
end

...
I’ve added these helpers to provide us with three different options for capturing errors and controlling the flow of our steps. We use the
proceed method to just continue processing,
invalid to record an error but continue processing anyway, and
stop to optionally take a message and halt the processing of our step.
In summary, we’ve taken a controller with a lot of mixed responsibilities and conditional statements that determine the flow of the application and implemented a functional handler. This handler orchestrates the running of several steps and provides a way to control how those steps are run and even captures some error output if need be. This results in much cleaner code that is more testable and maintainable over time.
Homework Assignment
- How could this pattern be pulled out into a module that could be easily included every time I wanted to use it?
- How could I decouple the
OrderControllerHandler class from the controller and make it a more general class that can be easily reused throughout my application anytime I needed to perform this set of steps?
- How could this pattern be implemented as a functional pipeline that acts on a payload? How is this similar to Rack middleware?
Hint:
def steps
  [
    ->(payload){ step1(payload) },
    ->(payload){ step2(payload) },
    ->(payload){ step3(payload) },
  ]
end

def execute_pipeline!(payload)
  last_result = payload
  steps.each do |step|
    last_result = step.call(last_result)
  end
end
2 comments:
Hi Mike, what you described looks exactly like a pattern known as Data-Context-Interaction or DCI which is a somewhat new topic of interest in the Rails world. What distinguishes what you did as DCI is:
1) You extracted to a separate class OrderControllerHandler instead of doing what would more commonly be done in the past which is to extract that logic to Order.
2) OrderControllerHandler#execute! very cleanly and obviously states with the use case is for the class:
def execute!
lookup_order
update_order
calculate_tax
calculate_shipping
send_invoice
end
From what little I know about DCI a major impetus for it is in the idea that there is a lot of value gained simply from describing some cohesive piece of logic in the "step through" algorithm like what you described above in the #execute! method. It's fairly easy for an outside observer not familiar with the code to look at the OrderControllerHandler and figure out what it does.
I'm still learning DCI myself. Navigate down all the way to the bottom of this article and look at the code labeled "The Controller Action". Does that look familiar?
Now what distinguishes DCI from the pattern you described?
1) A slight semantic change, DCI would call your OrderControllerHandler class a "context" and you would name it a verb instead of a noun, perhaps: OrderUpdater, in Rails it's being proposed that you store this file as 'app/contexts/order_updater.rb'. Execute is a commonly used method so you would have OrderUpdater.execute!.
2) A "context" has a notion of "actors", in your example there is only one "actor" which is Order. DCI seems to excel at circumstances where you multiple collaborators needed for fulfilling a particular context. Where DCI takes an exotic turn is that in a DCI application your model objects would be simple dumb objects without much logic to interact with other classes. When a context is executed at runtime the "actors" passed into the context as arguments get the logic and methods necessary to fulfill the use case tacked onto them at runtime. What this does is to decouple the logic of use case from the classes that will be marshalled to execute the use case. If this last paragraph didn't make any sense trust that I'm trying to explain something interesting and probably doing a bad job of it, this is something that takes a few chapters of a book to explain properly.
The contexts exist as modules that can be included everytime you want to use them. The promise of DCI is that this smart logic can be combined and nested with other contexts to provide more complex interactions. As I listed above, you could begin to decouple the OrderControllerHandler by first naming it something that doesn't use the word controller or handler and more generally describes what it does. Finally I believe in your third question what you are describing as "payload" DCI describes as "actor".
This is all still very much a work in progress and the details of how it should all work is still being hammered out. Here's some resources if you are interested in learning more.
1. DCI described by its creator Trygve Reenskaug (who also created another pattern you may have heard of called MVC)
2. Clean Ruby by Jim Gay
3. A sample DCI app in Ruby/Rails.
Tim,
Thanks for the awesome comment. I haven't looked at DCI only just heard things about it. Now that you point out some of the similarities it has piqued my interest. I'll take a look! | http://blog.endpoint.com/2014/01/functional-handler-pattern-in-ruby.html | CC-MAIN-2015-06 | refinedweb | 1,934 | 62.88 |
Hi,
I am making a game that involves the OnMouseOver() function, and I would like the game to start with the player's cursor already positioned at specific coordinates, but with the player still being able to move his cursor after that. Is this possible? I know how to lock the cursor in the middle of the screen, but how could I manually set its position at the beginning of the game?
Thanks in advance. :)
You can use the GUI to create a mouse cursor, similar to creating a crosshair and manipulate that image.
You cannot however move the cursor itself unless you include windows.form.dll.
Basically the cursor is a windows operating system object that Unity reads information from. It doesn't write to it though, its not a unity object. You'd need to basically include microsoft's code for the mouse to move the mouse, hence windows.form.dll
Answer by skalev · Oct 10, 2012 at 09:14 PM
You can't. This is controlled by the operating system (as far as I know). How do you lock the cursor in the middle of the screen? If you set it somehow, just set it to the right coordinates.
Screen.lockCursor = true;
What you can do though, is HIDE the cursor and then write your own class that shows a different cursor (using an Image you created) and moves along with the mouse.
However, this isn't a great solution, since you will always have an offset between your mouse and the real mouse, which means yo cannot use any of the built in OnMouse functions, and will have to implement those yourself.
Probably not worth the trouble...
Answer by zachwuzhere · Mar 03, 2018 at 09:47 PM
Anyone who still needs to move the mouse position, here is one method...
//C#
using System.Runtime.InteropServices;
[DllImport("user32.dll")]
static extern bool SetCursorPos(int X, int Y);
int xPos = 30, yPos = 1000;
SetCursorPos(xPos, yPos); // Call this when you want to set the mouse position
don't know why this doesn't have more upvotes since it seems to solve the problem perfectly. Does this work across different platforms somehow?
It doesn't. user32.dll is a windows library, so this fix is windows-only.
Other OSes probably have similar system calls to do this sort of thing though.
This works to perfection!
Now if only we can get this to work for mac, linux (and webGL)
I saw someone else on another post using Hardware.mouseposition so maybe that would work on other platforms.
Answer by sparkzbarca · Oct 10, 2012 at 09:34 PM
Answer by NightmarexGR · Apr 30, 2014 at 11:15 AM
Hello guys it seems this problem is still active, for this reason i created a unity API that lets you set the cursor's position on screen "SetCursorPosition(x : int,y : int)"
I know this thread is 3 years old, but I've seen you paste this comment on a bunch of related threads. Your workaround is for Windows only and uses js instead of c#. Both of which are inadvisable. Even if it were multiplat, one would first be faced with translating.
Parallel Port Box
Back in college I acquired some old DJ light controlling equipment. This included one box with switches and buttons to turn eight channels on and off, two things that looked like big power strips where each outlet was switched by a relay, and two cables, each about fifty feet long to attach the switches and buttons to the relays and outlets.
A few years back I made a box to control the relays from a parallel port. This is used at work to control as many as eight lights that indicate the status of various builds.
I wanted my own to play with the same setup at home, so I made another one. (read: Phone-controlled Christmas lights?)
It looks like this:
It's just eight transistors, resistors, LEDs, and diodes with the necessary connectors in a little box.
It went a lot faster this time. I did all of the soldering and most of the customization of the project box in one night. I think the previous time it took me the better part of a day.
The only thing that prevented it from working the first time I tried it out was that the Centronics connector pinout I was relying on labelled more pins as ground than were actually connected to anything. I moved the ground wire to a different pin and it worked perfectly.
I couldn't find the right kind of Centronics connector, so I ended up soldering to a connector that was supposed to be PCB-mounted. It's ugly, but it's safely tucked away inside the box and works fine.
And finally, when I just about had it ready to test, I realized I didn't have a printer cable. Fortunately, AIT computers provided me with this annoyingly somewhat hard to find cable for a perfectly reasonable price.
In other news, the Python parallel library is about as nice and simple as it gets:
import parallel
import time

p = parallel.Parallel()
p.setData(0b101010)
time.sleep(1.0)
p.setData(0b010101)
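Since each relay channel corresponds to one bit of the data byte, a small helper (my own sketch, not part of the pyparallel library) can build the value passed to setData from a list of channel numbers:

```python
def channel_mask(channels):
    """Build the parallel port data byte for a set of 1-based relay channels."""
    mask = 0
    for ch in channels:
        if not 1 <= ch <= 8:
            raise ValueError("channel must be between 1 and 8")
        mask |= 1 << (ch - 1)  # channel 1 -> bit 0, channel 8 -> bit 7
    return mask

print(bin(channel_mask([2, 4, 6])))  # prints 0b101010
```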
Now I need to find something to do with a box of switches and buttons.
POLL(2) POLL(2)
NAME
poll - input/output multiplexing
SYNOPSIS
#include <stropts.h>
#include <poll.h>

int poll(struct pollfd *fds, unsigned long nfds, int timeout);
DESCRIPTION
The IRIX version of poll provides users with a mechanism for multiplexing
input and output over a set of any type of file descriptors, rather than
the traditional limitation to only descriptors of STREAMS devices [see
select(2)]. Poll identifies those descriptors on which a user can send
or receive messages, or on which certain events have occurred.
Fds specifies the file descriptors to be examined and the events of
interest for each file descriptor. It is a pointer to an array with one
element for each open file descriptor of interest. The array's elements
are pollfd structures which contain the following members:
int fd; /* file descriptor */
short events; /* requested events */
short revents; /* returned events */
where fd specifies an open file descriptor and events and revents are
bitmasks constructed by or-ing any combination of the following event
flags:
POLLIN Data other than high priority data may be read without
blocking. For STREAMS, the flag is set even if the
message is of zero length.
POLLRDNORM Normal data (priority band = 0) may be read without
blocking. For STREAMS, the flag is set even if the
message is of zero length.
POLLRDBAND Data from a non-zero priority band may be read without
blocking. For STREAMS, the flag is set even if the
message is of zero length.
POLLPRI High priority data may be received without blocking. For
STREAMS, this flag is set even if the message is of zero
length.
POLLOUT Normal data may be written without blocking.
POLLWRNORM The same as POLLOUT.
POLLWRBAND Priority data (priority band > 0) may be written. This
event only examines bands that have been written to at
least once.
POLLERR An error message has arrived at the stream head. This
flag is only valid in the revents bitmask; it is not used
in the events field.
POLLNVAL The specified fd value does not belong to an open stream.
This flag is only valid in the revents field; it is not
used in the events field.
For each element of the array pointed to by fds, poll examines the given
file descriptor for the event(s) specified in events. The number of file
descriptors to be examined is specified by nfds. If nfds exceeds
NOFILES, the system limit of open files [see ulimit(2)], poll will fail.
If the value of fd is less than zero, events is ignored and revents is set
to 0 in that entry on return from poll.
The results of the poll query are stored in the revents field in the
pollfd structure. Bits are set in the revents bitmask to indicate which
of the requested events are true. If none are true, none of the
specified bits are set in revents when the call returns. If none of the
defined events have occurred on any selected file descriptor, poll waits
at least timeout milliseconds for an event to occur on any of the selected
file descriptors. If the value of timeout is 0, poll returns immediately.
If the value of timeout is -1, poll blocks until a requested event occurs
or until the call is interrupted. poll is not
affected by the O_NDELAY and O_NONBLOCK flags.
poll fails if one or more of the following are true:
[EAGAIN] Allocation of internal data structures failed but request
should be attempted again.
[EFAULT] Some argument points outside the allocated address space.
[EINTR] A signal was caught during the poll system call.
[EINVAL] The argument nfds is less than zero, or nfds is greater than
NOFILES.
SEE ALSO
intro(2), select(2), read(2), write(2), getmsg(2), putmsg(2), streamio(7)
DIAGNOSTICS
Upon successful completion, a non-negative value is returned. A positive
value indicates the total number of file descriptors that has been
selected (i.e., file descriptors for which the revents field is nonzero).
A value of 0 indicates that the call timed out and no file
descriptors have been selected. Upon failure, a value of -1 is returned
and errno is set to indicate the error.
NOTES
Some devices do not support polling via the select(2) and poll(2) system
calls. Doing a select or poll on a file descriptor associated with an
"un-pollable" device will cause the select or poll to return immediately
with a success value of 0 and the with the corresponding file descriptor
events of queried set true. For instance, if a select or poll is
performed on a read file descriptor associated with an un-pollable
device, the call would return immediately, even though there may be
nothing to read on the device. A subsequent read(2) in this situation
might return with a "bytes-read" count of 0 or might block if the device
supports read blocking. Devices which exhibit this behavior (especially
those from third-party vendors) should be suspected as not supporting
polling.
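As an aside (not part of the manual itself, which documents the C interface), the same readiness check can be demonstrated from Python, whose select module wraps this poll interface with identically named flags:

```python
import os
import select

# Make data available on the read end of a pipe.
r, w = os.pipe()
os.write(w, b"hello")

poller = select.poll()
poller.register(r, select.POLLIN)

# A timeout of 0 polls without blocking, as described above.
events = poller.poll(0)
for fd, flags in events:
    if flags & select.POLLIN:
        print("fd %d has data to read" % fd)

os.close(r)
os.close(w)
```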
Paul Eggert <address@hidden> wrote:
>> From:

Exactly.  As you probably noticed, coreutils' replacement lib/euidaccess.c
is essentially the same as glibc's sysdeps/posix/euidaccess.c.

> .

I like that.  Thanks!  How about this change?

2003-02-10  Jim Meyering  <address@hidden>

	* src/test.c (eaccess): Rewrite function to set the real uid and
	gid temporarily to the effective uid and gid, then invoke 'access',
	and then set the real uid and gid back.  On systems that lack
	setreuid or setregid, fall back on the kludges in euidaccess.
	Before, it would not work for e.g., files with ACLs, files that
	were marked immutable, or on file systems mounted read-only.
	Nelson Beebe raised the issue.  Paul Eggert suggested the new
	implementation.

Index: test.c
===================================================================
RCS file: /fetish/cu/src/test.c,v
retrieving revision 1.81
retrieving revision 1.83
diff -u -p -r1.81 -r1.83
--- test.c	9 Feb 2003 08:28:59 -0000	1.81
+++ test.c	10 Feb 2003 13:19:00 -0000	1.83
@@ -39,8 +39,8 @@
 # include "filecntl.h"
 #else /* TEST_STANDALONE */
 # include "system.h"
-# include "group-member.h"
 # include "error.h"
+# include "euidaccess.h"
 # if !defined (S_IXUGO)
 #  define S_IXUGO 0111
 # endif /* S_IXUGO */
@@ -135,43 +135,45 @@ test_syntax_error (char const *format, c
   test_exit (SHELL_BOOLEAN (FALSE));
 }
 
-/* Do the same thing access(2) does, but use the effective uid and gid,
-   and don't make the mistake of telling root that any file is executable.
-   But this loses when the containing filesystem is mounted e.g. read-only. */
+#if HAVE_SETREUID && HAVE_SETREGID
+/* Do the same thing access(2) does, but use the effective uid and gid.  */
+
 static int
-eaccess (char *path, int mode)
+eaccess (char const *file, int mode)
 {
-  struct stat st;
-  static uid_t euid = -1;
-
-  if (stat (path, &st) < 0)
-    return (-1);
-
-  if (euid == (uid_t) -1)
-    euid = geteuid ();
+  static int have_ids;
+  static uid_t uid, euid;
+  static gid_t gid, egid;
+  int result;
 
-  if (euid == 0)
+  if (have_ids == 0)
     {
-      /* Root can read or write any file. */
-      if (mode != X_OK)
-	return (0);
-
-      /* Root can execute any file that has any one of the execute
-	 bits set. */
-      if (st.st_mode & S_IXUGO)
-	return (0);
+      have_ids = 1;
+      uid = getuid ();
+      gid = getgid ();
+      euid = geteuid ();
+      egid = getegid ();
     }
 
-  if (st.st_uid == euid)	/* owner */
-    mode <<= 6;
-  else if (group_member (st.st_gid))
-    mode <<= 3;
+  /* Set the real user and group IDs to the effective ones.  */
+  if (uid != euid)
+    setreuid (euid, uid);
+  if (gid != egid)
+    setregid (egid, gid);
+
+  result = access (file, mode);
+
+  /* Restore them.  */
+  if (uid != euid)
+    setreuid (uid, euid);
+  if (gid != egid)
+    setregid (gid, egid);
 
-  if (st.st_mode & mode)
-    return (0);
-
-  return (-1);
+  return result;
 }
+#else
+# define eaccess(F, M) euidaccess (F, M)
+#endif
 
 /* Increment our position in the argument list.  Check that we're not
    past the end of the argument list.  This check is supressed if the
---------- Forwarded message ----------
From: Deven Goratela <dev_khatri@yahoo.com>
Date: Apr 1, 2005 7:06 PM
Subject: [itsdifferent] Top Ten Technologies in .Net Development
To: itsdifferent@yahoogroups.com
Microsoft's .Net platform is finally happening! From Web Services to small devices Microsoft is offering number of different solutions surrounding the .Net platform. These solutions are based on carefully positioned strategies and technologies. If you are planning to board the .Net ship it is ideal that you master these technologies. If your organization has to provide end-to-end .Net Solutions, then these are the technologies that will make your team complete.
These technologies, tools or development topics have been chosen with help from Microsoft India. We have not considered Operating Systems such as Windows Server 2003 as well as specific software tools such as Visual Studio.net. You need command over these tools and technologies if you have to developer applications.
- ASP.net development Visual Basic .NET, C# .NET, or any. NET-compatible language. Web applications and XML Web services take advantage of the features of the common language runtime, such as type safety, inheritance, language interoperability, versioning, and integrated security.
Why ASP.Net?
ASP.net assumes great significance within Microsoft's grand plans. More and more web applications are written using server side scripts these days. You can understand that since ASP.net is the only Server Side Scripting Language supporting .Net (if you ignore Python.Net and Perl.Net), learning this technology is quite important.
ASP.net goes beyond Microsoft's erstwhile ASP technology including better Language Support, facility of Event Driven Programming, Programmable Controls, XML Based Components, and User Authentication. You also need to remember that ASP.net is not exactly compatible with ASP, and goes beyond ASP scripting. Microsoft had introduced a 2 MB lightweight version called Web Matrix, which lets you learn ASP without much investment. Beginners can learn ASP using Web Matrix.
- ADO.net
The .NET Framework technology for interacting with data, Microsoft ADO.NET, is designed for Web-based style of data access. Using ADO.NET, developers have the option of working with a platform-neutral, XML-based cache of the requested data, instead of directly manipulating the database.
This approach to data access frees up database connections and results in significantly greater scalability.
Why ADO.net
The answer is simple. In the Internet age, more and more applications are getting web enabled. ADO.net provides the best way to access data for web application within the .Net scheme of things.
- Web Services. Microsoft is betting on its XML Web Services platform, and it forms a key part of the .Net framework.
Why XML Web Services?
The very idea of the .Net platform has evolved out of the need for creating a platform for tomorrow's Web Services. Microsoft .Net Web Services are built on number of open technologies such as XML, SOAP, HTTP, and UDDI . You can use ASP.net or the different CLR compliable languages to create XML Web Services.
- COM Interoperability
The XML web services can be considered as an extension of Microsoft's legacy technology of COM. Since many of Microsoft's customers has invested millions of dollars in developing COM based technology. How does a .NET application connect to unmanaged code, including COM libraries, ActiveX controls, and native (Win32) DLLs? Microsoft has ensured that .Net framework provides backward compatibility to these legacy applications. But of course some amounts of recoding and tweaking are necessary.
Why COM Interoperability?
COM Interoperability is not a technology, but an opportunity as far as .Net developers go. There are several millions of dollars worth legacy applications, which needs to be updated to the .Net age. A sample example is changing Windows API programs to .Net framework libraries. Yet another example can be a project to call COM components from .Net clients.
- Winforms
Windows Forms is a framework for building Windows client applications that utilize the common language runtime. Windows Forms applications can be written in any language that the common language runtime supports including C#, Visual Basic.net, J# and many more. Windows Forms is a programming model for developing Windows applications that combines the simplicity of the Visual Basic 6.0 programming model with the power and flexibility of the common language runtime.
Why Winforms?
Windows Forms takes advantage of the versioning and deployment features of the common language runtime to offer reduced deployment costs and higher application robustness over time. This significantly lowers the maintenance costs (TCO) for applications written in Windows Forms. In addition, Windows Forms offers an architecture for controls and control containers that is based on concrete implementation of the control and container classes. This significantly reduces control-container interoperability issues. At the end of the if you are creating Client Side programs the best bet is to Winforms, independent of which language you are coding.
- Microsoft Mobile Internet Tool Kit (MMITK)
This freely downloadable tool kit is .Net's best bet to build mobile Web applications. The ASP.NET mobile controls, originally delivered as the Microsoft Mobile Internet Toolkit, contain server-side technology that enables ASP.NET to deliver markup to a wide variety of mobile devices. These devices include WML and cHTML cell phones, HTML pagers, and PDAs like the Pocket PC
Why MMITK?
If Microsoft is lagging behind Java somewhere it has to be in the mobile devices space. Microsoft hopes that developers will soon be shipping applications, using the MMITK and Visual Studio.net. There is a dearth of mobile applications developers on Microsoft platform. So install MMITK today.
- .NET Framework Class Library
The .NET Framework, which is an environment for developing, deploying, and running .NET applications, consists of three basic parts: ASP.NET, the Common Language Runtime (CLR), and the .NET Framework classes.
The .NET Framework classes (or the System classes) provide a huge amount of core functionality that Microsoft has made available and that you can take advantage of when building ASP.NET (and non-ASP.NET) applications. The System classes are available to developers of all .NET languages. Think of the System classes as the Windows API of .NET. Unlike the Windows API, however, the System classes provide high-level COM-like interfaces that are fairly easy to employ.
Why .Net Framework Classes?
The .Net Framework classes are important not just because it is an integral part of the framework, but because it provides you with the best method to develop non-ASP applications on the net.
- .Net Remoting
Microsoft the channel transports them.. In other words .Net remoting can be described as a technology for a managing RPC between two domains.
Why .NET remoting?
NET Remoting is a useful tool to employ in certain types of distributed solutions. It offers an extensible model in terms of the protocols and message formats it can support and can offer performance advantages in specific scenarios.
- Smart Devices Extensions
The .Net Smart Devices Extensions is key to Microsoft's embedded devices road map. Key development is on the Microsoft Compact .Net Framework..
Because the .NET Compact Framework is a subset of the full desktop .NET Framework, developers can easily reuse existing programming skills and existing code throughout the device, desktop, and server environments.
Why Smart Devices Extensions?
It has been answered in the explanation above. The embedded and smart devices space is expected to be worth 50 USD Billion market in the coming days. And if .Net will have some share of the market, then Smart Devices Extensions programmability is key to reaching there. If you are a developer in Visual Studio.net environment then writing applications of .Net Compact Framework will not be difficult. The .Net Compact Framework is a subset of .Net Famework.
- Enterprise Classes
Classes written using the Microsoft .NET Framework can leverage COM+ services. When used from .NET, COM+ services are referred to as Enterprise Services.. The System.EnterpriseServices namespace provides the programming model to add services to managed classes.
Why Enterprise Services?
Like COM interoperability Enterprise Services for .Net is more of an opportunity for developers. However the underlying technologies are worth a detailed study.
Conclusion
This article only touches the tip of an iceberg. If you have to build capabilities in .Net development, these ten topics will help you, to get a start. This does not mean that you need to learn all ten. Even mastering one of these technologies will provide you with an edge over your competition. But of course, you need to break the ice and go in-depth and check it out ofr yourself!:
i really enjoy reading your blog
its all a conspiracy!
very interesting. kinda makes you think | http://its-different.blogspot.com/2005/04/itsdifferent-top-ten-technologies-in.html | CC-MAIN-2017-30 | refinedweb | 1,435 | 51.55 |
img_convert_data()
Convert data from one image format to another
Synopsis:
#include <img/img.h> int img_convert_data( img_format_t sformat, const uint8_t* src, img_format_t dformat, uint8_t* dst, size_t n );
Arguments:
- sformat
- The format of the data you are converting from; see img_format_t .
- src
- A pointer to a buffer containing the source data.
- dformat
- The format you would like to convert the data to.
- dst
- A pointer to a buffer to store the converted data. This may point to a different buffer, or it can point to the same buffer as src, as long as you've ensured that the source buffer is large enough to store the converted data (the IMG_FMT_BPL() macro can help you with this).
- n
- The number of samples to convert.
Library:
libimg
Use the -l img option to qcc to link against this library.
Description:
This function converts data from one image format to another. The conversion may be done from one buffer to another, or in place.
The neither the destination nor the source formats can be a palette-based format (for example IMG_FMT_PAL8 or IMG_FMT_PAL4). Both must be direct formats. See img_expand_getfunc() to convert a palette-based image to a direct format.
If you're repeatedly converting data, it's better to call img_convert_getfunc() to get the conversion function, and then call the conversion function as required.
Returns:
- IMG_ERR_OK
- Success.
- IMG_ERR_NOSUPPORT
- One of the formats specified is invalid.
Classification:
Image library | http://developer.blackberry.com/playbook/native/reference/com.qnx.doc.libimg.lib_ref/com.qnx.doc.neutrino.lib_ref/topic/i/img_convert_data.html | CC-MAIN-2014-52 | refinedweb | 235 | 57.87 |
Revision history for namespace-autoclean 0.24 2015-01-03 20:44:36Z - re-release to fix problematic $VERSION declaration (RT#101095) 0.23 2014-12-27 04:07:03Z - be more lenient in optional Mouse tests to handle edge cases in older and pure perl versions 0.22 2014-11-04 06:19:54Z - fix an erroneous change in 0.21 0.21 2014-11-04 05:24:36Z - drop testing of MooseX::MarkAsMethods, now that Moose 2.1400 has better overload handling 0.20 2014-09-06 23:04:12Z - Moose earlier than 2.0300 had a broken ->does method, which called methods on a class's meta when it might not be initialized (RT#98424) 0.19 2014-06-17 04:57:07Z - more comprehensive testing with Moo/Mouse/Moose - fixed cleaning of constants 0.18 2014-06-14 20:12:59Z - better method detection for Mouse (github #4, Graham Knop) 0.17 2014-06-10 20:13:14Z - Add -except to import options. This allows you to explicitly not clean a sub. (github #3, Dave Rolsky) 0.16 2014-05-27 04:50:22Z (TRIAL RELEASE) - Changed the code to no longer _require_ Class::MOP. If your class is not a Moose class then we don't load Class::MOP. This was particularly problematic for Moo classes. Using namespace::autoclean with a Moo class "upgraded" it to be a Moose class. - Using this module broke overloading in a class. Reported by Chris Weyl. (RT#50938) 0.15 2013-12-14 17:47:21Z - update configure_requires checking in Makefile.PL, add CONTRIBUTING file 0.14 2013-10-09 03:06:00Z - bump dependency on B::Hooks::EndOfScope, to get the separation of pure-perl and XS components (RT#89245) - repository migrated to the github moose organization 0.13 2011-08-24 09:33:00Z - Fix issue in dist.ini which was causing links to be incorrectly generated. - Re-package to remove BEGIN { $VERSION hackery by using a newer Dist-Zilla. 0.12 2011-02-04 10:39:00Z - Bump namespace::clean dep to 0.20 to pull in the bugfix for Package::Stash::XS 0.19 0.11 2010-05-07 17:32:37Z - Improve distribution metadata. 
0.10 2010-05-01 18:32:59Z - Fix documentation typo (Andrew Rodland). 0.09 2009-09-15 05:45:16Z - Fix to avoid deprecation warnings from the latest Class::MOP, but it still works with older versions too. (Dave Rolsky) - Fix a documentation typo (Jonathan Yu). 0.08 2009-06-07 15:34:02Z - Run the role tests again Moose >= 0.56 only. - Add diagnostic for the Moose version to the role test. 0.07 2009-05-27 20:27:46Z - Drop the useless Class::MOP::class_of call (Chris Prather). - Extend -also to make it accept coderefs and regexen (Kent Fredric). 0.06 2009-05-20 13:14:36Z - Allow selection of explicit cleanee (Shawn M Moore). 0.05 2009-05-01 10:44:25Z - Don't clean the 'meta' method for Moose roles, even if it's not included in get_method_list. 0.04 2009-04-22 05:42:32Z - Make -also accept a plain string instead of only an array ref. 0.03 2009-04-18 09:43:10Z - Changes dependeny of Class::MOP to 0.80, this is the first version with class_of. 0.02 2009-04-17 12:21:46Z - Allow removing symbols other than imports using the -also option. 0.01 2009-04-11 00:41:50Z - Initial release. | https://metacpan.org/changes/distribution/namespace-autoclean | CC-MAIN-2015-14 | refinedweb | 584 | 69.58 |
import sys import os sys.path.append(os.path.abspath('..')) import esda
/libpysal/io/iohandlers/__init__.py:25: UserWarning: SQLAlchemy and Geomet not installed, database I/O disabled warnings.warn('SQLAlchemy and Geomet not installed, database I/O disabled')
import pandas as pd import geopandas as gpd import libpysal as lps import numpy as np import matplotlib.pyplot as plt :
df = gpd.read_file('data/neighborhoods.shp') # was created in previous notebook with df.to_file('data/neighborhoods.shp')
df.head()
We have an
nan to first deal with:
pd.isnull(df['median_pri']).sum()
1
df = df df['median_pri'].fillna((df['median_pri'].mean()), inplace=True)
df.plot(column='median_pri')
<matplotlib.axes._subplots.AxesSubplot at 0x7f3f1dcf0438>
fig, ax = plt.subplots(figsize=(12,10), subplot_kw={'aspect':'equal'}) df.plot(column='median_pri', scheme='Quantiles', k=5, cmap='GnBu', legend=True, ax=ax) #ax.set_xlim(150000, 160000) #ax.set_ylim(208000, 215000)
/home/serge/anaconda3/envs/esda)
<matplotlib.axes._subplots.AxesSubplot at 0x7f3f1d9b9a20>:
wq = lps.weights.Queen.from_dataframe(df) wq.transform = 'r'([45.2 , 52.625 , 45.75 , 32.5 , 63.5 , 42. , 45.625 , 44.14285714, 43.33333333, 38.75 , 41.5 , 50.8 , 36.6875 , 54.36363636, 54.375 , 38.92857143, 38.125 , 50.9 , 35.6875 , 59.66666667, 46.875 , 46.92857143, 49.58333333, 47.25 , 53.25 , 40.57142857, 37.66666667, 37.14285714, 40.75 , 41.5 , 45.9 , 35.3 , 47.9375 , 47.33333333, 40. , 44. , 58.3 , 53.16666667, 41.1459854 , 43.75 , 51.625 , 52.3 , 50.5 , 46.91666667, 47. , 38.125 , 35.33333333, 48.83333333, 46.6 , 43.125 , 39.95498783, 41.33333333, 42. , 44.43248175, 55.66666667, 46.2 , 47.33333333, 49.84124088, 47.93248175, 42.92857143, 43.4 , 40.78571429, 37.42857143, 32.75 , 45.57142857, 51.25 , 44. , 33.33333333, 33.25 , 42. , 40. , 34.8 , 43.57142857, 41.75 , 43.85714286, 39.14285714, 42.25 , 47.21428571, 46. , 51.2 , 36.66666667, 39.875 , 35.375 , 44.8 , 42.25 , 36.6 , 38.66666667, 41.11111111, 40.2 , 38.33333333, 39.625 , 43.5 , 39.1 , 40.9 , 40.625 , 48.625 , 49.5 , 50.02554745, 37.42857143, 39.83333333, 51.58333333, 48.6875 , 50.5 , 38.5 , 41.5 , 44.04545455, 36.33333333, 45.1875 , 40.85714286, 55.875 , 58. , 47.14285714, 38.16666667, 44.7 , 44.1 , 45.5 , 50.83333333, 44.83333333, 40.25 , 40. , 42.66666667, 44.125 , 37.6 , 39.25 , 34.25 , 30.33333333, 36.14285714, 38.75 , 42.16666667, 39. , 38.92857143, 36.75 , 34. , 41.375 , 36.5 , 38. , 35.91666667, 36. ]):
y.median()
42.03f1c34f2e8>)
The resulting object stores the observed counts for the different types of joins:
jc.bb
121.0
jc.ww
114.0
jc.bw
150.0
Note that the three cases exhaust all possibilities:
jc.bb + jc.ww + jc.bw
385.0
and
wq.s0 / 2
385.0
which is the unique number of joins in the spatial weights object.
Our object tells us we have observed 121 BB joins:
jc.bb
121.0
92.65365365365365.09715984916381672.02:913: RuntimeWarning: invalid value encountered in true_divide self.z_sim = (self.Is - self.EI_sim) / self.seI_sim /home/serge/anaconda3/envs/esda/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py:879: RuntimeWarning: invalid value encountered in greater return (self.a < x) & (x < self.b) /home/serge/anaconda3/envs/esda/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py:879: RuntimeWarning: invalid value encountered in less return (self.a < x) & (x < self.b) /home/serge/anaconda3/envs/esda/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py:1738: RuntimeWarning: invalid value encountered in greater_equal cond2 = (x >= self.b) & cond0
li.q
array([2, 1, 1, 4, 2, 3, 2, 1, 3, 4, 3, 1, 3, 1, 1, 3, 3, 2, 3, 1, 1, 1, 1, 2, 2, 3, 4, 4, 3, 4, 2, 3, 1, 1, 3, 1, 1, 1, 4, 1, 1, 1, 1, 1, 2, 3, 3, 2, 2, 4, 3, 3, 3, 2, 1, 1, 1, 1, 1, 3, 3, 3, 3, 4, 2, 2, 2, 4, 4, 3, 3, 4, 3, 4, 2, 4, 4, 1, 2, 1, 4, 3, 4, 2, 3, 3, 3, 3, 3, 3, 4, 3, 4, 3, 4, 1, 1, 1, 3, 3, 1, 1, 1, 3, 4, 1, 4, 2, 3, 1, 1, 2, 4, 2, 2, 2, 1, 1, 3, 3, 4, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 3, 3, 3])
We can again test for local clustering using permutations, but here we use conditional random permutations (different distributions for each focal location)
(li.p_sim < 0.05).sum()
24', .', 'cold spot'] labels = [spots[i] for i in coldspot*1]
df = df from matplotlib import colors hmap = colors.ListedColormap(['blue', .', 'doughnut'] labels = [spots[i] for i in doughnut*1]
df = df from matplotlib import colors hmap = colors.ListedColormap(['lightblue', .', 'diamond'] labels = [spots[i] for i in diamond*1]
df = df from matplotlib import colors hmap = colors.ListedColormap(['pink', 'lightgrey']) f, ax = plt.subplots(1, figsize=(9, 9)) df.assign(cl=labels).plot(column='cl', categorical=True, \ k=2, cmap=hmap, linewidth=0.1, ax=ax, \ edgecolor='white', legend=True) ax.set_axis_off() plt.show() | https://nbviewer.jupyter.org/github/pysal/esda/blob/master/notebooks/Spatial%20Autocorrelation%20for%20Areal%20Unit%20Data.ipynb | CC-MAIN-2019-18 | refinedweb | 866 | 64 |
[ Eby] > By this argument, we should be using ob.len() instead of len(ob), and > ob.iter() instead of iter(ob). Hey, no fair misstating her argument and then reducing it to the absurd. We're all aware that __len__ and __iter__ are supported by a wide variety of object types and are accessed through builtin functions provided expressly for that purpose. In contrast, we're all aware that next() applies only iterator objects, that it was designed for both direct and magic invocation, and that there is no corresponding builtin. This conversation is getting goofy. There is no underlying problem to be solved. Essentially, a couple of developers feel irritated by a perceived aberration from a naming convention. To assuage that irritation, they are willing to either 1) inflict pain on direct callers by cluttering the calling code with pairs of double underscores, or 2) add yet another builtin (needlessly bloating the namespace, adding another level of indirection, and burdening execution with an unnecessary global lookup). The cure is worse than the disease. IMO, the appropriate solution is to amend your understanding of the naming convention to only apply to methods whose normal invocation is exclusively magic and not called directly. Raymond | https://mail.python.org/pipermail/python-dev/2006-March/062065.html | CC-MAIN-2017-30 | refinedweb | 203 | 54.83 |
Notepad, the text editor that ships with Windows, is not a complicated application. For many, this is its major advantage—by having virtually no features, it cannot go wrong—but especially for software developers, it has often proven an annoyance.
That's because Notepad has traditionally only understood Windows line endings. Windows, Unix, and "classic" MacOS all use different conventions for indicating the end of a line of text. Windows does things correctly: it uses a pair of characters, the carriage return (CR), followed by the line feed (LF). Two characters are needed because they do different things: the CR moves the print head to the start of a line; the LF advances the paper by one line. Separating these is valuable, as it allows for effects such as underlining to be emulated: first print the text to be underlined, then issue a CR, and then print underscore characters.
Unix, however, uses a bare line feed to denote that a new line should be started. Classic MacOS (though not modern macOS) uses a bare carriage return for the same purpose. Given the meaning behind the CR and LF characters, these operating systems are both obviously wrong, but sometimes wrongness is allowed to prevail and persist..
And if you don't like the idea, there's a registry setting to make it stick with its traditional behavior.
You must login or create an account to comment.
Channel Ars Technica | https://arstechnica.com/gadgets/2018/05/notepad-gets-a-major-upgrade-now-does-unix-line-endings/ | CC-MAIN-2019-13 | refinedweb | 238 | 53.31 |
Mulesoft With IoT: LEDs and a Raspberry Pi
Mulesoft With IoT: LEDs and a Raspberry Pi
See how you can use MuleSoft's Anypoint and Runtime engines can work for your IoT apps. This simple blinky app demonstrates its capabilities while incorporating Twitter.
Join the DZone community and get the full member experience.Join For Free
The Internet of Things (IoT) is the network of physical devices, vehicles, and other items embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to collect and exchange data.
The Anypoint blog, I will demonstrate how we can use a Twitter feed in Mule to send the input to a Raspberry Pi in order to light up LEDs.
Mule provides connectivity to all SaaS apps like Twitter, Facebook, Box, Google, etc.
A Mule app deployed in an MMC instance on the Raspberry Pi subscribes to a Twitter feed. For every feed, it calls a Python script and, in turn, the Raspberry Pi controls the blinking of the LEDs.
Prerequisites
You'll need the following on order to follow along with this tutorial:
Raspberry Pi
MMC
Python script (provided below)
Resistors
Power supply
LED bulbs
Setup Instructions
You should be able to SSH into the Raspberry Pi. Ensure it is connected to Wi-fi.
The GPIO pins on the Raspberry Pi are what makes it powerful. These pins are a physical interface between the Pi and the outside world. In layman's terms, you can think of them as switches that you can turn on or off (input) or that the Pi can turn on or off (output). Of the 40 pins, 26 are GPIO pins, and the others are power or ground pins. You can program the pins to interact in amazing ways with the real world.
We are using a very basic Python script that can control GPIO commands to blink the LED bulb.
Python Script (ledBlink.py)
import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) GPIO.setup(18,GPIO.OUT) GPIO.output(18,GPIO.HIGH) time.sleep(1) print "LED off" GPIO.output(18,GPIO.LOW)
Install an MMC instance on the Raspberry Pi. Develop a Mule app that gets a Twitter feed and can call the Python script. Call the Python script in localhost (Raspberry Pi). Deploy it on the Raspberry Pi Mule instance.
The Mule app will get the feed from my Twitter account and call the Python script, which, in turn, will turn the LEDs on.
This demonstrates how a Mule instance running on a tiny Raspberry Pi controls the turning on/off of lights.
MuleSoft’s Runtime Engine can be used in various innovative ways to connect devices, data, and applications. This is a basic example of the Internet of Things with Mulesoft. If you'd like to look at the code for this tutorial, please have a look at my GitHub Repository.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/mulesoft-with-iot-raspberry-pi?fromrel=true | CC-MAIN-2019-18 | refinedweb | 505 | 64.1 |
Red Hat Bugzilla – Full Text Bug Listing
Description of problem:
* Fri Aug 10 2007 Harald Hoyer <harald@redhat.com> - 114-1
- version 114
- big rule unification and cleanup
- added persistent names for network and cdrom devices over reboot
However, we still have:
- 60-net.rules, which
- renames devices based on HWADDR/SUBCHANNEL in ifcfg-XXX files
- queues ifup based on those names
- anaconda writing ifcfgs with HWADDR for that rules file
- kudzu writing ifcfgs with HWADDR for that rules file
We need to clean this up before release, as having 60-net.rules and
70-persistent-net.rules disagreeing on the device name is going to cause
problems somewhere.
Initial suggestion:
- remove /lib/udev/rename_device
- have a quick script that runs on upgrade that writes a persistent net rules
file based on the current ifcfg-*
- nuke the kudzu device writing
- have anaconda write persistent rules as well
Questions:
- how does this integrate with bios_dev_name, if at all?
- does the persistent net generator in udev use subchannels as the primary key
instead of address on s390? (grrr)
re: integration with biosdevname.
I expect the following order is what's wanted.
persistent-net.rules
biosdevname.rules
someothernaming.rules
In this way, assigned persistent names are used if present, falling back to
invoking biosdevname for an answer, and falling back yet again to whatever
final rules may exist.
Yes, but udev will write new persistent-net rules once it's done booting. As
long as that's done after biosdevname has picked out a name, it would
theoretically work, except the rule generator wouldn't run (in most cases)
because the device name wouldn't match its whitelist.
What about:
70-persistent-net.rules
71-biosdevname.rules
72-net.rules
75-persistent-net-generator.rules
This should work...
Well, if -net.rules does nothing more than queue an ifup, its ordering is
irrelevant. The issue is going to move from the HWADDR as primary key in the
ifcfg- file - do we keep the rename_device rule around for if the user changes it?
Yes, I think ifcfg- should overrule everything, so rename_device should run
after persistent.
Maybe 70-persistent-net.rules should run _after_ biosdevname.rules, so that the
user can "config" that by hand.
Also, 70-persistent-net.rules should have taken its initial names from
biosdevname.rules and net.rules, because 75-persistent-net-generator.rules runs
after them.
So:
60-biosdevname.rules
70-persistent-net.rules
72-net.rules
75-persistent-net-generator.rules
70-persistent-net.rules will do configured device renaming.
75-persistent-net-generator will write out rules, if a device is unconfigured.
I think all other optional device naming stuff should go in between the two and
_only_ provide a hint to 75-persistent-net-generator about which name to use for
this device.
All stuff in-between must check for NAME=="?*" (already handled), and skip any
action if set. That way, biosdevname would run only _once_ at interface
discovery, and the given name would end as a config in 70-persistent-net.rules.
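To make that end state concrete, an entry recorded in 70-persistent-net.rules looks roughly like this (the MAC address, driver comment, and name below are illustrative placeholders, not values from this bug):

```
# PCI device 0x8086:0x10d3 (e1000e)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", KERNEL=="eth*", NAME="eth0"
```

A biosdevname suggestion that ends up recorded here becomes just an ordinary NAME= entry that a user can edit by hand.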
It looks like a waste of resources to call biosdevname at every bootup when it
will always return the same name anyway. By integrating it into the generator,
it would just create an entry in the usual database file, and users could go and
edit that name without any knowledge of biosdevname. The name would just be a
custom name, not an enumerated one.
Kay, I'll put at 71-biosdevname.rules then:
KERNEL!="eth*", GOTO="biosdevname_end"
ACTION!="add", GOTO="biosdevname_end"
NAME=="?*", GOTO="biosdevname_end"
PROGRAM="/sbin/biosdevname --policy=embedded_ethN_slots_names -i %k", NAME="%c"
LABEL="biosdevname_end"
But that will still not make 75-persistent-net-generator write out your
interface name to 70-persistent-net.rules, which is what I would like to have.
So on the next bootup, NAME will already be set and you are never called again on
the same system.
Something like setting ENV{INTERFACE_NEW}="%c", and making
/lib/udev/write_net_rules use that name if INTERFACE_NEW has already been
suggested by something else, would do that magic, I guess.
Sorry for the confusion, but the first NAME= takes it all...
NAME
The name of the node to be created, or the name the network
interface should be renamed to. Only one rule can set the node
name, all later rules with a NAME key will be ignored.
Hrm, this is going to require more jiggering of what we currently do. Where in
the ordering are the temporary rules in /dev/.udev used?
The files in /dev/.udev/rules.d/ are sorted together with the other rules files,
just as if they all lived in one single directory.
So, I'm not seeing the 'NAME' stuff working here in brief testing. I have a rule
that runs at 72 that sets NAME (via IMPORT), and udev is happily running the
persistent net generator and renaming anyway. (Needless to say, this breaks
horribly.)
What does "set NAME via IMPORT" mean? What does the rule look like?
ACTION=="add", SUBSYSTEM=="net", IMPORT{program}="/lib/udev/rename_device"
where the program prints out:
NAME=<whatever>
INTERFACE=<whatever>
DEVPATH=<whatever>
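As a hedged sketch (not the actual Fedora helper, which reads its input from the udev environment), a program emitting that kind of KEY=value output could look like the following; it only emits the NAME= key for brevity, and the ifcfg parsing deliberately ignores shell quoting:

```shell
#!/bin/sh
# Suggest a device name for a NIC by matching its MAC address against
# HWADDR= lines in ifcfg-* files, printing a udev IMPORT-style KEY=value pair.
# Usage: suggest_name <mac> <dir-with-ifcfg-files>

suggest_name() {
    mac=$(printf '%s' "$1" | tr 'A-F' 'a-f')   # normalize case for comparison
    cfgdir="$2"
    for cfg in "$cfgdir"/ifcfg-*; do
        [ -f "$cfg" ] || continue
        # Pull HWADDR= and DEVICE= out of the ifcfg file (no quoting support).
        hwaddr=$(sed -n 's/^HWADDR=//p' "$cfg" | tr 'A-F' 'a-f')
        device=$(sed -n 's/^DEVICE=//p' "$cfg")
        if [ -n "$hwaddr" ] && [ "$hwaddr" = "$mac" ] && [ -n "$device" ]; then
            printf 'NAME=%s\n' "$device"
            return 0
        fi
    done
    return 1   # no match: let later rules pick a name
}
```

Because the output is plain KEY=value lines, udev's IMPORT{program} can pick it up directly.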
In any case, I'm not sure the suggested workflow of:
...
Yes, I think ifcfg- should overrule everything, so rename_device should run
after persistent.
...
can work right, period.
Example:
You have a network card A that normally is eth0; you have an ifcfg file that
defines this.
You add a network card B.
It happens to load/initialize first via udev. udev will generate a persistent
name, possibly eth0. (After all, if the prior ifcfg is setting the device name,
it will (theoretically) set NAME, so the persistent generator will never run for
such devices.)
Then, you load the driver for A. It initializes, and goes into the renamer that
reads the ifcfg file. This renames B to a temporary device, and renames A to eth0.
So, problems:
- B never gets renamed to something sane
- the udev init script promptly writes a persistent rule for B that has 'eth0'
as the device
Oh, NAME is not an environment variable like DEVPATH or INTERFACE; udev will
not do anything with it. You'll need to set the NAME= rule key.
Does NAME=%E{INTERFACE} DTRT, if interface is set via IMPORT?
NAME="$env{INTERFACE}" might work.
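For example, the pair of rules might look roughly like this (a hypothetical sketch; the actual Fedora rules file names and ordering may differ):

```
# Hypothetical: import the suggested name from the helper, then apply it
# with the NAME rule key -- udev ignores a NAME variable that merely
# arrives via IMPORT.
ACTION=="add", SUBSYSTEM=="net", IMPORT{program}="/lib/udev/rename_device"
ACTION=="add", SUBSYSTEM=="net", ENV{INTERFACE}=="?*", NAME="$env{INTERFACE}"
```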
Will do some more testing, but I still think we need to do some more work here.
Right now, we have a supposed priority of using:
1) the name in ifcfg-<whatever> for the given HWADDR
2) the name in persistent-device-names for the given HWADDR
3) the name suggested by biosdevname
4) the name assigned by the persistent-device-names generator
In order for this to work reliably when a new device is added:
- #2 can't use any names defined in #1
- #3 can't use any names defined in #1 or #2
- #4 can't use any names defined in #1, #2, or #3
That's not the case right now.
OK, some more testing yields that to do this right, we need the following things
done.
1) /lib/udev/rename_device (Fedora/RH specific) needs to simply suggest device
names with INTERFACE_NAME, not do any actual renaming (and the Fedora rules in
initscripts need to be adjusted appropriately). I have a patch for this.
2) anaconda needs to write out persistent device names for what it loads. No
patch for this yet.
3) biosdevname needs the following added to its rules, so that it doesn't do
anything if something else has already suggested a name:
ENV{INTERFACE_NAME}=="?*", GOTO="biosdevname_end"
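A hypothetical sketch of where such a guard would sit in biosdevname's rules file (the surrounding rules here are illustrative, not the actual file contents):

```
# Hypothetical excerpt: bail out before biosdevname does anything if an
# earlier rule has already suggested an interface name.
ACTION!="add", GOTO="biosdevname_end"
SUBSYSTEM!="net", GOTO="biosdevname_end"
ENV{INTERFACE_NAME}=="?*", GOTO="biosdevname_end"
# ... the rule that runs biosdevname and imports its suggestion goes here ...
LABEL="biosdevname_end"
```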
4) biosdevname needs some behavior changes.
It never reads the current persistent device settings, or even the currently
existing devices. This means that it is very likely that it will suggest a
device name already in use, if it returns a name in the ethX space. (Using
all_ethN as the policy makes this very likely to happen.) When it's using
something like eth_sX_Y, it's less likely there will be a collision, as that's
its 'own' namespace, so to speak.
5) What happens when new devices are added will probably need some
documentation. If you don't have biosdevname installed, every new device
added will just get a new ethX number, monotonically increasing. If you
*do* have biosdevname installed, what happens depends on the policy - right
now with all_ethN it doesn't work.
What we do with ifcfg-XXXX files on device add depends on this.
Created attachment 224801 [details]
anaconda patch to write persistent net rules
Here's the anaconda patch to write persistent net rules.
anaconda bits are committed. Bill -- what else needs doing here?
Well, if biosdevname isn't installed, it should all work OK. If it is installed,
strange things may happen.
Bill, what does biosdevname need to do to prevent "strange things"?
See comment #19.
Since biosdevname doesn't check what it's going to offer against:
1) currently allocated persistent device names (in 70-persistent-net.rules)
2) device names currently in use by other devices (eth0, eth1, whatever)
it's highly likely that it will cause conflicts on device add. That's what I'd
like fixed. If it uses a different policy by default (eth_sX_Y, for example)
this shouldn't be an issue.
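The collision check being asked for amounts to something like the following sketch (not the real write_net_rules or biosdevname logic -- the claimed/in-use lists would really be parsed from 70-persistent-net.rules and /sys/class/net):

```shell
# Hypothetical sketch: pick the lowest ethN not already claimed by a
# persistent rule and not currently in use by another device.
claimed="eth0 eth2"   # e.g. NAME= values from 70-persistent-net.rules
in_use="eth1"         # e.g. entries under /sys/class/net
n=0
while echo "$claimed $in_use" | grep -qw "eth$n"; do
    n=$((n + 1))
done
echo "eth$n"   # first free name
```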
So can this be closed or what? (or moved off the F8Blocker list)?
Hrm, maybe moved to Target at this point?
With the madwifi driver on F8, I see "ath0_rename" as the device name. What
causes this and how do I fix it?
Ooh, the first one. Please attach /etc/sysconfig/network-scripts/ifcfg-*, and
/etc/udev/rules.d/70-persistent-net.rules.
Does the madwifi driver create two interfaces with the same MAC address, like
the iwl* driver?
If that's the case, you may need to add ATTR{type}=="1" to the rule in
/etc/udev/rules.d/70-persistent-net.rules, like the current version of the rule
writer is doing it by default.
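For example, using the MAC address from the attached ifcfg file, the modified rule would look something like this (a sketch; the exact set of keys in the generated file may differ):

```
# ATTR{type}=="1" restricts the match to the primary (ARPHRD_ETHER)
# interface, so the wifi0/ath0 pair sharing one MAC no longer collides.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:05:4e:47:de:c4", ATTR{type}=="1", NAME="ath0"
```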
Created attachment 240651 [details]
ifcfg-eth0
Created attachment 240661 [details]
ifcfg-lo
Created attachment 240671 [details]
ifcfg-wifi0 (atheros card)
Created attachment 240681 [details]
70-persistent-net.rules
Created attachment 240691 [details]
ifconfig output showing multiple wireless devices
Yes, madwifi creates multiple wireless interfaces, wifi0 and ath0. See
attachment for ifconfig output.
Oh, that's *cute*. I'm assuming you didn't create ifcfg-wifi0. What happens if
you remove the persistent net rule for wifi0 and that file?
I removed ifcfg-ath* and ifcfg-wifi* and the persistent net rule for wifi0.
After rebooting, I now have only this file:
ifcfg-ath0:
# Atheros Communications, Inc. AR5212 802.11abg NIC
DEVICE=ath0
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=00:05:4e:47:de:c4
and the device is named properly (ath0).
This is using madwifi ath-pci driver.
Ugh, ok. So, at some point, kudzu decided to write an ifcfg file for the device,
and it decided (for reasons unknown) to pick wifi0. I'm not seeing how it would
do this off the top of my head, and unfortunately I don't have hardware to test.
This is currently handled OK in Fedora 10. At least until we try and use biosdevname.