insq - insert a message into a queue
#include <sys/stream.h>

int insq(queue_t *q, mblk_t *emp, mblk_t *nmp);
Architecture independent level 1 (DDI/DKI).
q — Pointer to the queue containing message emp.

emp — Enqueued message before which the new message is to be inserted. mblk_t is an instance of the msgb(9S) structure.

nmp — Message to be inserted.
The insq() function inserts a message into a queue. The message to be inserted, nmp, is placed in q immediately before the message emp. If emp is NULL, the new message is placed at the end of the queue. The queue class of the new message is ignored. All flow control parameters are updated. The service procedure is enabled unless QNOENB is set.
The insq() function returns 1 on success, and 0 on failure.
The insq() function can be called from user, interrupt, or kernel context.
This routine illustrates the steps a transport provider may take to place expedited data ahead of normal data on a queue (assume all M_DATA messages are converted into M_PROTO T_DATA_REQ messages). Normal T_DATA_REQ messages are just placed on the end of the queue (line 16). However, expedited T_EXDATA_REQ messages are inserted before any normal messages already on the queue (line 25). If there are no normal messages on the queue, bp will be NULL and we fall out of the for loop (line 21). insq acts like putq(9F) in this case.
 1  #include <sys/stream.h>
 2  #include <sys/tihdr.h>
 3
 4  static int
 5  xxxwput(queue_t *q, mblk_t *mp)
 6  {
 7      union T_primitives *tp;
 8      mblk_t *bp;
 9      union T_primitives *ntp;
10
11      switch (mp->b_datap->db_type) {
12      case M_PROTO:
13          tp = (union T_primitives *)mp->b_rptr;
14          switch (tp->type) {
15          case T_DATA_REQ:
16              putq(q, mp);
17              break;
18
19          case T_EXDATA_REQ:
20              /* Insert code here to protect queue and message block */
21              for (bp = q->q_first; bp; bp = bp->b_next) {
22                  if (bp->b_datap->db_type == M_PROTO) {
23                      ntp = (union T_primitives *)bp->b_rptr;
24                      if (ntp->type != T_EXDATA_REQ)
25                          break;
26                  }
27              }
28              (void)insq(q, bp, mp);
29              /* End of region that must be protected */
30              break;
            . . .
31          }
32      }
33  }
When using insq(), you must ensure that the queue and the message block are not modified by another thread at the same time. You can achieve this either by using STREAMS functions or by implementing your own locking.
putq(9F), rmvq(9F), msgb(9S)
Writing Device Drivers in Oracle Solaris 11.4
STREAMS Programming Guide
If emp is non-NULL, it must point to a message on q or a system panic could result.
Getting Started with Identity Vault in @ionic/vue
In this tutorial we will walk through the basic setup and use of Ionic's Identity Vault in an @ionic/vue application.

src/services/useVault.ts: A composition API function that abstracts the logic associated with using Identity Vault. The functions and reactive variable exported here model what might be done in a real application.
src/views/Home.vue: the main view of the application. To generate the application used in this tutorial, run: ionic start getting-started-iv-vue blank --type=vue
Now that the application has been generated, let's add the iOS and Android platforms.

Open the capacitor.config.ts file and change the appId to something unique like io.ionic.gettingstartedivvue:
import { CapacitorConfig } from '@capacitor/cli';

const config: CapacitorConfig = {
  appId: 'io.ionic.gettingstartedivvue',
  appName: 'getting-started-iv-vue',
  webDir: 'dist',
};

export default config;

Next, create the file src/services/useVault.ts. Within this file we will define the vault as well as create a composition function that abstracts all of the logic we need in order to interact with the vault:
import { Capacitor } from '@capacitor/core';
import {
  BrowserVault,
  DeviceSecurityType,
  IdentityVaultConfig,
  Vault,
  VaultType,
} from '@ionic-enterprise/identity-vault';
import { ref } from 'vue';

const config: IdentityVaultConfig = {
  key: 'io.ionic.gettingstartedivvue',
  type: VaultType.SecureStorage,
  deviceSecurityType: DeviceSecurityType.None,
};

const vault = Capacitor.isNativePlatform()
  ? new Vault(config)
  : new BrowserVault(config);

const key = 'sessionData';
const session = ref<string | null | undefined>();

export default function useVault() {
  const setSession = async (value: string): Promise<void> => {
    session.value = value;
    await vault.setValue(key, value);
  };

  const restoreSession = async () => {
    const value = await vault.getValue(key);
    session.value = value;
  };

  return {
    session,
    setSession,
    restoreSession,
  };
}
Let's look at this file section by section:
The first thing we do is define the configuration for our vault. The key gives the vault a name. The other properties provide a default behavior for our vault. As we shall see later, the configuration can be changed as we use the vault.
We then create the vault. Note that we are using the BrowserVault class when the application is running on the web. The BrowserVault allows us to continue to use our normal web-based development workflow.
note
The BrowserVault class allows developers to use their normal web-based development workflow. It does not provide locking or security functionality.
Next, we define a key for storing data. All data within the vault is stored as a key-value pair. You can store multiple key-value pairs within a single vault.
We also create a reactive property that is used to reflect the current session data to the outside world.
const key = 'sessionData';
const session = ref<string | null | undefined>();
Finally, we create a composition function that returns our session as well as defining a couple of functions that are used to set and restore our session:
export default function useVault() {
  const setSession = async (value: string): Promise<void> => {
    session.value = value;
    await vault.setValue(key, value);
  };

  const restoreSession = async () => {
    const value = await vault.getValue(key);
    session.value = value;
  };

  return {
    session,
    setSession,
    restoreSession,
  };
}
Now that we have the vault in place, let's switch over to src/views/Home.vue and implement some simple interactions with the vault. Here is a snapshot of what we will change:
- Replace the "container" div with a list of form controls
- Add a setup() function
- Remove the existing styling
Update the file to match the following code:
<template>
  <ion-page>
    <ion-header :translucent="true">
      <ion-toolbar>
        <ion-title>Blank</ion-title>
      </ion-toolbar>
    </ion-header>

    <ion-content :fullscreen="true">
      <ion-header collapse="condense">
        <ion-toolbar>
          <ion-title size="large">Blank</ion-title>
        </ion-toolbar>
      </ion-header>

      <ion-list>
        <ion-item>
          <ion-label position="floating">Enter the "session" data</ion-label>
          <ion-input v-model="data"></ion-input>
        </ion-item>
        <ion-item>
          <div style="flex: auto">
            <ion-button expand="block" @click="setSession(data)">
              Set Session Data
            </ion-button>
          </div>
        </ion-item>
        <ion-item>
          <div style="flex: auto">
            <ion-button expand="block" @click="restoreSession">
              Restore Session Data
            </ion-button>
          </div>
        </ion-item>
        <ion-item>
          <ion-label>
            <div>Session Data: {{ session }}</div>
          </ion-label>
        </ion-item>
      </ion-list>
    </ion-content>
  </ion-page>
</template>

<script lang="ts">
  import {
    IonButton,
    IonContent,
    IonHeader,
    IonInput,
    IonItem,
    IonLabel,
    IonList,
    IonPage,
    IonTitle,
    IonToolbar,
  } from '@ionic/vue';
  import { defineComponent, ref } from 'vue';
  import useVault from '@/services/useVault';

  export default defineComponent({
    name: 'Home',
    components: {
      IonButton,
      IonContent,
      IonHeader,
      IonInput,
      IonItem,
      IonLabel,
      IonList,
      IonPage,
      IonTitle,
      IonToolbar,
    },
    setup() {
      const data = ref('');
      return { ...useVault(), data };
    },
  });
</script>

<style scoped></style>
note
Throughout the rest of this tutorial only new markup or required code will be provided. It is up to you to make sure that the correct imports and component definitions are added.
Notice that this view is returning the full return value of the useVault() composition function. This is just being done for convenience. Normally, you would use destructuring to just grab the bits that are needed in any component or service.
Build the application and run it on an iOS and/or Android device. You should be able to enter some data and store it in the vault by clicking "Set Session Data." Next, let's add the ability to lock and unlock the vault. Add the following functions to the useVault() composition function:
const lockVault = () => {
  vault.lock();
};

const unlockVault = () => {
  vault.unlock();
};
Remember to return the references to the functions:
return {
  session,
  lockVault,
  unlockVault,
  setSession,
  restoreSession,
};
We can then add a couple of buttons to our Home.vue component file:
<ion-item>
  <div style="flex: auto">
    <ion-button expand="block" @click="lockVault">Lock Vault</ion-button>
  </div>
</ion-item>
<ion-item>
  <div style="flex: auto">
    <ion-button expand="block" @click="unlockVault">Unlock Vault</ion-button>
  </div>
</ion-item>

In a real application, you would typically only allow the user to proceed if they unlock the vault.
In our case, we will just clear the session and have a flag that we can use to visually indicate if the vault is locked or not. We can do that by using the vault's onLock event.
Add the following code to src/services/useVault.ts before the start of the useVault() function:
const vaultIsLocked = ref(false);

vault.onLock(() => {
  vaultIsLocked.value = true;
  session.value = undefined;
});

vault.onUnlock(() => (vaultIsLocked.value = false));
Update the return value from useVault() to include the vaultIsLocked reactive value:
return {
  session,
  vaultIsLocked,
  lockVault,
  unlockVault,
  setSession,
  restoreSession,
};
Finally, update Home.vue to display the vaultIsLocked value along with the session:
<ion-item>
  <ion-label>
    <div>Session Data: {{ session }}</div>
    <div>Vault is Locked: {{ vaultIsLocked }}</div>
  </ion-label>
</ion-item>

Identity Vault's Device API gives access to device-level capabilities such as the privacy screen, which hides the app's contents while it is in the background. Let's add a checkbox to control this setting in the src/views/Home.vue file.
First, import the Device API and add isPlatform to the import from @ionic/vue:
import {
  ...
  isPlatform
} from '@ionic/vue';
import { Device } from '@ionic-enterprise/identity-vault';
Next, add the following code to the setup() function:
const isMobile = isPlatform('hybrid');
const privacyScreen = ref(false);

if (isMobile) {
  Device.isHideScreenOnBackgroundEnabled().then(x => (privacyScreen.value = x));
}

const privacyScreenChanged = (evt: { detail: { checked: boolean } }) => {
  Device.setHideScreenOnBackground(evt.detail.checked);
};
Remember to add isMobile, privacyScreen, and privacyScreenChanged to the return value of setup() so we can use those items in our template:
return { ...useVault(), data, isMobile, privacyScreen, privacyScreenChanged };
Finally, we can add the checkbox to our template:
<ion-item>
  <ion-label>Use Privacy Screen</ion-label>
  <ion-checkbox :disabled="!isMobile" :checked="privacyScreen" @ionChange="privacyScreenChanged($event)"></ion-checkbox>
</ion-item>
Build the app and play around with changing the check box and putting the app in the background. In most applications, you would leave this value set by default, but if you were going to change it, you would most likely just set it once at startup. Next, let's give the user control over how the vault locks, adding this capability via the useVault() composition function.
First, let's add a reactive property to src/services/useVault.ts just like the other ones that exist:
const lockType = ref<
  'NoLocking' | 'Biometrics' | 'SystemPasscode' | undefined
>();
Next, we will need to watch for changes and update the configuration when they occur. Since we only need to set up a single watch to do this, we should put it outside the useVault() function, just like our reactive properties and our onLock and onUnlock event handlers.
const setLockType = (
  lockType: 'NoLocking' | 'Biometrics' | 'SystemPasscode' | undefined,
) => {
  let type: VaultType;
  let deviceSecurityType: DeviceSecurityType;

  if (lockType) {
    switch (lockType) {
      case 'Biometrics':
        type = VaultType.DeviceSecurity;
        deviceSecurityType = DeviceSecurityType.Biometrics;
        break;

      case 'SystemPasscode':
        type = VaultType.DeviceSecurity;
        deviceSecurityType = DeviceSecurityType.SystemPasscode;
        break;

      default:
        type = VaultType.SecureStorage;
        deviceSecurityType = DeviceSecurityType.None;
    }

    vault.updateConfig({
      ...vault.config,
      type,
      deviceSecurityType,
    });
  }
};

watch(lockType, lock => setLockType(lock));
note
When this code is added, you will also need to add watch to the import from "vue."
We can now add a group of radio buttons to our Home view that will control the vault type. Remember to import any new components we are using and specify them in the view's components object.
<ion-item>
  <ion-radio-group v-model="lockType">
    <ion-list-header>
      <ion-label> Vault Locking Mechanism </ion-label>
    </ion-list-header>

    <ion-item>
      <ion-label>Do Not Lock</ion-label>
      <ion-radio value="NoLocking"></ion-radio>
    </ion-item>

    <ion-item>
      <ion-label>Use Biometrics</ion-label>
      <ion-radio :disabled="!canUseBiometrics" value="Biometrics"></ion-radio>
    </ion-item>

    <ion-item>
      <ion-label>Use System Passcode</ion-label>
      <ion-radio :disabled="!canUseSystemPIN" value="SystemPasscode"></ion-radio>
    </ion-item>
  </ion-radio-group>
</ion-item>
For the "Use Biometrics" and "Use System Passcode" radio buttons, we are disabling them based on whether or not the corresponding feature has been enabled on the device. We will need to code for that in our setup().
const canUseSystemPIN = ref(false);
const canUseBiometrics = ref(false);

if (isMobile) {
  Device.isSystemPasscodeSet().then(x => (canUseSystemPIN.value = x));
  Device.isBiometricsEnabled().then(x => (canUseBiometrics.value = x));
  ...
}

Note that once a vault has been created, the vault remembers which mode it is operating in and will ignore the mode passed into the constructor.
#Current Lock Status
Try the following: restart the application while the vault is locked. Notice that the vault is not shown as locked, because the vaultIsLocked flag is initialized to false. To reflect the actual lock status at startup, add the following line immediately after the onLock and onUnlock event handlers in our useVault.ts file:
vault.isLocked().then(x => (vaultIsLocked.value = x));
Now when we restart the app, the vault should be shown as locked.
#Clearing the Vault
In this final step, we will remove all items from the vault and then remove the vault itself. This can be achieved through the Vault API by calling the clear() method.
To show this in action, let's add a vaultExists reactive property to our src/services/useVault.ts file. Remember to return it from the useVault() composable function so we can bind to it in our view:
const vaultExists = ref(false);
Next, add a clearVault() function within useVault(). This function will call vault.clear(), reset the lockType to the default of NoLocking, and clear our session data cache:
const clearVault = async () => {
  await vault.clear();
  lockType.value = 'NoLocking';
  session.value = undefined;
};
Remember to add it to the return from useVault() as well.
In order to see when a vault does and does not exist, let's use vaultExists.value = await vault.doesVaultExist(); in a couple of places. Add a call in clearVault() as well as in setSession(). Let's also add a call within useVault() itself, but since that function is not async we will need to use the "promise then" syntax there. Add those calls now.
Once that is all in place, open the Home.vue file and do the following:
- Add a button to clear the vault by calling clearVault() on click.
- Display the current value of vaultExists in a div just like we are currently showing session and vaultIsLocked.

From here, check out the other documentation to determine how to facilitate specific areas of functionality within your application.
hotkeys-manager
Executes callback when hotkey commands are pressed.
Install
npm i -D hotkeys-manager
Initialize
import HotkeysManager from 'hotkeys-manager';
// settings these options in the constructor defines these options // globaly thorughout command registrations. Though individual options // have precedence over those defined globally. const options = { // defaults preventDefault: true, once: true, } // new ShortcutsManager(Element, Object); const hotkeys = new HotkeysManager(window, options); // returns the target Element the eventlisteners are bound to hotkeys.target; // reference to the element passed in to the HotkeysManager constructor. the `window` object in this case
Register command
The available options consist of these properties.
// optional options
{
  once: Boolean,           // if a global option has been set, this option overrides it
  preventDefault: Boolean, // same with preventDefault; this overrides the global value
  data: Any,
  groups: Array,
  priority: Number         // which command has precedence of being triggered
}
Define a new command with the set method on the hotkeys instance. The once and preventDefault options override any options defined globally. data is where you can pass any data, which you then have access to in the callback.
// hotkeys.set(Array<KeyboardEvent.code>, options);
const open = hotkeys.set(['ControlLeft', 'KeyO'], {
  once: true,
  preventDefault: true,
  data: 'this command opens something',
  groups: ['group1'],
  priority: 3
});
Adding callbacks for on and off states. Note that defining on or off callbacks will make it so the global general callback won't be executed for that particular state.
// on
open.on(({ e, Hotkey, on }) => {
  // handle on state
});

// off
open.off(({ e, Hotkey, on }) => {
  // handle off state
});
Groups
If no groups value is provided, the command will be stored under a global wildcard group [*]. If you want to register a shortcut to the wildcard group along with other groups, simply add that to the array list as well, like so:

groups: ['*', 'group1', 'group2']
Enabling certain groups to be listened to.
// returns the same arguments provided, as an array
const groupsArray = hotkeys.enableGroups('group1', 'group2');
Enabling all groups is a special case where it listens for all registered commands, picking the first registered or the one with the highest priority.
// enable all groups (default)
hotkeys.enableAllGroups();
Listening
To start listening for bound hotkeys, call the subscribe method on the instance. This in turn returns an unsubscribe method. These methods add and remove the event listeners.
const unsubscribe = hotkeys.subscribe();
Providing a callback function as the subscribe argument, you can catch all events here instead of binding on and off on each separate hotkey.
This callback will be called for both on and off states. The on argument returns a boolean with the value true|false to differentiate between the on and off state.
const unsubscribe = hotkeys.subscribe(({ e, Hotkey, on }) => {
  // here we are still able to access the `event` object,
  // so it's possible to prevent the default behaviour or
  // stopPropagation if we so like
  e.preventDefault();

  // along with the current Hotkey Shortcut instance
  console.log(Hotkey);
});
Executing keycommands programmatically
You can execute a command programmatically by calling the execute function. Call it with the same keys that you would use when you set a new command, and a second optional argument for the state you want to execute.
On some odd occasions you might be required to call one or the other. Provide either the string on or off for the on and off states, or leave it out (or provide undefined) to sequentially call on and then off.
// hotkeys.execute(keys: Array[, state: String|undefined])

// calling on followed by off
hotkeys.execute(['KeyE']);

// call only the 'on' state
hotkeys.execute(['KeyE'], 'on');

// call only the 'off' state
hotkeys.execute(['KeyE'], 'off');
Tutorial Chapters
- BDD, SpecFlow and The SpecFlow Ecosystem (Chapter 1)
- Getting Started with SpecFlow (Chapter 2)
- You’re here →, we have seen how to set up a SpecFlow project in Visual Studio, how to add a first SpecFlow feature to the project and how to let SpecFlow auto-generate step definitions that implement the steps in various scenarios. In this article, we are going to take a closer look at how steps in SpecFlow scenarios and step definition methods work together, as well as some techniques to make your SpecFlow steps more powerful and expressive.
Matching Steps with Step Definitions
When we asked SpecFlow to generate the step definitions that implement the steps in the example feature for us in the previous article, it generated a C# source code file with step definitions in it. But how does SpecFlow know what step definition (i.e., what method) to run when it encounters a specific step? How are the scenario step and the step definition connected?
There are two factors playing a part here. First, SpecFlow will only look for step definitions in classes that have the [Binding] annotation, as shown in the snippet below:
namespace testproject_specflow.StepDefinitions
{
    [Binding]
    public class ReturningLocationDataBasedOnCountryAndZipCodeSteps
    {
        [Given(@"the country code (.*) and zip code (.*)")]
        public void GivenTheCountryCodeAndZipCode(string countryCode, string zipCode)
        {
        }

        [When(@"I request the locations corresponding to these codes")]
        public void WhenIRequestTheLocationsCorrespondingToTheseCodes()
        {
        }

        [Then(@"the response contains the place name (.*)")]
        public void ThenTheResponseContainsThePlaceName(string expectedPlaceName)
        {
        }
    }
}
Second, SpecFlow matches steps with step definition methods inside these classes annotated with [Binding] following one of three strategies. These correspond with the three style options you have when you generate step definitions in Visual Studio (see the previous article).
Regular Expressions
This is the most versatile matching strategy, and also the one you see in the example above. When selecting this style, SpecFlow adds an annotation to the step definition method that contains a regular expression that matches the step as it appears in the feature file. So, for example, the step:
Given I use regular expressions to match steps with step definitions
is implemented by a step definition method that looks like this:
[Given(@"I use regular expressions to match steps with step definitions")]
public void IUseRegularExpressionsToMatchStepsWithStepDefinitions()
{
}
The part between the () brackets in the annotation is interpreted by SpecFlow as a regular expression, and when a step in a feature file matches this regular expression, the associated method is executed. The regular expression above is straightforward, as in: it only matches a very specific step (the one you see a couple of lines above the code snippet), but we’ll see later on in this article how you can leverage the power of regular expressions to create more flexible steps and step definitions.
Note that the actual method name (IUseRegularExpressionsToMatchStepsWithStepDefinitions) is of no influence on whether or not the step matches the step definition, i.e., the method name can be anything as long as it's a valid C# method name. For readability purposes, though, it might be useful to give the step definition method a name similar to the actual step. This comes in especially handy when an error or exception is thrown during the execution of a scenario and you're left with a stack trace to decipher.
Method Names – Pascal Casing
Another strategy that’s available to you when matching steps with step definitions is by using specific method names and Pascal casing, i.e., starting every new word with a capital letter.
To give you an example, when using this matching style, the step:
Given I use Pascal casing to match steps with step definitions
is matched by the following step definition:
[Given]
public void GivenIUsePascalCasingToMatchStepsWithStepDefinitions()
{
}
Pascal casing-based matching is less flexible than the regular expression matching style, which explains why it’s not used as often as the former.
Method Names – Underscores
Finally, there’s also the option to use an underscores-based matching style. When using this matching style, the step:
Given I use underscores to match steps with step definitions
is matched by this step definition:
[Given]
public void Given_I_use_underscores_to_match_steps_with_step_definitions()
{
}
The same disadvantages apply here: this matching style does not offer the power and flexibility that regular expressions do, explaining why you won’t see this used as often in practice.
Using Regular Expressions to Create More Flexible Steps
Let’s see some of the ways you can use regular expressions to create more flexible steps and ultimately, make your scenarios more readable and expressive.
In the example feature file you have seen in the previous article, we expressed that we were going to use the Zippopotam.us API to look up locations that correspond to the US zip code 90210. We specified these values in a Given step like this:
Given the country code us and zip code 90210
which can be matched by the step definition method
[Given(@"the country code us and zip code 90210")]
public void GivenTheCountryCodeUsAndZipCode90210()
{
}
But what if you have another scenario that uses different values for the country and/or the zip code? What if you have this step instead of the previous one?
Given the country code ca and zip code B2A
If you would simply copy and paste the existing step definition method and slightly change the regular expression used to match it with a step, you might quickly end up with a large number of step definitions that had really similar purposes and implementations. What you would not have, though, is a maintainable code base.
To handle these situations, you can use the power of regular expressions to make your steps more flexible. So, instead of copying and pasting steps that are expressed similarly, you might create a parameterized step definition instead:
[Given(@"the country code (.*) and zip code (.*)")]
public void GivenTheCountryCodeAndZipCode(string countryCode, string zipCode)
{
}
This step definition matches both
Given the country code us and zip code 90210
and
Given the country code ca and zip code B2A
as well as steps containing any other value for the country code and zip code, because the regular expression (.*) matches strings that consist of any possible character (expressed by the dot) and of length 0 or more (expressed by the asterisk).
The parameter values are passed to the step definition method parameters countryCode (this will be assigned the value us or ca) and zipCode (which will equal 90210 or B2A). You can then use these variables in your step definition method body to make your test code more flexible and powerful, too.
If you want to avoid matching empty strings, you can replace (.*) with (.+), which matches strings with any possible character of length 1 or above (expressed by the plus sign).
When you have installed the SpecFlow Extension, Visual Studio visualizes that the country and zip code are parameterized in this regular expression by italicizing the values in the steps and giving them a different color:
Other Useful Regular Expression Features
While the (.*) and (.+) regular expressions are very versatile, as in they accept practically any parameter value, sometimes you might want to be a little more strict in the types of parameters you accept in your step definitions. Let's look at some of the possibilities that regular expressions provide to do just that.
Integer-only Parameter Values
Let’s take a look at this step from the example feature:
Then the response contains exactly 1 location
You can imagine that it might be useful to be able to reuse this step in other scenarios to verify that retrieving location data for a given combination of country code and zip code results in a different number of locations. However, it doesn’t make sense to accept all possible parameter values, only integer values.
You can do this with regular expressions as follows:
[Then(@"the response contains exactly (\d+) location")]
public void ThenTheResponseContainsExactlyLocation(int expectedNumberOfPlacesReturned)
{
}
The (\d+) in the regular expression restricts the matching values to integers. Also, the + sign makes sure that the parameter value isn't empty, as it only accepts parameter values that consist of at least one digit. This means that the regular expression is now matched by both
Then the response contains exactly 1 location
as well as
Then the response contains exactly 99 location
but not by
Then the response contains exactly banana location
Also, we can now safely set the data type of the step definition method parameter to int, because our regular expression ensures that only integer values are being passed on.
Optional Characters
There’s a negative side effect, however, to making the step more flexible as in the example above. When more than one location is returned by the API, you’ll have to use locations in the step to define your expectations in a grammatically correct manner, not location.
Copying and pasting the step definition with only a slightly different regular expression (only an extra ‘s’) with (presumably) the exact same method body doesn’t make sense either. Instead, you can use the following solution, again made possible by the power of regular expressions:
[Then(@"the response contains exactly (\d+) locations?")]
public void ThenTheResponseContainsExactlyLocation(int expectedNumberOfPlacesReturned)
{
}
The question mark ? behind the s character implies that the latter is an optional character, so both steps containing location as well as locations are matched by the regular expression now.
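These two refinements are also easy to verify in isolation (again shown with Python's re module for brevity; \d+ and the optional-character ? behave the same way here):

```python
import re

pattern = r"the response contains exactly (\d+) locations?"

print(re.fullmatch(pattern, "the response contains exactly 1 location").group(1))    # '1'
print(re.fullmatch(pattern, "the response contains exactly 99 locations").group(1))  # '99'

# non-integer values are rejected by (\d+)
print(re.fullmatch(pattern, "the response contains exactly banana location"))  # None
```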
Restricting Parameter Values to a List of Options
As a final example of using the power of regular expressions in your SpecFlow features, consider this step from the example feature:
Then the response has status code 200
Let’s now assume that the API is built to only return HTTP status code 200 (when a country code and zip code combination is supplied that exists in the service database), HTTP status code 404 (when the combination is not present in the database) or HTTP 500 (when an internal server error occurs).
In this case, using (\d+) to accept only integer values in your steps may not be restrictive enough, because you only want to allow specific integer values. Regular expressions come to the rescue here, too:
[Then(@"the response has status code (200|404|500)")]
public void ThenTheResponseHasStatusCode(int expectedStatusCode)
{
}
Now, only the aforementioned integer values 200, 404 and 500 are matched by the regular expression, while other integer values are not.
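A quick demonstration of the alternation behavior (using Python's re module for brevity; the (a|b|c) construct works the same way):

```python
import re

pattern = r"the response has status code (200|404|500)"

print(re.fullmatch(pattern, "the response has status code 404").group(1))  # '404'

# any status code outside the listed options fails to match
print(re.fullmatch(pattern, "the response has status code 301"))  # None
```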
There are many more things you can do with regular expressions in SpecFlow, but I find myself using the above three examples regularly.
In the next article, we’ll be taking a closer look at some of the options provided by SpecFlow to further clean up your feature files and scenarios and avoid repeating steps, in an effort to make your features even more readable and powerful.
The example project used in this article can be found on GitHub.
This article is my entry for the language detector part of CodeProject's Machine Learning and Artificial Intelligence Challenge[^]. The goal of the challenge was to train a model to recognize programming languages, based on a provided training dataset with 677 code samples.
I've used C#. The solution has a LanguageRecognition.Core project which is a library with the machine learning code, and a LanguageRecognition project which is a console application that tests the code. The project has SharpLearning[^] (more specific, SharpLearning.Neural) as dependency.
I decided to go with a neural network[^] to train my model. A neural network takes a vector[^] of floating-point numbers as input, the features of the object we're trying to classify. This vector is also known as the input layer. A unit of a layer is also known as a neuron. A neural network also has an output layer. In case of a network that's trained for classification (such as the one for this project), the output layer will have as many elements as there are classification categories, where the values indicate where the network would classify a given input. (For example, the result of a classification with 3 categories could be [0.15 0.87 0.23], indicating that the network preferred the second category). Between the input layer and the output layer, you can also have one or more hidden layers, with a number of units that you can choose. How do you get from one layer to the next? A matrix multiplication[^] is performed with the first layer and a weight matrix, and that result goes through an activation function[^] and then we have the values of the next layer. (For the network in this article, the rectifier[^] is used because this one is used by SharpLearning.) That layer is then used to calculate the values for the next layer, and so on. For the last layer, we'll also use the softmax function[^] and not just an activation function (one important difference is that an activation function works independently on all units of a layer, whereas the softmax function has to be applied on an array of values). 'Between' every two layers, there is a different weights matrix. It's really the values of these weight matrices that decide what the output of the network is going to look like. So if a neural network is going to be trained, the values of these weight matrices are going to be adjusted so the actual outputs will match the expected outputs better. SharpLearning uses gradient descent[^] for this (more specific, mini-batch gradient descent[^]).
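The forward pass described above can be sketched in a few lines. The following is an illustrative toy example (pure Python with made-up weights, not SharpLearning code): one hidden layer with a rectifier activation, followed by a softmax output.

```python
import math

def relu(v):
    # rectifier activation, applied element-wise
    return [max(0.0, x) for x in v]

def matvec(w, v):
    # w is a list of rows; each row holds the weights feeding one unit of the next layer
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

def softmax(v):
    # turns the output layer into non-negative values that sum to 1
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

# toy network: 2 input features -> 3 hidden units -> 2 output classes
w1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]   # made-up weights
w2 = [[0.7, -0.5, 0.2], [-0.1, 0.6, 0.3]]

features = [1.0, 2.0]
hidden = relu(matvec(w1, features))
output = softmax(matvec(w2, hidden))
print(output)  # two probabilities; the larger one is the predicted class
```

Training then amounts to adjusting the values in w1 and w2 (via gradient descent, in SharpLearning's case) so that the output vector matches the expected classification.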
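The layer-to-layer step described above can be written compactly. The weight values below are made up purely for illustration and are not taken from the article's network:

```latex
a^{(l+1)} = f\!\left(W^{(l)} a^{(l)}\right), \qquad f(x) = \max(0, x) \ \text{(rectifier)}
```

For example, with $W = \begin{pmatrix} 1 & -2 \\ 0.5 & 1 \end{pmatrix}$ and $a = (1, 1)^{T}$, we get $Wa = (-1, 1.5)^{T}$, and applying the rectifier element-wise yields $(0, 1.5)^{T}$ — the values of the next layer.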
I am not going to go in depth about the details and mathematics of neural networks, because SharpLearning takes care of that. I'm going to focus on how to apply it to recognizing programming languages. If you're interested in learning more, there is plenty of material available; the links in the previous paragraph can be used as a start.
I mentioned that a neural network takes a floating-point vector, the features, as input. What are those going to be here? For this challenge, the number of features (and the features themselves) cannot be pre-defined because that would require assumptions on how many languages and which languages we have to be able to classify. We don't want to make this assumption; instead, we're going to derive the number of features from the code samples that we use as training. Deriving the features is the first step in our training process.
First, the features that appear to be significant for each language are derived separately. I decided to derive three types of features: the most common symbols used in the code, the most common words and the most common combinations of two words. Those features seemed most important to me. For example, in HTML, the < and > symbols are significant, and so are keywords such as body, table and div. The keyword import would be significant for both Java and Python, and there the combinations will be a good help: combinations like import java would be significant for Java, and combinations like import os would be significant for Python.
After having derived those features per language, we combine them: we want to tell our neural network about the presence (or absence) of all features that could point to a specific language. The total number of input neurons will be the sum of all symbols, keywords and combinations selected for each language (duplicates are filtered out of course; we don't need multiple input neurons for the presence of the keyword import for example). The number of output neurons will be the number of languages that are present in our training dataset.
Let me clarify that with an example. Imagine there were only 3 languages in the training set: C#, Python and JavaScript. For all these languages, the 10 most common symbols, 20 most common words, and 30 most common combinations are selected. That's 60 features per language, so 180 for the three languages combined. However, most of the symbols and some of the keywords/combinations will be duplicated. For the sake of this example, let's say that there are 11 unique symbols overall, 54 unique words and 87 unique combinations; then our neural network will take 11 + 54 + 87 = 152 values as input. Each input value corresponds to one symbol/word/combination and the value will be the number of occurrences of the symbol/word/combination in an arbitrary piece of code.
What about the hidden layers, the ones between the input and the output layer? I went with 4 hidden layers: if S is the sum of all symbols, keywords, combinations, and possible output languages, then the hidden layers respectively have S / 2, S / 3, S / 4 and S / 5 units. Why those numbers? Because those gave me some of the best results when testing my model - there isn't much more to it. Using S units in all four of those layers gave comparable results (perhaps even slightly better on average), but then the training was much slower.
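To make those sizes concrete with the assumed counts from the earlier example (11 symbols, 54 words, 87 combinations, 3 languages), and using integer division, as the C# code does when it divides an int sum:

```latex
S = 11 + 54 + 87 + 3 = 155, \qquad
\lfloor S/2 \rfloor = 77,\quad
\lfloor S/3 \rfloor = 51,\quad
\lfloor S/4 \rfloor = 38,\quad
\lfloor S/5 \rfloor = 31
```

So this hypothetical network would have layers of 152, 77, 51, 38, 31, and 3 units.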
After having selected the features to use, it's time for the actual training. For the neural network maths, I'm using the SharpLearning library. For each code sample, the previously selected symbols/words/combinations are counted and used as neural network inputs. All to-be-recognized languages get an index, and those indexes will be passed to SharpLearning as outputs for the training data.
When the training is done, we have a model that can recognize languages for code samples. To predict a language, the inputted code sample will be transformed into an input vector exactly like the pre-processing for the training samples (i.e. counting of certain symbols, words, and combinations) and SharpLearning will take care of the maths to return the index of the predicted programming language.
In the previous section, I said we'd select the most common symbols for all given languages as a part of the neural network features. Let's first define what a "symbol" means in this context. The following seems like a sensible definition to me: a char is a symbol if it's not a letter, not a digit, no whitespace, and no underscore (because underscores are perfectly valid in variable names). Translating that into code:
static class CharExtensions
{
internal static bool IsProgrammingSymbol(char x)
{
return !char.IsLetterOrDigit(x) && !char.IsWhiteSpace(x) && x != '_';
}
}
Next, we'll work on our classes that will derive the features from the given code samples. As I previously said, we'll first do this per language, then combine the features. The LanguageTrainingSet class takes care of the former and also holds all training samples for one language. This class has the following properties to keep track of the samples and symbol/keyword/combination counts:
List<string> samples = new List<string>();
public List<string> Samples { get => samples; }
Dictionary<char, int> symbolCounters = new Dictionary<char, int>();
Dictionary<string, int> keywordCounters = new Dictionary<string, int>();
Dictionary<string, int> wordCombinationCounters = new Dictionary<string, int>();
These collections will be filled when a new training sample is added to the training set. That's what the AddSample method is for:
public void AddSample(string code)
{
code = code.ToLowerInvariant();
samples.Add(code);
var symbols = code.Where(CharExtensions.IsProgrammingSymbol);
foreach (char symbol in symbols)
{
if (!symbolCounters.ContainsKey(symbol))
{
symbolCounters.Add(symbol, 0);
}
symbolCounters[symbol]++;
}
string[] words = Regex.Split(code, @"\W").Where
(x => !string.IsNullOrWhiteSpace(x)).ToArray();
foreach (string word in words)
{
if (!keywordCounters.ContainsKey(word))
{
keywordCounters.Add(word, 0);
}
keywordCounters[word]++;
}
for (int i = 0; i < words.Length - 1; i++)
{
string combination = words[i] + " " + words[i + 1];
if (!wordCombinationCounters.ContainsKey(combination))
{
wordCombinationCounters.Add(combination, 0);
}
wordCombinationCounters[combination]++;
}
}
Let's walk through this step by step:
- The incoming code is lower-cased and stored in the samples list.
- The Where call with the IsProgrammingSymbol predicate filters the code down to its programming symbols, and each symbol's occurrences are counted in symbolCounters.
- The code is split into words with a regular expression, and each word's occurrences are counted in keywordCounters.
- Finally, every pair of adjacent words is joined and counted in wordCombinationCounters.
When more samples are added using this method, the counters will gradually increase and we can get a good ranking that indicates what keywords appear most often and what keywords appear less. Eventually, we want to know what keywords, symbols, and combinations appear most and use those as features for our neural network. To select those, the class has a ChooseSymbolsAndKeywords method. It's internal because we want to be able to call it from other classes in the LanguageRecognition.Core assembly, but not outside the assembly.
const int SYMBOLS_NUMBER = 10;
const int KEYWORDS_NUMBER = 20;
const int COMBINATIONS_NUMBER = 30;
internal IEnumerable<char> Symbols { get; private set; }
internal IEnumerable<string> Keywords { get; private set; }
internal IEnumerable<string> Combinations { get; private set; }
internal void ChooseSymbolsAndKeywords()
{
Symbols = symbolCounters.OrderByDescending(x => x.Value).Select
(x => x.Key).Take(SYMBOLS_NUMBER);
Keywords = keywordCounters.OrderByDescending(x => x.Value).Select
(x => x.Key).Where(x => !int.TryParse(x, out int _)).Take(KEYWORDS_NUMBER);
Combinations = wordCombinationCounters.OrderByDescending
(x => x.Value).Select(x => x.Key).Take(COMBINATIONS_NUMBER);
}
The point of the .Where call to select the keywords is to exclude 'keywords' that are only numbers. Those wouldn't be useful at all. Numbers in combinations with letters are not excluded (and they shouldn't be; for example 1px is still useful).
The TrainingSet class manages all LanguageTrainingSets so you don't need to worry about that when you use the LanguageRecognition.Core library. And when the LanguageRecognizer class (which we'll talk about later) wants to perform the neural network training, the TrainingSet class will combine the .Symbols, .Keywords and .Combinations that are picked by each LanguageTrainingSet's ChooseSymbolsAndKeywords so we also have TrainingSet.Symbols, TrainingSet.Keywords and TrainingSet.Combinations -- the features that will be used in our neural network.
public class TrainingSet
{
Dictionary<string, LanguageTrainingSet> languageSets =
new Dictionary<string, LanguageTrainingSet>();
internal Dictionary<string, LanguageTrainingSet> LanguageSets { get => languageSets; }
internal char[] Symbols { get; private set; }
internal string[] Keywords { get; private set; }
internal string[] Combinations { get; private set; }
internal string[] Languages { get; private set; }
public void AddSample(string language, string code)
{
language = language.ToLowerInvariant();
if (!languageSets.ContainsKey(language))
{
languageSets.Add(language, new LanguageTrainingSet());
}
languageSets[language].AddSample(code);
}
internal void PrepareTraining()
{
List<char> symbols = new List<char>();
List<string> keywords = new List<string>();
List<string> combinations = new List<string>();
foreach (KeyValuePair<string, LanguageTrainingSet> kvp in languageSets)
{
LanguageTrainingSet lts = kvp.Value;
lts.ChooseSymbolsAndKeywords();
symbols.AddRange(lts.Symbols);
keywords.AddRange(lts.Keywords);
combinations.AddRange(lts.Combinations);
}
Symbols = symbols.Distinct().ToArray();
Keywords = keywords.Distinct().ToArray();
Combinations = combinations.Distinct().ToArray();
Languages = languageSets.Select(x => x.Key).ToArray();
}
}
The PrepareTraining method will be called by the LanguageRecognizer class when it needs to know all features for the network input, and the possible languages for the output.
The LanguageRecognizer class is where the actual work happens: the neural network is trained, and we get a model that we can use to predict the language of a code sample. Let's first take a look at the fields of this class:
[Serializable]
public class LanguageRecognizer
{
NeuralNet network;
char[] symbols;
string[] keywords;
string[] combinations;
string[] languages;
ClassificationNeuralNetModel model = null;
First, let's note that the class is Serializable: if you've trained the model and want to reuse it later, you shouldn't have to retrain it, but you can just serialize it and restore it later. The symbols, keywords, combinations, and languages fields are the features for the neural network input - they will be taken from a TrainingSet. NeuralNet is a class from SharpLearning and so is ClassificationNeuralNetModel, where the latter is the trained model and the former is used for the training.
Next, we have a static CreateFromTraining method, taking a TrainingSet and returning an instance of LanguageRecognizer. I decided to go with a static method and not a constructor, because the constructor guidelines[^] say to do minimal work in a constructor, and training the model is not quite "minimal work".
The LanguageRecognizer.CreateFromTraining method constructs the neural network and its layers in the way I described previously in this article. It will go through all training samples and transform the code into an input vector. These input vectors are combined into one input matrix, and this matrix is passed to SharpLearning, alongside with the expected outputs.
public static LanguageRecognizer CreateFromTraining(TrainingSet trainingSet)
{
LanguageRecognizer recognizer = new LanguageRecognizer();
trainingSet.PrepareTraining();
recognizer.symbols = trainingSet.Symbols;
recognizer.keywords = trainingSet.Keywords;
recognizer.combinations = trainingSet.Combinations;
recognizer.languages = trainingSet.Languages;
recognizer.network = new NeuralNet();
recognizer.network.Add(new InputLayer(recognizer.symbols.Length +
recognizer.keywords.Length + recognizer.combinations.Length));
int sum = recognizer.symbols.Length + recognizer.keywords.Length +
recognizer.combinations.Length + recognizer.languages.Length;
recognizer.network.Add(new DenseLayer(sum / 2));
recognizer.network.Add(new DenseLayer(sum / 3));
recognizer.network.Add(new DenseLayer(sum / 4));
recognizer.network.Add(new DenseLayer(sum / 5));
recognizer.network.Add(new SoftMaxLayer(recognizer.languages.Length));
ClassificationNeuralNetLearner learner =
new ClassificationNeuralNetLearner(recognizer.network, loss: new AccuracyLoss());
List<double[]> inputs = new List<double[]>();
List<double> outputs = new List<double>();
foreach (KeyValuePair<string, LanguageTrainingSet> languageSet in trainingSet.LanguageSets)
{
string language = languageSet.Key;
LanguageTrainingSet set = languageSet.Value;
foreach (string sample in set.Samples)
{
inputs.Add(recognizer.PrepareInput(sample));
outputs.Add(recognizer.PrepareOutput(language));
}
}
F64Matrix inp = inputs.ToF64Matrix();
double[] outp = outputs.ToArray();
recognizer.model = learner.Learn(inp, outp);
return recognizer;
}
This method refers to PrepareInput and PrepareOutput. PrepareOutput is very simple: for a given language, it returns the index of that language in the list of known languages. PrepareInput constructs a double[] with the features to feed to the neural network: the count of symbols we care about, keywords we care about, and keyword combinations we care about.
double[] PrepareInput(string code)
{
code = code.ToLowerInvariant();
double[] prepared = new double[symbols.Length + keywords.Length + combinations.Length];
double symbolCount = code.Count(CharExtensions.IsProgrammingSymbol);
for (int i = 0; i < symbols.Length; i++)
{
prepared[i] = code.Count(x => x == symbols[i]);
}
string[] codeKeywords = Regex.Split(code, @"\W").Where(x => keywords.Contains(x)).ToArray();
int offset = symbols.Length;
for (int i = 0; i < keywords.Length; i++)
{
prepared[offset + i] = codeKeywords.Count(x => x == keywords[i]);
}
string[] words = Regex.Split(code, @"\W").ToArray();
Dictionary<string, int> cs = new Dictionary<string, int>();
for (int i = 0; i < words.Length - 1; i++)
{
string combination = words[i] + " " + words[i + 1];
if (!cs.ContainsKey(combination))
{
cs.Add(combination, 0);
}
cs[combination]++;
}
offset = symbols.Length + keywords.Length;
for (int i = 0; i < combinations.Length; i++)
{
prepared[offset + i] = cs.ContainsKey(combinations[i]) ? cs[combinations[i]] : 0;
}
return prepared;
}
double PrepareOutput(string language)
{
return Array.IndexOf(languages, language);
}
Lastly, after having created and trained the recognizer, we obviously want to use it to actually recognize languages. That's a very simple piece of code: the input just needs to be turned into an input vector with PrepareInput and passed into SharpLearning's trained model, which gives an index as output.
public string Recognize(string code)
{
return languages[(int)model.Predict(PrepareInput(code))];
}
The downloadable LanguageRecognition solution has two projects: LanguageRecognition.Core, a library with all learning-related code, and LanguageRecognition, a console application that trains the recognizer based on the dataset provided by CodeProject. The dataset contains 677 samples; 577 of these are used for training, and the remaining 100 for testing how good the model turned out to be.
The test code extracts the code samples, shuffles them, takes the first 577, performs training with those, then tests serialization and de-serialization of the model, and eventually it performs the prediction testing.
static void Main(string[] args)
{
// Reading and parsing training samples:
string sampleFileContents = File.ReadAllText("LanguageSamples.txt").Trim();
string[] samples = sampleFileContents.Split(new string[] { "</pre>" },
StringSplitOptions.RemoveEmptyEntries);
List<Tuple<string, string>> taggedSamples = new List<Tuple<string, string>>();
foreach (string sample in samples)
{
string s = sample.Trim();
string pre = s.Split(new char[] { '>' }, 2)[0];
string language = pre.Split('"')[1];
s = WebUtility.HtmlDecode(s.Replace(pre + ">", "")); // The code samples
// are HTML-encoded because they are in pre-tags.
taggedSamples.Add(new Tuple<string, string>(language, s));
}
// Shuffle once, after all samples have been parsed:
taggedSamples = taggedSamples.OrderBy(x => Guid.NewGuid()).ToList();
// Setting up training set and performing training:
TrainingSet ts = new TrainingSet();
foreach (Tuple<string, string> sample in taggedSamples.Take(577))
{
ts.AddSample(sample.Item1, sample.Item2);
}
LanguageRecognizer recognizer = LanguageRecognizer.CreateFromTraining(ts);
// Serialization testing:
BinaryFormatter binaryFormatter = new BinaryFormatter();
LanguageRecognizer restored;
using (MemoryStream stream = new MemoryStream())
{
binaryFormatter.Serialize(stream, recognizer);
stream.Seek(0, SeekOrigin.Begin);
restored = (LanguageRecognizer)binaryFormatter.Deserialize(stream);
}
// Prediction testing:
int correct = 0;
int total = 0;
foreach (Tuple<string, string> sample in taggedSamples.Skip(577))
{
if (restored.Recognize(sample.Item2) == sample.Item1.ToLowerInvariant())
{
correct++;
}
total++;
}
Console.WriteLine($"{correct}/{total}");
}
On average, the accuracy on unseen samples appears to be approximately 85%. The accuracy differs every time you run the test application though, because the code samples are shuffled (so the selected features will be somewhat different) and the neural network is initialized with different random weights every time. Sometimes, the accuracy is just below 80%, sometimes it's just above 90% as well. I wanted to test with bigger training sets as well, but I did not have the time to gather these. I believe that it would increase the accuracy though, because a bigger training set means a better selection of features and a better training of the neural network. | https://www.codeproject.com/Articles/1232473/Recognizing-Programming-Languages-using-a-Neural-N?msg=5496534#xx5496534xx | CC-MAIN-2022-05 | en | refinedweb |
Java Heap Memory Error
Errors and exceptions are very common when working with any programming language. In Java, all objects are stored in the heap memory, and the JVM throws an OutOfMemoryError when it is unable to allocate space for an object. Sometimes this error is also called the Java Heap Space Error. Let's learn more about this error.
Reasons Behind the java.lang.OutOfMemoryError
Java has a predefined maximum amount of memory that an application may use, and if the application exceeds this limit, the OutOfMemoryError is thrown. Let's take a look at the reasons behind this error.
- This error can occur due to poor programming practices like using inefficient algorithms, wrong data structure choices, infinite loops, not clearing unwanted resources, holding objects for too long, etc. If this is the reason behind the error then one should reconsider the choices made when writing the program.
- There can be some other reasons that are not in control of the user. For example, a third-party library that is caching strings, or a server that does not clean up after deploying. Sometimes this error can occur even if everything is fine with the heap and the objects.
- Memory Leaks can also lead to OutOfMemory errors. A memory leak is a situation when an unused object is present in the heap and cannot be removed by the Garbage Collector because it still has valid references to it. Open resources can also lead to memory leaks and they must be closed by using the finally block. The garbage collector will automatically remove all the unreferenced objects and this situation can easily be avoided by making unused objects available to the garbage collector. Additional tools are also available to detect and avoid memory leaks.
- Excessive use of finalizers can also lead to the OutOfMemoryError. Classes having a finalize method do not have their object spaces reclaimed at the time of Garbage Collection. After the garbage collection process, these objects are queued for finalization at a later time and if the finalizer thread cannot keep up with the queue then this error is thrown.
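To illustrate the memory-leak point above, here is a minimal sketch of the classic pattern: a static collection that keeps valid references alive so the garbage collector can never reclaim them. The class and member names are made up for this example:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // A static collection lives as long as the class is loaded, so every
    // object added here keeps a reachable reference and cannot be
    // garbage collected -- the textbook shape of a Java memory leak.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // 1 KB per call, never removed
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        System.out.println(CACHE.size() + " buffers retained");
        CACHE.clear(); // the fix: drop references so the GC can reclaim them
    }
}
```

Run for long enough, this pattern eventually ends in an OutOfMemoryError; clearing unused references (or using a bounded cache) avoids it.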
We can always increase the memory available to the JVM, but if the issue is more fundamental then we will eventually run out of memory anyway. One should try to optimize the code and look for memory leaks that may cause this error.
Example: OutOfMemoryError Due to an Infinite Loop
Let's create an ArrayList of Object type and add elements to it infinitely until we run out of memory.
import java.util.ArrayList;

public class OutOfMemory {
    public static void main(String args[]) {
        ArrayList<Object> list = new ArrayList<Object>();
        while (true) {
            Object o = new Object();
            list.add(o);
        }
    }
}
In this example, we will eventually run out of space even if we increase the memory assigned to the application. The following image shows the error message returned by the above program.
Example: Out of Memory Because of Limited Memory
If the memory assigned to the application is less than the required memory, then the OutOfMemoryError is thrown. We can also try to optimize our code so that its space complexity is reduced. If we are sure that the error would be fixed by having more memory, then we can increase the heap size by using the -Xmx option. Let's run a program that does not have the required memory.
public class OutOfMemoryError {
    public static void main(String[] args) {
        Integer[] outOfMemoryArray = new Integer[Integer.MAX_VALUE];
    }
}
The error message is shown in the image below.
Resolving OutOfMemoryError by Increasing Heap Space
Increasing the heap space can sometimes resolve the OutOfMemoryError. If we are sure that there are no memory leaks and our code cannot be optimized further, then increasing the heap space could solve our problem. We will use the -Xmx option while running the Java application to increase the heap space.
For example, the following code will not work if we set a 4 MB limit on the memory.
public class OutOfMemory {
    public static void main(String[] args) {
        String[] str = new String[1000000];
        System.out.print("Memory Allocated");
    }
}
However, if we increase the limit to 8MB or anything greater then it works perfectly fine.
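For reference, the heap limit is passed on the command line when launching the JVM. Assuming the example class above is saved as OutOfMemory.java, the runs might look like this (per the observation above, 4 MB fails and 8 MB succeeds):

```shell
javac OutOfMemory.java

# Cap the heap at 4 MB: the million-element String array does not fit
java -Xmx4m OutOfMemory

# Raise the cap to 8 MB: the allocation succeeds
java -Xmx8m OutOfMemory
```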
Frequently Asked Questions
Q. What is the default heap size in Java?
The default heap size in Java depends on the JVM and the machine: modern HotSpot JVMs typically default the maximum heap to a fraction (commonly one quarter) of the physical memory. The default is sufficient for most tasks.
Q. What is the difference between -Xmx and -Xms?
-Xms defines the initial (minimum) heap size and -Xmx sets the maximum heap size. The JVM will start with the amount of memory defined by -Xms and will not exceed the limit set by -Xmx.
Q. What is the Stack Memory?
The stack memory is a Last-In-First-Out memory used for static memory allocation and for thread execution. Whenever a new object is created, it is stored in the Heap memory and a reference to it is stored in the stack memory.
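The difference between the two memory regions can be seen directly: unbounded recursion exhausts the stack (StackOverflowError) rather than the heap (OutOfMemoryError). A small sketch, with made-up names:

```java
public class StackDemo {
    static int depth = 0;

    // Every call pushes a new frame onto the calling thread's stack;
    // with no base case, the stack eventually runs out of space.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The heap is untouched; only the thread stack overflowed.
            System.out.println("Stack exhausted after " + depth + " frames");
        }
    }
}
```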
Summary
The OutOfMemoryError is a subclass of the VirtualMachineError and it is thrown when JVM does not have sufficient space to allocate new objects. This error can occur due to poor programming practices like using infinite loops or not clearing unwanted objects. This can also occur due to the presence of some third-party libraries. Memory leaks also lead to this error and we can use tools like Eclipse Memory Analyzer to fix them. Sometimes, just increasing the heap space allotted to the application solves the problem. We should also try to look for optimized algorithms so that the overall space complexity of our program is less than the maximum space available. | https://www.studytonight.com/java-examples/java-heap-memory-error | CC-MAIN-2022-05 | en | refinedweb |
On Mar 13, 2013, at 08:52, Bjoern Drabeck wrote:

> Btw I think there are still a couple more issues with the configure going wrong sometimes, depending what options I choose. Bit later when I got more time I can create a list of options and outcomes.. Will try to see if I can get a debug build to work which allows me to step into the code, with the feedback from John

The easiest way to get the best possible debugging experience would of course be to change av_get_cpu_flags as below. This removes the dependency on dead-code stripping, and is all that ought to be required if indeed the only linking errors you get are about the 3 worker functions called. (And frankly, is this really so unreadable that it is preferable to leave the stripping to the compiler rather than doing it explicitly?)

int av_get_cpu_flags(void)
{
    if (checked)
        return flags;

#ifdef ARCH_ARM
    if (ARCH_ARM) flags = ff_get_cpu_flags_arm();
#endif
#ifdef ARCH_PPC
    if (ARCH_PPC) flags = ff_get_cpu_flags_ppc();
#endif
#ifdef ARCH_X86
    if (ARCH_X86) flags = ff_get_cpu_flags_x86();
#endif

    checked = 1;
    return flags;
}

| http://ffmpeg.org/pipermail/ffmpeg-devel/2013-March/140442.html | CC-MAIN-2022-05 | en | refinedweb |
Use Visual C# to create a remote server
This article helps you create a remote server where another application can access by using Visual C#.
Original product version: Visual C#
Original KB number: 307445
Summary
This article illustrates how to create a .NET Remoting run-time framework.
This article refers to the following Microsoft .NET Framework Class Library namespaces:
System.Runtime.Remoting
System.Runtime.Remoting.Channels
System.Runtime.Remoting.Channels.Tcp
Requirements
This article assumes that you are familiar with the following topics:
- Visual Studio .NET or Visual Studio
- Visual C# .NET or Visual C#
- Networking
Creating the remote server object
Any object that must be accessed remotely has to inherit from the System.MarshalByRefObject class. In this example, the remote object is a class named myRemoteClass inside a class library project named ServerClass, which builds the ServerClass.dll assembly that both the server application and its clients reference.
Create the remote server application
After you build the server object, create the server application: a console application whose Main procedure (in Class1) hosts the remote object. The following steps describe that code.
Add a reference to the ServerClass.dll assembly that you created in the previous section.
Use the using statement on the Remoting, Remoting.Channels, and Remoting.Channels.Tcp namespaces so that you aren't required to qualify declarations in those namespaces later in your code. You must use the using statement prior to any other declarations.
using System.Runtime.Remoting; using System.Runtime.Remoting.Channels; using System.Runtime.Remoting.Channels.Tcp;
Declare and initialize a TcpChannel object that listens for clients to connect on a certain port, which is port 8085 in this example. Use the RegisterChannel method to register the channel with the channel services. Add the following declaration code in the Main procedure of Class1:
TcpChannel chan = new TcpChannel(8085); ChannelServices.RegisterChannel(chan);
Call the RegisterWellKnownServiceType method of the RemotingConfiguration object to register the ServerClass object with the remoting framework. Because you didn't specify a namespace in the previous section, the default root namespace is used.
Name the endpoint where the object is to be published as RemoteTest. Clients need to know this name in order to connect to the object.
Use the SingleCall object mode to specify the final parameter. The object mode specifies the lifetime of the object when it is activated on the server. In the case of SingleCall objects, a new instance of the class is created for each call that a client makes, even if the same client calls the same method more than once. On the other hand, Singleton objects are created only once, and all clients communicate with the same object.
RemotingConfiguration.RegisterWellKnownServiceType( System.Type.GetType("ServerClass.myRemoteClass, ServerClass"), "RemoteTest", WellKnownObjectMode.SingleCall);
Use the ReadLine method of the Console object to keep the server application running.
System.Console.WriteLine("Hit <enter> to exit..."); System.Console.ReadLine();
Build your project.
Save and close the project.
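The steps above only build the server side. For completeness, a client would typically register its own channel and obtain a proxy with Activator.GetObject. The sketch below is an assumption-laden illustration (it assumes a reference to ServerClass.dll and the RemoteTest endpoint from the steps above, and uses the legacy .NET Framework remoting APIs):

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

class Client
{
    static void Main()
    {
        // Register a TCP channel for the client side.
        ChannelServices.RegisterChannel(new TcpChannel());

        // Obtain a transparent proxy for the well-known object that the
        // server published at the "RemoteTest" endpoint on port 8085.
        ServerClass.myRemoteClass remoteObject =
            (ServerClass.myRemoteClass)Activator.GetObject(
                typeof(ServerClass.myRemoteClass),
                "tcp://localhost:8085/RemoteTest");

        // Method calls on remoteObject are forwarded to the server; with
        // SingleCall mode, each call activates a new server-side instance.
    }
}
```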
References
For an overview of .NET Remoting, see the .NET Framework Developer's Guide documentation. | https://docs.microsoft.com/en-US/troubleshoot/dotnet/csharp/create-remote-server | CC-MAIN-2022-05 | en | refinedweb |
7. Train and Test Sets by Splitting Learn and Test Data
By Bernd Klein. Last modified: 02 Dec 2021.
Learn, Test and Evaluation Data
When you consider how machine learning normally works, the idea of a split between learning and test data makes sense. Systems in production train on existing data, and when new data (from customers, sensors, or other sources) comes in, the trained classifier has to predict or classify this new data. We can simulate this during training with a training and a test data set - the test data is a simulation of the "future data" that will go into the system during production.
In this chapter of our Python Machine Learning Tutorial, we will learn how to do the splitting.
Splitting Example: Iris Data Set
We will demonstrate the previously discussed topics with the Iris Dataset.
The 150 records of the Iris data set are sorted, i.e. the first 50 records correspond to the first flower class (0 = Setosa), the next 50 to the second flower class (1 = Versicolor), and the remaining records correspond to the last class (2 = Virginica).
If we were to split our data in the ratio 2/3 (learning set) and 1/3 (test set), the learning set would contain all the flowers of the first two classes and the test set all the flowers of the third flower class. The classifier could only learn two classes and the third class would be completely unknown. So we urgently need to mix the data.
Assuming all samples are independent of each other, we want to shuffle the data set randomly before we split the data set as shown above.
In the following we split the data manually:
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
Looking at the labels in iris.target shows us that the data is sorted. To shuffle the data and the labels consistently, we will randomly permute the indices. For this purpose, we will use the permutation function of the random submodule of NumPy:
indices = np.random.permutation(len(iris.data))
indices
OUTPUT:
array([ 49, 16, 63, 76, 116, 11, 44, 27, 95, 136, 40, 132, 43, 138, 54, 82, 72, 81, 50, 14, 69, 48, 3, 31, 104, 117, 111, 118, 89, 47, 24, 124, 39, 120, 113, 93, 90, 106, 142, 109, 23, 87, 125, 41, 84, 62, 128, 1, 94, 85, 146, 18, 57, 20, 60, 4, 83, 46, 21, 110, 114, 78, 103, 17, 100, 130, 68, 67, 34, 144, 112, 12, 2, 115, 147, 141, 38, 8, 96, 137, 75, 98, 33, 105, 53, 86, 55, 10, 51, 145, 107, 37, 102, 58, 123, 22, 133, 108, 7, 61, 140, 79, 80, 131, 92, 42, 15, 70, 36, 77, 6, 119, 52, 134, 74, 13, 139, 66, 127, 143, 19, 26, 99, 64, 0, 148, 149, 129, 5, 97, 73, 59, 45, 35, 88, 25, 30, 65, 91, 32, 29, 71, 126, 9, 135, 121, 101, 28, 56, 122])
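Slicing the permuted indices then gives the learn and test sets, with data and labels staying aligned. A sketch of such a manual split (the test-set size of 12 is an assumption for illustration, and a fresh permutation is drawn, so the concrete numbers printed will differ from the output shown below):

```python
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
indices = np.random.permutation(len(iris.data))

n_test_samples = 12  # assumed size of the held-out evaluation set
learnset_data = iris.data[indices[:-n_test_samples]]
learnset_labels = iris.target[indices[:-n_test_samples]]
testset_data = iris.data[indices[-n_test_samples:]]
testset_labels = iris.target[indices[-n_test_samples:]]

# The same permutation is applied to data and labels, so they stay aligned.
print(learnset_data[:4], learnset_labels[:4])
print(testset_data[:4], testset_labels[:4])
```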
OUTPUT:
[[5.  3.3 1.4 0.2]
 [5.4 3.9 1.3 0.4]
 [6.1 2.9 4.7 1.4]
 [6.8 2.8 4.8 1.4]]
[0 0 1 1]
[[6.1 3.  4.6 1.4]
 [5.2 4.1 1.5 0.1]
 [4.7 3.2 1.6 0.2]
 [6.1 2.8 4.  1.3]]
[1 0 0 1]
Splits with Sklearn
Even though it was not difficult to split the data manually into a learn (train) and an evaluation (test) set, we don't have to do the splitting manually as shown above. Since this is often required in machine learning, scikit-learn has a predefined function for dividing data into training and test sets.
We will demonstrate this below. We will use 80% of the data as training and 20% as test data. We could just as well have taken 70% and 30%, because there are no hard and fast rules. The most important thing is that you rate your system fairly, based on data it did not see during training! In addition, there must be enough data in both data sets.

from sklearn.model_selection import train_test_split

data, labels = iris.data, iris.target
res = train_test_split(data, labels,
                       train_size=0.8,
                       test_size=0.2,
                       random_state=42)
train_data, test_data, train_labels, test_labels = res

n = 7
print(f"The first {n} data sets:")
print(test_data[:7])
print(f"The corresponding {n} labels:")
print(test_labels[:7])
OUTPUT:
The first 7 data sets: [[6.1 2.8 4.7 1.2] [5.7 3.8 1.7 0.3] [7.7 2.6 6.9 2.3] [6. 2.9 4.5 1.5] [6.8 2.8 4.8 1.4] [5.4 3.4 1.5 0.4] [5.6 2.9 3.6 1.3]] The corresponding 7 labels: [1 0 2 1 1 0 1]
Stratified random sample
Especially with relatively small amounts of data, it is better to stratify the division. Stratification means that we keep the original class proportion of the data set in the test and training sets. We calculate the class proportions of the previous split in percent using the following code. To calculate the number of occurrences of each class, we use the numpy function 'bincount'. It counts the number of occurrences of each value in the array of non-negative integers passed as an argument.
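As a quick illustration of how bincount works:

```python
import numpy as np

# bincount returns, at index i, how often the value i occurs
# in the array of non-negative integers.
counts = np.bincount(np.array([0, 1, 1, 2, 2, 2]))
print(counts)  # [1 2 3]
```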
import numpy as np
print('All:', np.bincount(labels) / float(len(labels)) * 100.0)
print('Training:', np.bincount(train_labels) / float(len(train_labels)) * 100.0)
print('Test:', np.bincount(test_labels) / float(len(test_labels)) * 100.0)

OUTPUT:

All: [33.33333333 33.33333333 33.33333333]
Training: [33.33333333 34.16666667 32.5 ]
Test: [33.33333333 30. 36.66666667]
To stratify the division, we can pass the label array as an additional argument to the train_test_split function:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
data, labels = iris.data, iris.target

res = train_test_split(data, labels,
                       train_size=0.8,
                       test_size=0.2,
                       random_state=42,
                       stratify=labels)
train_data, test_data, train_labels, test_labels = res

print('Training:', np.bincount(train_labels) / float(len(train_labels)) * 100.0)
print('Test:', np.bincount(test_labels) / float(len(test_labels)) * 100.0)

OUTPUT:

Training: [33.33333333 33.33333333 33.33333333]
Test: [33.33333333 33.33333333 33.33333333]
Admittedly, this was not a convincing example for the stratified random sample, because the Iris data set already has perfectly balanced classes: 50 elements each.
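To see why stratification matters, here is a plain-Python sketch on an artificially imbalanced label list. Splitting each class separately, as below, is essentially what the stratify option does (the class sizes are made up for illustration):

```python
import random
from collections import Counter

def stratified_split(labels, train_size=0.8, seed=42):
    """Split index positions class by class so both parts keep the class ratios."""
    by_class = {}
    for idx, label in enumerate(labels):
        by_class.setdefault(label, []).append(idx)
    train_idx, test_idx = [], []
    rng = random.Random(seed)
    for label, indices in by_class.items():
        rng.shuffle(indices)
        cut = int(len(indices) * train_size)
        train_idx.extend(indices[:cut])
        test_idx.extend(indices[cut:])
    return train_idx, test_idx

# imbalanced labels: 100 of class 0, 50 of class 1, 10 of class 2
labels = [0] * 100 + [1] * 50 + [2] * 10
train_idx, test_idx = stratified_split(labels)
print(Counter(labels[i] for i in train_idx))  # Counter({0: 80, 1: 40, 2: 8})
print(Counter(labels[i] for i in test_idx))   # Counter({0: 20, 1: 10, 2: 2})
```

Both parts keep the original 100:50:10 proportions, which a purely random split on such a small class 2 could easily violate.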
We will now work with the file strange_flowers.txt from the data directory. This data set is created in the chapter Generate Datasets in Python. The classes in this dataset have different numbers of items. First we load the data:
content = np.loadtxt("data/strange_flowers.txt", delimiter=" ")
data = content[:, :-1]    # cut off the target column
labels = content[:, -1]
labels.dtype
labels.shape
OUTPUT:
(795,)
res = train_test_split(data, labels,
                       train_size=0.8,
                       test_size=0.2,
                       random_state=42,
                       stratify=labels)
train_data, test_data, train_labels, test_labels = res

# np.bincount expects non negative integers:
print('All:', np.bincount(labels.astype(int)) / float(len(labels)) * 100.0)
print('Training:', np.bincount(train_labels.astype(int)) / float(len(train_labels)) * 100.0)
print('Test:', np.bincount(test_labels.astype(int)) / float(len(test_labels)) * 100.0)
OUTPUT:
All: [ 0. 23.89937107 25.78616352 28.93081761 21.3836478 ]
Training: [ 0. 23.89937107 25.78616352 28.93081761 21.3836478 ]
Test: [ 0. 23.89937107 25.78616352 28.93081761 21.3836478 ]

| https://python-course.eu/machine-learning/train-and-test-sets-by-splitting-learn-and-test-data.php | CC-MAIN-2022-05 | en | refinedweb |
35. Net Income Method Example with Numpy, Matplotlib and Scipy
By Bernd Klein. Last modified: 12 Dec 2021.
The Net Income Method (in Germany known as Einnahmeüberschussrechnung, EÜR) is a simplified profit determination method. Under German law, self-employed persons such as doctors, lawyers, architects and others have the choice of submitting annual accounts or using the simple net income method. What we present in this chapter of our Pandas tutorial will not only be interesting for small companies, especially in Germany and EU countries, but also for other countries in the world as well.
We are mainly interested in introducing the different ways in which pandas and python can solve such problems. We show what it is all about to monitor the flow of money and in this way to better understand the financial situation. The algorithms can also be used for tax purposes. But be warned, this is a very general treatment of the matter and needs to be adjusted to the actual tax situation in your country. Although it is mainly based on German law, we cannot guarantee its accuracy! So, before you actually use it, you have to make sure that it is suitable for your situation and the tax laws of your country.
Very often a Microsoft Excel file is used for maintaining a journal of the financial transactions. In the data1 folder there is an Excel file net_income_method_2020.xlsx with the accounting data of a fictitious company. We could have used simple text files like CSV as well.
Journal File
This Excel document contains two sheets: one named "journal" with the actual data, and one named "account numbers", which contains the mapping from the account numbers to the descriptions.
We will read this excel file into two DataFrame objects:
import pandas as pd

with pd.ExcelFile("/data1/net_income_method_2020.xlsx") as xl:
    accounts2descr = xl.parse("account numbers", index_col=0)
    journal = xl.parse("journal", index_col=0)

journal.index = pd.to_datetime(journal.index)
journal.index
OUTPUT:
DatetimeIndex(['2020-04-02', '2020-04-02', '2020-04-02', '2020-04-02', '2020-04-02', '2020-04-02', '2020-04-05', '2020-04-05', '2020-04-05', '2020-04-05', '2020-04-09', '2020-04-09', '2020-04-10', '2020-04-10', '2020-04-10', '2020-04-10', '2020-04-10', '2020-04-10', '2020-04-13', '2020-04-13', '2020-04-13', '2020-04-26', '2020-04-26', '2020-04-26', '2020-04-26', '2020-04-27', '2020-05-03', '2020-05-03', '2020-05-03', '2020-05-03', '2020-05-05', '2020-05-05', '2020-05-08', '2020-05-09', '2020-05-10', '2020-05-11', '2020-05-11', '2020-05-11', '2020-05-11', '2020-05-11', '2020-05-13', '2020-05-18', '2020-05-25', '2020-05-25', '2020-06-01', '2020-06-02', '2020-06-03', '2020-06-03', '2020-06-04', '2020-06-04', '2020-06-09', '2020-06-10', '2020-06-10', '2020-06-11', '2020-06-11', '2020-06-11', '2020-06-11', '2020-06-11', '2020-06-12', '2020-06-13', '2020-06-13', '2020-06-26', '2020-06-26', '2020-06-27', '2020-07-02', '2020-07-03', '2020-07-05', '2020-07-05', '2020-07-08', '2020-07-09', '2020-07-10', '2020-07-10', '2020-07-10', '2020-07-10', '2020-07-10', '2020-07-10', '2020-07-11', '2020-07-11', '2020-07-13', '2020-07-18', '2020-07-23', '2020-07-23', '2020-07-25', '2020-07-25', '2020-07-27', '2020-07-26', '2020-07-28'], dtype='datetime64[ns]', name='date', freq=None)
The first one is the tab "account numbers" which contains the mapping from the account numbers to the description of the accounts:
accounts2descr
The second data sheet "journal" contains the actual journal entries:
journal[:10]
There are many ways to analyze this data. We can for example sum up all the accounts:
account_sums = journal[["account number", "gross amount"]].groupby("account number").sum()
account_sums
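What this groupby("account number").sum() computes can be pictured as a plain dictionary accumulation. The account numbers and amounts below are made-up sample bookings, not values from the Excel file:

```python
# each entry: (account number, gross amount) — hypothetical sample bookings
bookings = [
    (4400, 100.0),
    (2010, -25.0),
    (4400, 250.0),
    (2010, -10.0),
    (4402, 80.0),
]

account_sums = {}
for account, amount in bookings:
    # accumulate the gross amounts per account number
    account_sums[account] = account_sums.get(account, 0.0) + amount

print(account_sums)  # {4400: 350.0, 2010: -35.0, 4402: 80.0}
```

pandas does the same grouping and summation, but returns a DataFrame indexed by the grouping key.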
Account Charts
What about showing a pie chart of these sums? We encounter one problem: pie charts cannot contain negative values. However, this is not a real problem. We can split the accounts into income and expense accounts, which also corresponds more closely to what we really want to see.
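The income/expense split used in the following sections boils down to filtering the summed accounts by sign. A plain-Python sketch with made-up figures:

```python
# hypothetical per-account sums (positive = income, negative = expense)
account_sums = {4400: 350.0, 2010: -35.0, 4402: 80.0, 2200: -120.0}

income = {acc: amt for acc, amt in account_sums.items() if amt > 0}
expenses = {acc: amt for acc, amt in account_sums.items() if amt < 0}

print(income)    # {4400: 350.0, 4402: 80.0}
print(expenses)  # {2010: -35.0, 2200: -120.0}
```

With pandas this filtering is the boolean indexing shown below; only the positive sums go into the pie chart.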
Charts for the Income Accounts
We create a DataFrame with the income accounts:
income_accounts = account_sums[account_sums["gross amount"] > 0]
income_accounts
We can now visualize these values in a pie chart.
plot = income_accounts.plot(y='gross amount', figsize=(5, 5), kind="pie")
You probably don't like the position of the legend? With the parameter bbox_to_anchor we can place it at any desired position — if we want to, even outside of the plot. With the relative coordinates (0.5, 0.5) we anchor it at the center of the plot. The legend is a box structure, so the question is: which point of this box does the position (0.5, 0.5) refer to? We define this with the additional parameter loc. We use this to position the legend with its upper left corner in the middle of the plot:
plot = income_accounts.plot(y='gross amount', figsize=(5, 5), kind="pie") plot.legend(bbox_to_anchor=(0.5, 0.5), loc="upper left")
OUTPUT:
<matplotlib.legend.Legend at 0x7f172ca03a90>
Now we position the lower right corner of the legend into the center of the plot:
plot = income_accounts.plot(y='gross amount', figsize=(5, 5), kind="pie") plot.legend(bbox_to_anchor=(0.5, 0.5), loc="lower right")
OUTPUT:
<matplotlib.legend.Legend at 0x7f172c9c5590>
There is another thing we can improve. We see the labels 4400, 4401, and 4402 beside each pie segment, and additionally in the legend. This is redundant. In the following we will turn the labels off in the plot, i.e. set them to empty strings, and set them explicitly in the legend method instead:
plot = income_accounts.plot(y='gross amount', figsize=(5, 5), kind="pie", labels=['', '', '']) plot.legend(bbox_to_anchor=(0.5, 0.5), labels=income_accounts.index)
OUTPUT:
<matplotlib.legend.Legend at 0x7f172c956550>
Now, we are close to perfection — just one more tiny thing. Some might prefer to see the actual description text rather than an account number. We will cut out this information from the DataFrame accounts2descr by using loc and the list of desired numbers [4400, 4401, 4402]. The result of this operation will be the argument of the set_index method. (Attention: reindex does not give the wanted result!)
descriptions = accounts2descr["description"].loc[[4400, 4401, 4402]] plot = income_accounts.plot(kind="pie", y='gross amount', figsize=(5, 5), labels=['', '', '']) plot.legend(bbox_to_anchor=(0.5, 0.5), loc="lower left", labels=descriptions)
OUTPUT:
<matplotlib.legend.Legend at 0x7f172c8ba8d0>
Would you prefer a bar chart? No problem, we just have to set the parameter kind to bar instead of pie:
plot = income_accounts.plot(y='gross amount', figsize=(5, 5), kind="bar", legend=False)
For horizontal bar charts we have to set kind to barh. So that it doesn't get too boring, we can also color the bars by passing a list of colors to the color parameter:
plot = income_accounts.plot(y='gross amount', figsize=(5, 5), kind="barh", legend=False, color=['green', 'orange', 'blue'])
Charts for the Expenses Accounts
We can now do the same with our expense accounts:
expenses_accounts = account_sums[account_sums["gross amount"] < 0]
expenses_accounts
acc2descr_expenses = accounts2descr["description"].loc[expenses_accounts.index]
acc2descr_expenses
OUTPUT:
account number
2010    souvenirs
2020    clothes
2030    other articles
2050    books
2100    insurances
2200    wages
2300    loans
2400    hotels
2500    petrol
2600    telecommunication
2610    internet
Name: description, dtype: object
expenses_accounts.set_index(acc2descr_expenses.values, inplace=True)
expenses_accounts *= -1
labels = [''] * len(expenses_accounts)
plot = expenses_accounts.plot(kind="pie",
                              y='gross amount',
                              figsize=(5, 5),
                              labels=labels)
plot.legend(bbox_to_anchor=(0.5, 0.5),
            labels=expenses_accounts.index)
OUTPUT:
<matplotlib.legend.Legend at 0x7f172868f0d0>
Tax Sums
We will now sum up the amounts according to their tax rates.
journal.drop(columns=["account number"])
87 rows × 4 columns
In the following we will define a function tax_sums that calculates the VAT sums according to the tax rates from a journal DataFrame:
def tax_sums(journal_df, months=None):
    """ Returns a DataFrame with sales and tax rates -
    If a number or list is passed to 'months', only the sales
    of the corresponding months will be used.
    Example: tax_sums(df, months=[3, 6]) will only use the
    months 3 (March) and 6 (June)"""
    if months:
        if isinstance(months, int):
            month_cond = journal_df.index.month == months
        elif isinstance(months, (list, tuple)):
            month_cond = journal_df.index.month.isin(months)
        positive = journal_df["gross amount"] > 0
        # sales_taxes eq. umsatzsteuer
        sales_taxes = journal_df[positive & month_cond]
        negative = journal_df["gross amount"] < 0
        # input_taxes equivalent to German Vorsteuer
        input_taxes = journal_df[negative & month_cond]
    else:
        sales_taxes = journal_df[journal_df["gross amount"] > 0]
        input_taxes = journal_df[journal_df["gross amount"] < 0]

    sales_taxes = sales_taxes[["tax rate", "gross amount"]].groupby("tax rate").sum()
    sales_taxes.rename(columns={"gross amount": "Sales Gross"}, inplace=True)
    sales_taxes.index.name = 'Tax Rate'

    input_taxes = input_taxes[["tax rate", "gross amount"]].groupby("tax rate").sum()
    input_taxes.rename(columns={"gross amount": "Expenses Gross"}, inplace=True)
    input_taxes.index.name = 'Tax Rate'

    taxes = pd.concat([input_taxes, sales_taxes], axis=1)
    # the input tax is derived from the expenses, the sales tax from the sales
    taxes.insert(1,
                 column="Input Taxes",
                 value=(taxes["Expenses Gross"] * taxes.index / 100).round(2))
    taxes.insert(3,
                 column="Sales Taxes",
                 value=(taxes["Sales Gross"] * taxes.index / 100).round(2))
    return taxes.fillna(0)

tax_sums(journal)
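Independent of the DataFrame mechanics, the VAT arithmetic itself is easy to check in isolation. Note the difference between adding tax on top of a net amount and extracting the tax share already contained in a gross amount (the figures below are illustrative, not taken from the journal):

```python
def tax_from_net(net, rate):
    """VAT added on top of a net amount, rate in percent."""
    return round(net * rate / 100, 2)

def tax_in_gross(gross, rate):
    """VAT share already contained in a gross amount, rate in percent."""
    return round(gross * rate / (100 + rate), 2)

print(tax_from_net(100.0, 19))  # 19.0
print(tax_in_gross(119.0, 19))  # 19.0
print(tax_in_gross(107.0, 7))   # 7.0
```

Both views agree: 100 net plus 19% tax gives 119 gross, and extracting the 19% share from 119 gross returns the same 19.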
stsum_5 = tax_sums(journal, months=5)
stsum_6 = tax_sums(journal, months=6)
stsum_5
tax_sums(journal, months=[5, 6])

| https://python-course.eu/numerical-programming/net-income-method-example-with-numpy-matplotlib-and-scipy.php | CC-MAIN-2022-05 | en | refinedweb |
Written by Tony Sui and Lewis Fogden,
Mon 27 February 2017, in category Data science
Twitter’s data can often provide valuable insight into your company's products, brand, clients, or competition. You can extract sentiment, volume, what's trending, and much more. Enough said, let’s stream some tweets!
First, a Twitter Application is required. You can create one here. You will need to sign in with your twitter account. Once you’ve signed in, click on Create New App in the top right corner. You will then see the following:
Give your application a Name and a Description. The Website is not so relevant at this stage, so you can put down any valid URL as a placeholder. You can also ignore Callback URL for now.
Inside your application, go to the tab Keys and Access Tokens, you can find the values for your Consumer Key (API Key) and Consumer Secret (API Secret). We will use them later.
Before we can start coding happily, we need to have the following two files ready:

- search_terms.txt, where you put the term/s that you want to search for. If you have multiple terms, put each of them on a new line.
- config.json, which looks like this:
{ "TERMS_FILE": "search_terms.txt", "APP_KEY": "Your Consumer Key (API Key)", "APP_SECRET": "Your Consumer Secret (API Secret)", "STORAGE_PATH": "./tweets/" }
Next, create a folder and name it tweets. This is where the streamed tweets will be stored.
Lastly, run pip install twython in your environment, since twython is the python library that we use to connect with twitter. In your script, import the following list of libraries:
import time
import json
import os
import logging
import requests
from threading import Thread
from http.client import IncompleteRead
from twython import TwythonStreamer
from twython import Twython
Create an Authentication class; within the class, we have the following methods.
This first method reads the configuration file and returns the parameters that we will use later:
def read_config_file(self, filename):
    with open(filename, "r") as f:
        s = f.read()
    d = json.loads(s)
    APP_KEY = d["APP_KEY"]
    APP_SECRET = d["APP_SECRET"]
    TERMS_FILE = d["TERMS_FILE"]
    STORAGE_PATH = d["STORAGE_PATH"]
    return APP_KEY, APP_SECRET, TERMS_FILE, STORAGE_PATH
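The config parsing can be exercised without Twitter at all by writing a throwaway JSON file first and reading it back the same way read_config_file does (the key values here are dummies):

```python
import json
import os
import tempfile

config = {
    "TERMS_FILE": "search_terms.txt",
    "APP_KEY": "dummy-key",
    "APP_SECRET": "dummy-secret",
    "STORAGE_PATH": "./tweets/",
}

# write a throwaway config file and read it back the same way the class does
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump(config, f)

with open(path, "r") as f:
    d = json.loads(f.read())

os.remove(path)
print(d["APP_KEY"])  # dummy-key
```

A round trip like this is a cheap way to catch malformed JSON or missing keys before the scraper ever tries to authenticate.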
To get the OAuth token and token secret, we need to first get the pin code from twitter. This method will output a link in the command line:
def get_oauth_link(self, APP_KEY, APP_SECRET):
    twitter = Twython(APP_KEY, APP_SECRET)
    auth = twitter.get_authentication_tokens()
    OAUTH_TOKEN = auth['oauth_token']
    OAUTH_TOKEN_SECRET = auth['oauth_token_secret']
    url = (auth['auth_url'])
    r = requests.get(url)
    logging.info(
        "Go to the URL below, log in, and copy-paste the PIN you get to "
        "'code.txt':"
    )
    logging.info(url)
    return url, OAUTH_TOKEN, OAUTH_TOKEN_SECRET
Copy and paste the link into a web browser and you will see the pin code. Create a file called code.txt, paste the pin code inside and save it. (Do not try to web-scrape the pin code, as you would risk your IP address being blacklisted by Twitter.) The following line is a quick way of creating the pin code file:
echo "123_my_pincode_456" > code.txt
The next method will sit and wait for the pin code in code.txt. Once it detects it, it will read and return it:
def wait_for_pin_code(self):
    while True:
        if not os.path.exists("code.txt"):
            time.sleep(5)
            logging.debug(
                "'code.txt' file doesn't exist, waiting to start listening"
                " to twitter until it is created"
            )
        else:
            pincode = 0
            with open("code.txt") as f:
                pincode = int(f.read().strip())
            logging.info("Pincode read successfully: " + str(pincode))
            return str(pincode)
The pin code is only valid for one connection, so every time we re-run the twitter scraper, we need to first remove code.txt if it exists; hence we have the following method:
def remove_old_code_file(self):
    if os.path.exists("code.txt"):
        os.remove("code.txt")
Lastly, the following method will take the pin code along with the other authentication parameters, and return the OAUTH_TOKEN and OAUTH_TOKEN_SECRET, which we will use later for creating a Twython streamer instance:
def auth_with_pin(self, APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET, pincode):
    twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
    final_step = twitter.get_authorized_tokens(pincode)
    logging.debug("Old OAUTH_TOKEN: " + str(OAUTH_TOKEN))
    logging.debug("Old OAUTH_TOKEN_SECRET: " + str(OAUTH_TOKEN_SECRET))
    OAUTH_TOKEN = final_step['oauth_token']
    OAUTH_TOKEN_SECRET = final_step['oauth_token_secret']
    logging.debug("New OAUTH_TOKEN: " + str(OAUTH_TOKEN))
    logging.debug("New OAUTH_TOKEN_SECRET: " + str(OAUTH_TOKEN_SECRET))
    return OAUTH_TOKEN, OAUTH_TOKEN_SECRET
This is the class that will do the actual heavy lifting – streaming data from twitter. But before that, we need to have a TooLongTermException class defined:
class TooLongTermException(Exception):
    def __init__(self, index):
        self.index = index

    def get_too_long_index(self):
        return self.index
This exception will be raised if the search term you specified is too long.
Next is the StreamListener class:
class StreamListener(TwythonStreamer):
    def __init__(self, APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET, comm_list):
        super().__init__(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
        self.tweet_list = comm_list

    def on_success(self, data):
        self.tweet_list.append(data)
        logging.info("tweet captured")

    def on_error(self, status_code, data):
        logging.error(status_code)
        logging.error(data)
        if int(status_code) == 406:
            data = str(data)
            try:
                index = int(data.strip().split()[4])
                logging.error("to remove index:" + str(index))
                raise TooLongTermException(index)
            except ValueError:
                logging.debug("ValueError while trying to extract number")
This function will instantiate the Authentication class, and return all parameters we need:
def get_authentication():
    auth = Authentication()
    logging.basicConfig(
        format='%(levelname)s: %(asctime)s - %(message)s',
        datefmt='%m/%d/%Y %I:%M:%S %p',
        level=logging.INFO
    )
    logging.info("Removing old pincode file")
    auth.remove_old_code_file()
    logging.info("Loading config file")
    APP_KEY, APP_SECRET, TERMS_FILE, STORAGE_PATH = auth.read_config_file("config.json")
    logging.info("Getting OAuth data")
    url, OAUTH_TOKEN, OAUTH_TOKEN_SECRET = auth.get_oauth_link(APP_KEY, APP_SECRET)
    logging.info("Waiting for pin code")
    pincode = auth.wait_for_pin_code()
    logging.info("Authorizing with pin code")
    OAUTH_TOKEN, OAUTH_TOKEN_SECRET = auth.auth_with_pin(
        APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET, pincode
    )
    logging.info("Start listening....")
    filter_terms = []
    with open(TERMS_FILE) as f:
        for term in f:
            filter_terms.append(term.strip())
    logging.info("List of terms to filter" + str(filter_terms))
    return APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET, filter_terms, STORAGE_PATH
The following two functions will listen for the streamed tweets and write them to files. Currently the writer flushes to a file for every 100 streamed tweets.
def twitter_listener(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET, comm_list):
    streamer = StreamListener(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET, comm_list)
    while True:
        try:
            streamer.statuses.filter(track=[', '.join(filter_terms)], language='en')
        except requests.exceptions.ChunkedEncodingError:
            print('error, but under control\n')
            pass
        except IncompleteRead:
            print('IncompleteRead error, but under control')
            pass
        except TooLongTermException as e:
            index_to_remove = e.get_too_long_index()
            filter_terms.pop(index_to_remove)

def twitter_writer(comm_list):
    internal_list = []
    time_start = time.time()
    while True:
        if len(internal_list) > 100:
            file_name = STORAGE_PATH + str(round(time.time())) + ".json"
            with open(file_name, 'w+', encoding='utf-8') as output_file:
                json.dump(internal_list, output_file, indent=4)
            internal_list = []
            logging.info('------- Data dumped -------')
            time_stop = time.time()
            logging.info('Time taken for 100 tweets: {0:.2f}s'.format(
                time_stop - time_start
            ))
            time_start = time.time()
        else:
            for i in range(len(comm_list)):
                internal_list.append(comm_list.pop())
            time.sleep(1)
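The writer's batching idea — drain the shared list and flush every 100 items — can be sketched and tested without threads, files or Twitter (drain_in_batches is a hypothetical helper, not part of the script above):

```python
def drain_in_batches(comm_list, batch_size=100):
    """Drain a shared list destructively and yield its items in batches."""
    internal = []
    while comm_list:
        internal.append(comm_list.pop())
        if len(internal) >= batch_size:
            yield internal
            internal = []
    if internal:
        # flush whatever is left over at the end
        yield internal

tweets = [{"id": i} for i in range(250)]
batches = list(drain_in_batches(tweets, batch_size=100))
print([len(b) for b in batches])  # [100, 100, 50]
```

In the real script each yielded batch would become one JSON file in the tweets folder.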
Finally, we run the twitter scraper inside an if __name__ == '__main__' block. This ensures that the scraper is only launched if we run the script with the interpreter directly; if it is imported into another script as a module, only the defined classes and functions will be imported. The listener and writer will run in different threads.
if __name__ == '__main__':
    # Get the authentication
    APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET, filter_terms, STORAGE_PATH = get_authentication()
    comm_list = []

    # Start the threads
    listener = Thread(target=twitter_listener, args=(
        APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET, comm_list
    ))
    listener.start()

    writer = Thread(target=twitter_writer, args=(comm_list,))
    writer.start()

    writer.join()
    listener.join()
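The listener/writer pair above is a producer–consumer pattern sharing a plain list. A stripped-down sketch of that wiring with stand-in functions (no Twitter involved):

```python
from threading import Thread

shared = []

def producer(items, out):
    # stands in for twitter_listener appending captured tweets
    for item in items:
        out.append(item)

def consumer(src, sink, expected):
    # stands in for twitter_writer draining the shared list
    while len(sink) < expected:
        while src:
            sink.append(src.pop(0))

collected = []
p = Thread(target=producer, args=(list(range(5)), shared))
c = Thread(target=consumer, args=(shared, collected, 5))
p.start()
c.start()
p.join()
c.join()
print(sorted(collected))  # [0, 1, 2, 3, 4]
```

CPython's list append/pop are atomic enough for this toy case; for anything serious a queue.Queue would be the more robust hand-off channel.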
If you have followed through to here, great job! Now you can run the twitter script, sit back and relax, and look at all the tweets streaming in. This should feel awesome!

| http://blog.keyrus.co.uk/streaming_data_from_twitter_using_python.html | CC-MAIN-2022-05 | en | refinedweb |
So now we know our way round the front end part of a new React + Web API project, what about the Web API part?
Everything else in the new project (apart from the ClientApp folder) is there to make your Web API work.
First up we have the Controllers folder.
API controllers for your data
In the controllers folder you’ll find an example controller which returns some hard-coded data for your application.
[Route("api/[controller]")]
public class SampleDataController : Controller
{
    // other code omitted

    [HttpGet("[action]")]
    public IEnumerable<WeatherForecast> WeatherForecasts()
    {
        // generates and returns random weather data (body omitted)
    }

    // other code omitted
}
At the top we have a Route attribute, which means any requests to our application at /api/SampleData will be routed here.

The WeatherForecasts method has its own HttpGet("[action]") attribute. This tells ASP.NET Core's routing engine to take any requests to /api/sampledata/WeatherForecasts and forward them to this method (our controller action).
The rest of the code in this method (and more code in the controller which I've omitted for brevity) simply generates some random "weather" data and returns it as an IEnumerable of WeatherForecast.
The shape of the returned data is defined by the WeatherForecast type, which looks like this…
public class WeatherForecast
{
    public string DateFormatted { get; set; }
    public int TemperatureC { get; set; }
    public string Summary { get; set; }

    public int TemperatureF
    {
        get
        {
            return 32 + (int) (TemperatureC / 0.5556);
        }
    }
}
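The TemperatureF getter uses 1/0.5556 ≈ 1.8 as the Celsius-to-Fahrenheit factor and truncates to an integer. Replicating it in Python shows the (slightly lossy) rounding behaviour:

```python
def temperature_f(temperature_c):
    # mirrors the C# getter: 32 + (int)(TemperatureC / 0.5556)
    return 32 + int(temperature_c / 0.5556)

print(temperature_f(0))    # 32
print(temperature_f(25))   # 76
print(temperature_f(100))  # 211 (the exact conversion would give 212)
```

The one-degree drift at higher temperatures comes from the truncation combined with 0.5556 not being exactly 5/9 — harmless for demo data, but worth knowing if you copy the pattern.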
Testing your work
To test a simple GET like this you can launch your application and click through the various screens in your front end app, or you can take a more direct approach and make HTTP requests direct to your API.
Start by launching your application.
If this is the first time you’ve launched you may run into a warning message about a “Potential Security Risk Ahead”; this is because ASP.NET is attempting to run using HTTPS but uses a special “self-signed” development certificate to make it work.
You wouldn’t want to use that certificate for deployment to production but it’s fine to accept this certificate and continue on your local machine. Here’s what the warning looks like in Firefox.
Once it’s up and runnning, you can test that GET action by simply entering the URL in the browser…
Alternatively you can use other tools to test your API.
Postman and Insomnia are the two major contenders for this job, with my personal favourite being Insomnia.
To test your API using Insomnia you need to download and install it, then take note of the address your application is running on, and enter it into the URL box at the top of the screen.
Ensure the request type is GET, hit Send, and you should see your lovely random weather data returned.
Startup.cs
The other file of note in the backend part of the project is startup.cs.
In ASP.NET Core this is where you’ll find yourself configuring your application.
Take a look and you’ll see various calls to tell ASP.NET how it should operate.
For example, the following code tells ASP.NET Core to show a “developer friendly” error message if we’re running in development mode (on a development machine)…
if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}
Most of this file is bog standard ASP.NET Core but there are a few React/SPA specific calls.
Most notably you’ll see a call to
UseSpaStaticFiles.
When you run your application in production, your React code will be compiled and minified into static .html and .js files in ClientApp/build. UseSpaStaticFiles and the corresponding call to AddSpaStaticFiles ensure these compiled and minified files will be served to the browser when a user visits your application.
UseSpa similarly ensures that the default starting page for your React application is served up when requests are made to the application.
This code…
if (env.IsDevelopment())
{
    spa.UseReactDevelopmentServer(npmScript: "start");
}
… makes sure that your React application is served using the React Development server when running in Development mode (on your machine).
This gives you more detailed output in the console of the browser and also handles things like auto-refreshing the browser when you make changes to your application.
Launch and/or publish your application
It’s worth knowing what’s going on when you launch or publish your application.
If you take a gander at your application's .csproj file you'll notice some calls to npm.
<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
  <Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />
  <!-- rest of code omitted for brevity -->
</Target>
When you publish your application these npm calls ensure any front-end dependencies are downloaded and that your application is built (compiled and minified), ready to be served in production.
To run your project locally you can just hit CTRL+F5 (in VS) and both parts of the application (front-end and back-end) will be launched together, running at the same web address.
You can control which port is used by modifying the launchSettings.json file in the Properties folder.
So in this example, if you launched this application using IIS Express it would run on port 40512 (http) or 44360 (https).
"iisSettings": {
  "windowsAuthentication": false,
  "anonymousAuthentication": true,
  "iisExpress": {
    "applicationUrl": "",
    "sslPort": 44360
  }
},
Launching the React application on its own URL
One downside to running both the React and ASP.NET Core parts of your application together is that any changes to your C# code will trigger a recompile and restart of your application, rendering your React front-end unresponsive for a few seconds.
The ASP.NET Core build will also take longer because it also has to rebuild the React part of the application, even if you haven’t changed it.
To work around this you can choose to launch the React part of the application separately, on its own port.
To do so, add a .env file to the ClientApp directory and add this setting:
BROWSER=none
Now you’ll need to launch the React application manually (it won’t launch when you launch the .net application):
cd ClientApp
npm start
By default your React app will launch on port 3000.
Now you need to alert ASP.NET Core to this fact by making a small change to startup.cs. Replace spa.UseReactDevelopmentServer with this code:
spa.UseProxyToSpaDevelopmentServer("");
Now when you build and launch your ASP.NET Core app it won’t launch its own React server or restart the one you’ve manually started.
You’ll find your application running at whilst your ASP.NET Core API continues to run on the port specified in
launchSettings.json.
There are a few gotchas to watch out for if you go down this route: specifically you might find your network requests to the API start failing; my next post will explain why (and how to fix it).
All posts in the Getting started with the ASP.NET Core React Template series.
NAMEqbloop.h - Main loop manages timers, jobs and polling sockets.
SYNOPSIS
#include <qb/qbloop.h>
DESCRIPTIONOnly a weaker sense of priorities is implemented, alluding to distinct set of pros and cons compared to the stronger, strict approach to them as widely applied in this problem space (since the latter gives the application more control as the effect of the former can still be achieved with some reductions, whereas it is not straightforward the other way around; cf. static priority task scheduling vs. relative fine-tuning within a single priority domain with nice(2)):
implicit mitigation for deadlock-prone priority arrangements
less predictable (proportional probability based, we can talk about an advisory effect of the priorities) responses to the arrival of the high-ranked events (i.e. in the process of the picking the next event to handle from the priority queue when at least two different priorities are eligible at the moment)
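The "proportional probability" selection described above can be pictured as a weighted pick among the priority levels that currently have ready events — a higher priority only raises the odds of being picked next. This is an illustrative sketch, not libqb's actual code:

```python
def pick_queue(ready_weights, u):
    """Pick a queue name given weights of ready queues and u in [0, 1)."""
    total = sum(ready_weights.values())
    threshold = u * total
    running = 0.0
    for queue, weight in sorted(ready_weights.items()):
        running += weight
        if threshold < running:
            return queue
    return queue  # guard against floating-point edge cases

# two ready priority levels: high (weight 3) and low (weight 1)
ready = {"high": 3, "low": 1}
picks = [pick_queue(ready, u) for u in (0.1, 0.5, 0.7, 0.9)]
print(picks)  # ['high', 'high', 'high', 'low']
```

With weight 3 vs. 1, the high-priority queue wins about three quarters of the draws, but the low-priority queue can still be serviced — which is the starvation-mitigation property the list above alludes to.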
One practical application for this module of libqb is in combination with IPC servers based on the API published in qbipcs.h (the qb_ipcs_poll_handlers structure maps fittingly onto the control functions published here).
Get the qualifier from an ACL entry
#include <sys/acl.h> void *acl_get_qualifier( acl_entry_t entry_d );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The acl_get_qualifier() function gets the qualifier from an ACL entry and returns a pointer to a copy of it. The data type of the pointer depends on the type of the entry:
When you're finished with the copy of the qualifier, use acl_free() to release it.
This function is based on the withdrawn POSIX draft P1003.1e. | https://www.qnx.com/developers/docs/7.1/com.qnx.doc.neutrino.lib_ref/topic/a/acl_get_qualifier.html | CC-MAIN-2022-05 | en | refinedweb |
@Retention(value=RUNTIME) @Target(value=FIELD) public @interface LinkingObjects
RealmResults.
To expose reverse relationships for use, create a declaration as follows:

public class Person extends RealmObject {
    String name;
    Dog dog; // Normal relation
}

public class Dog extends RealmObject {
    // This holds all Person objects with a relation to this Dog object (= linking objects)
    @LinkingObjects("dog")
    final RealmResults<Person> owners = null;
}

// Find all Dogs with at least one owner named John
realm.where(Dog.class).equalTo("owners.name", "John").findAll();

In the above example `Person` is related to `Dog` through the field `dog`. This in turn means that an implicit reverse relationship exists between the class `Dog` and the class `Person`. This inverse relationship is made public and queryable by the `RealmResults` field annotated with `@LinkingObjects`. This makes it possible to query properties of the dog's owners without having to manually maintain an "owner" field in the `Dog` class.
Linking objects have the following properties:
In addition, they have the following restrictions:
If DogLover is declared as follows:

public class DogLover extends RealmObject {
    String name;
    List<Dog> dogs = new ArrayList<Dog>();
}

then the following code executes without error:
Dog fido = new Dog();
DogLover john = new DogLover();
john.dogs.add(fido);
john.dogs.add(fido);
assert john.dogs.size() == 2;
assert fido.owners.size() == 2;
Querying an inverse relationship is like querying any RealmResults. This means that an inverse relationship cannot be null, but it can be empty (length is 0). It is possible to query fields in the source class; this is equivalent to link queries. Please read the documentation on link queries for more information.
/* ... */
#include "SecurityManager.h"
#include "pretty_printer.h"

#if MBED_CONF_APP_FILESYSTEM_SUPPORT
#include "LittleFileSystem.h"
#include "HeapBlockDevice.h"
#endif //MBED_CONF_APP_FILESYSTEM_SUPPORT

/** ... */
char DEVICE_NAME[] = "SM_device";

/* we have to specify the disconnect call because of ambiguous overloads */
typedef ble_error_t (Gap::*disconnect_call_t)(ble::connection_handle_t, ble::local_disconnection_reason_t);
const static disconnect_call_t disconnect_call = &Gap::disconnect;

/* for demonstration purposes we will store the peer device address
 * of the device that connects to us in the first demonstration
 * so we can use its address to reconnect to it later */
static ble::address_t peer_address;

class SMDevice : /* ... */ public ble::Gap::EventHandler
{
public:
    SMDevice(BLE &ble, events::EventQueue &event_queue, ble::address_t &peer_address) /* ... */

    /* ... gap events */
    _ble.gap().setEventHandler(this);

    error = _ble.init(this, &SMDevice::on_init_complete);

    if (error) {
        printf("Error returned by BLE::init.\r\n");
        return;
    }

    /* this will not return until shutdown */
    _event_queue.dispatch_forever();
}

    /* This path will be used to store bonding information but will fallback
     * to storing in memory if file access fails (for example due to lack of a filesystem) */
    const char* db_path = "/fs/bt_sec_db";

    /* If the security manager is required this needs to be called before any
     * calls to the Security manager happen. */
    error = _ble.securityManager().init(
        true,
        false,
        SecurityManager::IO_CAPS_NONE,
        NULL,
        false,
        db_path
    );

    if (error) {
        printf("Error during init %d\r\n", error);
        return;
    }

    error = _ble.securityManager().preserveBondingStateOnReset(true);

    if (error) {
        printf("Error during preserveBondingStateOnReset %d\r\n", error);
    }

#if MBED_CONF_APP_FILESYSTEM_SUPPORT
    /* Enable privacy so we can find the keys */
    error = _ble.gap().enablePrivacy(true);

    if (error) {
        printf("Error enabling privacy\r\n");
    }

    Gap::peripheral_privacy_configuration_t configuration_p = {
        /* use_non_resolvable_random_address */ false,
        Gap::peripheral_privacy_configuration_t::REJECT_NON_RESOLVED_ADDRESS
    };
    _ble.gap().setPeripheralPrivacyConfiguration(&configuration_p);

    Gap::central_privay_configuration_t configuration_c = {
        /* use_non_resolvable_random_address */ false,
        Gap::CentralPrivacyConfiguration_t::RESOLVE_AND_FORWARD
    };
    _ble.gap().setCentralPrivacyConfiguration(&configuration_c);

    /* this demo switches between being master and slave */
    _ble.securityManager().setHintFutureRoleReversal(true);
#endif

    /* Tell the security manager to use methods in this class to inform us
     * of any events. Class needs to implement SecurityManagerEventHandler. */
    _ble.securityManager().setSecurityManagerEventHandler(this);

    /* gap events also handled by this class */
    _ble.gap().setEventHandler(this);

    /* print device address */
    print_mac_address();

    /* start test in 500 ms */
    _event_queue.call_in(500, this, &SMDevice::start);
};

    /* Event handler */

    /* ... */

        /* disconnect in 2 s */
        _event_queue.call_in(
            2000,
            &_ble.gap(),
            disconnect_call,
            _handle,
            ble::local_disconnection_reason_t(ble::local_disconnection_reason_t::USER_TERMINATION)
        );
    }

    /** This is called by Gap to notify the application we disconnected,
     * in our case it ends the demonstration. */
    virtual void onDisconnectionComplete(const ble::DisconnectionCompleteEvent &)
    {
        printf("Disconnected\r\n");
        _event_queue.break_dispatch();
    };

    virtual void onAdvertisingEnd(const ble::AdvertisingEndEvent &event)
    {
        if (!event.isConnected()) {
            printf("Advertising timed out - aborting\r\n");
            _event_queue.break_dispatch();
        }
    }

    virtual void onScanTimeout(const ble::ScanTimeoutEvent &)
    {
        printf("Scan timed out - aborting\r\n");
        _event_queue.break_dispatch();
    }

private:
    DigitalOut _led1;

protected:
    BLE &_ble;
    events::EventQueue &_event_queue;
    ble::address_t &_peer_address;
    ble::connection_handle_t _handle;
    bool _is_connecting;
};

/** A peripheral device will advertise, accept the connection and request
 * a change in link security. */
class SMDevicePeripheral : public SMDevice {
public:
    SMDevicePeripheral(BLE &ble, events::EventQueue &event_queue, ble::address_t &peer_address)
        : SMDevice(ble, event_queue, peer_address) { }

    virtual void start()
    {
        /* Set up and start advertising */
        uint8_t adv_buffer[ble::LEGACY_ADVERTISING_MAX_SIZE];

        /* use the helper to build the payload */
        ble::AdvertisingDataBuilder adv_data_builder(adv_buffer);

        adv_data_builder.setFlags();
        adv_data_builder.setName(DEVICE_NAME);

        /* Set payload for the set */
        ble_error_t error = _ble.gap().setAdvertisingPayload(
            ble::LEGACY_ADVERTISING_HANDLE,
            adv_data_builder.getAdvertisingData()
        );

        if (error) {
            print_error(error, "Gap::setAdvertisingPayload() failed");
            _event_queue.break_dispatch();
            return;
        }

        ble::AdvertisingParameters adv_parameters(
            ble::advertising_type_t::CONNECTABLE_UNDIRECTED
        );

        error = _ble.gap().setAdvertisingParameters(
            ble::LEGACY_ADVERTISING_HANDLE,
            adv_parameters
        );

        if (error) {
            print_error(error, "Gap::setAdvertisingParameters() failed");
            return;
        }

        error = _ble.gap().startAdvertising(ble::LEGACY_ADVERTISING_HANDLE);

        if (error) {
            print_error(error, "Gap::startAdvertising() failed");
            return;
        }

        printf("Please connect to device\r\n");
    }

    /** ... */
    virtual void onConnectionComplete(const ble::ConnectionCompleteEvent &event)
    {
        ble_error_t error;

        /* remember the device that connects to us now so we can connect to it
         * during the next demonstration */
        _peer_address = event.getPeerAddress();

        printf("Connected to peer: ");
        print_address(event.getPeerAddress().data());

        _handle = event.getConnectionHandle();

        /* ... */
    }
};

class SMDeviceCentral : public SMDevice {
public:
    SMDeviceCentral(BLE &ble, events::EventQueue &event_queue, ble::address_t &peer_address)
        : SMDevice(ble, event_queue, peer_address) { }

    virtual void start()
    {
        ble::ScanParameters params;
        ble_error_t error = _ble.gap().setScanParameters(params);

        if (error) {
            print_error(error, "Error in Gap::startScan %d\r\n");
            return;
        }

        /* start scanning, results will be handled by onAdvertisingReport */
        error = _ble.gap().startScan();

        if (error) {
            print_error(error, "Error in Gap::startScan %d\r\n");
            return;
        }

        printf("Please advertise\r\n");

        printf("Scanning for: ");
        print_address(_peer_address.data());
    }

private:
    /* Gap::EventHandler */

    /** Look at scan payload to find a peer device and connect to it */
    virtual void onAdvertisingReport(const ble::AdvertisingReportEvent &event)
    {
        /* don't bother with analysing scan result if we're already connecting */
        if (_is_connecting) {
            return;
        }

        /* parse the advertising payload, looking for a discoverable device */
        if (event.getPeerAddress() == _peer_address) {
            ble_error_t error = _ble.gap().stopScan();

            if (error) {
                print_error(error, "Error caused by Gap::stopScan");
                return;
            }

            ble::ConnectionParameters connection_params(
                ble::phy_t::LE_1M,
                ble::scan_interval_t(50),
                ble::scan_window_t(50),
                ble::conn_interval_t(50),
                ble::conn_interval_t(100),
                ble::slave_latency_t(0),
                ble::supervision_timeout_t(100)
            );
            connection_params.setOwnAddressType(ble::own_address_type_t::RANDOM);

            error = _ble.gap().connect(
                event.getPeerAddressType(),
                event.getPeerAddress(),
                connection_params
            );

            if (error) {
                print_error(error, "Error caused by Gap::connect");
                return;
            }

            /* we may have already scan events waiting
             * to be processed so we need to remember
             * that we are already connecting and ignore them */
            _is_connecting = true;

            return;
        }
    }

    /** This is called by Gap to notify the application we connected,
     * in our case it immediately request pairing */
    virtual void onConnectionComplete(const ble::ConnectionCompleteEvent &event)
    {
        if (event.getStatus() == BLE_ERROR_NONE) {
            /* store the handle for future Security Manager requests */
            _handle = event.getConnectionHandle();

            printf("Connected\r\n");

            /* in this example the local device is the master so we request pairing */
            ble_error_t error = _ble.securityManager().requestPairing(_handle);

            if (error) {
                printf("Error during SM::requestPairing %d\r\n", error);
                return;
            }

            /* upon pairing success the application will disconnect */
            return;
        }

        /* failed to connect - restart scan */
        ble_error_t error = _ble.gap().startScan();

        if (error) {
            print_error(error, "Error in Gap::startScan %d\r\n");
            return;
        }
    };
};

#if MBED_CONF_APP_FILESYSTEM_SUPPORT
bool create_filesystem()
{
    static LittleFileSystem fs("fs");

    /* replace this with any physical block device your board supports (like an SD card) */
    static HeapBlockDevice bd(4096, 256);

    int err = bd.init();
    if (err) {
        return false;
    }

    err = bd.erase(0, bd.size());
    if (err) {
        return false;
    }

    err = fs.mount(&bd);
    if (err) {
        /* Reformat if we can't mount the filesystem */
        printf("No filesystem found, formatting...\r\n");
        err = fs.reformat(&bd);
        if (err) {
            return false;
        }
    }

    return true;
}
#endif //MBED_CONF_APP_FILESYSTEM_SUPPORT

int main()
{
    BLE& ble = BLE::Instance();
    events::EventQueue queue;

#if MBED_CONF_APP_FILESYSTEM_SUPPORT
    /* if filesystem creation fails or there is no filesystem the security manager
     * will fallback to storing the security database in memory */
    if (!create_filesystem()) {
        printf("Filesystem creation failed, will use memory storage\r\n");
    }
#endif

    while (1) {
        {
            printf("\r\n PERIPHERAL \r\n\r\n");
            SMDevicePeripheral peripheral(ble, queue, peer_address);
            peripheral.run();
        }
        {
            printf("\r\n CENTRAL \r\n\r\n");
            SMDeviceCentral central(ble, queue, peer_address);
            central.run();
        }
    }

    return 0;
}

Source: https://os.mbed.com/docs/mbed-os/v6.2/apis/securitymanager.html
This is the way I went about the "Validate a Sudoku Board" problem.
This one is fairly straightforward: your basic task is to make sure that the value of any given cell isn't replicated in that cell's row, column, or "cube". The question is how you go about it. There is a space-efficient way that requires more looping, or a method that visits each cell only once but requires extra space.
Given that the puzzle is fixed in size (9x9 grid), I’ll opt for better algorithmic complexity and use space to hold the set of all numbers seen for each given row, column, and cube. By using 27 sets to hold these values (9 for the rows, 9 for the columns, 9 for the cubes), we can easily see if we’ve already seen the current number in the given row, column, or cube and immediately declare the puzzle invalid.
Of course, we could get even more space-efficient and use 27 BitArrays (or one large one partitioned, etc.), but then we lose the elegance of set logic. I like keeping things logically simple and then optimizing for space after determining there is a need, so I’d probably opt to use Sets in my original answer in an evaluation, and then mention that if space were a concern, I would then optimize to BitArray.
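As a hedged illustration of that space optimization (in Python rather than this post's C#, with a function name and board representation of my own choosing), the 27 BitArrays can be collapsed into 27 integer bitmasks, since each row, column, and cube only needs 9 bits:

```python
def is_valid_sudoku(board):
    """board: 9x9 list of single-character strings, digits '1'-'9' or ' ' for blanks."""
    # One 9-bit mask per row, column, and 3x3 cube; bit (d-1) set means digit d was seen.
    rows = [0] * 9
    cols = [0] * 9
    cubes = [0] * 9
    for r in range(9):
        for c in range(9):
            ch = board[r][c]
            if not ch.isdigit():
                continue
            bit = 1 << (int(ch) - 1)
            # Same cube index formula as the C# version below: 3*(row/3) + column/3.
            cube = 3 * (r // 3) + (c // 3)
            if (rows[r] | cols[c] | cubes[cube]) & bit:
                return False  # digit already seen in this row, column, or cube
            rows[r] |= bit
            cols[c] |= bit
            cubes[cube] |= bit
    return True

# The same board as the driver further down; it is a valid puzzle.
board = [
    ['5','3',' ',' ','7',' ',' ',' ',' '],
    ['6',' ',' ','1','9','5',' ',' ',' '],
    [' ','9','8',' ',' ',' ',' ','6',' '],
    ['8',' ','2',' ','6',' ',' ',' ','3'],
    ['4',' ',' ','8',' ','3',' ',' ','1'],
    ['7',' ',' ',' ','2',' ',' ',' ','6'],
    [' ','6',' ',' ',' ',' ','2','8',' '],
    [' ',' ',' ','4','1','9',' ',' ','5'],
    [' ',' ',' ',' ','8',' ',' ','7','9'],
]
print(is_valid_sudoku(board))  # True
```

The set-based logic and the bitmask logic are interchangeable; the masks just trade the readability of HashSet.Add() for constant, tiny storage.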
So, here’s my solution:
public class SudokuBoard
{
    private readonly char[,] board;

    // validate board is a 9x9 array
    public SudokuBoard(char[,] board)
    {
        if (board == null || board.GetLength(0) != 9 || board.GetLength(1) != 9)
        {
            throw new ArgumentException("Board is not valid size for Sudoku");
        }

        this.board = board;
    }

    public bool Validate()
    {
        // yes, i could use BitArray for space efficiency, but i like the logical feel
        // of the set and how it returns false on Add() if already there.
        var rows = Enumerable.Range(1, 9).Select(i => new HashSet<char>()).ToArray();
        var columns = Enumerable.Range(1, 9).Select(i => new HashSet<char>()).ToArray();
        var cubes = Enumerable.Range(1, 9).Select(i => new HashSet<char>()).ToArray();

        // process each cell only once
        for (int row = 0; row < 9; ++row)
        {
            for (int column = 0; column < 9; ++column)
            {
                var current = board[row, column];
                if (char.IsDigit(current))
                {
                    // determine which of the "cubes" the row/col fall in
                    var cube = 3 * (row / 3) + (column / 3);

                    // if add to any set returns false, it was already there.
                    if (!rows[row].Add(current) || !columns[column].Add(current) || !cubes[cube].Add(current))
                    {
                        return false;
                    }
                }
            }
        }

        return true;
    }
}
Note that I’m not checking for invalid characters for the sake of brevity, though we could easily do this in the constructor, or in the Validate() method itself:
var current = board[row, column];
if (char.IsDigit(current))
{
    // blah blah blah
}
else if (!char.IsWhiteSpace(current))
{
    return false;
}
Finally, here’s a simple driver to illustrate usage:
public static class Driver
{
    public static void Perform()
    {
        var board = new char[9,9]
        {
            {'5', '3', ' ', ' ', '7', ' ', ' ', ' ', ' '},
            {'6', ' ', ' ', '1', '9', '5', ' ', ' ', ' '},
            {' ', '9', '8', ' ', ' ', ' ', ' ', '6', ' '},
            {'8', ' ', '2', ' ', '6', ' ', ' ', ' ', '3'},
            {'4', ' ', ' ', '8', ' ', '3', ' ', ' ', '1'},
            {'7', ' ', ' ', ' ', '2', ' ', ' ', ' ', '6'},
            {' ', '6', ' ', ' ', ' ', ' ', '2', '8', ' '},
            {' ', ' ', ' ', '4', '1', '9', ' ', ' ', '5'},
            {' ', ' ', ' ', ' ', '8', ' ', ' ', '7', '9'},
        };

        var validator = new SudokuBoard(board);

        Console.WriteLine("The Sudoku board is " + (validator.Validate() ? "valid" : "invalid"));
    }
}

Wednesday, June 3, 2015 2:52 AM
Source: http://geekswithblogs.net/BlackRabbitCoder/archive/2015/06/02/solution-to-little-puzzlersndashvalidate-a-sudoku-board.aspx
Published: 08/15/2012, Last Updated: 08/15/2012
By Dr. Michael J. Gourlay
Download Fluid Simulation for Video Games (part 14) [PDF 1.1MB]
Download MjgIntelFluidDemo14.zip [ZIP 3.8MB]
Figure 1. Dyed fluid drop inside convex polyhedral container. Cyan arrows show the density gradient. Yellow balls show vortex particles (vortons).
Detecting collisions between objects with arbitrary geometry requires sophisticated algorithms and mathematics, especially if you want to do that in real time. Fortunately, however, for visual effects, detecting collisions between particles and objects is much simpler, because you can treat particles as points or spheres. Furthermore, for video games visual effects (VFX), you can usually get away with colliding against the convex hull of an object rather than dealing with its detailed geometry.
Part 13 described how to collide particles against convex polyhedral solids. Let's turn those solids inside out to make convex polyhedral holes. These holes let you contain fluid particles, which is useful when modeling liquids.
As in part 13, this article uses planes to create a half-space model of a convex polyhedron.
Before jumping directly into computing the distance between points and the multiple planes that form a convex polytope, you should use a broad-phase bounding-volume test to avoid expensive computations. Broad-phase collision detection should be fast, and although it’s okay for the broad-phase to detect particles that will not collide with the target, it must detect all that will. For a solid, a bounding sphere that circumscribes the target volume satisfies these requirements.
For testing whether a particle is inside a hole, however, instead of a sphere that circumscribes the outside, you need a sphere that inscribes the inside; any particles outside that sphere might collide with the exterior, and all particles inside the sphere definitely will not. Figure 2 shows an example of circumscribed and inscribed spheres. Inscribed spheres are to holes what circumscribed spheres are to solids.
Figure 2. Circumscribed and inscribed spheres serve as broad-phase collision volumes for solids and holes, respectively.
Change CollisionShape to discriminate between solids and holes. The broad-phase tests in FluidBodySim::CollideVortonsSlice and FluidBodySim::CollideTracersSlice depend on that property. For example, CollideTracersSlice uses this test:
if( collisionShape->IsHole() )
{
    const float fCombinedRadii = Max2( boundingRadius - rTracer.GetRadius() , 0.0f ) ;
    broadPhaseCollision = fSphereToTracer > fCombinedRadii ;
}
else
{
    const float fCombinedRadii = ( boundingRadius + rTracer.GetRadius() ) ;
    broadPhaseCollision = fSphereToTracer < fCombinedRadii ;
}
The narrow-phase collision detection tests also require some changes for holes. The change depends on whether you want to compute the distance between stationary or moving objects.
Contact Distance
First, consider stationary objects. As part 13 described, the distance of a point to a convex polytope is the same as the largest distance of the point to any feature of that polytope, where a feature could be a vertex, edge, or face. If you consider only faces, that will still tell you whether the point lies inside or outside the polyhedron, so to keep things simple, consider only faces.
Holes use exactly the opposite logic. Figure 3 compares collision detection for solids and holes. You can use identical logic to detect whether a point is inside or outside a hole simply by multiplying the point-to-plane distance by the parity.
Figure 3. Computing the distance between a point and solid and hole polytopes
This routine computes the point-to-plane distance for both solids and holes:
float ConvexPolytope::ContactDistance( const Vec3 & queryPoint , unsigned & idxPlaneLeastPenetration ) const
{
    float largestDistance = - FLT_MAX ;
    const size_t numPlanes = mPlanes.Size() ;
    for( unsigned iPlane = 0 ; iPlane < numPlanes ; ++ iPlane )
    {   // For each planar face of this convex hull...
        const float distToPlane = mPlanes[ iPlane ].Distance( queryPoint ) ;
        if( distToPlane > largestDistance )
        {   // Point distance to iPlane is largest of all planes visited so far.
            largestDistance = distToPlane ;
            // Remember this plane.
            idxPlaneLeastPenetration = iPlane ;
        }
    }
    return largestDistance * GetParity() ;
}

The multiplication by parity is the key aspect that lets this code work for both solids and holes.
Likewise, computing the contact point and normal direction also needs the parity factor:
Vec3 ConvexPolytope::ContactPoint( const Vec3 & queryPoint , const unsigned idxPlaneLeastPenetration , const float distance ) const
{
    const Vec3 & contactNormal = mPlanes[ idxPlaneLeastPenetration ].GetNormal() * GetParity() ;
    Vec3 contactPoint = queryPoint - contactNormal * distance ;
    return contactPoint ;
}

You could potentially make this code more elegant by changing the definition of a plane so that the d parameter is negative, thereby eliminating the need to use GetParity in these two routines. But I find it easier to make the solid-versus-hole distinction explicit, and the logic differs for moving objects anyway.
Collision Detection
For detecting collisions between moving objects, the difference between solids and holes is subtler and slightly more complicated. Figure 4 shows why. The difference arises when the point is behind a planar face. If a point lies inside a solid, it could hypothetically get pushed in any direction and escape the solid, so which direction is correct? For the solid case, the point could have traveled so far in a single update that even though it's closer to one face, it might be moving along that face normal, hence not getting deeper. That implies that the point penetrated some other face. To determine which face the point is "most behind," compute the speed through each face, v · n̂, where v is the point velocity relative to the body and n̂ is the surface normal direction. That way, when the point is ejected from the body, it goes in the correct direction: opposite its precollision direction.
For holes, the situation differs. If a particle is outside the hole, then the velocity is irrelevant; the most appropriate direction for the particle to move is toward the inside of the hole. That is true regardless of whether the particle was already moving that way. The situation is not perfectly analogous to the solid case, where the correct “ejection direction” depends on the particle velocity. For holes, the particle velocity does not determine in which direction to move the particle to resolve the collision.
Figure 4. Detecting collisions between a point and exterior and interior polytopes
This routine computes the collision distance between a point and convex polyhedron:
float ConvexPolytope::CollisionDistance( const Vec3 & queryPoint , const Vec3 & queryPointRelativeVelocity , unsigned & idxPlaneLeastPenetration ) const
{
    float largestDistance = - FLT_MAX ;
    const size_t numPlanes = mPlanes.Size() ;
    for( unsigned iPlane = 0 ; iPlane < numPlanes ; ++ iPlane )
    {   // For each planar face of this convex hull...
        const Math::Plane & testPlane = mPlanes[ iPlane ] ;
        const float distToPlane = testPlane.Distance( queryPoint ) ;
        if( distToPlane > largestDistance )
        {   // Point distance to iPlane is largest of all planes visited so far.
            const float speedThroughPlane = queryPointRelativeVelocity * testPlane.GetNormal() * GetParity() ;
            if(     ( speedThroughPlane <= 0.0f )   // Query point is going deeper thru face.
                ||  ( distToPlane >= 0.0f )         // Query point is outside polytope.
                ||  ( IsHole() )                    // Polytope is a hole.
                )
            {   // Query point is moving deeper through this plane.
                largestDistance = distToPlane ;
                // Remember this plane.
                idxPlaneLeastPenetration = iPlane ;
            }
        }
    }
    return largestDistance * GetParity() ;
}

Note the use of IsHole and GetParity to handle holes.
If you made only the changes above to collision detection and put vortex particles in a hole, you would get nonsense. The trouble started back in parts 4 and 8.
Heavy fluid sinks, and light fluid rises. To capture that behavior, the fluid dynamics equations need to depend on density variation. But in the most general form, density variation includes sound waves, which are expensive to simulate and provide no improvement to VFX. The Boussinesq approximation dodges that problem by assuming that the fluid is in hydrostatic equilibrium, meaning that the only pressure gradient is the result of the weight of a stack of fluid pressing down on itself.
That idea worked well enough for unbounded fluids, as parts 8, 9, 10, and 11 demonstrated. But fluids remain in containers because the walls push back, as shown in Figure 5. Such a flow has multiple domains (separated by rigid body walls) across which density and pressure jump discontinuously. The “Further Reading” section in this article describes recent research into this interesting problem.
Figure 5. Density and pressure gradients at boundaries.
For the sake of brevity, however, this article postpones any reformulation of the simulation formulae and instead takes an easy way out. Let’s see what we can hack up.
The immediate problem is that the baroclinic term is not zero when it should be zero:

    ∂ω/∂t = (∇ρ × ∇p) / ρ²

The simulation code uses the simplification ∇p = ρ g (where g points down) and so never computes ∇p. The baroclinic term gets computed like this:

Vec3 densityGradient ;
mDensityGradientGrid.Interpolate( densityGradient , rVorton.mPosition ) ;
const Vec3 baroclinicGeneration = densityGradient ^ mGravAccel * oneOverDensity ;
At vertical walls, however, both pressure and density should jump horizontally. They should not generate vorticity when at equilibrium. But the simulation computes density on the grid, which does not incorporate the rigid body surface, nor does the mesh have sufficient resolution to do so. Furthermore, the simulation treats the pressure as being in hydrostatic balance and its gradient as being purely vertical. The spurious horizontal density gradient crosses with the vertical pressure gradient to create a spurious torque on the fluid.
To model this correctly, fluid simulations adapt their mesh to conform to the boundaries and treat the jump explicitly. (For example, read up on the Riemann problem and Riemann solvers.) Future articles will revisit this problem.
In lieu of adding a separate pressure variable and precisely resolving boundaries, you could "poison" the density gradient at container boundaries to prevent spurious baroclinic generation at walls. You could introduce this poison in at least two places: the VortonSim and the FluidBodySim.

Both approaches have benefits and drawbacks. Try both.
This first approach uses “hit” information to poison grid cells near boundaries:
static void PoisonDensityGradientSlice( UniformGrid< Vec3 > & densityGradientGrid , const Vector< Vorton > & particles , size_t iPclStart , size_t iPclEnd )
{   // Poison density gradient grid based on vortons in contact with container boundaries.
    const Vec3 zero( 0.0f , 0.0f , 0.0f ) ;
    for( size_t iParticle = iPclStart ; iParticle < iPclEnd ; ++ iParticle )
    {   // For each particle in the array...
        const Particle & rParticle = particles[ iParticle ] ;
        if( rParticle.mHitBoundary )
        {   // This particle hit a boundary.
            const Vec3 & rPosition = rParticle.mPosition ;
            // Zero out any gradients along contact normal.
            // Remove normal component at each surrounding gridpoint.
#if USE_TBB
            densityGradientGrid.RemoveComponent_ThreadSafe( rPosition , rParticle.mHitNormal ) ;
#else
            densityGradientGrid.RemoveComponent( rPosition , rParticle.mHitNormal ) ;
#endif
        }
    }
}
Another routine, FluidBodySim::PoisonDensityGradient, determines which grid points reside inside walls and zeroes the horizontal density gradient there. This snippet shows the most interesting part (and you can see the whole routine in the accompanying code archive):

const float contactDistance = convexPolytope->ContactDistance( gridPointPosition , physObjPosition , physObjOrientation , idxPlane ) ;
if( contactDistance < testRadius )
{   // Gridpoint is in contact with rigid body.
    // Zero out any gradients along contact normal.
    const Vec3 vContactPtWorld = convexPolytope->ContactPoint( gridPointPosition , physObjOrientation , idxPlane , contactDistance , contactNormal ) ;
    const float densGradAlongNormalMag = densGrad * contactNormal ;
    const Vec3 densGradAlongNormal = densGradAlongNormalMag * contactNormal ;
    densGrad -= densGradAlongNormal ;
}
As physics simulations go, this is a pretty bizarre hack. It causes important problems that need to be addressed, but as Alton Brown says, that’s another show.
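To see concretely why removing the wall-normal component of the density gradient suppresses the spurious torque, here is a small sketch in plain Python (the function names are mine, not from the article's code; the formula matches the baroclinic snippet above):

```python
def cross(a, b):
    # Standard 3D cross product on (x, y, z) tuples.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def baroclinic(density_gradient, gravity, density):
    # (grad rho x g) / rho, as in the VortonSim code above.
    gx, gy, gz = cross(density_gradient, gravity)
    return (gx / density, gy / density, gz / density)

def remove_component(vec, unit_normal):
    # Subtract the component of vec along unit_normal (the "poison" step).
    dot = sum(v * n for v, n in zip(vec, unit_normal))
    return tuple(v - dot * n for v, n in zip(vec, unit_normal))

# Spurious horizontal density gradient at a vertical wall whose normal is +x:
grad_rho = (2.0, 0.0, 0.0)
gravity  = (0.0, 0.0, -9.8)

torque = baroclinic(grad_rho, gravity, 1.0)        # nonzero: spurious vorticity
poisoned = remove_component(grad_rho, (1.0, 0.0, 0.0))
no_torque = baroclinic(poisoned, gravity, 1.0)     # zero after poisoning
```

The horizontal gradient crossed with vertical gravity yields a nonzero torque; once the wall-normal component is removed, the gradient (and hence the baroclinic term) vanishes at that grid point.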
The PoisonDensityGradient routine above needs special attention for it to become thread safe. It writes to grid points depending on particle positions. The loop iterates over particles. It would therefore be most natural to partition across threads by particle index. But particle index has no relationship to particle position, so multiple threads could simultaneously write to the same grid point. That would be a race condition.
You could first spatially partition vortons into the grid, then parallelize the problem across a spatial coordinate, thereby avoiding contention. Indeed, the simulation already partitions particles for computing viscous and thermal diffusion. So that approach would work.
Intel® TBB provides another option: lock-free atomic operations. One of those operations is called fetch_and_add and does what you want: it atomically adds a value to a variable. Unfortunately, it only operates on integer types, and you need to operate on floats. Instead, you can use compare_and_swap. It's much slower than fetch_and_add and requires additional code, but it's faster and cheaper than using a mutex, because it requires neither a context switch nor additional memory for the lock.
This utility routine provides similar functionality to fetch_and_add but works for floats:

inline void Float_FetchAndAdd( float & sum , const float & increment )
{   /// Atomically increment sum.
    tbb::atomic<float> & atomicSum = reinterpret_cast< tbb::atomic<float> & >( sum ) ;
    float sumOld , sumNew ;
    do
    {
        sumOld = atomicSum ;
        sumNew = sumOld + increment ;
    } while( atomicSum.compare_and_swap( sumNew , sumOld ) != sumOld ) ;
}
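The semantics of that retry loop can be modeled in Python. This is only a sketch: AtomicFloat below is a toy stand-in for tbb::atomic<float>, and a lock emulates the atomicity that a real hardware compare-and-swap provides in a single instruction.

```python
import threading

class AtomicFloat:
    """Toy model of an atomic float cell with compare_and_swap semantics."""
    def __init__(self, value=0.0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        with self._lock:
            return self._value

    def compare_and_swap(self, new, comparand):
        # Atomically: if value == comparand, store new. Always return the old value.
        with self._lock:
            old = self._value
            if old == comparand:
                self._value = new
            return old

def float_fetch_and_add(atomic_sum, increment):
    # Same retry loop as Float_FetchAndAdd above: reload and retry whenever
    # another thread changed the value between our load and our CAS.
    while True:
        sum_old = atomic_sum.load()
        sum_new = sum_old + increment
        if atomic_sum.compare_and_swap(sum_new, sum_old) == sum_old:
            return  # our add stuck; nobody raced us between load and CAS

total = AtomicFloat(0.0)
threads = [threading.Thread(target=float_fetch_and_add, args=(total, 0.5))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total.load())  # 4.0, no lost updates
```

The key property, preserved from the C++ version, is that a failed compare_and_swap does not corrupt the sum; the loop simply reloads and tries again until its increment lands on an unchanged value.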
The code archive accompanying this article provides a thread-safe implementation of UniformGrid::RemoveComponent that uses Float_FetchAndAdd, along with an Intel® TBB functor that wraps PoisonDensityGradientSlice for use with tbb::parallel_for.
Consider a container about half-full of fluid particles, as shown in Figure 6.
Figure 6. Dyed fluid sloshing in a container moving left and right. Cyan arrows show the density gradient. Yellow balls show vortex particles (vortons).
As expected, using Intel® TBB to parallelize PoisonDensityGradient speeds up that process by running it on multiple threads, but unfortunately, the naive implementation is not thread safe. It's a useful comparison, but it yields unreliable results. For the numbers of particles used in the demonstration scenario, the thread-safe PoisonDensityGradient routine that uses compare_and_swap runs more slowly than even the serial version. That is an artifact of the low number of particles and the extremely large overhead associated with using compare_and_swap along with its conditional branches. If the hardware supported fetch_and_add for floats, perhaps the runtime would diminish (see Figure 7).
Figure 7. Runtimes for PoisonDensityGradient
This article showed how to turn convex polyhedra inside-out to make containers for particles using a half-space representation. For stationary particles, the technique for holes exactly mirrors for solids, but particles have an asymmetry, so holes require slightly different calculations than for solids.
Putting particles inside a container exposes issues that occur when the simulation has to resolve collisions from all sides. Ironically, getting a pile of particles to become and remain stationary is more difficult than getting them to swirl around each other. Although this article shows the geometry of how to keep particles contained, it opens a broader spectrum of problems, ripe for discussion in future articles.
Baroclinic generation is difficult to get right at walls. A fully accurate solution would overextend the scope of this article. So for brevity, this article described a quick-and-dirty way to eliminate spurious baroclinic torque by changing the density gradient near walls. That also exposed an opportunity to use compare_and_swap, a lock-free synchronization method.
Liquids have a free surface that gases lack. Simulating that surface requires a numerical model for surface tension. That, in turn, requires identifying and tracking the surface. Modeling surface tension also entails a physical model of intermolecular forces. And to visualize the surface requires a different rendering technique. Future articles will delve into these interesting challenges.

Source: https://software.intel.com/content/www/us/en/develop/articles/fluid-simulation-for-video-games-part-14.html
PageRank (PR) is an algorithm used by Google Search to rank websites in its search engine results. It is used to estimate the importance of a page, and hence how good a website is.
It is not the only algorithm used by Google to order search engine results.
In this topic, I will explain what PageRank is, how it is calculated, and how to implement it in Python.
What is PageRank?
· Page rank is a vote, given by all the other pages on the web, about how important a particular page is.
· A link to a page counts as a vote of support.
· The number of times a page is referred to by forward links adds to the page's value.
· The number of times it is taken as an input by other pages also adds to its value.
Simplified algorithm of PageRank:
Equation:
PR(A) = (1-d) + d[PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)]
Where:
PR(A) = Page Rank of a page (page A)
PR(Ti) = Page Rank of pages Ti which link to page A
C(Ti) = Number of outbound links on page Ti
d = Damping factor which can be set between 0 and 1.
Let’s say we have three pages A, B and C. Where,
1. A linked to B and C
2. B linked to C
3. C linked to A
Calculate Page Rank:
Final Page Rank of a page is determined after many more iterations. Now what is happening at each iteration?
Note: keeping
· Standard damping factor = 0.85
· At the initial stage, assume the page rank of every page is equal to 1
Iteration 1:
Page Rank of page A:
PR(A) = (1-d) + d[PR(C)/C(C)] # As only Page C is linked to page A
= (1-0.85) + 0.85[1/1] # Number of outbound link of Page C = 1(only to A)
= 0.15 + 0.85
= 1
Page Rank of page B:
PR(B) = (1-d) + d[PR(A)/C(A)] # As only Page A is linked to page B
= (1-0.85) + 0.85[1/2] # Number of outbound link of Page A = 2 (B and C)
= 0.15 + 0.425 # and page rank of A was 1 (calculated from previous
= 0.575 # step)
Page Rank of page C:
· As Page A and Page B are linked to page C
· Number of outbound link of Page A [C(A)] = 2 (ie. Page C and Page B)
· Number of outbound link of Page B [C(B)] = 1 (ie. Page C)
· PR(A) = 1 (Result from previous step not initial page rank)
· PR(B) = 0.575 (Result from previous step)
PR(C) = (1-d) + d[PR(A)/C(A) + PR(B)/C(B)]
= (1-0.85) + 0.85[(1/2) + (0.575/1)]
= 0.15 + 0.85[0.5 + 0.575]
= 1.06375
This is how page rank is calculated at each iteration. In real world it iteration number can be 100, 1000 or may be more than that to come up with final Page Rank score.
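The walkthrough above updates pages in order A, B, C and uses the freshest values within the same sweep (a Gauss-Seidel-style update). A minimal sketch in plain Python (the function name is mine) reproduces the iteration-1 numbers:

```python
def pagerank_sweeps(links, d=0.85, sweeps=1):
    """links: dict mapping each page to the list of pages it links to.
    Updates ranks in place, in insertion order, exactly like the manual walkthrough."""
    pr = {page: 1.0 for page in links}  # start every page at rank 1
    out_count = {page: len(targets) for page, targets in links.items()}
    for _ in range(sweeps):
        for page in pr:
            # Pages that link to this page.
            incoming = [src for src, targets in links.items() if page in targets]
            pr[page] = (1 - d) + d * sum(pr[src] / out_count[src] for src in incoming)
    return pr

ranks = pagerank_sweeps({'A': ['B', 'C'], 'B': ['C'], 'C': ['A']}, sweeps=1)
# After one sweep: A ≈ 1.0, B ≈ 0.575, C ≈ 1.06375, matching the walkthrough.
```

Increasing `sweeps` repeats the process, which is what happens over the many iterations mentioned above.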
Implementation of PageRank in Python:

With the networkx package in Python, we can calculate page rank as below.

import networkx as nx
import pylab as plt

# Create blank graph
D = nx.DiGraph()

# Feed page links to graph
D.add_weighted_edges_from([('A','B',1),('A','C',1),('C','A',1),('B','C',1)])

# Print page rank for each page
print(nx.pagerank(D))
Output:
{'A': 0.38778944270725907, 'C': 0.3974000441421556, 'B': 0.21481051315058508}
# Plot graph
nx.draw(D, with_labels=True)
plt.show()
How PageRank works?

The official networkx documentation says that the PageRank algorithm takes edge weights into account when scoring.
Link:
Let’s test how it works. Let’s say we have three pages A, B and C, and their graph is as follows.
Weight matrix:
Explain:
· Weight of A to B is 3, A to C is 2, and the total out-link weight of A = 0+2+3 = 5
· Weight of C to A is 1, C to B is 0, and the total out-link weight of C = 1+0+0 = 1
· Weight of B to A is 0, B to C is 1, and the total out-link weight of B = 0+1+0 = 1
So Weighted Score (WS) for:
· A-B = 3
· A-C = 2
· C-A = 1
· B-C = 1
Note: A-B means A to B
Equation:
PR(A) = (1-d)*p + d(x * WS(Ti)/C(Ti))
Where:
PR(A) = Page Rank of a page (page A)
WS(Ti) = Weighted Score of a page Ti
C(Ti) = Number of outbound links on page Ti
d = Damping factor which can be set between 0 and 1. (0.85 is default value)
p = Personalized vector which is ignorable
x = Initial page rank of a page = 1/3 for each page, since we have 3 pages in total.
Calculate WS(Ti)/C(Ti):
Calculate (x * WS(Ti)/C(Ti)):
So from above:
(x * WS(Ti)/C(Ti)) value of A = 0.33
(x * WS(Ti)/C(Ti)) value of C = 0.462
(x * WS(Ti)/C(Ti)) value of B = 0.198
Now let’s calculate Page Rank:
Page Rank of A:
PR(A) = (1-d)*p + d(x * WS(Ti)/C(Ti))
PR(A) = (1-0.85) + 0.85(0.33) # Ignoring personalization factor(p)
= 0.15 + 0.2805
= 0.4305
Page Rank of C:
PR(C) = (1-d) + d(x * WS(Ti)/C(Ti))
PR(C) = (1-0.85) + 0.85(0.462) # Ignoring personalization factor(p)
= 0.15 + 0.3927
= 0.5427
Page Rank of B:
PR(B) = (1-d) + d(x * WS(Ti)/C(Ti))
PR(B) = (1-0.85) + 0.85(0.198) # Ignoring personalization factor(p)
= 0.15 + 0.1683
= 0.3183
Note: This is all just one iteration. In the networkx package the pagerank function runs up to 100 iterations by default.
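The three weighted results above can be reproduced in a few lines (x = 0.33 is the rounded 1/3 used in the worked example; the ratios come straight from the weight-matrix bullets):

```python
d, x = 0.85, 0.33
ratios = {
    "A": 1 / 1,          # C->A weight 1, C's total out-weight 1
    "B": 3 / 5,          # A->B weight 3, A's total out-weight 5
    "C": 2 / 5 + 1 / 1,  # A->C (2/5) plus B->C (1/1)
}
pr = {p: (1 - d) + d * (x * r) for p, r in ratios.items()}
print({p: round(v, 4) for p, v in pr.items()})  # {'A': 0.4305, 'B': 0.3183, 'C': 0.5427}
```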
Conclusion:
In this article I have covered
· Overview of Page Rank
· What is Page Rank Algorithm
· How Page Rank Algorithm works
· Implementation of Page Rank Algorithm in Python by networkx package (pagerank function)
· How pagerank function of networkx package works.
Do you have any question?
Ask your question in the comment below and I will do my best to answer. | https://www.thinkinfi.com/2018/08/how-google-page-rank-works-and.html | CC-MAIN-2020-34 | en | refinedweb |
How to always call a class method, forcefully, on return in python
I have a ReportEntry class:

class ReportEntry(object):
    def __init__(self):
        # Many attributes defined here
        ...
        # Lot many setattr/getattr here

    def validate(self):
        # Lot of validation code in here
        return self
Multiple other classes maintain a has-a relation with the ReportEntry class:

class A(object):
    def test1(self):
        t1 = ReportEntry()
        # Assign the attribute values to t1
        return t1.validate()

    def test2(self):
        t2 = ReportEntry()
        # Assign the attribute values to t2
        return t2.validate()
And there are multiple such classes like A.
I need to enforce each ReportEntry class instance to call validate() on return, or maybe just before return.
Basically, no instance of ReportEntry should escape validation, since the final report generation will fail if something is missing.
How may I achieve that?
You can write a class decorator:
import inspect

def validate_entries(cls):
    def validator(fnc):  # this is a function decorator ...
        def wrapper(*args, **kwargs):
            rval = fnc(*args, **kwargs)
            if isinstance(rval, ReportEntry):
                # print('validating')
                return rval.validate()
            return rval
        return wrapper

    for name, f in inspect.getmembers(cls, predicate=inspect.isfunction):
        setattr(cls, name, validator(f))  # .. that we apply to all functions
    return cls
Now you can define all A-like classes:

@validate_entries
class A(object):
    # ...
This will validate any ReportEntry that is returned by any of A's methods.
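A quick runnable check of the class-decorator approach, with a stub ReportEntry standing in for the real class (the validated flag is only for illustration):

```python
import inspect

def validate_entries(cls):
    def validator(fnc):
        def wrapper(*args, **kwargs):
            rval = fnc(*args, **kwargs)
            if isinstance(rval, ReportEntry):
                return rval.validate()
            return rval
        return wrapper
    for name, f in inspect.getmembers(cls, predicate=inspect.isfunction):
        setattr(cls, name, validator(f))
    return cls

class ReportEntry(object):
    def __init__(self):
        self.validated = False
    def validate(self):
        self.validated = True
        return self

@validate_entries
class A(object):
    def test1(self):
        return ReportEntry()   # note: no explicit .validate() call

print(A().test1().validated)   # True
```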
One approach that I could think of is to define __enter__ and __exit__ methods, where validate is called upon __exit__ in ReportEntry:

class ReportEntry(object):
    def __init__(self):
        # Many attributes defined here
        ...
        # Lot many setattr/getattr here

    def __enter__(self):
        return self

    def validate(self):
        # Lot of validation code in here
        return self

    def __exit__(self, a, b, c):
        self.validate()
        return True

# And then use it as
with ReportEntry() as report:
    ...
But again, this will be enforced only when used via with ReportEntry() as report:
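A runnable variant of this idea; note that returning False from __exit__, rather than True as in the sketch above, avoids silently swallowing exceptions raised inside the with block:

```python
class ReportEntry(object):
    def __init__(self):
        self.validated = False   # illustration only; real class has many attributes
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.validate()
        return False             # do not suppress exceptions
    def validate(self):
        self.validated = True
        return self

with ReportEntry() as report:
    pass

print(report.validated)  # True
```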
There are two ways I can think of to go about this. I cannot say more without knowing more implementation details:
Decorate your methods, so that every returned instance is run through the decorator function. You may want to put this as a stand-alone function or as part of a class, depending on your specific use case.
def validate(func):
    # wrap the method so its return value is validated
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).validate()
    return wrapper

class A(object):
    @validate
    def test1(self):
        t1 = ReportEntry()
        # Assign the attribute values to t1
        return t1

    @validate
    def test2(self):
        t2 = ReportEntry()
        # Assign the attribute values to t2
        return t2
Update __setattr__ and decorate your class:

def always_validate(cls):
    # save the old set-attribute method
    old_setattr = getattr(cls, '__setattr__', None)

    def __setattr__(self, name, value):
        # validate, then set the attribute
        validate(name, value)
        old_setattr(self, name, value)

    cls.__setattr__ = __setattr__
    return cls
and then you could decorate your ReportEntry:

@always_validate
class ReportEntry(object):
    ...
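A runnable sketch of the __setattr__ hook, with a print standing in for the per-attribute validation helper (which the answer leaves undefined):

```python
def always_validate(cls):
    old_setattr = cls.__setattr__          # save the old set-attribute method
    def __setattr__(self, name, value):
        print("validating", name)          # real validation would happen here
        old_setattr(self, name, value)
    cls.__setattr__ = __setattr__
    return cls

@always_validate
class ReportEntry(object):
    pass

entry = ReportEntry()
entry.total = 42        # prints: validating total
print(entry.total)      # 42
```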
| https://thetopsites.net/article/54167069.shtml | CC-MAIN-2020-34 | en | refinedweb
SyncMutexUnlock(), SyncMutexUnlock_r()
Unlock a mutex synchronization object
Synopsis:
#include <sys/neutrino.h> int SyncMutexUnlock( sync_t * sync ); int SyncMutexUnlock_r( sync_t * sync );
Arguments:
- sync
- A pointer to the synchronization object for the mutex that you want to unlock.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:
These calls unlock the mutex pointed to by sync. A thread must own the mutex (i.e., have previously locked it) in order to unlock it.
Blocking states
These calls don't block.
Returns:
The only difference between these functions is the way they indicate errors:
- SyncMutexUnlock()
- If an error occurs, -1 is returned and errno is set.
- SyncMutexUnlock_r()
- EOK is returned on success; if an error occurs, the function returns an error code and doesn't set errno.
Errors:
- EFAULT
- A fault occurred when the kernel tried to access the buffers provided.
- EINVAL
- The synchronization ID specified in sync doesn't exist. The calling thread doesn't own the mutex. | https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/syncmutexunlock.html | CC-MAIN-2020-34 | en | refinedweb |
- Type:
Task
- Status: Closed (View Workflow)
- Priority:
Critical
- Resolution: Cannot Reproduce
- Component/s: job-dsl-plugin, pipeline
- Labels:None
- Similar Issues:
I've marked this one as "cannot reproduce" because it sounds like the issue is not reproducible, but we can re-open it if you can provide the information to reproduce it and confirm that Jose's information did not resolve it for you. Thanks!
After some more research I found this issue which also shows my problems. A 1.71 release is highly appreciated.
Hi Markus Baur,
I think you reported this issue, could you please take a look into the comment above regarding configuration and versions?
Yes of course, but it's already addressed somewhere else.
jenkins 2.138.1
Job DSL 1.70
Error:
[Pipeline] jobDsl
Processing DSL script jobs.groovy
Warning: (jobs.groovy, line 12) concurrentBuild is deprecated
Warning: (jobs.groovy, line 21) authenticationToken is deprecated
Warning: (jobs.groovy, line 23) scm is deprecated
Code:
def job = readFileFromWorkspace('jenkins_jobs/ansible-job-template.groovy')

hosts.each { name ->
    pipelineJob("ansible-${name}") {
        concurrentBuild(false) // There should never be 2 ansible runs on the same host at the same time
        definition {
            cps {
                script(String.format(job, name)) // use the template from above as the script
                sandbox() // run in the groovy sandbox
            }
        }
        authenticationToken("super secret token") // Allows the job to be triggered remotely
        scm {
            git("ssh://git@***", "develop")
        }
    }
}
Hi Markus Baur,
I am working in this issue, and I'm able to use the curl call to trigger the job nicely.
Could you confirm the following: | https://issues.jenkins-ci.org/browse/JENKINS-53246 | CC-MAIN-2020-34 | en | refinedweb |
Provided by: gnutls-doc_3.0.11+really2.12.14-5ubuntu3_all
NAME
gnutls_session_get_ptr - API function
SYNOPSIS
#include <gnutls/gnutls.h> void * gnutls_session_get_ptr(gnutls_session_t session);
ARGUMENTS
gnutls_session_t session is a gnutls_session_t structure.
DESCRIPTION
Get user pointer for session. Useful in callbacks. This is the pointer set with gnutls_session_set_ptr().
RETURNS
the user-given pointer from the session structure, or NULL if it was never set.
Introduction: Servo Driven Automatic Vice
You will need:
- Parallel Gripper Kit -
- Standard Size Servo - (See Note)
Base - I show a couple of ideas
To build it on a breadboard you will need:
- Arduino - (I used an Uno)
- Breadboard
- Jumper Wires
- 10k Linear Taper Potentiometer -
- Long Male Headers -
To build the permanent version you will need:
- Adafruit Perma-Proto Breadboard -
- 2.1mm barrel jack
- 2 - 10mf electrolytic capacitors
- 7805 Voltage regulator
- Heat Sink for Voltage Regulator (see note)
- 330-560 Ohm 1/4 Watt resistor (purchased locally)
- Red 5mm LED (purchased locally)
- 22 gauge hookup wire (Red, Black, Yellow, and Green, purchased locally)
- 10k Linear Taper Potentiometer -
- Potentiometer Knob -
- Long Male Headers -
- 28 Pin IC Socket -
- ATmega328 - (see note)
- 16 MHz Ceramic Resonator -
- 9 Volt Power Adapter - (see note)
Servo note: On the page with the gripper they show a few different servos that will work. I chose a more expensive servo with metal gears. Sparkfun lists standard size servos for $12.95 and $13.95. These might be good enough. That said, I like metal gears better than plastic.
ATmega328 note: The chip is to replace the chip in your Arduino after you take it out to use in your project.
Voltage Regulator note: Purchased locally, if you cant find one use a small machine screw to bolt a piece of aluminium to the voltage regulator.
Power Adapter note: It works better with a plug in power source than with a battery.
I wish it was possible to purchase all these parts from one supplier to save on shipping but that is not the case. Many of these parts are only available from one of the parts suppliers.
Step 1: Assemble the Gripper and Vice Base
Assemble the gripper and the servo.
This link shows how to assemble the gripper:
In this example I am using a Panavice base with the jaw assembly removed.
Put the gripper assembly in the Panavice base. I used an eraser that is about 3/8" thick as a spacer.
Step 2: Build It on a Breadboard
Follow the diagram to build the circuit.
Use a piece of the male headers with three pins to connect the servo.
The positive and negative wires to the potentiometer can be reversed to adjust the direction.
Step 3: The Program Code
Copy/Paste this sketch into the Arduino IDE and upload it to your Arduino:
#include <Servo.h>
Servo myservo;     // create servo object
int pot = 0;       // analog pin used for pot (A0)
int val;           // value from analog pin
int lastVal;       // previous value
long LastChange;

void setup()
{
  myservo.attach(9);  // attaches the servo on pin 9
}

void loop()
{
  val = analogRead(pot);            // reads pot 0 - 1023
  val = map(val, 0, 1023, 1, 179);  // scale it to the servo range 1 - 179
  if (val < (lastVal - 5) || val > (lastVal + 5)) {
    myservo.write(val);             // sets servo position
    delay(15);                      // waits for servo to get there
    LastChange = millis();
    lastVal = val;
  }
  else if (millis() - LastChange > 500) {  // re-assert position once the reading has settled
    myservo.write(val);
  }
}
Step 4: Permanent Version: the Power Supply

Step 5: Permanent Version: the Potentiometer
Drill a 9/32 to 5/16 hole in the printed circuit board as shown.
Break off the little tab on the potentiometer so it will mount flat.
Solder the wires onto the pot.
Mount the pot
And solder the wires to the board. The middle wire is soldered into hole A18.
Step 6: Permanent Version: Finish the Circuit
Solder the chip socket, it goes in the middle of the board in holes 26 through 39. The alignment notch points away from the power supply.
Solder the male headers for the servo. The pins go toward the back, in holes B21 - B23.
Attach a black wire from A21 to the ground rail and a red wire from A22 to the positive rail.
Attach a yellow wire from D23 to D26. This connects the servo to digital pin nine.
Attach a yellow wire from C18 to C34. This connects the pot to analog pin zero.
Solder the ceramic resonator into holes H22 through H24.
Solder the wires for the resonator: G22 to G31, I24 to I30, and J23 to the ground rail.
Step 7: Final Assembly
Upload the program is step three onto the Arduino.
Pull the chip out of you Arduino and insert it on the circuit board, the alignment notch points away from the power supply.
Replace the chip in the Arduino with the new chip.
Place the gripper and the circuit board in the base.
In this picture and the picture on the introductory page I am using a Mitutoyo Micrometer Stand for a base.
I already had the part. I searched for it on Amazon.com; they sell it but it is expensive, $60.00. And it is heavy, so the shipping will also be expensive.
You can use either of the bases I used or use an idea of your own.
When you close the vice on a part close it just enough to hold the part, closing it farther does not increase the torque holding the part.
Participated in the
Guerilla Design Contest
Be the First to Share
Recommendations
2 Discussions
5 years ago on Introduction
if you like my idea please vote for me in the Guerilla Design Contest.
5 years ago on Introduction
I just made improvements to this instructable:
A video link showing how to assemble the gripper
A heat sink on the voltage regulator.
Power Adapter replaces battery
New improved Arduino code | https://www.instructables.com/id/Servo-Driven-Automatic-Vice/ | CC-MAIN-2020-34 | en | refinedweb |
KIMAP2::FetchJob
#include <fetchjob.h>
Inherits KIMAP2::Job.
Detailed Description
Fetch message data from the server.
All data is returned using the signals, so you need to connect to the relevant signal (or all of them) before starting the job.
This job will always use BODY.PEEK rather than BODY to fetch message content, so it will not set the \Seen flag.
This job can only be run when the session is in the selected state.
Definition at line 59 of file fetchjob.h.
Member Function Documentation
How to interpret the sequence set.
- Returns
- if true, the result of sequenceSet() should be interpreted as UIDs; if false, it should be interpreted as sequence numbers
Definition at line 104 of file fetchjob.cpp.
Specifies what data will be fetched.
Definition at line 116 of file fetchjob.cpp.
The messages that will be fetched.
Definition at line 92 of file fetchjob.cpp.
Avoid calling parse() on returned KMime::Messages.
Definition at line 79 of file fetchjob.cpp.
Sets what data should be fetched.
The default scope is FetchScope::Content (all content parts).
Definition at line 110 of file fetchjob.cpp.
Set which messages to fetch data for.
If sequence numbers are given, isUidBased() should be false. If UIDs are given, isUidBased() should be true.
Definition at line 85 of file fetchjob.cpp.
Set how the sequence set should be interpreted.
Definition at line 98 of file fetchjob.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Sun Aug 2 2020 23:06:44 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/kdepim/kimap2/html/classKIMAP2_1_1FetchJob.html | CC-MAIN-2020-34 | en | refinedweb |
RAIL_IEEE802154_AddrConfig_t Struct Reference
A configuration structure for IEEE 802.15.4 Address Filtering.
#include <rail_ieee802154.h>
A configuration structure for IEEE 802.15.4 Address Filtering.
The broadcast addresses are handled separately and do not need to be specified here. Any address to be ignored should be set with all bits high.
This structure allows configuration of multi
Definition at line 229 of file rail_ieee802154.h.
Field Documentation
◆ longAddr
A 64-bit address for destination filtering.
All must be specified. This field is parsed in over-the-air (OTA) byte order. To disable a long address, set it to the reserved value of 0x00 00 00 00 00 00 00 00.
Definition at line 245 of file rail_ieee802154.h.
◆ panId
PAN IDs for destination filtering.
All must be specified. To disable a PAN ID, set it to the broadcast value, 0xFFFF.
Definition at line 234 of file rail_ieee802154.h.
◆ shortAddr
A short network addresses for destination filtering.
All must be specified. To disable a short address, set it to the broadcast value, 0xFFFF.
Definition at line 239 of file rail_ieee802154.h.
The documentation for this struct was generated from the following file:
- protocol/ieee802154/rail_ieee802154.h
Hi,
I have placed the license registration before the apphost initializing and still I am getting the error in the image below.
Thanks
I don’t see
TestAppHost or
NewInstance() in your StackTrace? I’m assuming the Exception is not occurring where you think it does.
Can you put together a small stand-alone repro without the License Key and publish it on GitHub so I can repro it locally, please.
This will not be reproducable since it is related to my entire solution. I dont even know how to simulate above the quota error in tests…
You just need more than 10 custom services, e.g:
public class MyRequest1 {}
public class MyRequest2 {}
//...
public class MyRequest11 {}

public class MyServices : Service
{
    public object Any(MyRequest1 request) => request;
    //...
    public object Any(MyRequest11 request) => request;
}
I’m assuming there’s something weird in your environment/solution causing it, I just can’t tell from here.
This won’t trigger the error in the new test project:
public class UnitTest1
{
    [Fact]
    public void Test1()
    {
        TestAppHostInitializer test = new TestAppHostInitializer();
        test.Init().Start("");
    }
}

public class TestAppHostInitializer : AppSelfHostBase
{
    public TestAppHostInitializer() : base("EYEZ", typeof(UnitTest1).Assembly) { }

    public override void Configure(Container container) { }
}

public class MyRequest1 : IReturnVoid { }

public class MyServices : Service
{
    public object Any (MyRequest1 request) => null;
    public object Any1(MyRequest1 request) => null;
    public object Any2(MyRequest1 request) => null;
    public object Any4(MyRequest1 request) => null;
    public object Any5(MyRequest1 request) => null;
    public object Any6(MyRequest1 request) => null;
    public object Any7(MyRequest1 request) => null;
    public object Any8(MyRequest1 request) => null;
    public object Any9(MyRequest1 request) => null;
    public object Any10(MyRequest1 request) => null;
    public object Any11(MyRequest1 request) => null;
}
Because only one of them is considered a Service, you need 11 different empty Request DTOs, all using Any (or another HTTP Method), for them to be treated as Services. I’ve updated my example to include the method name. I can’t reproduce the error.
You’re still using AnyN; it needs to be exactly Any() for all Request DTOs (or any other HTTP Verb) for it to be recognized as a Service in ServiceStack.
Reproduced: license exists and I’m still getting the quota error.
This doesn’t repro the issue:
I suspect you have something interfering with verification in your environment. If you want to register the license key inline, remove any license keys you may have in the SERVICESTACK_LICENSE environment variable; otherwise you may have dirty bin/ or obj/ folders, so try removing the folders and rebuilding the solution.
Or maybe you have an internal Exception masquerading the error, can you try enabling Strict Mode to see if you get a different Exception:
Env.StrictMode = true;
Licensing.RegisterLicense(@"...");
Otherwise what Culture are you running in? and does changing to use InvariantCulture resolve the issue? e.g:
using System.Globalization;
//...
CultureInfo.CurrentCulture = CultureInfo.InvariantCulture;
Licensing.RegisterLicense(@"...");
This was in the way: SERVICESTACK_LICENSE. Now it’s working, thanks.
I had problems activating with the env variable, so I contacted support a long time ago and the team suggested inline registration as well. I assume that it is not good advice for production use…
IMX241 CX3 2592x1944 @ 30FPS issues - ZaTu_4258396, Aug 8, 2019 3:05 PM
Hello all, I am attempting to add full functionality to our firmware for multiple available framerates and the first one I am trying to tackle is changing from 2592x1944 @ 15 FPS to 30 FPS. This seems like a simple change but I am having a hard time getting the MIPI config tool to create the necessary files. Everything I put in I cannot get the pixel clock to get to a usable value that doesnt throw an error.
The settings that are currently functioning on the camera are 2592x1944 @ ~13.7 FPS
This is for the 30FPS 2592x1944
I will go into more detail on my issue but curious first if this setup is even possible with the CX3... the datasheet states that it can handle the speed and the camera can handle the output.
I have also attempted to get a 1920*1080 @ 30FPS to work as well with no luck. I am not sure that I have changed the correct code in the right locations.
1. Added new USB config descriptors to the existing USBDSC.c file that is already in our firmware build.
2. Added config probe control
if (CyU3PUsbGetSpeed() == CY_U3P_SUPER_SPEED)
{
if (FrameIndex == 2)
{
// Write 5Mp setting
#ifdef SENSOR_IMX241
status = CyU3PMipicsiSetIntfParams( (CyU3PMipicsiCfg_t *) &imx241_RAW10_Resolution1, CyFalse);
/* Host requests for probe data of 34 bytes (UVC 1.1) or 26 Bytes (UVC 1.0). Send it over EP0. */
if (CyU3PUsbGetSpeed() == CY_U3P_SUPER_SPEED)
{
if (glCurrentFrameIndex == 2)
{
// #error this appears to overwrite the const descriptor glProbeCtrl and we should send gl5Mp15ProbeCtrl instead of glProbeCtrl below and delete this section
#ifdef SENSOR_IMX241
CyU3PMemCopy( (uint8_t *) glProbeCtrl, (uint8_t *) glResolution1ProbeCtrl, ES_UVC_MAX_PROBE_SETTING);
#endif // SENSOR_IMX241
}
I have been unsuccessful in moving forward in my tasks and just need a push in the right direction with this firmware. I feel like I am missing some key information points and just cannot find the required documentation to help me out. Please let me know if I can provide any extra information. Thank you for your help.
1. Re: IMX241 CX3 2592x1944 @ 30FPS issues - KandlaguntaR_36, Aug 8, 2019 11:32 PM (in response to ZaTu_4258396)
Hello,
Please set the output data format to 24-bit and check the pixel clock.
You may refer this KBA: Streaming RAW10 Format Input Data to 16/24-bit Output Format in CX3 MIPI CSI-2 - KBA224387
Regards,
Sridhar
2. Re: IMX241 CX3 2592x1944 @ 30FPS issues - ZaTu_4258396, Aug 9, 2019 8:59 AM (in response to KandlaguntaR_36)
I have only changed the setting to 24-bit and adjusted a couple other variables to let them fall into acceptable values. However, the Output Pixel Clock states that there is a min of 166 with these settings. Is there something else we are missing? I am going to follow that link you sent and follow instructions to get the 1080p to work if I can. Will report on that later this afternoon.
3. Re: IMX241 CX3 2592x1944 @ 30FPS issues - KandlaguntaR_36, Aug 9, 2019 9:04 AM (in response to ZaTu_4258396)
There is no 166 MHz as you said. The maximum value of the CSI clock is 500 MHz. You have set it as 920 MHz. Please set it to less than 500 MHz.
Regards,
Sridhar
4. Re: IMX241 CX3 2592x1944 @ 30FPS issues - ZaTu_4258396, Aug 9, 2019 10:28 AM (in response to KandlaguntaR_36)
The 166Mhz is what the red "x" next to the output pixel clock states is the minimum, which is outside the available range for the value. Also, the config states that the acceptable minimum for the CSI clock starts at 920.99 for the 2592x1944 solution. Reducing this to 450 only gives me another error stating that the clock must be higher and still gives the same errors on the output pixel clock.
5. Re: IMX241 CX3 2592x1944 @ 30FPS issues - ZaTu_4258396, Aug 9, 2019 1:39 PM (in response to KandlaguntaR_36)
Is there any methods to probing the camera module/cx3 lines in order to view the timing of the data transfer to confirm the changes that I am making in the firmware? I have followed the instructions to change the necessary location in the firmware, but am unsuccessful in getting an image that is not black. I was able to get the image height/width to change from 2592x1944 to 1920x1080 and have the data seem to be streaming at 30FPS due to capturing the black images and timing it with what it was with the 2592x1944 and saw the decrease in timing by more than half. Please let me know if there is any other methods to take to move forward.
If I could have just the barebones necessities to get this to run, that would be ideal. All of the extra functionality I can add from our existing code into the new project. I just do not know the extra steps that need to be taken in the MIPI config project in order to have it change all the necessary parameters, and all of Cypress's documentation is scattered like crazy; it's impossible to know if I am looking at the right document for my issue. Thank you for your time.
6. Re: IMX241 CX3 2592x1944 @ 30FPS issues - KandlaguntaR_36, Aug 12, 2019 4:31 AM (in response to ZaTu_4258396)
Hello,
Please use the following MIPI Settings:
Let us focus on one resolution at a time.
Go with 1920X1080 first:
I have attached an example project as per your MIPI Transmitter.
I have added the CyU3PMipicsiSetPhyTimeDelay API. The rest is generated by the CX3 Config tool.
You need to add the sensor settings to the imagesensor.c and .h files.
Once everything is done, load the firmware to CX3 device.
Ensure that the device enumerated in USB Superspeed mode and came up as UVC device.
Open a UVC host application which can decode the RAW 10. Standard host applications cannot decode the RAW10 video data.
The standard UVC host applications initiate the data transfer, but they may not show the actual video to you.
In order to see whether there is some data streaming to host PC, you may use Wireshark Software USB sniffer. By looking at the capture, we can confirm the size of a frame received by the host application.
Other Debugging steps:
1. Check the VSYNC, HSYNC test pins and measure the HACTIVE and VACTIVE timing - This will tell that whether the MIPI receiver is getting valid video data/MIPI receiver is configured correctly
2. You can enabled printing the MIPI error after every frame by uncommenting "Uncomment the code below to check for MIPI errors per frame"
CyU3PMipicsiGetErrors( CyTrue, &errCnts);
CyU3PDebugPrint(4,"\n\r%d %d %d %d %d %d %d %d %d",errCnts.crcErrCnt,errCnts.ctlErrCnt, errCnts.eidErrCnt, errCnts.frmErrCnt, errCnts.mdlErrCnt, errCnts.recSyncErrCnt, errCnts.recrErrCnt, errCnts.unrSyncErrCnt, errCnts.unrcErrCnt );
If the MIPI receiver is not configured correctly, you will see some errors.
- Createxample.rar 535.4 K
7. Re: IMX241 CX3 2592x1944 @ 30FPS issues - ZaTu_4258396, Aug 14, 2019 12:29 PM (in response to KandlaguntaR_36)
Thank you so much for providing a barebones project! I was wanting to use the 1080p as an example to learn on, but Sony has yet to get back to me on the specific register settings for that resolution. I am not sure of their turnaround time on responses. Is there any way we could just jump onto the solution I need right now, which is the 2592x1944 solution @ 30fps.
I have gotten all of the correct register settings for this resolution on the imager module. But need to correct USB descriptors in order to run this resolution.Currently I have the 2592x1944 resolution streaming images ~13.7 FPS. The current USB descriptor settings and probe controls are attached as images to the first post. What are the new and updated ones supposed to look like for this resolution?
Thank you for your time!!
-Zach
8. Re: IMX241 CX3 2592x1944 @ 30FPS issues - ZaTu_4258396, Aug 15, 2019 2:15 PM (in response to KandlaguntaR_36)
So I still have no response from Sony. Still looking for updated USB descriptors that actually work with this Cypress config. Everything I try will not allow me to use 2592x1944 @ 30 fps. Is this even possible with the CX3? Judging from the data rate, we will be somewhere around 2.4 Gbps of transfer, which is well within the max of 5 Gbps we have available. Really looking forward to your response. Thank you for your help.
-Zach
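For reference, the ~2.4 Gbps figure follows from the frame geometry, assuming RAW10 is carried in a 16-bit-per-pixel container over USB and ignoring blanking overhead:

```python
# bandwidth estimate for 2592x1944 @ 30 FPS, 16 bits per pixel (assumption)
width, height, fps, bits_per_pixel = 2592, 1944, 30, 16
gbps = width * height * fps * bits_per_pixel / 1e9
print(round(gbps, 2))  # 2.42
```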
9. Re: IMX241 CX3 2592x1944 @ 30FPS issues - zatu_4398686, Aug 19, 2019 9:36 AM (in response to KandlaguntaR_36)
Bump.
Still looking for applicable settings for 2592x1944 @ 30 fps. Have yet to get past this issue. Sony has been in contact with me and is getting register information for me, but I am still stuck in limbo with this setting.
-Zach | https://community.cypress.com/message/206156 | CC-MAIN-2020-16 | en | refinedweb |
Provided by: allegro4-doc_4.4.3.1-1_all
NAME
show_mouse - Tells Allegro to display a mouse pointer on the screen.
SYNOPSIS
#include <allegro.h> void show_mouse(BITMAP *bmp);
DESCRIPTION
Tells Allegro to display a mouse pointer on the screen. This will only work if the timer module has been installed. Warning: if you draw anything onto the screen while a software mouse pointer is visible, the mouse buffering and graphics drawing code will get confused and will leave 'mouse droppings' all over the screen. To prevent this, you must make sure you turn off the mouse pointer whenever you draw onto the screen. This is not needed if you are using a hardware cursor. Note: you must not be showing a mouse pointer on a bitmap at the time that the bitmap is destroyed with destroy_bitmap(), e.g. call show_mouse(NULL); before destroying the bitmap. This does not apply to `screen' since you never destroy `screen' with destroy_bitmap().
SEE ALSO
install_mouse(3alleg4), install_timer(3alleg4), set_mouse_sprite(3alleg4), scare_mouse(3alleg4), freeze_mouse_flag(3alleg4), show_os_cursor(3alleg4), exmouse(3alleg4), expal(3alleg4), exshade(3alleg4), exspline(3alleg4), exsyscur(3alleg4) | http://manpages.ubuntu.com/manpages/focal/man3/show_mouse.3alleg4.html | CC-MAIN-2020-16 | en | refinedweb |
imports in sage/combinat/free_module.py
I was reading the code in sage/combinat/free_module.py and I noticed some weirdness in the imports. In the second line there is a
from sage.structure.element import Element, have_same_parent
and then in the fourth line there is
from sage.structure.element import have_same_parent
and later on we find
import sage.structure.element
Why is this so? Can't some of these lines be erased? | https://ask.sagemath.org/question/28710/imports-in-sagecombinatfree_modulepy/?answer=28714 | CC-MAIN-2020-16 | en | refinedweb |
Author: Ulrich Schoebel <[email protected]>
Tcl-Version: 8.5
State: Withdrawn
Type: Project
Vote: Pending
Created: 23-Jul-2003
Post-History:
Keywords: namespace, command lookup, search path
Abstract
This TIP adds a Tcl variable to define the search path for command name lookup across namespaces.
Rationale
Command names (as well as variable names) are currently looked up first in the current namespace, then, if not found, in the global namespace.
It is often very useful to hide the commands defined in a subnamespace from being visible from upper namespaces by info commands namespace::*. On the other hand, programmers want to use these commands without having to type a qualified name.
Example:
namespace eval ns1 {
    proc p1 {} {
        puts "[p2]"
    }
}

namespace eval ns1::ns2 {
    proc p2 {} {
        return hello
    }
}
Evaluation of ns1::p1 would currently lead to an error, because p2 could not be found. Even worse, if a procedure p2 exists in the global namespace, the wrong procedure would be evaluated.
Proposal
Add a variable tcl_namespacePath or, to avoid confusion with variables containing file system paths, tcl_namespaceSearch, that contains a list of namespaces to be searched in that order.
The default value would be [list [namespace current] ::].
In the above example tcl_namespacePath would be set to [list [namespace current] [namespace current]::ns2]. p2 would be found and not unintentionally be substituted by ::p2.
Alternative
For ease of implementation and, maybe, for programmers convenience it might be useful to always prepend the contents of this variable with [namespace current]. The programmer expects a certain "automatism" for this component of the search path.
Then the default value would be ::.
Implementation
To be done when this TIP is accepted.
Notice of Withdrawal
This TIP was Withdrawn by the TIP Editor following discussion on the tcl-core mailing list. The following is a summary of reasons for withdrawal:
Insufficiently subtle. 52 will break any code that assumes the current behaviour (and you can bet someone will have that assumption) and 142 doesn't let two namespaces have different search paths (unless the variable is always interpreted locally, which just creates bizarre variable name magic.)
This document is placed in the public domain. | https://core.tcl-lang.org/tips/doc/trunk/tip/142.md | CC-MAIN-2020-16 | en | refinedweb |
A small point to point out a difference.
A lot of optimisation is done with gradient systems. In this blogpost I'd just like to point out a very simple example to demonstrate that you need to be careful with calling this "optimisation", especially when you have a system with a constraint. I'll pick an example from wikipedia.
Note that you can play with the notebook I used for this here.
\[ \begin{align} \text{max } f(x,y) &= x^2 y \\ \text{subject to } g(x,y) & = x^2 + y^2 - 3 = 0 \end{align} \]
This is a system with a constraint, which makes it somewhat hard to optimise. If we were to draw a picture of the general problem, we would notice that only a certain set of points is of interest to us.
We might be able to use a little bit of mathematics to help us out. We can rewrite our original problem as another one:
\[ L(x, y, \lambda) = f(x,y) - \lambda g(x,y) \]
One interpretation of this new function is that the parameter \(\lambda\) can be seen as a punishment for not having a feasible allocation. Note that even if \(\lambda\) is big, if \(g(x,y) = 0\) then it will not cause any form of punishment. This might remind you of a regulariser. Let’s go and differentiate \(L\) with regards to \(x, y, \lambda\).
\[ \begin{align} \frac{\delta L}{\delta x} &= \Delta_x f(x,y) - \Delta_x \lambda g(x,y) = 2xy - \lambda 2x \\ \frac{\delta L}{\delta y} &= \Delta_y f(x,y) - \Delta_y \lambda g(x,y) = x^2 - \lambda 2 y\\ \frac{\delta L}{\delta \lambda} &= g(x,y) = x^2 + y^2 - 3 \end{align} \] All three of these expressions need to be equal to zero. In the case of \(\frac{\delta L}{\delta \lambda}\) that's great, because this will allow us to guarantee that our problem is indeed feasible! So what one might consider doing is to rewrite this into an expression that a gradient method can minimise.
\[q(x, y, \lambda) = \sqrt{\Big( \frac{\delta L}{\delta x}\Big)^2 + \Big( \frac{\delta L}{\delta y}\Big)^2 + \Big( \frac{\delta L}{\delta \lambda}\Big)^2}\]
It’d be great if we didn’t need to do all of this via maths. As lucky would have it; python has great support for autodifferentiation so we’ll use that to look for a solution. The code is shown below.
from autograd import grad
from autograd import elementwise_grad as egrad
import autograd.numpy as np
import matplotlib.pylab as plt

def f(weights):
    x, y, l = weights[0], weights[1], weights[2]
    return y * x**2

def g(weights):
    x, y, l = weights[0], weights[1], weights[2]
    return l*(x**2 + y**2 - 3)

def q(weights):
    dx = grad(f)(weights)[0] + grad(g)(weights)[0]
    dy = grad(f)(weights)[1] + grad(g)(weights)[1]
    dl = grad(f)(weights)[2] + grad(g)(weights)[2]
    return np.sqrt(dx**2 + dy**2 + dl**2)

n = 100
wts = np.array(np.random.normal(0, 1, (3, )))
for i in range(n):
    wts -= egrad(q)(wts) * 0.01
This script was run and logged, and produced the following plot:
When we ignore the small numerical inaccuracy we can confirm that our solution seems feasible enough, since \(q(x, y, \lambda) \approx 0\). That said, this solution feels a bit strange. Taking the found values \(f(x^*,y^*) \approx f(0.000, 1.739)\) suggests that the best value found is \(\approx 0\). Are we sure we're in the right spot?
We’ve used our tool
AutoGrad the right way but there’s another issue: the gradient might get stuck in a place that is not optimal. There are more than 1 point that satisy the three derivates shown earlier. To demonstrate this, let us use
sympy instead.
import sympy as sp

x, y, l = sp.symbols("x, y, l")
f = y*x**2
g = x**2 + y**2 - 3
lagrange = f - l * g
sp.solve([sp.diff(lagrange, x), sp.diff(lagrange, y), sp.diff(lagrange, l)],
         [x, y, l])
This yields the following set of solutions:
\[\left[\left(0, -\sqrt{3}, 0\right ), \left ( 0, \sqrt{3}, 0\right ), \left ( - \sqrt{2}, -1, -1\right ), \left ( - \sqrt{2}, 1, 1\right ), \left ( \sqrt{2}, -1, -1\right ), \left ( \sqrt{2}, 1, 1\right )\right ]\]
Note that one of these solutions found with sympy yields \((0, \sqrt{3}, 0)\), which corresponds to \(\approx [-0.0001, 1.739, -0.0001]\). We can confirm that our gradient solver found a solution that was feasible, but it did not find one that is optimal.
A solution out of our gradient solver can be a saddle point, a local minimum or a local maximum, but the gradient solver has no way of figuring out which one of these is the global optimum (which in this problem is \((x, y) = (\pm\sqrt{2}, 1)\)).
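To make this concrete, we can plug each stationary point returned by sympy back into \(f(x,y) = x^2 y\). All six points satisfy the derivative conditions, but only two of them attain the constrained maximum. (A small illustrative script, not part of the original notebook.)

```python
from math import sqrt

def f(x, y):
    return x**2 * y

# The six (x, y) stationary points returned by sympy above.
candidates = [
    (0.0, -sqrt(3)), (0.0, sqrt(3)),
    (-sqrt(2), -1.0), (-sqrt(2), 1.0),
    (sqrt(2), -1.0), (sqrt(2), 1.0),
]

for x, y in candidates:
    print(f"f({x:+.3f}, {y:+.3f}) = {f(x, y):+.3f}")

best = max(candidates, key=lambda p: f(*p))
print("maximum value:", f(*best))  # 2.0, attained at (+-sqrt(2), 1)
```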
Whether or not we get close to the correct optima also depends on the starting values of the variables. To demonstrate this I've run the above script multiple times with random starting values to see if a pattern emerges.
This example helps point out some downsides of the gradient approach.
In short: gradient descent is a general tactic, but when we add a constraint we're in trouble. It seems like variants of gradient descent such as Adam will suffer from these downsides as well.
You might wonder why you should care. Many machine learning algorithms don't have a constraint. Imagine we have a function \(f(x|w) \approx y\) where \(f\) is a machine learning algorithm. A lot of these algorithms involve minimising a system like the one below:
\[ \text{min } q(w) = \sum_i \text{loss}(f(x_i|w) - y_i) \]
This common system does not have a constraint. But would it not be much more interesting to optimise another type of system?
\[ \begin{align} \text{min } q(w) & = \sum_i \text{loss}(f(x_i|w) - y_i) \\ \text{subject to } h(w) & = \text{max}\{\text{loss}(f(x_i|w) - y_i) \} = \sigma \end{align} \] This would be fundamentally different from regularising the model to prevent a form of overfitting. Instead of re-valuing an allocation of \(w\), we'd like to restrict the algorithm from ever doing something we don't want it to do. The goal is not to optimise for two things at the same time; instead we would impose a hard constraint on the ML system.
Another idea for an interesting constraint:
\[ \begin{align} \text{min } q(w) & = \sum_i \text{loss}(f(x_i|w) - y_i) \\ \text{subject to } & \\ h_1(w) & = \text{loss}(f(x_1|w) - y_1) \leq \epsilon_1 \\ & \vdots \\ h_k(w) & = \text{loss}(f(x_k|w) - y_k) \leq \epsilon_k \end{align} \] The idea here is that one determines some constraints \(1 ... k\) on the loss of some subset of points in the original dataset. The idea here being that you might say “these points must be predicted within a certain accuracy while the other points matter less”.
\[ \begin{align} \text{min } q(w) & = \sum_i \text{loss}(f(x_i|w) - y_i) \\ \text{subject to } & \\ h_1(w) & =\frac{\delta f(x|w)}{\delta x_{k}} \geq 0 \\ h_2(w) & =\frac{\delta f(x|w)}{\delta x_{m}} \leq 0 \end{align} \] Here we're trying to tell the model that for some feature \(k\) we demand a monotonically increasing relationship with the output of the model, and for some feature \(m\) we demand a monotonically decreasing relationship with the output. For some features a modeller might be able to declare upfront that they should have a certain relationship with the model's output, and being able to constrain a model to keep this in mind might make the model a lot more robust to certain types of overfitting.
This flexibility of modelling with constraints might do wonders for interpretability and feasibility of models in production. The big downer is that currently we do not have general tools to guarantee optimality under constraints in general machine learning algorithms.
That is, this is my current understanding. If I’m missing out on something: hit me up on twitter.
As a note I figured it might be nice to mention a hack I tried to improve the performance of the gradient algorithm.
The second derivatives of \(L\) with regards to \(x,y\) also need to be negative if we want \(L\) to be a maximum. Keeping this in mind we might add more information to our function \(q\).
\[q(x, y, \lambda) = \sqrt{\Big( \frac{\delta L}{\delta x}\Big)^2 + \Big( \frac{\delta L}{\delta y}\Big)^2 + \Big( \frac{\delta L}{\delta \lambda}\Big)^2 + \text{relu} \Big(\frac{\delta^2 L}{\delta x^2}\Big)^2 + \text{relu} \Big(\frac{\delta^2 L}{\delta y^2}\Big)^2}\]
The reason why I’m adding a \(\text{relu}()\) function here is because if the second derivative is indeed negative the error out of \(\text{relu}()\) is zero. I’m squaring the \(\text{relu}()\) effect such that the error made is in proportion to the rest. This approach also has a big downside though: the relu has large regions where the gradient is zero. So we might approximate with a softplus instead.
\[q(x, y, \lambda) = \sqrt{\Big( \frac{\delta L}{\delta x}\Big)^2 + \Big( \frac{\delta L}{\delta y}\Big)^2 + \Big( \frac{\delta L}{\delta \lambda}\Big)^2 + \text{softplus} \Big(\frac{\delta^2 L}{\delta x^2}\Big)^2 + \text{softplus} \Big(\frac{\delta^2 L}{\delta y^2}\Big)^2}\]
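In code, the augmented objective for this specific problem looks like the sketch below (hand-derived first and second derivatives of \(L\); this is an illustration, not the notebook's code). Interestingly, at the constrained maximum \((\sqrt{2}, 1, \lambda=1)\) the plain gradient part is exactly zero while the softplus terms need not be.

```python
import math

def softplus(v):
    return math.log(1.0 + math.exp(v))

# Hand-derived derivatives of L(x, y, l) = x**2 * y - l * (x**2 + y**2 - 3).
def q_augmented(x, y, l):
    dx = 2 * x * y - 2 * l * x        # dL/dx
    dy = x**2 - 2 * l * y             # dL/dy
    dl = -(x**2 + y**2 - 3)           # dL/dl
    d2x = 2 * y - 2 * l               # d^2L/dx^2
    d2y = -2 * l                      # d^2L/dy^2
    return math.sqrt(dx**2 + dy**2 + dl**2
                     + softplus(d2x)**2 + softplus(d2y)**2)

# The plain gradient part vanishes at both stationary points below,
# even though only the first one is the constrained maximum:
for x, y, l in [(math.sqrt(2), 1.0, 1.0), (0.0, math.sqrt(3), 0.0)]:
    dx = 2 * x * y - 2 * l * x
    dy = x**2 - 2 * l * y
    dl = -(x**2 + y**2 - 3)
    print(x, y, l, "->", math.sqrt(dx**2 + dy**2 + dl**2))
```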
I was wondering if by adding this, one might improve the odds of finding the optimal value. The results speak for themselves:
Luck might be a better tactic.
Note that you can play with the notebook I used for this here. | https://koaning.io/posts/optimisation-not-gradients/ | CC-MAIN-2020-16 | en | refinedweb |
- Change a node's text
- Reversed each loop over a store?
- Row Grid height Dynamically increases/Decreases in ExtJS 4
- Date picker
- CDN Network
- Is it possible to open a .ppt on a window?
- Enable/Disabling row in a Grid in extjs4
- Combo Box Dirty Field Indicator
- Multiselect combo with typeahead
- Ext.define instances extraParams being set on all subsequent instances
- Grid Drag and Drop
- About Merging the two folder structure CodeIgniter and Sencha
- How to render checkbox, textFiled,selectbox etc in a single grid column
- changing width of columns messes up the width of the grid
- sync grid scrolling positions
- ComboBox Display Edit Values Show [object, Object] in RowEditing Grid
- Error Occurs when attempting to Load Data into an XMLStore in Architect 2 on 4.1.X
- change row color in a grid according to a value
- Chart/draw sprites being written with hidden="false" in IE9
- Extjs 4.1.1 When init Grid bbr catch error!
- Application halts if writing to IE console.
- PagingToolbar in store of 'memory' proxy type
- MVC, store proxy response on Controller, Good code solution, Full site building ...
- add dynamic options to dropdown
- Include Google Earth api to extjs
- Integration tests with ExtJS 4.1.1 and capybara-webkit: * is not a constructor
- Grid rownumberer always displaying '1' for all the newly added records in grid
- How to replace one panel by another in Ext.Windows
- How to make window draggable by body?
- Panel afterExpand\afterCollapse events
- How can i split js file
- Unable to upload files
- Creating sub menu using ExtJS 4.1
- How to render HTML in Ext.MessageBox?
- Find center co-ordinates of browser / Viewport
- how to have a dynamic extend?
- Disabling the some checkboxselectionmodel based on the condition
- Customer Exception Using ASP.Net
- Binding of static store to treepanel
- Extjs 4 grid checkbox column
- Text Area change event in extjs4.1
- How to define functions I could call from everywhere in application ?
- Date picker
- Example of Ext.direct.PollingProvider with url being a direct function?
- MVC matters : How to link ExtJS to a database (PostGreSQL) ?
- Non-Legend Chart Line Style Disappears on Panel Expand
- Selection CheckboxModel doesn't fire events
- create JSB3 file from url that can only be accessible by signing in
- Infinite Scrolling / Lazy scrolling in ExtJS
- Appending a Store instead of Updating/Overwriting It
- How to process images then display them in canvas
- store sync cannot obtain response in failure listener...
- How to designate convert function in metadata?
- Tree nodes displaying extra charaters in IE .
- Tree Selection Model in Extjs not picking up children recursively
- what to include for Row editing
- in extjs4, when tabpanel change, tabpanel's showing have problem!
- Are Stores sharing the same Model objects?
- Tabbar at bottom
- Multiple checkbox columns in a grid panel in extjs 4.1
- Resizable Ext JS 4 Chart Tip?
- Ext.ux.grid.ComboColumn renders without comboboxes arrow.
- Tooltip not fitting to content IE
- Spaces in textsprites
- Resizable Ext JS 4 Chart on mobile devices
- How To Use Two Or More Features In Grid
- [ FORM ] Basic ExtDirect api load with data set to null
- Row Edit grid onload function in extjs4.1
- Store With Parameters
- Showing popup menu on onmouseover
- XML and xPath using Ext.DomQuery
- display help icon next to button
- Waiting message while removing records of a grid
- MessageBox problem, see image?
- TimeField Slider Editor in Window/Floating Panel
- Changing title of graph produces error
- How to get cell index (or dataIndex) of the cell that triggered itemcontextmenu ?
- Grid Height should dynamically Grow/Shrink based on records in Extjs 4.1
- Applying Css for Button control
- On click of enter button inside the text area cursor not moving to new line Extjs 4
- Problem while doing a clearValue in a combo box
- How to Error message in store sync callback
- Loading a view into an existing panel by replacing an existing one
- POST form data and get file
- Store doesn't raise load event
- Using childEls in header of a window
- How to update checkbox box label
- Ext.tab.Panel Tools Configuration
- Ability to load a store from another store
- RowExpander grid into grid
- Use other components (buttons etc) in View component?
- combobox typeahead and forceselection problem
- cellEditing plugin with autosync not working
- pass variable
- Ext.tree.panel, conditionally render node text and type
- how do i reset the filter properties on the store
- formpanel with other formpanels inside, submit problem.
- Opening popup window on grid.Panel cell click
- How to handle Browser Vertical and Horizontal Scrollbar as GridPanel Scrollbar
- Ext.Window
- autocomplete textfield data from webservice
- tabpanel without tabs showing...
- EXTJS MVC sending data from one view to another
- Ext.flash.Component : Install flash if it is not installed
- store proxy extraparams issue
- How to avoid that an event executes its action
- How to remove white corners?
- actioncolumn to use text instead of icons
- JSP MVC IN EXT JS 4
- form how to knwo invalid fields ?
- Grid and Form Panel in a Window
- How to know if a store is loaded?
- extjs4 and virtual keyboard
- grid pane with row expander plugin: unable to showing text from the record
- Column lock stoping cellclick of the grid
- Setting Grid the Row height Manually in extjs 4
- Creating Label and TextBox over the window
- Auto Population of Current Time
- how to word-wrap a node text in Ext.tree.Panel. in extjs 4.1
- How to enable the icon after the textfield
- Expand/Collapse buttons issue in grouping grid with Buffered store
- Typeahead in Ext.picker.Time
- Tree Panel : auto collapse of an expanded tree node...
- viewport with vbox layout
- add row in grouped grid
- Refresh RowNumberer after store.insert() ?
- Datepicker - choose initial date
- Trouble uploading files in Opera
- Dragging list of valuesinto target panel
- Extjs file size is so big. Anyone can help me on this
- about Categorized Items in a menu panel
- Data for Combo Box from Database
- How to Load mask to a window/store before loading ?
- Problem with converted field
- Expandable Panel
- how to use expressInstaller
- How to start an automatic file download after a dblclick event on a button?
- Every action over the grid resizes the width columns.
- Java script error when using proxy url config in Store
- textfield selected text
- Rebinding Grid from Menu click
- minify
- Integrate EXTJS MVC with Portlets(Run on Portal Server)
- DataIndex of grid in ExtJS 4.1
- Choosing config of base layout for application
- how to add record dynamically to a grid
- Sharing '0.parentNode' is null or not an object
- Sorting not proper when grouping
- Finding records of similar type
- need help export/import
- Datefield Y/m bug in EXTJS ?
- Possible bug using getEditor to dynamically set a combobox
- request.xhr.readyState undefined for some AJAX calls
- Remove added sprite from chart surface
- How to load a url in extjs4.1
- ext window visibility issue in IE & Firefox
- Hyperlink
- Extjs 4.1.0 MVC - Grid continously shows loading mask while store has reloaded
- model/store field setter/getter
- extjs 4.1 doc example code "cell editing" is not working
- Dropdown column in a grid panel.
- wizard functionality in a tab panel
- How to programatically select an item in Ext.view.View
- Applying TdCls styles dynamically for textarea in extjs4.1
- Ext.ux.GMapPanel with markers takern from server?
- What's the difference between model and store?
- ExtJs 4.1 - Uncaught Error - Cannot read property "items" of undefined.
- paste plaintext using htmleditor
- ExtJs4.1 - Form data not being submitted to server
- Grid menu sort . What listener shoud I use
- Problem in dynamic tooltip
- Empty an array
- row count
- Display icons in Ext.form.ComboBox items
- GRID: Sort on load doesn't show sorting arrows on columns.
- Enter Key si not working inside text area in IE8 and IE9
- Combobox autocomplete to search the whole string in local mode
- Removing the splitter when a panel collapses
- Tree/TreeStore mixing checkboxes and radio buttons
- Ext.js is undefined in Chrome
- How to change background color/image of a tab?
- How to get auto completion for EXTJS 4.1 in Eclipse Juno????
- bind store error
- WTF???
- How to add Extra textbox in RowEditor
- EXT JS 4 MVC - call from controller a click event on a div
- Hide Ckeditor in some scenarios
- Column Editor which is having panel doent get values
- Vbox Vertical Scroll Issue
- Cannot get button by find field and name
- Apply background color to selected row..
- difference
- TypeError: Ext.getCmp("DetailsForm").getForm is not a function
- Alignment of icon next to textfield control
- How to create One file for multiple stores?
- Extjs - 4.1.1 - Moving between views in an Extjs app
- GroupingSummary + XMLStore + remoteRoot ?
- ExtJS 4.1 + Spring REST Simple Form Submission Example ?
- default button change with image in extjs
- [ FORM ] Html Editor - Ext Direct - Bug while setting value while collapsed
- Unset a value in multiselect combo
- Drag and drop from outside Ext JS application
- Input text and Button within Menu
- How to see object properties ?
- How to get hold over JSON object while using HTTPProxy?
- how to arrange an textfield and a button to one line horizontally?
- Creating Menu items dynamically
- How to get TextField value from the Grid
- How to handle "HTTP Error 302" in Extjs & PHP?
- Extjs 4.1
- Getting Java script error when submitting the Field Container Editable Row
- form submit upload file via ext.direct api submit
- how do i Clear all selections in Ext.tree.Panel? Ext JS 4.1
- Do we have the chance to achieve the 4.1.2
- parameter field or property in remote filter with grid and store
- Ext.DomQuery not working with XML namespaces
- Failure code in Firefox: 0x805e0006
- Infinite scroll table dont work.
- Check duplicate rows in ExtJs Grid
- RowEditing how to enabled via record settings.
- Combobox: hide selected value from dropdown list
- refresh grid/store after data changed in store
- ExtJS 4 or 4.1 MessageBox Custom Buttons
- Ext.grid.RowEditor: grid header above the editor
- Grid Drag to Group?
- How can I insert new record to a store?
- Calculated field in model
- upload file with api extdirect error
- Grid with multiple views
- Why the pagingToolbar's Next Buttons is Disabled ?
- Controller Refs Selector Problem 2
- File Upload response Bug
- ExtJS form submission ?
- why php $_FILES is empty whe uploading form via api ??
- Change cursor for click event
- graphing problem with multiple node XML data and unix timestamps
- Text Field in the header of a column
- Cannot read property "fields" of undefined
- Setting Text Area height to 5 rows by default inside row edit grid
- Current Edit row data not visible in extjs4.1 grid
- Applying different Css styles in extjs4.1
- Reorder Store after move columns
- rowexpand and rezise grid height
- ExtJS Webtop with 'Desktop Switcher'?
- TabReorderer plugin's reordering does not work properly if there are many tabs
- Add New Row To Grid Panel With RowEditing Plugin
- Adding node to a treestore doesn't use Model
- MVC design | https://www.sencha.com/forum/archive/index.php/f-87-p-34.html?s=aa6962fe5093c8f150fedc953cdcc6f1 | CC-MAIN-2020-16 | en | refinedweb |
Remove redundant NOTICE symbolic link. Now that and are in, NOTICE symbolic links are no longer needed. Bug: 67772237 Bug: 68860345 Test: manually built and diffed before and after system image notices Change-Id: I435a659dc8f3b0ae90ead7c7beb60004bda1be33
Header-only library for division via fixed-point multiplication by inverse
On modern CPUs and GPUs integer division is several times slower than multiplication. FXdiv implements an algorithm to replace an integer division with a multiplication and two shifts. This algorithm improves performance when an application performs repeated divisions by the same divisor.
Supports division of uint32_t, uint64_t, and size_t variables.
#include <fxdiv.h>

/* Division of array by a constant: reference implementation */
void divide_array_c(size_t length, uint32_t array[], uint32_t divisor) {
    for (size_t i = 0; i < length; i++) {
        array[i] /= divisor;
    }
}

/* Division of array by a constant: implementation with FXdiv */
void divide_array_fxdiv(size_t length, uint32_t array[], uint32_t divisor) {
    const struct fxdiv_divisor_uint32_t precomputed_divisor =
        fxdiv_init_uint32_t(divisor);
    for (size_t i = 0; i < length; i++) {
        array[i] = fxdiv_quotient_uint32_t(array[i], precomputed_divisor);
    }
}
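The core trick can be illustrated outside of C. For a 32-bit divisor d, precompute m = ceil(2^64 / d); then for any 32-bit dividend x, (x * m) >> 64 equals x / d exactly. This is a simplified sketch of the principle in Python's arbitrary-precision integers; FXdiv's actual C implementation uses wide multiplies and chooses tighter per-divisor shift amounts.

```python
# Fixed-point "multiply by inverse" division, sketched with big integers.
# For a 32-bit dividend x and 32-bit divisor d, m = ceil(2**64 / d)
# satisfies (x * m) >> 64 == x // d exactly. This shows the principle
# only; it is not FXdiv's exact parameter selection.

def precompute(d):
    assert 0 < d < 2**32
    return (2**64 + d - 1) // d  # ceil(2**64 / d)

def divide(x, m):
    return (x * m) >> 64

for d in (1, 3, 7, 10, 1000, 12345, 2**31 - 1):
    m = precompute(d)
    for x in (0, 1, d - 1, d, d + 1, 123456789, 2**32 - 1):
        assert divide(x, m) == x // d, (x, d)
print("exact for all tested 32-bit dividends")
```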
Project is in alpha stage. API is unstable. Currently working features: | https://android.googlesource.com/platform/external/FXdiv/+/refs/heads/master | CC-MAIN-2020-16 | en | refinedweb |
Returns a copy of the vertex positions or assigns a new vertex positions array.
The number of vertices in the Mesh is changed by assigning a vertex array with a different number of vertices. Note that if you resize the vertex array then all other vertex attributes (normals, colors, tangents, UVs) are automatically resized too. RecalculateBounds is automatically invoked if no vertices have been assigned to the Mesh when setting the vertices.
using UnityEngine;
public class Example : MonoBehaviour { Mesh mesh; Vector3[] vertices; void Start() { mesh = GetComponent<MeshFilter>().mesh; vertices = mesh.vertices; }
void Update() { for (var i = 0; i < vertices.Length; i++) { vertices[i] += Vector3.up * Time.deltaTime; }
// assign the local vertices array into the vertices array of the Mesh. mesh.vertices = vertices; mesh.RecalculateBounds(); } }
| https://docs.unity3d.com/ScriptReference/Mesh-vertices.html | CC-MAIN-2020-16 | en | refinedweb |
End-to-End Multilingual Optical Character Recognition (OCR) Solution
Jaided Read
Supported Languages
We are currently supporting the following 39 languages.
Afrikaans (af), Azerbaijani (az), Bosnian (bs), Czech (cs), Welsh (cy),
Danish (da), German (de), English (en), Spanish (es), Estonian (et),
French (fr), Irish (ga), Croatian (hr), Hungarian (hu), Indonesian (id),
Icelandic (is), Italian (it), ... The list of characters is in the folder jaidedread/character. If you are a native speaker of any language and think we should add or remove any character, please create an issue.
Installation
Install using pip for the stable release:

pip install jaidedread

For the latest development release:

pip install git+git://github.com/jaidedai/jaidedread.git
Usage
import jaidedread

reader = jaidedread.Reader(['th','en'])
reader.readtext('test.jpg')
Model weights for the chosen language will be automatically downloaded, or you can download them manually from and put them in the 'model' folder. You can set workers and batch_size. The current version converts the image into grey scale for the recognition model, so contrast can be an issue. You can try playing with contrast_ths, adjust_contrast and filter_ths.
See full documentation at
To be implemented
- Language packs: Chinese, Japanese, Korean group + Russian-based languages + Arabic +.
| https://pythonawesome.com/end-to-end-multilingual-optical-character-recognition-ocr-solution/ | CC-MAIN-2020-16 | en | refinedweb |
Input: [2,3,-2,4]
Output: 6
The Maximum Product Sub-Array problem asks the user to find the contiguous subarray with the largest possible product. Sounds easy, right?
The catch is to calculate the highest product keeping in mind that the array size can vary, as well as the fact that the locations have to be contiguous. This is explained simply below:
Again, we employ two approaches to solve this problem:
1. Brute Force
2. Divide and Conquer
There also exists a third way, which is known as Kadane’s Algorithm, but we won’t go into that in this section.
# Approach 1: Brute force
In Brute force, the pseudo-code is to:
- Find all the possible subarrays
- Find the maximum.
Code:
#include <iostream>
using namespace std;

void MaxProductSubArray(int *a, int size)
{
    int MaxProduct = a[0];
    int MaxProductStart = 0, MaxProductEnd = 0;
    for (int i = 1; i < size; i++) {
        int tempProduct = 1;  // must start at 1: running product of a[j..i]
        int j = i;
        while (j >= 0) {
            tempProduct *= a[j];
            if (tempProduct > MaxProduct) {
                MaxProduct = tempProduct;
                MaxProductStart = j;
                MaxProductEnd = i;
            }
            j--;
        }
    }
    cout << "Max Product in the array : " << MaxProduct << endl;
    cout << "Max Product Start : " << MaxProductStart
         << " MaxProductEnd : " << MaxProductEnd << endl;
}
# APPROACH 2: Optimized Approach
Here, the idea is to traverse the array from left to right, keeping two variables, minEnd and maxEnd, which represent the minimum and maximum product of a subarray ending at the ith index of the array. If the ith element is negative, the roles of minEnd and maxEnd are swapped, since multiplying by a negative number turns the current maximum product into the minimum and vice versa. After updating both, compare maxEnd against the best answer seen so far.

The value of minEnd and maxEnd at each step is either the current element on its own or the product of the current element with the previous minEnd or maxEnd respectively.
Code:
#include <bits/stdc++.h>
using namespace std;

int maxSubarray(int givenArray[], int n)
{
    int maxEnd = 1;   // maximum product of a subarray ending here
    int minEnd = 1;   // minimum product of a subarray ending here
    int maxSoFar = 1;
    int flag = 0;     // set once we have seen a positive element
    for (int i = 0; i < n; i++) {
        if (givenArray[i] > 0) {
            maxEnd = maxEnd * givenArray[i];
            minEnd = min(minEnd * givenArray[i], 1);
            flag = 1;
        }
        else if (givenArray[i] == 0) {
            maxEnd = 1;
            minEnd = 1;
        }
        else {
            // a negative element swaps the roles of maxEnd and minEnd
            int temp = maxEnd;
            maxEnd = max(minEnd * givenArray[i], 1);
            minEnd = temp * givenArray[i];
        }
        if (maxSoFar < maxEnd)
            maxSoFar = maxEnd;
    }
    if (flag == 0 && maxSoFar == 1)
        return 0;
    return maxSoFar;
}

int main()
{
    // The worked example from the top of the article; expected output is 6.
    // (The original driver array { 12, 4234, 34234, -21, -231, -1 }
    //  overflows a 32-bit int mid-computation.)
    int givenArray[] = { 2, 3, -2, 4 };
    int n = sizeof(givenArray) / sizeof(givenArray[0]);
    cout << "Maximum Sub Array product is " << maxSubarray(givenArray, n);
    return 0;
}
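As a sanity check, the min/max bookkeeping described above can be sketched in Python and cross-checked against a brute-force scan of every subarray. Note this sketch tracks the true signed maximum, so it can return a negative product for an all-negative array, which is a slightly different convention from the clamped version above.

```python
import random

def max_product_brute(a):
    # Try every contiguous subarray; O(n^2).
    best = a[0]
    for i in range(len(a)):
        prod = 1
        for j in range(i, len(a)):
            prod *= a[j]
            best = max(best, prod)
    return best

def max_product_linear(a):
    # O(n): track the max and min product of a subarray ending here.
    max_end = min_end = best = a[0]
    for x in a[1:]:
        if x < 0:
            max_end, min_end = min_end, max_end  # roles swap on a negative
        max_end = max(x, max_end * x)
        min_end = min(x, min_end * x)
        best = max(best, max_end)
    return best

assert max_product_linear([2, 3, -2, 4]) == 6  # the example from the top

for _ in range(200):
    a = [random.randint(-9, 9) for _ in range(random.randint(1, 10))]
    assert max_product_linear(a) == max_product_brute(a), a
print("linear scan matches brute force")
```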
This is how the Maximum product Subarray problem is solved.
| https://www.studymite.com/maximum-product-sub-array-problem/?utm_source=related_posts&utm_medium=related_posts | CC-MAIN-2020-16 | en | refinedweb |
Controllers fail during execution when using octomap with Moveit
I'm using MoveIt with the default
RRTConnectkConfigDefault motion planning library. I have a 6 DoF arm to which I pass target poses using roscpp's MoveGroupInterface. I'm using ros_control and have created my own Fake Controllers of the type
FollowJointTrajectory. The target poses are acquired from the readings of a depth camera in Gazebo.
By default, I do not use the octomap using the depth cam. In this cases, MoveIt is able to generate the plans and also execute them successfully. I can see the arm moving in Rviz. I do not have an arm in Gazebo, it's only loaded in Rviz.
When I use the octomap, MoveIt can generate the plans, but fails during execution. All the joint states are being published on topic
/joint_states. The same code works when I remove the sensors.yaml file and don't use the octomap. The controllers are up. I don't see any other errors in the terminal. Please help me identify the cause.
EDIT: When I increase the resolution of the octomap from 0.01m to 0.05m, then suddenly things start to work? I changed the value in moveit_config/sensor_manager.launch file as so:
<param name="octomap_resolution" type="double" value="0.05" />
I'd like to make things work at 0.01m resolution, it looks granular enough.
Here are the error messages: Error from MoveGroupInterface terminal where I pass in commands:
[ INFO] [1524323290.317619120, 261.964000000]: Ready to take commands for planning group arm.
[ INFO] [1524323293.819272021, 265.299000000]: 3D Co-ords of next target: X: 0.487010, Y: -0.132180, Z:-0.215447
[ INFO] [1524323303.789838914, 275.190000000]: ABORTED: Solution found but controller failed during execution
Error from the MoveIt terminal:
[ INFO] [1524323029.558940862, 3.456000000]: Starting scene monitor
[ INFO] [1524323029.562571287, 3.460000000]: Listening to '/move_group/monitored_planning_scene'
[ INFO] [1524323033.952942898, 7.842000000]: Constructing new MoveGroup connection for group 'arm' in namespace ''
[ INFO] [1524323035.143276258, 9.027000000]: Ready to take commands for planning group arm.
[ INFO] [1524323035.143337694, 9.027000000]: Looking around: no
[ INFO] [1524323035.143357118, 9.027000000]: Replanning: no
[New Thread 0x7fffbef7e700 (LWP 20461)]
[Wrn] [Publisher.cc:141] Queue limit reached for topic /gazebo/empty/pose/local/info, deleting message. This warning is printed only once.
[ WARN] [1524323072.113526614, 45.706000000]: Failed to fetch current robot state.
[ INFO] [1524323072.113676595, 45.706000000]: Planning request received for MoveGroup action. Forwarding to planning pipeline.
Debug: Starting goal sampling thread
[New Thread 0x7fffb3fff700 (LWP 21008)]
Debug: Waiting for space information to be set up before the sampling thread can begin computation...
[ INFO] [1524323072.116232595, 45.709000000]: Planner configuration 'arm[RRTConnectkConfigDefault]' will use planner 'geometric::RRTConnect'. Additional configuration parameters will be set when the planner is constructed.
Debug: The value of parameter 'longest_valid_segment_fraction' is now: '0.0050000000000000001'
Debug: The value of parameter 'range' is now: '0'
Debug: arm[RRTConnectkConfigDefault]: Planner range detected to be 4.017020
Info: arm[RRTConnectkConfigDefault]: Starting planning with 1 states already in datastructure
Debug: arm[RRTConnectkConfigDefault]: Waiting for goal region samples ...
Debug: Beginning sampling thread ...
Side effects and Pure functions in JS functions
Side effects
A side effect is when a function affects things outside of itself: when any of its inputs or outputs are indirect.
- Indirect input, i.e. was not part of the parameters
- Indirect output i.e. was not returned from the function
A function without any side effects cannot access anything from outside itself, and cannot assign to anything outside of itself. In functional programming, you don't want any side effects.
function shippingRate() {
  rate = ((size + 1) * weight) + speed
}

var rate
var size = 12
var weight = 4
var speed = 5

shippingRate()
rate // 57

size = 8
speed = 6
shippingRate()
rate // 42
Notice how in the above example, shippingRate() had no parameters passed in (i.e. no direct input), and the function didn't return anything (i.e. no direct output). Yet it still changed the value of the rate variable. The following example is the fixed, functional way of writing that function:
function shippingRate(size, weight, speed) { // parameters = direct input
  return ((size + 1) * weight) + speed      // return = direct output
}

shippingRate(12, 4, 5) // 57
shippingRate(8, 4, 6)  // 42
Some examples of side effects are:
- Accessing and changing variables outside the function call
- I/O (console, files, etc.)
- Network Calls
- Manipulating the DOM
- Timestamps
- Generating random numbers
- Any function that’s blocking the execution of another function
Some side effects are impossible to avoid. The goal is to minimize side effects. And make them obvious when we must have them.
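One practical pattern for making unavoidable side effects obvious is to keep a pure core and push the side effect into a thin shell around it. A sketch in Python (illustrative only; the function names are mine, echoing the shipping-rate example above):

```python
def shipping_rate(size, weight, speed):
    # Pure core: direct inputs, direct output, no outside state touched.
    return ((size + 1) * weight) + speed

def print_shipping_rate(size, weight, speed):
    # Impure shell: the only side effect (console I/O) lives here,
    # where it is easy to spot and easy to test around.
    print(f"rate = {shipping_rate(size, weight, speed)}")

print_shipping_rate(12, 4, 5)  # rate = 57
```

The pure core can be unit-tested with plain assertions, while the side-effecting shell stays small enough to inspect by eye.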
Pure functions
A function with its inputs direct, its outputs direct and without any side effects is a pure function.
// pure
function addTwo(x, y) {   // direct input
  return x + y            // direct output
}

// impure
function addAnother(x, y) {
  return addTwo(x, y) + z // where'd that z come from?
}
However, if you are accessing constant values that will not be reassigned inside your function, then that'd also be functional programming (according to Kyle Simpson). The fact that it is a constant should be obvious to the reader of your code.
const z = 1

// pure
function addTwo(x, y) {   // direct input
  return x + y            // direct output
}

// impure
function addAnother(x, y) {
  return addTwo(x, y) + z // z is a constant value in the function's scope
}

addAnother(20, 21) // 42
Handle an _IO_FDINFO message
#include <sys/iofunc.h>

int iofunc_fdinfo( resmgr_context_t * ctp,
                   iofunc_ocb_t * ocb,
                   iofunc_attr_t * attr,
                   struct _fdinfo * info );

The _fdinfo structure is included in the reply part of an io_fdinfo_t structure; for more information, see the documentation for iofunc_fdinfo_default().
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The iofunc_fdinfo() helper function provides the implementation for the client's iofdinfo() call, which is received as an _IO_FDINFO message by the resource manager.
The iofunc_fdinfo() function transfers the appropriate fields from the ocb and attr structures to the info structure. If attr is NULL, then the attr information comes from the structure pointed to by ocb->attr.
Map a memory region into a process's address space
#include <sys/mman.h>

void * mmap( void * addr,
             size_t len,
             int prot,
             int flags,
             int fildes,
             off_t off );

void * mmap64( void * addr,
               size_t len,
               int prot,
               int flags,
               int fildes,
               off64_t off );
The following are Unix or QNX Neutrino extensions:
For more information, see below.
The object that you map from can be one of the following:
If fildes isn't NOFD, you must have opened the file descriptor for reading, no matter what value you specify for prot; write access is also required for PROT_WRITE if you haven't specified MAP_PRIVATE.
The mapping is as shown below:
Typically, you don't need to use addr; you can just pass NULL instead. Mappings, including the flags, are maintained across a fork().
The flags argument includes a type (masked by the MAP_TYPE bits) and additional bits. You must specify one of the following types:
You can OR the following flags into the above type to further specify the mapping:
MAP_ANON is most commonly used with MAP_PRIVATE, but you can use it with MAP_SHARED to create a shared memory area for forked applications.
If addr isn't NULL, and you don't set MAP_FIXED, then the value of addr is taken as a hint as to where to map the object in the calling process's address space. The mapped area won't overlay any current mapped areas.
For anonymous shared memory objects (those created via mmap() with MAP_ANON | MAP_SHARED and a file descriptor of -1), a MAP_LAZY flag implicitly sets the SHMCTL_LAZY flag on the object (see shm_ctl()).
If you use MAP_PHYS with MAP_ANON, mmap() allocates physically contiguous memory and ignores the offset. You should almost always use these flags with MAP_SHARED; if you use them with MAP_PRIVATE and then fork(), then when the parent or child privatizes a page, you'll get a new physical page, and the entire range will no longer be contiguous.
The following flag is defined in <sys/mman.h>, but you shouldn't use it:
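Outside of QNX-specific flags such as MAP_PHYS, the anonymous shared mapping described above (MAP_ANON | MAP_SHARED with a file descriptor of -1, i.e. NOFD) is plain POSIX behavior, and can be exercised from Python's stdlib mmap wrapper. An illustrative sketch, not QNX code:

```python
import mmap

# Anonymous shared mapping: fileno of -1 corresponds to NOFD above.
buf = mmap.mmap(-1, 4096)   # len is rounded up to a whole page
buf[:5] = b"hello"          # readable and writable by default
assert buf[:5] == b"hello"
buf.close()
```

Because the mapping is MAP_SHARED, a process that forks after creating it shares the same physical pages with its child, which is the usual way such anonymous regions are used for IPC.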
mmap() is POSIX 1003.1 SHM|TYM; mmap64() is Large-file support
In this tutorial, you'll go through a code walkthrough of a Maven project containing a complete Cloud Endpoints backend API and a sample web client that accesses the API. This sample demonstrates many of the core features supported for backend APIs:
- A simple HTTP GET method that retrieves one of two canned responses based on the user's choice (Get Greeting).
- A simple HTTP GET method that retrieves all of the canned responses (List Greeting).
- A POST method that provides a user-supplied greeting and multiplier to the backend API, which then returns the greeting repeated the number of times specified by the multiplier (Multiply Greeting).
- An OAuth-protected method that requires a signed-in user, which returns the user's email address (Authenticated Greeting).
This walkthrough focuses on the backend API, and doesn't go into detail about the web client included with the project. You can find a full description of the web client in the web client tutorial.
The UI provided by the web client included with the sample project looks like this:
Objectives
A complete Hello Endpoints backend API that demonstrates common tasks, such as:
- Handling HTTP GET requests
- Handling HTTP POST requests
- Protecting methods with OAuth 2.0
- Deploying the backend API to production App Engine
Costs
App Engine has a free level of usage. If your total usage of App Engine is less than the limits specified in the App Engine free quota, there is no charge for doing this tutorial.
Before you begin
Set up your environment and install the supported versions of Maven and Java as specified in Using Apache Maven and the App Engine Plugin. Create a project in the Google Cloud Platform Console, and make a note of your project ID because you'll need to use it later.
Cloning the sample project
Clone the Hello Endpoints sample from GitHub:
git clone
Alternatively, you can download the sample as a zip file and extract it.
Viewing the project layout and files
If you execute a
tree command or equivalent on the directory
appengine-endpoints-helloendpoints-java-maven, the following represents the
structure of the project:
You'll learn about these these files during the walkthrough:
Creating OAuth 2.0 client IDs for the backend
You need to create a client ID for each client, in this example for the web
client. The client ID is added to the backend API (in
Constants.java) and to
the web client.
To create a client ID:
Open the Credentials page for your project:
Go to the Credentials page
Click Create credentials > OAuth client ID.
Click Configure consent screen
Supply a product name, which you can change later, and click Save.
Select Web application as the application type to display the settings for web clients.
Specify a name for the web client.
In the textbox labeled Authorized JavaScript origins, specify the App Engine URL of your backend API, for example, https://your_project_id.appspot.com, replacing your_project_id with your actual App Engine project ID. Be sure to specify the https URL, not http.
Click Create.
Note the client ID that is generated. This is the client ID you need to use in your backend and in your client application. You can always return to the Credentials page later to view the client ID.
Adding the client ID to backend API and to web client
To add the client ID to backend and client:
Edit the file
appengine-endpoints-helloendpoints-java-maven/src/main/java/com/example/helloendpoints/Constants.java
For WEB_CLIENT_ID, replace the value "replace this with your web client ID" with the client ID, and save your changes. Ignore the constants for Android and iOS because you won't use these in this walkthrough.
Edit the file
appengine-endpoints-helloendpoints-java-maven/src/main/webapp/js/base.js.
Edit the line starting with google.devrel.samples.hello.CLIENT_ID = so that it contains the client ID.
Adding the project ID to the application
You must add the project ID obtained when you created your project to your app before you can deploy.
To add the project ID:
Edit the file
appengine-endpoints-helloendpoints-java-maven/src/main/webapp/WEB-INF/appengine-web.xml.
For <application>, replace the value your-app-id with your project ID.
Save your changes.
Building and running the API locally
To build and run the backend API, and test it locally using the sample web client:
In the root directory of the project, appengine-endpoints-helloendpoints-java-maven:
Start the app in the local development server by invoking:
mvn appengine:devserver
In your browser, visit the URL http://localhost:8080 to view the web client app.
In the Greeting ID text box, supply a value of 0 or 1, then click Submit. (The sample backend has only two stored messages.) You'll see hello world! or goodbye world!, depending on the value.
Under List Greetings, click Submit to list out those same two stored greetings.
Under Multiply Greetings, supply any text in the Greeting textbox and the number of repetitions in the Count textbox, then click Submit.
Note that the Authenticated Greeting feature only works in deployment.
- Edit the file src/main/webapp/WEB-INF/appengine-web.xml to set <application> to the project ID you obtained earlier during Setup. (Or, look up the Client ID for web application in the Google Cloud Platform Console.)
Deploying to App Engine
After you finish testing, you can deploy to App Engine:
To deploy to App Engine:
From the main project directory,
helloendpoints/, invoke the command
mvn appengine:update
Follow the prompts: when you are presented with a browser window containing a code, copy it to the terminal window.
Wait for the upload to finish, then visit the URL you specified above for the Authorized JavaScript origins (https://your_project_id.appspot.com).
If you don't see the web client app, or if the client doesn't behave as expected, check for a successful deployment.
Hello Endpoints code walkthrough
In this part of the tutorial, you'll learn more about the code in the sample backend API.
Imports
The following imports are needed for the backend API. This sample also imports com.google.api.server.spi.config.ApiMethod to illustrate its use in changing the name of a method, but this is optional; all public methods of the class with the @Api annotation are automatically exposed in the backend API.

The sample imports com.google.appengine.api.users.User because it has a method protected by OAuth 2.0. The import javax.inject.Named is required for the named parameters passed to the API methods.
The sample protects a method by OAuth 2.0, which means that the scopes attribute is required; this must be set to the value https://www.googleapis.com/auth/userinfo.email, which is what Constants.EMAIL_SCOPE resolves to. This scope lets OAuth 2.0 work with Google Accounts.

Also, because of the OAuth 2.0 protection, you must supply a list of clients allowed to access the protected method in the clientIDs attribute. The sample suggests one way to do this, with lists of client IDs in the Constants.java file.

The sample handles a simple HTTP GET request with a plain annotated method. For the HTTP POST request, it uses the @ApiMethod annotation to override the default name that is generated by Endpoints. Notice that Endpoints prepends the class name (greetings, lowercase) to the method name when it generates the method name in the backend API: greetings.insertGreeting.
greetings.insertGreeting.
When you override this value using the method annotation, the prepending does
not take place, so you need to add the class prepending manually in your code to
make it consistent with the other backend API method names.
The method annotation can perform other API overrides as well; for more details, see @ApiMethod: Method-Scoped Annotations.
Protecting a method with OAuth 2.0

If the request coming in from the client has a valid auth token or is in the list of authorized clientIDs, the backend framework supplies a valid User object. If there is no User, you could choose, for example, to return a not-authenticated error or perform some other desired action.
Cleanup
To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial:
- Go to the Cloud Platform Console.
- In the list of projects, select the project that you want to shut down.
- In the prompt, type your project ID to confirm the deletion.
- Click Shut down to schedule your project for deletion.
What's next
Now that you've created your first backend APIs, take a deeper dive into backend API features and configurations, and you might also explore storing your API's data in Cloud Datastore.
I'm using an ATMega16 to detect overload current, so it measures the true RMS of the current. I read that the ADC can run at 1 MHz but with 8-bit resolution. I want to take a reading after every cycle of a 50 Hz signal, so:
System clock = 8 MHz, prescaler is 8, so the ADC clock is 1 MHz

Therefore the ADC conversion time is 13 µs (13 ADC clock cycles)

So in one 50 Hz cycle there are 0.02/(13*10^-6) = 1538 samples? (0.02 s is the period)
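The arithmetic can be checked directly (Python used here just as a calculator; variable names are mine):

```python
f_cpu = 8_000_000          # system clock, Hz
adc_clk = f_cpu / 8        # prescaler of 8 -> 1 MHz ADC clock
t_conv = 13 / adc_clk      # 13 ADC clock cycles per normal conversion
period = 1 / 50            # one 50 Hz mains cycle = 0.02 s
samples = period / t_conv
print(samples)             # ~1538.46, so at most 1538 full conversions fit
```

Note that the very first conversion after enabling the ADC takes 25 cycles rather than 13, so in practice slightly fewer conversions fit in the first cycle.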
Code:
#include <avr/io.h>
#ifndef F_CPU
#define F_CPU 8000000UL
#endif
#include <avr/interrupt.h>
#include <stdio.h>
#include <util/delay.h>
#include <string.h>
#include <math.h>          /* for pow() and sqrt() */

void adcInit(void);
void breaker(void);

float stepsize = 5.0/256;
float digitalOut = 0;
float rmsCurrent = 0;
float voltage = 0;
float current = 0;
float currentSquared = 0;
char buffer[32];

int main(void)
{
    adcInit();
    while (1)
    {
        breaker();
    }
}

void adcInit(void)
{
    DDRA = (0<<DDA0);
    ADMUX = (1<<REFS0)|(1<<ADLAR);
    ADCSRA = (1<<ADEN)|(1<<ADPS1)|(1<<ADPS0);
}

void breaker(void)
{
    for (int i = 0; i < 1538; i++)
    {
        ADCSRA |= (1<<ADSC);
        while (ADCSRA & (1<<ADSC))
        {
        }
        digitalOut = ADCH;
        voltage = digitalOut*stepsize;
        current = (voltage/560.0)*1000.0;
        currentSquared = currentSquared + pow(current,2);
    }
    rmsCurrent = sqrt(currentSquared/1538.0); /* true RMS over one cycle */
    currentSquared = 0;                       /* reset after every cycle */
    sprintf(buffer,"%f;",rmsCurrent);         /* store in buffer */
}
So by setting the ADLAR bit and reading ADCH, will the ADC run in 8-bit mode? And then will the step size be 5 V/256, where 5 V is my reference voltage?
And yes, each increment of the ADC you read should be 5V/256 = 0.0195V
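Both the step size and the true-RMS accumulation can be sanity-checked off-target before worrying about the AVR itself. An illustrative Python sketch (ideal sine input assumed, not real ADC data):

```python
import math

step = 5.0 / 256                  # 8-bit result, 5 V reference
print(step)                       # ~0.0195 V per count, as stated above

# Simulate one 50 Hz cycle of an ideal 1 V-amplitude sine, 1538 samples,
# and accumulate the sum of squares the same way breaker() does.
n = 1538
samples = [math.sin(2 * math.pi * i / n) for i in range(n)]
rms = math.sqrt(sum(s * s for s in samples) / n)
print(rms)                        # ~0.7071, i.e. amplitude / sqrt(2)
```

If the computed RMS doesn't come out to amplitude/sqrt(2) for a clean sine, the accumulation (or the sample count per period) is wrong, independent of any ADC issues.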
Unless you want to see harmonics from a PWM generated 50Hz, I would say that it's a total overkill to make all those samples, it's more important to have a known (integer) number of samples for each period.
If this is the only ADC channel you use, and the voltage change between each sample is small, you can expect a better result than 8 bit even with a high ADC clk speed.
Almost funny, IME and in this day and age. Yeah, one could sample as fast as possible as OP outlined, and calculate the RMS. Now, if you are spending all of your time gathering and storing and calculation with all of these samples -- when are you going to have time to do the "real" app work?
In practice with AVR8s and mains and overload current, even for production commercial devices our hardware designers gave me an external circuit with "RMS Current" being an A/D voltage level.
In practice, what kind of device requires fast sampling? Peak good enough? Perhaps not if the waveform is malformed, but then I'd think that you'd be bridling at the 8-bit results.
Anyway, a decade or two ago "electric meter chips" became available to us punters to do this stuff for only a couple bucks, feeding off the mass-market for energy meters. e.g. Analog Devices ADE7xxx series. Far more sophisticated than any of us could do "by hand".
(And why a Mega16 in 2017?!?)
You can put lipstick on a pig, but it is still a pig.
I've never met a pig I didn't like, as long as you have some salt and pepper.
I don't think that's so. That is, I don't think the bits are populated in ADCH/L 'as they come in'. Instead, they are tabulated internal to the ADC logic and then latched to the ADCH/L I/O register pair on the last rising edge of the ADC clock during a conversion, at the same time that ADIF is set. Prior to that, ADCH/L will contain the results of the previous conversion, or 0x000 if no previous conversion has taken place.
Cliff,
Are you saying that if one keeps reading the ADCH register, the upper 8 bits will be available BEFORE the ADC Complete flag sets? I do not think so, as per joey's explanation, but it might be an interesting experiment the OP can do:

Write an assembly program that starts the ADC and then fills R0 through R31 with ADC results and the ADC status register. That would be 16 reads, but as a conversion should be finished in 13, I think at max you will be able to do 5 or 6.

Additionally, the OP uses floats. A small tip is to use integers instead; it will save a lot of code space and computational time. And hopefully he has included all the float libs and parameters in Studio to actually make floating-point calculations possible. It has been years since I last used them, so it might be that in AS7 they are ON when the compiler sees a float definition, but my best guess is that you have to manually adjust a couple of project settings to actually make floats work.
This passage from the datasheet is not definitive:
... but I believe it supports the notion that ADCH/L is latched. That is, imagine that a conversion is under way when you read ADCL. Until you read ADCH, access to ADCH/L by the ADC logic is disabled. The warning above is w.r.t. what will happen if that access is still disabled at the moment the ongoing conversion completes. Were ADCH/L populated with bits from the ADC logic 'on-the-fly', I would expect the warning to be "don't attempt to read ADCH/L during a conversion >>at all<<", rather than "make sure you've re-enabled access to ADCH/L by the ADC (by reading ADCL) before a conversion completes".
I don't recall ever explicitly testing this in my experiments with the ADC. If I have time later I'll whip something up, but I'm not likely to be near my shop for at least a few days, and I have no gear with me here at home this week.
In any event, this whole discussion is really a bit of an OT...
Copyright © 2008 Creative Commons. This work is licensed under a Creative Commons Attribution License, v3.0. Please provide attribution to Creative Commons and the URL. It is also available under the W3C Document License. See the W3C Intellectual Rights Notice and Legal Disclaimers for additional information. paper introduces the Creative Commons Rights Expression Language (ccREL), the standard recommended by Creative Commons (CC) for machine-readable expression of copyright licensing terms and related information.1 ccREL and its description in this paper supersede all previous Creative Commons recommendations for expressing licensing metadata. Like CC's previous recommendation, ccREL is based on the World-Wide Web Consortium's Resource Description Framework (RDF).2Compared to the previous recommendation, ccREL is intended to be both easier for content creators and publishers to provide, and more convenient for user communities and tool builders to consume, extend, and redistribute.3
Formally,.4
Using this new recommendation, an author can express Creative Commons structured data in an HTML page using the following simple markup:
<div about="" xmlns:cc="http://creativecommons.org/ns#">
    This page, by
    <a property="cc:attributionName" rel="cc:attributionURL" href="">
        Lawrence Lessig
    </a>,
    is licensed under a
    <a rel="license" href="">
        Creative Commons Attribution License
    </a>.
</div>
From this markup, tools can easily and reliably determine that the page is licensed under a CC Attribution License, v3.0, and that attribution should be given to "Lawrence Lessig" at the given attribution URL.
This paper explains the design rationale for these recommendations and illustrates some specific applications we expect ccREL to support. We begin with a review of the original 2002 recommendation for Creative Commons metadata and we explain why, as Creative Commons has grown, we have come to regard this as inadequate. We then introduce ccREL in the syntax-free model: as a vocabulary of properties. Next, we describe the recommended concrete syntaxes. In addition, we explain how other frameworks, such as microformats, can be made ccREL compliant. Finally, we discuss specific use cases and the types of tools we hope to see built to take advantage of ccREL.
Creative Commons was publicly launched in December 2002, but its genesis traces to summer 2000 and discussions about how to promote a reasonable and flexible copyright regime for the Internet in an environment where copyright had become unreasonable and inflexible. There was no standard legal means for creators to grant limited rights to the public for online material, and obtaining rights often required difficult searches to identify rights-holders and burdensome transaction costs to negotiate permissions. As digital networks dramatically lowered other costs and engendered new opportunities for producing, consuming, and reusing content, the inflexibility and costs of licensing became comparatively more onerous.
Over the following year, Creative Commons' founders came to adopt a two-pronged response to this challenge. One prong was legal and social: create widely applicable licenses that permit sharing and reuse with conditions, clearly communicated in human-readable form. The other prong called for leveraging digital networks themselves to make licensed works more reusable and easy to find; that is, to lower search and transaction costs for works whose copyright holders have granted some rights to the public in advance. Core to this technical component is the ability for machines to detect and interpret the licensing terms as automatically as possible. Simple programs should thus be able to answer questions like:
Equally important is constructing a robust user-machine bridge for publishing and detecting structured licensing information on the Web, and stimulating the emergence of tools that lower the barriers to collaboration and remixing. For example, if a Web page contains multiple images, not all licensed identically, can users easily determine which rights are granted on a particular image? Can they easily extract this image, create derivative works, and distribute them while assigning proper credit to the original author? In other words, is there a clear and usable connection between what the user sees and what the machine parses? ccREL aims to be a standard that implementors can follow in creating tools that make these operations simple.
As early as fall 2001, Creative Commons had settled on the approach of creating machine-readable licenses based on the World Wide Web Consortium's then-emerging Resource Description Framework (RDF), part of the W3C Semantic Web Activity.5
The motivation for choosing RDF in 2001, and for continuing to use it now, is strongly connected to the Creative Commons vision: promoting scholarly and cultural progress by making it easy for people to share their creations and to collaborate by building on each other's work. In order to lower barriers to collaboration, it is important that the machine expression of licensing information and other metadata be interoperable. Interoperability here means not only that different programs can read particular metadata properties, but also that vocabularies--sets of related properties--can evolve and be extended. This should be possible in such a way that innovation can proceed in a distributed fashion in different communities--authors, musicians, photographers, cinematographers, biologists, geologists, and so on--so that licensing terms can be devised by local communities for types of works not yet envisioned.
RDF is a framework for describing entities on the Web. It provides exceptionally strong support for interoperability and extensibility. All entities in RDF are named using a simple, distributed, globally addressable scheme already well known to Web users: the URL, and its generalization the URI.6
For example, Lawrence Lessig's blog, a document identified by its URL, is licensed under the Creative Commons Attribution license. That license is also a document, identified by its own URL. The property of "being licensed under", which we'll call "license", can itself be considered a Web object and identified by a URL. This URL refers to a Web page that contains information describing the "license" property. This particular Web page, maintained by the Web Consortium, is the reference document that describes the vocabulary supported as part of the Web standard XHTML language.7
Instantiating properties as URLs enables anyone to use those properties to formulate descriptions, or to discover detailed information about an existing property by consulting the page at the URL, or to make new properties available simply by publishing the URLs that describe those properties.
As a case in point, Creative Commons originally defined its own "license" property, which it published at a URL of its own: at the time, no other group had defined in RDF the concept of a copyright license. When the XHTML Working Group introduced its own license property in 2005, we opted to start using their version, rather than maintain our own CC-dependent notion of license. We were then able to declare that our original property is equivalent to the new property, simply by updating the description at the original URL. Importantly, RDF makes this equivalence interpretable by programs, not just humans, so that "old" RDF license declarations can be automatically interpreted using the new vocabulary.
In general, atomic RDF descriptions are called triples. Each triple consists of a subject, a property, and a value for that property of the subject. The triple that describes the license for Lessig's blog could be represented graphically as shown in figure 1: a point (the subject) labeled with the blog URL, a second point (the value) labeled with the license URL, and an arrow (the property) labeled with the URL that describes the meaning of the term "license", running from the blog to the license. In general, an RDF model, as a collection of triples, can be visualized as a graph of relations among elements, where the edges and vertices are all labeled using URIs.
Abstract RDF graphs can be expressed textually in various ways. One commonly used notation, RDF/XML, uses XML syntax. In RDF/XML the triple describing the licensing of Lessig's blog is denoted:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:xhtml="http://www.w3.org/1999/xhtml/vocab#">
  <rdf:Description rdf:
    <xhtml:license rdf:
  </rdf:Description>
</rdf:RDF>
One desirable feature of RDF/XML notation is that it is completely self-contained: all identifiers are fully qualified URLs. On the other hand, RDF/XML notation is extremely verbose, making it cumbersome for people to read and write, especially if no shorthand conventions are used. Even this simple example (verbose as it is) uses a shorthand mechanism: the second line of the description, beginning xmlns:xhtml, defines "xhtml:" to be an abbreviation for the XHTML vocabulary URL, thus expressing the license property in its shorter form, xhtml:license, on the fourth line.
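Extracting the single triple from such a document is mechanical. A minimal stdlib sketch in Python (for illustration only, not a full RDF parser; the subject URL below is a placeholder, while the license URL is the real CC BY 3.0 deed):

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
XHTML = "http://www.w3.org/1999/xhtml/vocab#"

# A complete RDF/XML document analogous to the example above.
doc = (
    f'<rdf:RDF xmlns:rdf="{RDF}" xmlns:xhtml="{XHTML}">'
    '<rdf:Description rdf:about="http://example.org/blog/">'
    '<xhtml:license rdf:resource="http://creativecommons.org/licenses/by/3.0/"/>'
    '</rdf:Description></rdf:RDF>'
)

desc = ET.fromstring(doc).find(f"{{{RDF}}}Description")
subject = desc.get(f"{{{RDF}}}about")
lic = desc.find(f"{{{XHTML}}}license").get(f"{{{RDF}}}resource")

# The recovered triple: (subject, property, value).
triple = (subject, XHTML + "license", lic)
print(triple)
```

A real consumer would use an RDF library rather than walking the XML by hand, but the sketch shows how the subject, property, and value line up with the rdf:about, element name, and rdf:resource in the serialization.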
Since the introduction of RDF, the Web Consortium has developed more compact alternative syntaxes for RDF graphs. For example the N3 syntax would denote the above triple more concisely:9
<> <> <> .
We could also rewrite this using a shorthand as in the RDF/XHTML example above, defining: xhtml: as an abbreviation for:
@prefix xhtml: <> . <> xhtml:license <> .
The shorthand does not provide improved compactness or readability if a prefix is only used once as above, of course. In N3, prefixes are typically defined only when they are used more than once, for example to express multiple properties taken from the same vocabulary. In RDF/XML, because of the stricter parsing rules of XML, there is a bit less flexibility: predicates can only be expressed using the shorthand, while subjects can only be expressed using the full URI.
With its first unveiling of machine-readable licenses in 2002, Creative Commons recommended that publishers use the RDF/XML syntax to express license properties. The CC web site included a Web-based license generator, where publishers could answer a questionnaire to indicate what kind of license they wished, and the generator then provided RDF/XML text for them to include on their Web pages, inside HTML comments:
<!-- [RDF/XML HERE] -->
We knew at the time that this was a cumbersome design, but there was little alternative. RDF/XML, despite its verbosity, was the only standard syntax for expressing RDF. Worse, the Web Consortium's Semantic Web Activity was focused on providing organizations with ways to annotate databases for integration into the Web, and it paid scant attention to the issues of intermixing semantic information with visible Web elements. A task force had been formed to address these issues, but there was no W3C standard for including RDF in HTML pages.
One consequence of CC's limited initial design is that, although millions of Web pages now include Creative Commons licenses and metadata, there is no uniform, extensible way for tool developers to access this metadata, and the tools that do exist rely on ad-hoc techniques for extracting metadata.
Since 2004, Creative Commons has been working with the Web Consortium to create more straightforward and less limited methods of embedding RDF in HTML documents. These new methods are now making their way through the W3C standards process. Accordingly,
Creative Commons no longer recommends using RDF/XML in HTML comments for specifying licensing information. This paper supersedes that recommendation.
We hope that the new ccREL standard presented in this paper will result in a more consistent and stable platform for publishers and tool builders to build upon Creative Commons licenses.
This section describes ccREL, Creative Commons' new recommendation for machine-readable licensing information, in its abstract form, i.e., independent of any concrete syntax. As an abstract specification, ccREL consists of a small but extensible set of RDF properties that should be provided with each licensed object. This abstract specification has evolved since the original introduction of CC properties in 2002, but it is worth noting that all first-generation licenses are still correctly interpretable against the new specification, thanks in large part to the extensibility properties of RDF itself.
The abstract model for ccREL distinguishes two classes of properties:
Publishers will normally be concerned only with Work properties: this is the only information publishers provide to describe a Work's licensing terms. License properties are used by Creative Commons itself to define the authoritative specifications of the licenses we offer. Other organizations are free to use these components for describing their own licenses. Such licenses, although related to Creative Commons licenses, would not themselves be Creative Commons licenses, nor would they necessarily be endorsed by Creative Commons.
A publisher who wishes to license a Work under a Creative Commons license must, at a minimum, provide one RDF triple that specifies the value of the Work's license property (i.e., the license that governs the Work), for example
<> xhtml:license <> .
Although this is the minimum amount of information, Creative Commons also encourages publishers to include additional triples giving information about licensed works: the title, the name and URL for assigning attribution, and the document type. An example might be
<> dc:title "The Lessig Blog" . <> cc:attributionName "Larry Lessig" . <> cc:attributionURL <> . <> dc:type dcmitype:Text .
The specific work properties illustrated here are
Incidentally, the above list of four triples could alternatively be expressed using the N3 semicolon convention, which indicates a list of triples that all share the same subject:
@prefix dc: <> . @prefix cc: <> . @prefix dcmitype: <> . <> dc:title "The Lessig Blog" ; cc:attributionName "Larry Lessig" ; cc:attributionURL <> ; dc:type dcmitype:Text .
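The semicolon convention is purely syntactic sugar: each predicate-object pair combines with the shared subject to yield an ordinary triple. A sketch of that expansion, with illustrative strings standing in for the elided URIs:

```python
# Sketch: expand the N3 semicolon shorthand (one subject, several
# predicate-object pairs) into individual triples. The subject and
# object strings are illustrative stand-ins, not the actual URIs.
def expand_semicolons(subject, pairs):
    """Return one (s, p, o) triple per predicate-object pair."""
    return [(subject, p, o) for p, o in pairs]

triples = expand_semicolons(
    "<work>",
    [("dc:title", '"The Lessig Blog"'),
     ("cc:attributionName", '"Larry Lessig"'),
     ("dc:type", "dcmitype:Text")],
)
for t in triples:
    print(t)
```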
There are two more Work properties available to publishers of CC material:
<> dc:source <> .
A typical use would then be:
<> cc:morePermissions <> .
The information at the designated URL is completely up to the publisher, as are the terms of the associated additional permissions, with one proviso: The additional permissions must be additional permissions, i.e., they cannot restrict the rights granted by the Creative Commons license. Said another way, any use of the work that is valid without taking the morePermissions property into account, must remain valid after taking morePermissions into account.
This is the current set of ccREL Work properties. New properties may be added over time, defined by Creative Commons or by others. Observe that ccREL inherits the underlying extensibility of RDF--all that is required to create new properties is to include additional triples that use them. For example, a community of photography publishers could agree to use an additional photoResolution property, and this would not disrupt the operation of pre-existing tools, so long as the old properties remain available. We'll see below that the concrete syntax (RDFa) recommended by Creative Commons for ccREL enjoys this same extensibility property.
Distributed creation of new properties notwithstanding, only Creative Commons can include new elements in the cc: namespace, because Creative Commons controls the namespace's defining document. The ability to retain this kind of control, without loss of extensibility, is a direct consequence of using RDF.
We now consider properties used for describing Licenses. With ccREL, Creative Commons does not expect publishers to use these license properties directly, or even to deal with them at all.
In contrast, Creative Commons' original metadata recommendation encouraged publishers to provide the license properties with every licensed work. This design was awkward, because once a publisher has already indicated which license governs the Work, specifying the license properties in addition is redundant and thus error prone. The ccREL recommendation does away with this duplication and leaves it to Creative Commons to provide the license properties.
Tool builders, on the other hand, should take these License properties into account so that they can interpret the particulars of each Creative Commons license. The License properties governing a Work will typically be found by URL-based discovery. A tool examining a Work notices the xhtml:license property and follows the indicated link to a page for the designated license. Those license description pages--the "Creative Commons Deeds"-- are maintained by Creative Commons, and include the license properties in the CC recommended concrete syntax (RDFa), as described in section 7.2:
Here are the License properties defined as part of ccREL:
Importantly, Creative Commons does not allow third parties to modify these properties for existing Creative Commons licenses. That said, publishers may certainly use these properties to create new licenses of their own, which they should host on their own servers, and not represent as being Creative Commons licenses.
The possible values for cc:permits, i.e., the possible permissions granted by a CC License are:
The possible values for cc:prohibits, i.e., possible prohibitions that modulate permissions (but do not affect permissions granted by copyright law, such as fair use) are:
The possible values for cc:requires are:
For example, the Attribution Share-Alike v3.0 Creative Commons license is described as:12
@prefix cc: . <> cc:permits cc:Reproduction ; cc:permits cc:Distribution ; cc:permits cc:DerivativeWorks ; cc:requires cc:Attribution ; cc:requires cc:ShareAlike ; cc:requires cc:Notice .
As new copyright licenses are introduced, Creative Commons expects to add new permissions, requirements, and prohibitions. However, it is unlikely that Creative Commons will introduce new license property types beyond permits, requires, and prohibits. As a result, tools built to understand these three property types will be able to interpret future licenses, at least by listing the license's permissions, requirements, and prohibitions: thanks to the underlying RDF framework of designating properties by URLs, these tools can easily discover human-readable descriptions of these as-yet-undefined property values.
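The three License property types lend themselves to a simple data model: a license is just three sets of values, and a tool can reason about any future license by inspecting those sets. A hedged sketch, using short names in place of the full cc: URIs (this is an illustration of the abstract model, not a Creative Commons implementation):

```python
# Sketch of the ccREL License abstract model: a license as three sets
# of values for cc:permits, cc:requires, and cc:prohibits. Short names
# stand in for the full cc: URIs. Populated from the Attribution
# Share-Alike v3.0 description given in the text.
from dataclasses import dataclass, field

@dataclass
class License:
    permits: set = field(default_factory=set)
    requires: set = field(default_factory=set)
    prohibits: set = field(default_factory=set)

BY_SA = License(
    permits={"Reproduction", "Distribution", "DerivativeWorks"},
    requires={"Attribution", "ShareAlike", "Notice"},
)

def allows(lic, action):
    """A use is allowed if it is permitted and not prohibited.
    (Uses already allowed by copyright law, e.g. fair use, are
    unaffected by this model.)"""
    return action in lic.permits and action not in lic.prohibits

print(allows(BY_SA, "DerivativeWorks"))
print(allows(BY_SA, "CommercialUse"))
```

A tool built this way keeps working when new permission or requirement values appear in future licenses: unknown values simply land in the sets and can still be listed for the user.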
While the previous examples illustrate ccREL using the RDF/XML and N3 notations, ccREL is meant to be independent of any particular syntax for expressing RDF triples. To create compliant ccREL implementations, publishers need only arrange that tool builders can extract RDF triples for the relevant ccREL properties--typically only the Work properties, since Creative Commons provides the License properties--through a discoverable process. We expect that different publishers will do this in different ways, using syntaxes of their choice that take into account the kinds of environments they would like to provide for their users. In each case, however, it is the publisher's responsibility to associate their pages with appropriate extraction mechanisms and to arrange for these mechanisms to be discoverable by tool builders.
Creative Commons also recommends concrete ccREL syntaxes that tool builders should recognize by default, so that publishers who do not want to be explicitly concerned with extraction mechanisms have a clear implementation path. These recommended syntaxes--RDFa for HTML Web pages, and XMP for free-floating content--are described in the following sections. This section presents the principles underlying our recommendations.
Licensing information for a Web document will be expressed in some form of HTML. What properties would an ideal HTML syntax for expressing Creative Commons terms exhibit? Given the use cases we've observed over the past several years, we can call out the following desiderata:
Some important works are not typically conveyed via HTML. Examples are MP3s, MPEGs, and other media files. The technique for embedding licensing data into these files should achieve the following design principles:
Consider the abstract model for ccREL. Here, again, are the triples from the Lessig blog example, expressed in N3.13
@prefix xhtml: <> . @prefix cc: <> . <> xhtml:license <> . <> cc:attributionName "Lawrence Lessig" . <> cc:attributionURL <> .
The Web page to which this information refers typically already contains some HTML that describes this same information (redundantly), in human-readable form, for example:
<div> This page, by <a href=""> Lawrence Lessig </a>, is licensed under a <a href=""> Creative Commons Attribution License </a>. </div>
What we would like is a way to quickly augment this HTML with just enough structure to enable the extraction of the RDF triples, using the principles articulated above, including, notably, Don't Repeat Yourself: the existing markup and links should be used both for human and machine readability.
RDFa was designed by the W3C with Creative Commons' input. The design was motivated in part by the principles noted above. Using existing HTML properties and a handful of new ones, RDFa enables a chunk of HTML to express RDF triples, reusing the content wherever possible. For example, the HTML above would be extended by including additional attributes within the HTML anchor tags as follows:
<div about="" xmlns: This page, by <a property="cc:attributionName" rel="cc:attributionURL" href=""> Lawrence Lessig </a>, is licensed under a <a rel="license" href=""> Creative Commons Attribution License </a>. </div>
The rules for understanding the meaning of the above markup are as follows:
The fragment of HTML (within the div) is entirely self-contained (and thus remix-friendly). Its meaning would be preserved if it were copied and pasted into another Web page. The data's structure is local to the data itself: a human looking at the page could easily identify the structured data by pointing to the rendered page and finding the enclosing chunk of HTML. In addition, the clickable links and rendered author names gain semantic meaning without repeating the core data. Finally, as this is embedded RDF, the extensibility and independence properties of RDF vocabularies are automatically inherited: anyone can create a new vocabulary or reuse portions of existing vocabularies.
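The extraction described above can be approximated with a very small parser: a rel attribute paired with href yields a triple whose object is the link target, while a property attribute yields a triple whose object is the element's text content. This is a rough stdlib sketch only; real RDFa processing (prefix mappings, @about chaining, nested subjects) is considerably richer.

```python
# Rough sketch of RDFa triple extraction using only the stdlib HTML
# parser. Handles just the rel/property patterns from the example,
# with a fixed subject; a conforming RDFa processor does far more.
from html.parser import HTMLParser

class TinyRDFa(HTMLParser):
    def __init__(self, subject):
        super().__init__()
        self.subject = subject
        self.triples = []
        self._pending = None  # property whose object is the element text

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "rel" in a and "href" in a:
            # rel + href: the object is the link target
            self.triples.append((self.subject, a["rel"], a["href"]))
        if "property" in a:
            # property: the object is the rendered text content
            self._pending = a["property"]

    def handle_data(self, data):
        if self._pending and data.strip():
            self.triples.append((self.subject, self._pending, data.strip()))
            self._pending = None

markup = ('<a property="cc:attributionName" rel="cc:attributionURL" '
          'href="http://lessig.org/">Lawrence Lessig</a>')
p = TinyRDFa("http://lessig.org/blog/")
p.feed(markup)
print(p.triples)
```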
Of course, one can continue to add additional data, both visible and structured. Figure 2 shows a more complex example that includes all Work properties currently supported by Creative Commons, including how this HTML+RDFa would be rendered on a Web page. Notice how properties can be associated with HTML spans as well as anchors, or in fact with any HTML elements--see the RDFa specification for details.
The examples in this section illustrate how publishers can specify Work properties. One can also use RDFa to express License properties. This is what Creative Commons does with the license description pages on its own site, as described below in section 7.2.
Microformats are a set of simple, open data formats "designed for humans first and machines second." They provide domain-specific syntaxes for annotating data in HTML. At the moment, the two widely deployed "compound" microformats annotate contact information (hCard) and calendar events (hCal). Of the "elemental" microformats, those meant to annotate a single data point, the most popular is rel-tag, used to denote a "tag" on an item, e.g. a blog post. Another elemental microformat is rel-license, meant to indicate the current page's license; conveniently, its syntax overlaps with RDFa: rel="license". Other microformats may, over time, integrate Creative Commons properties, for example when licensing images, videos, and other multimedia content.14
Microformat designers have focused on simplicity and readability, and Creative Commons encourages publishers who use microformats to make it easy for tool builders to extract the relevant ccREL triples. Nonetheless, microformats' syntactic simplicity comes at the cost of independence and extensibility, which makes them limited from the Creative Commons perspective.
For example, every time a Creative Commons license needs to be expressed in a new context--e.g. videos instead of still images--a new microformat and syntax must be designed, and all parsers must then, somehow, become aware of the change. It is also not obvious how one might combine different microformats on a single Web page, given that the syntax rules may differ and even conflict from one microformat to the next.15 Finally, when it comes time to express complex data sets with ever expanding sets of properties, e.g., scientific data, microformats do not appear to scale appropriately, given their lack of vocabulary scoping and general inability to mix vocabularies from independently developed sources--the kind of mixing that is enabled by RDF's use of namespaces.
Thus, Creative Commons does not recommend any particular microformat syntax for ccREL, but we do recommend a method for ensuring that, when publishers use microformats, tool builders can extract the corresponding ccREL properties: use an appropriate profile URL in the header of the HTML document.16 This profile URL significantly improves the independence and extensibility of microformats by ensuring that the tools can find the appropriate parser code for extracting the ccREL abstract model from the microformat, without having to know about all microformats in advance. One downside is that the microformat syntax then becomes less remix-friendly, with two disparate fragments: one in the head to declare the profile, and one in the body to express the data. Even so, the profile approach is likely good enough for simple data. It is worth noting that this use of a profile URL is already recommended as part of microformats' best practices, though it is unfortunately rarely implemented today in deployed applications.
Not all documents on the web are HTML: one popular syntax for representing structured data is XML. Given that XML is a machine-readable syntax, often with a strict schema depending on the type of data expressed, not all of the principles we outlined are useful here. In particular, visual locality is not relevant when the reader is a machine rather than a human, and remix-friendliness doesn't really apply when XML fragments are rarely remixable in the first place, given schema validation. Thus, we focus on independence and extensibility, as well as DRY.
When publishing Creative Commons licensing information inside an XML document, Creative Commons recommends exposing a mechanism to extract the ccREL abstract model from the XML, so that Creative Commons tools need not know about every possible XML schema ahead of time. The W3C's GRDDL recommendation performs exactly this task by letting publishers specify, either in each XML document or once in an XML schema, an XSL Transformation that extracts RDF/XML from XML.17 Consider, for example, a small extension of the Atom XML publishing schema for news feeds:18
<entry> <title>Lessig 2.0 - the site</title> <link rel="alternate" type="text/html" href="" /> <id>tag:lessig.org,2007:/blog//1.3401</id> <published>2007-06-25T19:44:48Z</published> <link rel="license" type="text/html" href="" /> </entry>
An appropriate XSL Transform can easily process this data to extract the ccREL property that specifies the license:
<rdf:RDF <cc:license </rdf:RDF>
Similarly, the Open Archives Initiative defines a complex XML schema for library resources.19 These resources may include megabytes of data, including sometimes the entire resource in full text. Using XSLT, one can extract the relevant ccREL information, exactly as above. Using GRDDL, the Open Archives Initiative can specify the XSLT in its XML schema file, so that all OAI documents are automatically transformable to RDF/XML, which immediately conveys ccREL.
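The extraction that the XSL Transform performs can also be illustrated directly in code: locate the rel="license" link in an Atom entry and emit a ccREL triple. The Atom namespace URI below is the standard one; the entry content and license URL are adapted/hypothetical, since the example's link targets were elided.

```python
# Sketch of a GRDDL-style extraction done in code rather than XSLT:
# pull the rel="license" link out of an Atom entry and emit a triple.
# The license href here is a placeholder, not the actual target.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
entry_xml = """
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>tag:lessig.org,2007:/blog//1.3401</id>
  <link rel="license" type="text/html" href="http://example.org/license"/>
</entry>
"""

entry = ET.fromstring(entry_xml)
subject = entry.findtext(ATOM + "id")
license_href = None
for link in entry.findall(ATOM + "link"):
    if link.get("rel") == "license":
        license_href = link.get("href")

triple = (subject, "cc:license", license_href)
print(triple)
```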
Interestingly, because RDF can be expressed using the RDF/XML syntax, one might be tempted to use RDF/XML directly inside an XML document with an appropriate schema definition that enables such direct embedding. This very approach is taken by SVG,20 and there are cases of SVG graphics that include licensing information using directly embedded RDF/XML.
This approach can be made ccREL compliant with very little work--a simple GRDDL transform, declared in the XML schema definition, that extracts the RDF/XML and expresses it on its own. Note that, for ccREL compliance, this transform, although simple, is necessary. The reason for its necessity goes to the crux of the ccREL principles: without such a transform provided by each XML schema designer, tools would have to be aware of all the various XML schemas that include RDF/XML in this way. For extensibility and future-proofing, ccREL asks that publishers of the schema make the effort to provide the extraction mechanism. With explicit extraction mechanisms, publishers have a little bit more work to do, while tool builders are immediately empowered to create generic programs that can process data they have never seen before.
We turn to the precise Creative Commons recommendation for embedding ccREL metadata inside MP3s, Word documents, and other "free-floating" content that is often passed around in a peer-to-peer fashion, via email or P2P networks. We note that there are two distinct issues to resolve:
We handle accountability for free-floating content by connecting any free-floating document to a Web page, and placing the ccREL information on that Web page. Thus, publishers of free-floating content are just as accountable as publishers of Web-based content: rights are always expressed on a Web page. The connection between the Web page and the binary file it describes is achieved using a cryptographic hash, i.e. a fingerprint, of the file. For example, the PDF file of Lawrence Lessig's "Code v2" will contain a reference to, which itself will contain a reference to the SHA1 hash of the PDF file. The owner of the URL is thus taking responsibility for the ccREL statements it makes about the file.
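The fingerprint mechanism described above can be sketched with the standard library: the urn:sha1 scheme used in the example encodes the 20-byte SHA-1 digest of the file in Base32, yielding a 32-character identifier like the one shown for "Code v2". The function name and input data are illustrative.

```python
# Sketch of computing the urn:sha1 fingerprint that ties a
# free-floating file to its Web statement. A 20-byte SHA-1 digest
# Base32-encodes to exactly 32 characters with no padding.
import base64
import hashlib

def sha1_urn(data: bytes) -> str:
    digest = hashlib.sha1(data).digest()     # 20 bytes
    b32 = base64.b32encode(digest).decode()  # 32 chars, no '=' padding
    return "urn:sha1:" + b32

urn = sha1_urn(b"example file contents")
print(urn)
```

A tool verifying a license would recompute this urn from the downloaded file and check that it matches the fingerprint published on the Web statement page.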
For expression, we recommend XMP. XMP has the broadest support of any embedded metadata format (perhaps it is the only such format with anything approaching broad support) across many different media formats. With the exception of media formats where a workable embedded metadata format is already ubiquitous (e.g. MP3), Creative Commons recommends adopting XMP as an embedded metadata standard and using the following two fields in particular:
Consider our example of Lessig's "Code v2", a Creative Commons licensed, community-edited second version of his original "Code and Other Laws of Cyberspace." The PDF of this book, available at, contains XMP metadata as follows:
<?xpacket begin="" id=""?> <x:xmpmeta xmlns: <rdf:RDF xmlns: <rdf:Description rdf: <xapRights:Marked>True</xapRights:Marked> <xapRights:WebStatement rdf: </rdf:Description> ... <rdf:Description rdf: <cc:license rdf: </rdf:Description> </rdf:RDF> </x:xmpmeta> <?xpacket end="r"?>
Notice how this is RDF/XML, including a xapRights:WebStatement pointer to the web page, which itself contains RDFa:
Any derivative must be licensed under a <a about="urn:sha1:W4XGZGCD4D6TVXJSCIG3BJFLJNWFATTE" rel="license" href=""> Creative Commons Attribution-ShareAlike 2.5 License </a>
This RDFa references the PDF using its SHA1 hash--a secure fingerprint of the file that matches only the given PDF file--and declares its Creative Commons license. Thus, anyone who finds the "Code v2" PDF can find its WebStatement pointer, look up that URL, verify that it properly references the file via its SHA1 hash, and confirm the file's Creative Commons license on the web-based deed.
This section describes several examples, first by publishers of Creative Commons licensed works, then by tool builders who wish to consume the licensing information. Some of these examples include existing, real implementations of ccREL, while others are potential implementations and applications we believe would significantly benefit from ccREL.
Publishers can mix ccREL with other markup with great flexibility. Thanks to ccREL's independence and extensibility principle, publishers can use ccREL descriptions in combination with additional attributes taken from other publishers, or with entirely new attributes they define for their own purposes. Thanks to ccREL's DRY principle, even small publishers get the benefit of updating data in one location and automatically keeping the human- and machine-readable versions in sync.
A common use case for Web publishers working in a mashup-friendly world is the issue of mixing content with different licenses. Consider, for example, what happens if Larry Lessig's blog reuses an image published by another author and licensed for non-commercial use. Recall that Lessig's Blog is licensed to permit commercial use.
The HTML markup in this case is straightforward:
<div about="" xmlns: This page, by <a property="cc:attributionName" rel="cc:attributionURL" href=""> Lawrence Lessig </a>, is licensed under a <a rel="license" href=""> Creative Commons Attribution License </a>. <div about="/photos/constitution.jpg"> The photo of the constitution used in this post was originally published by <a rel="dc:source" href="">Joe Example</a>, and is licensed under a <a rel="license" href=""> Creative Commons Attribution-NonCommercial License </a>. </div> </div>
The inner <div> uses the about attribute to indicate that its statements concern the photo in question. A link to the original source is provided using the dc:source property, and a different license pointer is given for this photo using the normal anchor with a rel="license" attribute.
Bitmunk is a service that supports artists with a legal, copyright-aware, content distribution service. The service needed a mechanism for embedding structured data about songs and albums directly into their web pages, including licensing information, so that browser add-ons might provide additional functionality around the music, e.g. comparing the price of a particular song at various online stores. Bitmunk first created a microformat called hAudio. They soon realized, however, that they would be duplicating fields when it came time to define hVideo, and that these duplicated fields would no longer be compatible with those of hAudio. More immediately problematic, hAudio's basic fields, e.g. title, would not be compatible with other "title" fields of other microformats.
Thus, Bitmunk created the hAudio RDFa vocabulary. The design process for this vocabulary immediately revealed separate, logical components: Dublin Core for basic properties e.g. title, Creative Commons for licensing, a new vocabulary called "hMedia" for media-specific properties e.g. duration, and a new vocabulary called "hCommerce" for transaction-specific properties e.g. price. Bitmunk was thus able to reuse two existing vocabularies and add features. It was also able to clearly delineate logical components to make it particularly easy for other vocabulary developers to reuse only certain components of the hAudio vocabulary, e.g. hCommerce. Meanwhile, all Creative Commons licensing information is still expressible without alteration.
Figure 3 shows an excerpt of the markup available from Bitmunk at. Note that this particular sample is not CC-licensed: it uses standard copyright. A CC-licensed album would be marked up in the same way, with a different license value: Bitmunk was able to develop its vocabulary independent of ccREL, and can now integrate with ccREL simply by adding the appropriate attributes.
Flickr hosts approximately 50 million CC-licensed images (as of October 2007). Currently Flickr denotes a license on each image's page with a link to the relevant license qualified by rel="license". This ad-hoc convention, encouraged by the microformats effort, was "grandfathered" into RDFa thanks to the reserved HTML keyword license. Unfortunately, it works only for simple use cases, with a single image on a single page. This approach breaks down when multiple images are viewed on a single page, or when further information, such as the photographer's name, is required.
Flickr could significantly benefit from the ccREL recommendations, by providing, in addition to simple license linking:
In addition, Flickr recently deployed "machine tags," where photographers can add metadata about their images using custom properties. Flickr's machine tags are, in fact, a subset of RDF, which can be represented easily using RDFa. Thus, Creative Commons licensing can be easily expressed alongside Flickr's machine tags using the same technology, without interfering.
Figure 4 shows how the CC-licensed photo at would be marked up using ccREL, including the machine tag upcoming:event that associates the photo with an event at.
Nature, one of the world's top scientific journals, recently launched a web-only "precedings" site, where early results can be announced rapidly in advance of full-blown peer review. Papers on Nature Precedings are distributed under a Creative Commons license. Like Flickr, Nature Precedings currently uses CC's prior metadata recommendation: RDF/XML included in an HTML comment. Nature could significantly benefit from the ccREL recommendation, which would let them publish structured Creative Commons licensing information in a more robust, more extensible, and more human-readable way.
Consider, for example, the Nature Precedings paper at. Figure 5 shows how the markup at that page can be extended with simple RDFa attributes, using the Dublin Core, Creative Commons, FOAF, and PRISM publication vocabularies.21 Notice how any HTML element, including the existing H1 used for the title, can be used to carry RDFa attributes. Figure 6 shows how this page could appear in an RDFa-aware browser.
Open publication of scientific data on the Internet has begun, with the Nature Publishing Group recently announcing the release of genomic data sets under a Creative Commons license.22 Beyond simple licensing, thousands of new metadata vocabularies and properties are being developed to express research results. Creative Commons, through its Science Commons subsidiary,23 is playing an active role here, working to remove barriers to scientific cooperation and sharing. Science Commons is specifically encouraging the creation of RDF-based vocabularies for describing scientific information and is stimulating collaboration among research communities with tools that build on RDF's extensibility and interoperability.
As these vocabularies become more widespread, it's easy to envision uses of ccREL and RDFa that extend the bibliographic and licensing markup to include these new scientific data tags. Tools may then emerge to take advantage of this additional markup, enabling dynamic, distributed scientific collaboration through interoperable referencing of scientific concepts.
Imagine, for example, an excerpt from a (hypothetical) Web-based newsletter about genomics research, which references an (actual) article from BioMed Central Neurosciences, as it might be rendered by a browser (Figure 7). The words "recent study on rat brains" and "CEBP-beta" are clickable links, leading, respectively, to a Web page for the paper, and a Web page that describes the protein CEBP-beta in the Uniprot protein database.
The RDFa generating this excerpt could be
<div xmlns: recent study on rat brains </a> by von Gertten <em>et. al.</em> reports that <div about=""> <span property="rdfs:label">inflammatory stimuli</span> upregulate expression of <a rel="OBO_REL:precedes" href=""> <span property="rdfs:label">CEPB-beta</span> </a> </div> </div>
This RDFa not only links to the paper in the usual way, but it also provides machine-readable information that this is a statement about inflammatory stimuli (as defined by the Open Biomedical Ontologies initiative) activating expression of the CEPB protein (as specified in the UniProt database of proteins). Since the URI of the protein is visually meaningful, it can be marked up with a clickable link that also provides the object of a triple.
A CC license grants certain permissions to the public; others may be available privately. A coarse-grained "more permissions" link indicates this availability. Creative Commons has branded this scheme CC+. Also, since CC licenses are non-exclusive, other options for a work may be offered in addition to a CC license. Here is an example from, showing the use of RDFa to annotate the standard CC license image and also the Magnatune logo:
<a href="" rel="license"> <img src=""> </a> <a href="" xmlns: <img border=0 </a>
This snippet contains two statements: the public CC license and the availability of more permissions. Sophisticated users of this protocol will one day publish company, media, or genre-specific descriptions of the permissions available privately at the target URL. Tools built to recognize a Creative Commons license will still be able to detect the Creative Commons license after the addition of the morePermissions property, which is exactly the desired behavior. More sophisticated versions of the tools could inform the user that "more permissions" may be granted by following the indicated link.
As mentioned above, Creative Commons doesn't expect content publishers to deal with license properties. However, others may find themselves publishing licenses using ccREL's license properties. Here, too, RDFa is available as a framework for creating license descriptions that are human-readable, from which automated tools can also extract the required properties.
One example of this is Creative Commons itself, and the publication of the "Commons Deeds". Figure 8 shows the HTML source of the Web page at which describes the U.S. version of the CC Attribution-NoDerivatives license. As this markup shows, any HTML element, including LI, can carry RDFa attributes. The href attribute, typically used for clickable links, can be used to indicate a structured relation, even when the element to which it is attached is not an HTML anchor.
In this markup, the "Attribution-NoDerivatives" license permits distribution and reproduction, while requiring attribution and notice. Recall that ccREL is meant to be interpreted in addition to the baseline copyright regulation. In other words, the restriction "NoDerivatives" is not expressed in ccREL, since that is already a default in copyright law. The opposite, where derivative works are allowed, would be denoted with an additional CC permission.
Tool builders who then want to extract RDF from this page can do so using, for example, the W3C's RDFa Distiller,24 which, when given the CC Deed URL, produces the RDF/XML serialization of the same structured data, ready to be imported into any programming language with RDF/XML support:
<?xml version="1.0" encoding="utf-8"?> <rdf:RDF xmlns: <rdf:Description rdf: <cc:requires rdf: <cc:requires rdf: <cc:permits rdf: <cc:permits rdf: </rdf:Description> </rdf:RDF>
MozCC is an extension to Mozilla-based browsers for extracting and displaying metadata embedded in web pages. MozCC was initially developed in 2004 as a work-around for some of the deficiencies in the prior Creative Commons metadata recommendation. That version of MozCC specifically looked for Creative Commons RDF in HTML comments, a place most other parsers ignore. Once the metadata was detected, MozCC provided users with a visual notification, via icons in the status bar, of the Creative Commons license. In addition, MozCC provided a simple interface to expose the work and license properties.
Since the initial development, MozCC has been rewritten to provide general purpose extraction of all RDFa metadata, as well as a specialized interface for ccREL. The status-bar icons and detailed metadata visualization features have been preserved and expanded. A MozCC user receives immediate visual cues when he encounters a page with RDFa metadata, including specific CC-branded icons when the metadata indicates the presence of a Creative Commons license. The experience is pictured in Figure 9.
MozCC processes pages by listening for load events and then calling one or more metadata extractors on the content. Metadata extractors are JavaScript classes registered on browser startup; they may be provided by MozCC or other extensions. MozCC ships with extractors for all current and previous Creative Commons metadata recommendations, in particular ccREL. Each registered extractor is called for every page. The extractors are passed information about the page to be processed, including the URL and whether the page has changed since it was last processed. This allows individual extractors to determine whether re-processing is needed. The RDFa extractor, for example, can stop processing if it sees the document hasn't been updated. An extractor which looks for metadata specified in external files via <link> tags, however, would still retrieve them and see if they have been updated.
The results of each extractor are stored in a local metadata store. In the case of Firefox, this is a SQLite database stored as part of the user's profile. The local metadata store serves as an abstraction layer between the extractors and user interface code. The contents are visible through the Page Info interface. The current software only exposes this information as status bar icons; one can imagine other user interfaces (provided by MozCC or other extensions) which expose the metadata in different ways.
Operator is an add-on to the Firefox browser that detects microformats and RDFa in the web pages a user visits. Operator can be extended with "action scripts" that are triggered by specific data found in the web page. The regions of the page that contain data are highlighted so that users can visually detect and receive contextual information about the data.
It is relatively straightforward to write a Creative Commons action script that finds all Creative Commons licensed content inside a web page by looking for the RDFa syntax. This allows users to easily identify their rights and responsibilities when reusing content they find on the web. No matter the item, even types of items with currently unanticipated properties, the simple action script can detect them and display each item's name and rights description.
Putting aside for now the definition of some utility functions, an action handler for the license property is declared as follows:
RDFa.DEFAULT_NS.cc = "http://creativecommons.org/ns#";
RDFa.ns.cc = function(name) {
    return RDFa.DEFAULT_NS.cc + name;
};

var view_license = {
    description: "View License",
    shortDescription: "View",
    scope: {
        semantic: {
            "RDF": {
                property:  RDFa.ns.cc("license"),
                defaultNS: RDFa.ns.cc("")
            }
        }
    },
    doAction: function(semanticObject, semanticObjectType, propertyIndex) {
        if (semanticObjectType == "RDF") {
            return semanticObject.license;
        }
    }
};
SemanticActions.add("view_license", view_license);
Once this action script is enabled, Operator automatically lights up Creative Commons licensed "Resources" it finds on the web. For example, browsing to the Lessig blog, Operator highlights two resources that are CC-licensed: the Lessig Blog itself, and a Creative Commons licensed photo used in one of the blog posts. The result is shown in Figure 10.
Creative Commons wants to make it easy for artists and scientists to build upon the works of others when they choose to: licensing your work for reuse and finding properly licensed works to reuse should be easy. To achieve this on the technical front, we have defined ccREL, an abstract model for rights expression based on the W3C's RDF, and we recommend two syntaxes for web-based and free-floating content: RDFa and XMP, respectively. The major goal of our technological approach is to make it easy to publish and read rights expression data now and in the future, when the kinds of licensed items and the data expressed about them go far beyond what we can imagine today. By using RDF, ccREL links Creative Commons to the fast-growing RDF data interoperability infrastructure and its extensive developer toolset: other data sets can be integrated with ccREL, and RDF technologies, e.g. data provenance with digital signatures, can eventually benefit ccREL.
We believe that the technologies we have selected for ccREL will enable the kind of powerful, distributed technological innovation that is characteristic of the Internet. Anyone can create new vocabularies for their own purposes and combine them with ccREL as they please, without seeking central approval. Just as we did with the legal text of the licenses, we aim to create the minimal infrastructure required to enable collaboration and invention, while letting it flourish as an organic, distributed process. We believe ccREL provides this primordial technical layer that can enable a vibrant application ecosystem, and we look forward to the community's innovative ideas that can now freely build upon ccREL.
The authors wish to credit Neeru Paharia, past Executive Director of Creative Commons, for the "free-floating" content accountability architecture, Manu Sporny, CEO of Bitmunk, for the Creative Commons Operator code, and Aaron Swartz for the original Creative Commons RDF data model and metadata strategy. More broadly, the authors wish to acknowledge the work of a number of W3C groups, in particular all members of the RDF-in-HTML task force (Mark Birbeck, Jeremy Carroll, Michael Hausenblas, Shane McCarron, Steven Pemberton, and Elias Torres), the Semantic Web Deployment Working Group chaired by Guus Schreiber and David Wood, and the tireless W3C staff without whom there would be no RDFa, GRDDL, or RDF, and thus no ccREL: Eric Miller, Ralph Swick, Ivan Herman, and Dan Connolly. | http://www.w3.org/Submission/2008/SUBM-ccREL-20080501/ | CC-MAIN-2018-09 | en | refinedweb |
Using Open XML Schema with .NET
Using Open Elements and Attributes
When an application encounters an XML document that contains some unexpected elements, there are multiple outcomes that you can choose from. If your only requirements are that debugging information be logged during deserialization, you can simply handle the UnknownElement and UnknownAttribute events, as shown earlier. But, what if you actually need to preserve the contents and structure of the data from the XML document so that it can be passed to another system component? The .NET Framework includes two attributes that enable you to capture unknown XML elements and attributes:
- XmlAnyElementAttribute is used to control how unknown elements are stored in an object.
- XmlAnyAttributeAttribute is used to control how unknown attributes are stored in an object.
XmlAnyElementAttribute is attached to a field, property, parameter, or return value and is essentially a "wild card" that is capable of converting to and from XML elements during serialization. You'll probably want to use it with an array of XmlNode objects, because each unknown element from the XML document will be placed into its own XmlNode object. If you use the attribute with a single XmlNode object, you'll lose data if multiple unknown elements are received. The XmlAnyElement attribute is typically used like this:
XmlNode[] _openElements = null;

[XmlAnyElement()]
public XmlNode[] OpenElements
{
    get { return _openElements; }
    set { _openElements = value; }
}
The XmlAnyAttribute attribute is used in the same way to capture unknown attributes:

XmlNode[] _openAttributes = null;

[XmlAnyAttribute()]
public XmlNode[] OpenAttributes
{
    get { return _openAttributes; }
    set { _openAttributes = value; }
}
When you use XmlAnyElement and XmlAnyAttribute attributes, the serializer will not generate UnknownElement and UnknownAttribute events.
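Outside .NET, the same capture-the-unknowns idea can be sketched in a few lines. The following Python sketch is only an illustration of the pattern, not the .NET API; the Shipment/OrderNo element names are hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical set of fields the current version of the schema knows about.
KNOWN_FIELDS = {"OrderNo", "Location"}

def deserialize(xml_text):
    """Return (known fields, preserved unknown elements) -- the XmlAnyElement idea."""
    root = ET.fromstring(xml_text)
    known = {child.tag: child.text for child in root if child.tag in KNOWN_FIELDS}
    # Unexpected elements are kept, not dropped, so they can be passed along intact.
    open_elements = [child for child in root if child.tag not in KNOWN_FIELDS]
    return known, open_elements

known, extras = deserialize(
    "<Shipment><OrderNo>42</OrderNo><Priority>high</Priority></Shipment>"
)
```

Here the unexpected Priority element survives deserialization and could be handed to another system component, just as an XmlNode[] member preserves it in .NET.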
Validating with Schema and XmlAnyElement
Now, let's consider how open elements affect validation. Many systems use XML schema to validate incoming XML documents. For a large number of applications, the default non-validated behavior is undesirable, for some of the same reasons that most developers prefer strongly typed programming languages. When messages arrive at a server application, these developers want to know whether the data arrives with unexpected formatting or content. The standard answer in this case is to use XML schema, which will reject XML documents that are presented with an unrecognized structure.
However, blindly using XML schema simply exchanges one problem for another. As discussed earlier, XML documents that are agile offer advantages over fixed structures. When XML is used as the common language for data exchange in an enterprise, XML schema can cause undesirable rigidity in the communication infrastructure: if documents don't precisely match the expected structure, they're rejected.
If a typical schema (such as one inferred from an existing XML instance document) is used to validate messages, the schema can become a brake on system evolution. Instead of avoiding schema altogether, or attempting to update all components that use a particular schema simultaneously, you can use an XML schema that defines open elements and attributes to create a schema that has just enough structure to validate specific behavior, while maintaining flexibility for extension.
Using an XML schema with open elements and attributes enables you to balance consistency with extendibility. Open elements and attributes create expansion points that enable clients to send XML documents that follow an updated structure to a server, without the need to update all edges of a system simultaneously. It also enables a client to use XML schema that is shared among multiple applications—if a client is updated prior to the server, the server can simply ignore the updated elements for the purposes of schema validation.
The process begins by defining open elements in your schema, using the xs:any wildcard to indicate the part of the content model that's open for undefined elements, as shown in the following code.
<?xml version="1.0" encoding="utf-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Shipment">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="OrderNo" type="xs:string" />
        <xs:element name="Location">
          <xs:complexType>
            <xs:sequence>
              <xs:element ... />
              <xs:element ... />
              <xs:element ... />
              <xs:sequence>
                <xs:any namespace="##any" processContents="skip"
                        minOccurs="0" maxOccurs="unbounded" />
              </xs:sequence>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
        <xs:sequence>
          <xs:any namespace="##any" processContents="skip"
                  minOccurs="0" maxOccurs="unbounded" />
        </xs:sequence>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
The definition of a sequence of zero or more xs:any elements in this schema enables any number of additional XML nodes to be included in the XML document inside a Location element as well as immediately after the element.
In this example, the namespace for the open elements is declared as '##any', which is actually the default value. Alternatively, you can supply the specific namespace that is allowed, or another predefined namespace token:
- ##other: The XML must be from a namespace other than the target namespace.
- ##local: The XML must not be in a namespace.
- ##targetNamespace: The XML must be in the target namespace.
A specific allowed namespace can also be provided for this attribute.
The processContents attribute in the xs:any element is set to 'skip' in this example, which instructs a schema validator to ignore the nodes that are part of the open element. A complete list of options follows:
- lax: Enforce schema if a namespace is declared and the validator has access to the schema definition.
- skip: No schema enforcement.
- strict: Always enforce schema for this open element.
Open sets of attributes can also be defined using a similar syntax, as shown below:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Car">
    <xs:complexType>
      <xs:sequence>
        <xs:element ... />
      </xs:sequence>
      <xs:anyAttribute namespace="##any" processContents="skip" />
    </xs:complexType>
  </xs:element>
</xs:schema>
Open attributes follow the same rules as elements with regard to namespace and processContents. This schema fragment defines an element named Car that is allowed to have additional attributes without namespace restriction or schema enforcement.
Taken to an extreme, consider a gateway component that only tests a Shipment XML message for the presence of an OrderNo element. The schema used by this component could be reduced to something like this:
<?xml version="1.0" encoding="utf-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Shipment">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="OrderNo" type="xs:string" />
        <xs:sequence>
          <xs:any namespace="##any" processContents="skip"
                  minOccurs="0" maxOccurs="unbounded" />
        </xs:sequence>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
This enables documents to carry additional elements that pass as schema-valid until an updated schema is provided. It's also useful when performing SOA-like validation, where different actors have diverse validation needs: endpoint components and applications, which have up-to-date knowledge of message requirements, can validate fully, while components that serve as intermediaries can perform coarser validation on only those portions of the message schema that they require.
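Such a gateway check is easy to sketch. The fragment below (Python used only for illustration, with the hypothetical Shipment/OrderNo names from the example) performs just the coarse presence test an intermediary needs:

```python
import xml.etree.ElementTree as ET

def gateway_accepts(message_xml):
    """Coarse, intermediary-style validation: require only that OrderNo is present."""
    root = ET.fromstring(message_xml)
    return root.find("OrderNo") is not None

# Extra, unanticipated elements do not cause rejection.
ok = gateway_accepts("<Shipment><OrderNo>17</OrderNo><Rush>true</Rush></Shipment>")
rejected = gateway_accepts("<Shipment><Rush>true</Rush></Shipment>")
```

The first message passes despite the unanticipated Rush element; only a message actually missing OrderNo is turned away.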
Summary
In this article, I've discussed two complementary approaches to using XML messaging in enterprise applications. The XmlAnyElement and XmlAnyAttribute attributes are used to capture open XML content in your .NET classes. The xs:any and xs:anyAttribute schema elements are used to define expansion points and enable just enough schema validation for system components.
More Information
More information about XML Schema is available on the W3C's XML Schema page.
About the Author
Mickey Williams is a Microsoft C# MVP, and the author of Microsoft Visual C# .NET Core Reference for Microsoft Press. He works as a Principal Consultant for Neudesic, LLC in Southern California, building service-oriented applications for enterprise customers. His weblog can be found online.
| https://www.developer.com/net/article.php/10916_3396691_2/Using-Open-XML-Schema-with-NET.htm | CC-MAIN-2018-09 | en | refinedweb |
Really Simple Tricks to Speed up Your CLIs 10 Times Using vSphere Java API
I recently had a short discussion with my colleague about implementing CLIs with the vSphere Java API. One problem is that if you have multiple commands to run, each of them connects to the server and authenticates over and over. You'd better remember to log out the connection each time after you are done, or you will leave many unused connections on the server that could significantly slow down your ESX or vCenter server (read this blog for details).
You can have two solutions to this problem. The first one is to have your own "interpreter" command. After you type the command, it shows you a prompt for more sub-commands. It's very much like the "ftp" command in that sense. You can have subcommands like "login" or "open" or "connect" for connecting to a server, and other commands. The "interpreter" command can then hold the ServiceInstance object until it is closed at the end.
You can save about 0.3 to 0.5 second on creating a new HTTPS connection and logging in for each command after the first one. It's not a big deal given that the vSphere Java API has already hugely reduced that time from the 3 to 4 seconds it took with Apache Axis. So if you switch to the vSphere Java API, you get an instant tenfold performance gain. Still, if you have many commands to run, it could be a decent saving.
With this solution, you can also implement batch mode in which you can save all your commands into a file and then execute them all with one command. You can find many examples like PowerShell which support interactive mode and batch mode.
Another solution is just having normal commands. The problem becomes how to avoid the authentication for each command after the first. Luckily, we have something for you in the API. The ServiceInstance type has a constructor like the following. It was originally designed for vSphere Client plug-ins, which reuse the same session ID as the vSphere Client.
public ServiceInstance(URL url, String sessionStr, boolean ignoreCert, String namespace)
The expected session string is as follows. Note that you have to escape the double quotes if you include the string in your Java source code.
vmware_soap_session="B3240D15-34DF-4BB8-B902-A844FDF42E85"
What you can do is get the session ID and save it to a well-known temporary file. Each command checks whether the file exists. If yes, it loads the session string and passes it into the above constructor. By default a session expires after 30 minutes on the server side. You have to guard against this in your code with a normal login if that happens.
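The file-caching scheme just described can be sketched in a few lines. This is not vSphere API code, only an illustration of the caching pattern; the file name and TTL are assumptions:

```python
import json
import os
import tempfile
import time

SESSION_FILE = os.path.join(tempfile.gettempdir(), "vi_session.json")  # assumed location
SESSION_TTL = 30 * 60  # seconds; matches the 30-minute server-side expiry

def save_session(session_str):
    """Record the session string plus a timestamp for later reuse."""
    with open(SESSION_FILE, "w") as f:
        json.dump({"session": session_str, "saved_at": time.time()}, f)

def load_session():
    """Return the cached session string, or None if missing or likely expired."""
    try:
        with open(SESSION_FILE) as f:
            data = json.load(f)
    except (OSError, ValueError):
        return None
    if time.time() - data["saved_at"] > SESSION_TTL:
        return None  # too old -- fall back to a normal login
    return data["session"]

save_session('vmware_soap_session="B3240D15-34DF-4BB8-B902-A844FDF42E85"')
```

Each CLI command would call load_session() first and only perform a full login when it returns None.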
“But wait, how can I get the session ID from an existing ServiceInstance?” you may ask. It’s in fact pretty easy. From any ManagedObject like ServiceInstance, you can have one line like this:
String sessionStr = si.getServerConnection().getSessionStr();
It will have the same format as you would pass in to the above constructor.
Hi, Steve, thanks for this tip.
Also I have another question for you: is there any way to retrieve the current CPU and memory usage of an ESX host?
I’ve thought that this info should be in properties of HostSystem, but not found it there.
These are performance-related stats. You may want to check out the PerformanceManager.
Steve
One issue that we had on our end:
On calling si.getServerConnection().getSessionStr();
we got the following:
vmware_soap_session="528857ed-ae51-cb5e-ad97-0691f2fe8056"; Path=/;
which as you can see has some extra characters appended to it as opposed to the version shown in the blog post.
When I passed this whole string into the ServiceInstance constructor, it did give me a ServiceInstance object, but it returned null on some of the methods of ServiceInstance.
To correct this issue we had to extract the session id from the string returned from this method. E.g., we used the regex (.*".*?") to extract this out:
vmware_soap_session="528857ed-ae51-cb5e-ad97-0691f2fe8056"
Now, passing this string into the constructor gives us a working ServiceInstance.
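The commenter's trimming step can be reproduced with a couple of lines. A minimal sketch of the regex idea, not tied to the vSphere API:

```python
import re

def extract_session(raw):
    """Trim trailing cookie attributes such as '; Path=/;' down to the bare token."""
    m = re.match(r'(.*?".*?")', raw)
    return m.group(1) if m else None

raw = 'vmware_soap_session="528857ed-ae51-cb5e-ad97-0691f2fe8056"; Path=/;'
```

The lazy quantifiers stop at the second double quote, so everything after the closing quote is discarded.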
Now, my question is that, as we saw, on passing an incorrect session string to the constructor we still get a ServiceInstance, but a wrong one. How can we test whether it is the right service instance? Simply checking against null fails.
The ServiceInstance is a proxy on the client side. Until you try it, for example by calling an inexpensive method, you never know whether it is valid. So, just call the currentTime() method.
-Steve | http://www.doublecloud.org/2010/10/really-simple-tricks-to-speed-up-your-clis-10-times-using-vsphere-java-api/ | CC-MAIN-2018-09 | en | refinedweb |
Ingo Molnar wrote:
> > +static inline pmd_t native_make_pmd(unsigned long long val)
> > +{
> > +	return (pmd_t) { val };
> > +}
> > +static inline pte_t native_make_pte(unsigned long long val)
> > +{
> > +	return (pte_t) { .pte_low = val, .pte_high = (val >> 32) } ;
> > +}
>
> missing newlines between inline functions.

OK.

> > +#ifndef CONFIG_PARAVIRT
> > +#define pmd_val(x) native_pmd_val(x)
> > +#define __pmd(x) native_make_pmd(x)
> > +#endif /* !CONFIG_PARAVIRT */
>
> no need for the closing !CONFIG_PARAVIRT comment: this define is 2 lines
> long so it's not that hard to find the start of the block. We typically
> do the /* !CONFIG_XX */ comment only for larger blocks, and when
> multiple #endif's intermix.

Yeah, I tend to put them there reflexively. It's so easy for an #endif
to drift away over time, and suddenly you have no idea what's going on.
I agree it's overkill in this case.

> > #define HPAGE_SHIFT 22
> > #include <asm-generic/pgtable-nopmd.h>
> > -#endif
> > +#endif /* CONFIG_X86_PAE */
>
> (for example here the #endif comment is justified.)

Yeah, and it probably started life much closer to the #ifdef...

    J

| https://lkml.org/lkml/2007/3/16/302 | CC-MAIN-2018-09 | en | refinedweb |
CPA 2011
Spontaneous Financing
a. Spontaneous financing is the amount of working capital that arises naturally in the ordinary course of business without the firm's financial managers needing to take deliberate action.
b. Trade credit arises when a company is offered credit terms by its suppliers.
c. Accrued expenses, such as salaries, wages, interest, dividends, and taxes payable, are another source of (interest-free) spontaneous financing.
d. The portion of capital needs that cannot be satisfied through spontaneous means must be the subject of careful financial planning.
Conservative Financing Policy
A firm that adopts a conservative working capital policy seeks to minimize liquidity risk by holding a greater proportion of permanent working capital.
Aggressive Financing Policy
An aggressive working capital policy involves reducing liquidity and accepting a higher risk of short-term cash flow problems in an effort to increase profitability.
Three reasons to hold cash, according to Keynes
1. The transactional motive: to use cash as a medium of exchange
2. The precautionary motive: to provide a cushion for the unexpected
3. The speculative motive: to take advantage of unexpected opportunities
compensating balance
A compensating balance is a minimum balance the bank requires the firm to maintain in its demand (checking) account. Compensating balances are noninterest-bearing and are meant to compensate the bank for various services rendered, such as unlimited check writing.
draft
A draft is a three-party instrument in which one person (the drawer) orders a second person (the drawee) to pay money to a third person (the payee).
payable through draft
A payable through draft (PTD) differs from a check in that (1) it is not payable on demand and (2) the drawee is the payor, not a bank. After the payee presents the PTD to a bank, the bank in turn presents it to the issuer. The issuer then must deposit sufficient funds to cover the PTD. Use of PTDs thus allows a firm to maintain lower cash balances.
zero-balance account
A zero-balance account (ZBA) carries, as the name implies, a balance of $0. At the end of each processing day, the bank transfers just enough from the firm's master account to cover all checks presented against the ZBA that day.
1) This practice allows the firm to maintain higher balances in the master account from which short-term investments can be made. The bank generally charges a fee for this service.
Disbursement float
Disbursement float is the period of time from when the payor puts a check in the mail until the funds are deducted from the payor's account. In an effort to stretch disbursement float, a firm may mail checks to its vendors while being unsure that sufficient funds will be available to cover them all
1) Treasury bills
2) Treasury notes
3) Treasury bonds
1) Treasury bills (T-bills) have maturities of 1 year or less. Rather than bear interest, they are sold on a discount basis.
2) Treasury notes (T-notes) have maturities of 1 to 10 years. They provide the lender with a coupon (interest) payment every 6 months.
3) Treasury bonds (T-bonds) have maturities of 10 years or longer. They provide the lender with a coupon (interest) payment every 6 months.
Repurchase agreements
Repurchase agreements (repos) are a means for dealers in government securities to finance their portfolios. When a company buys a repo, the firm is temporarily purchasing some of the dealer's government securities. The dealer agrees to repurchase them at a later time for a specific (higher) price.
Bankers' acceptances
Bankers' acceptances are drafts drawn by a nonfinancial firm on deposits at a bank. The acceptance by the bank is a guarantee of payment at maturity. The payee can thus rely on the creditworthiness of the bank rather than on that of the (presumably riskier) drawer. Because they are backed by the prestige of a large bank, these
instruments are highly marketable once they have been accepted.
Commercial paper
Commercial paper consists of unsecured, short-term notes issued by large companies that are very good credit risks.
Certificates of deposit
Certificates of deposit (CDs) are a form of savings deposit that cannot be withdrawn before maturity without a high penalty. CDs often yield a lower return than commercial paper because they are less risky. Negotiable CDs are traded under the regulation of the Federal Reserve System.
Basic Receivables Formula
-AVG days outstanding
The most common credit terms offered are 2/10, net 30. This is a convention meaning that the customer may either deduct 2% of the invoice amount if the invoice is paid within 10 days or pay the entire balance by the 30th day. Assuming that 20% of customers pay on day 10, 60% pay on day 30, and 20% pay on day 40, the average account receivable is outstanding for 28 days [(10 days × 20%) + (30 days × 60%) + (40 days × 20%)].
-AVG accounts receivable
Average balance in receivables = Daily credit sales × Average collection period
Step 1: The average collection period is 28 days [(10 days × 20%) + (30 days × 60%) + (40 days × 20%)].
Step 2: The firm in the previous example has $15,000 in daily credit sales (a known variable). The firm's average balance in receivables is thus $420,000 ($15,000 × 28 days).
Average balance in receivables (computed from annual credit sales)
The firm has annual credit sales of $5,400,000. The firm's average balance in receivables is thus $420,000 [$5,400,000 × (28 days ÷ 360 days)].
Accounts receivable turnover (using the previous example's figures)
A/R turnover = Annual credit sales ÷ Average balance in receivables
The firm turned its accounts receivable over 12.9 times during the year ($5,400,000 ÷ $420,000).
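The whole receivables chain above can be checked with a few lines of arithmetic (Python used purely as a calculator; the figures are the ones from the example):

```python
# Payment mix: 20% pay on day 10, 60% on day 30, 20% on day 40
pay_mix = [(10, 0.20), (30, 0.60), (40, 0.20)]
avg_collection_period = sum(days * share for days, share in pay_mix)  # 28 days

daily_credit_sales = 15_000
avg_receivables = daily_credit_sales * avg_collection_period          # $420,000

annual_credit_sales = 5_400_000
# Same balance derived from annual sales and a 360-day year
avg_receivables_alt = annual_credit_sales * (avg_collection_period / 360)

ar_turnover = annual_credit_sales / avg_receivables                   # about 12.9 times
```

Both routes to the $420,000 balance agree, and dividing annual credit sales by that balance reproduces the 12.9-times turnover.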
Costs related to inventory
1) Purchase costs
2) Carrying costs
3) Ordering costs
4) Stockout costs
1) Purchase costs are the actual invoice amounts charged by suppliers. This is also referred to as investment in inventory.
2) Carrying costs are a broad category consisting of all those costs associated with holding inventory: storage, insurance, security, inventory taxes, depreciation or rent of facilities, interest, obsolescence and spoilage, and the opportunity cost of funds tied up in inventory. This is sometimes stated as a percentage of investment in inventory.
3) Ordering costs are the fixed costs of placing an order with a vendor, independent of the number of units ordered. For internally manufactured units, these consist of the set-up costs of a production line.
4) Stockout costs are the opportunity cost of missing a customer order. These can also include the costs of expediting a special shipment necessitated by insufficient inventory on hand.
safety stock
Accordingly, safety stock is an inventory buffer held as a hedge against contingencies. Determining the appropriate level of safety stock involves a probabilistic calculation that balances the variability of demand with the level of risk the firm is willing to accept of having to incur stockout costs.
The reorder point is established with the following equation: Reorder point = (Average daily demand × Lead time in days) + Safety stock
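A quick numeric sketch of the reorder-point equation; the demand, lead-time, and safety-stock figures here are made up for illustration:

```python
def reorder_point(avg_daily_demand, lead_time_days, safety_stock):
    """Reorder when inventory falls to expected lead-time demand plus the safety buffer."""
    return avg_daily_demand * lead_time_days + safety_stock

# Hypothetical: 50 units/day demand, 4-day lead time, 100 units of safety stock
rp = reorder_point(50, 4, 100)  # order when on-hand inventory drops to 300 units
```

With no safety stock the trigger would be exactly the expected demand during the lead time (200 units); the buffer raises it to 300.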
Non-value adding activities
2) JIT is a pull system
2) JIT is a pull system, meaning it is demand-driven: In a manufacturing environment, production of goods does not begin until an order has been received. In this way, finished goods inventories are also eliminated.
3) A backflush costing system
3) A backflush costing system is often used in a JIT environment. Backflush costing eliminates the traditional sequential tracking of costs. Instead, entries to inventory may be delayed until as late as the end of the period.
kanban system
1) Kanban means ticket. Tickets (also described as cards or markers) control the flow of production or parts so that they are produced or obtained in the needed amounts at the needed times.
A firm's operating cycle is the amount of time that passes between the acquisition of inventory and the collection of cash on the sale of that inventory.
1) The (overlapping) steps in the operating cycle are
a) Acquisition of inventory and incurrence of a payable
b) Settlement of the payable
c) Holding of inventory
d) Selling of inventory and incurrence of a receivable
e) Collection on the receivable and acquisition of further inventory
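In standard working-capital terms, the cycle's length is the days inventory is held plus the days receivables are outstanding. A small sketch with made-up figures:

```python
def operating_cycle(days_inventory_held, days_receivables_outstanding):
    """Days from acquiring inventory to collecting cash on its sale."""
    return days_inventory_held + days_receivables_outstanding

# Hypothetical: inventory sits 45 days; receivables collect in 28 days
oc = operating_cycle(45, 28)
```

The two components correspond to steps (c) through (e) above: holding inventory, then waiting to collect on the resulting receivable.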
Which one of the following provides a spontaneous source of financing for a firm?
A. Accounts payable.
B. Mortgage bonds.
C. Accounts receivable.
D. Debentures.
Answer (A) is correct. Trade credit is a spontaneous source of financing because it arises
automatically as part of a purchase transaction. Because of its ease in use, trade credit is
the largest source of short-term financing for many firms both large and small.
Net working capital is the difference between
A. Current assets and current liabilities.
Answer (A) is correct. Net working capital is defined by accountants as the difference between current assets and current liabilities. Working capital is a measure of short-term solvency.
Recording the payment (as distinguished from the declaration) of a cash dividend, the
declaration of which was already recorded, will
A. Increase the current ratio but have no effect on working capital.
B. Decrease both the current ratio and working capital.
C. Increase both the current ratio and working capital.
D. Have no effect on the current ratio or earnings per share.
Answer (A) is correct. The payment of a previously declared cash dividend reduces
current assets and current liabilities equally. An equal reduction in current assets and
current liabilities causes an increase in a positive (greater than 1.0) current ratio.
Depoole's payment of a trade account payable of $64,500 will
A. Increase the current ratio, but the quick ratio would not be affected.
B. Increase the quick ratio, but the current ratio would not be affected.
C. Increase both the current and quick ratios.
D. Decrease both the current and quick ratios.
The current ratio and the quick ratio will increase.
Answer (C) is correct. Given that the quick assets exceed current liabilities, both the
current and quick ratios exceed 1 because the numerator of the current ratio includes other
current assets in addition to the quick assets of cash, net accounts receivable, and short-term
marketable securities. An equal reduction in the numerator and the denominator,
such as a payment of a trade payable, will cause each ratio to increase.
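The effect in the answer above is easy to verify numerically (the balances are hypothetical, chosen so the starting ratio exceeds 1):

```python
def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

assets, liabilities, payment = 400_000, 200_000, 64_500
before = current_ratio(assets, liabilities)                     # 2.0
# Paying a trade payable shrinks numerator and denominator equally...
after = current_ratio(assets - payment, liabilities - payment)  # ~2.48
# ...which raises a ratio that was already above 1.
```

An equal subtraction from both sides pulls a ratio above 1 further above 1; the same subtraction would pull a ratio below 1 further below it.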
Depoole's purchase of raw materials for $85,000 on open account will
A. Increase the current ratio.
B. Decrease the current ratio.
C. Increase net working capital.
D. Decrease net working capital.
Answer (B) is correct. The purchase increases both the numerator and denominator of the
current ratio by adding inventory to the numerator and payables to the denominator.
Because the ratio before the purchase was greater than 1, the ratio is decreased.
Obsolete inventory of $125,000 was written off by Depoole during the year. This transaction
A. Decreased the quick ratio.
B. Increased the quick ratio.
C. Increased net working capital.
D. Decreased the current ratio
Answer (D) is correct. Writing off obsolete inventory reduced current assets, but not
quick assets (cash, receivables, and marketable securities). Thus, the current ratio was
reduced and the quick ratio was unaffected.
Depoole's issuance of serial bonds in exchange for an office building, with the first installment
of the bonds due late this year,
A. Decreases net working capital.
B. Decreases the current ratio.
C. Decreases the quick ratio.
D. Affects all of the answers as indicated.
Answer (D) is correct. The first installment is a current liability; thus the amount of
current liabilities increases with no corresponding increase in current assets. The effect is
to decrease working capital, the current ratio, and the quick ratio.
Depoole's early liquidation of a long-term note with cash affects the
A. Current ratio to a greater degree than the quick ratio.
B. Quick ratio to a greater degree than the current ratio.
C. Current and quick ratio to the same degree.
D. Current ratio but not the quick ratio.
Answer (B) is correct. The numerators of the quick and current ratios are decreased when
cash is expended. Early payment of a long-term liability has no effect on the denominator
(current liabilities). Since the numerator of the quick ratio, which includes cash, net
receivables, and marketable securities, is less than the numerator of the current ratio,
which includes all current assets, the quick ratio is affected to a greater degree.
North Bank is analyzing Belle Corp.'s financial statements for a possible extension of credit.
Belle's quick ratio is significantly better than the industry average. Which of the following
factors should North consider as a possible limitation of using this ratio when evaluating
Belle's creditworthiness?
A. Fluctuating market prices of short-term investments may adversely affect the ratio.
B. Increasing market prices for Belle's inventory may adversely affect the ratio.
C. Belle may need to sell its available-for-sale investments to meet its current obligations.
D. Belle may need to liquidate its inventory to meet its long-term obligations.
Answer (A) is correct. The quick ratio equals current assets minus inventory, divided by
current liabilities. Because short-term marketable securities are included in the numerator,
fluctuating market prices of short-term investments may adversely affect the ratio if Belle
holds a substantial amount of such current assets.
Windham Company has current assets of $400,000 and current liabilities of $500,000.
Windham Company's current ratio will be increased by
A. The purchase of $100,000 of inventory on account.
B. The payment of $100,000 of accounts payable.
C. The collection of $100,000 of accounts receivable.
D. Refinancing a $100,000 long-term loan with short-term debt.
Answer (A) is correct. The current ratio equals current assets divided by current
liabilities. An equal increase in both the numerator and denominator of a current ratio less
than 1.0 causes the ratio to increase. Windham Company's current ratio is .8 ($400,000 ÷
$500,000). The purchase of $100,000 of inventory on account would increase the current
assets to $500,000 and the current liabilities to $600,000, resulting in a new current ratio
of .83.
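A minimal Python sketch of this effect, using the Windham figures (the function name is ours):

```python
# Effect of an equal increase in current assets and current liabilities
# when the starting current ratio is below 1.0 (Windham's figures).
def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

before = current_ratio(400_000, 500_000)                      # 0.80
after = current_ratio(400_000 + 100_000, 500_000 + 100_000)   # rises to ~0.83
print(round(before, 2), round(after, 2))
```

The same arithmetic shows why the opposite holds when the starting ratio exceeds 1.0, as in the Depoole trade-payable questions above.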
Given an acid test ratio of 2.0, current assets of $5,000, and inventory of $2,000, the value of
current liabilities is
The acid test or quick ratio equals the ratio of the quick assets
(cash, net accounts receivable, and marketable securities) to current liabilities. Quick assets
equal $3,000 ($5,000 current assets – $2,000 inventory), so current liabilities equal $1,500
($3,000 ÷ 2.0).
Bond Corporation has a current ratio of 2 to 1 and a (acid test) quick ratio of 1 to 1. A
transaction that would change Bond's quick ratio but not its current ratio is the
A. Sale of inventory on account at cost.
B. Collection of accounts receivable.
C. Payment of accounts payable.
D. Purchase of a patent for cash.
Answer (A) is correct. The quick ratio is determined by dividing the sum of cash, short-term
marketable securities, and accounts receivable by current liabilities. The current ratio
is equal to current assets divided by current liabilities. The sale of inventory (a nonquick
current asset) on account at cost would increase receivables (a quick asset), thereby changing
the quick ratio. Because one current asset merely replaces another in an equal amount,
however, the current ratio would be unaffected.
Rice, Inc. uses the allowance method to account for uncollectible accounts. An account
receivable that was previously determined uncollectible and written off was collected during
May. The effect of the collection on Rice's current ratio and total working capital is
The entry to record this transaction is to debit receivables, credit
the allowance, debit cash, and credit receivables. The result is to increase both an asset
(cash) and a contra asset (allowance for bad debts). These appear in the current asset
section of the balance sheet. Thus, the collection changes neither the current ratio nor
working capital because the effects are offsetting. The credit for the journal entry is made
to the allowance account on the assumption that another account will become
uncollectible. The company had previously estimated its bad debts and established an
appropriate allowance. It then (presumably) wrote off the wrong account. Accordingly, the
journal entry reinstates a balance in the allowance account to absorb future uncollectibles.
Merit, Inc. uses the direct write-off method to account for uncollectible accounts receivable. If
the company subsequently collects an account receivable that was written off in a prior
accounting period, the effect of the collection of the account receivable on Merit's current ratio
and total working capital would be
Because the company uses the direct write-off method, the original
entry involved a debit to a bad debt expense account (closed to retained earnings). The
subsequent collection required a debit to cash and a credit to bad debt expense or retained
earnings. Thus, only one current asset account was involved in the collection entry, and
current assets (cash) increased as a result. If current assets increase and no change occurs
in current liabilities, the current ratio and working capital both increase.
A corporation declares a dividend and pays it two weeks later. How do the two events affect the current ratio?
A. Decreased by the dividend declaration and increased by the dividend payment.
Which one of the following would increase the net working capital of a firm?
A. Cash payment of payroll taxes payable.
B. Purchase of a new plant financed by a 20-year mortgage.
C. Cash collection of accounts receivable.
D. Refinancing a short-term note payable with a 2-year note payable.
Answer (D) is correct. Net working capital equals current assets minus current liabilities.
Refinancing a short-term note with a 2-year note payable decreases current liabilities, thus
increasing working capital.
Badoglio Co.'s current ratio is 3:1. Which of the following transactions would normally
increase its current ratio?
A. Purchasing inventory on account.
B. Selling inventory on account.
C. Collecting an account receivable.
D. Purchasing machinery for cash.
Answer (B) is correct. The current ratio is equal to current assets divided by current
liabilities. Given that the company has a current ratio of 3:1, an increase in current assets
or decrease in current liabilities would cause this ratio to increase. If the company sold
merchandise on open account that earned a normal gross margin, receivables would be
increased at the time of recording the sales revenue in an amount greater than the decrease
in inventory from recording the cost of goods sold. The effect would be an increase in the
current assets and no change in the current liabilities. Thus, the current ratio would be
increased.
According to John Maynard Keynes, the three major motives for holding cash are for
John Maynard Keynes, founder of Keynesian economics,
concluded that there were three major motives for holding cash: for transactional purposes
as a medium of exchange, precautionary purposes, and speculative purposes (but only
during deflationary periods).
An increase in sales resulting from an increased cash discount for prompt payment would be
expected to cause a(n)
A. Increase in the operating cycle.
B. Increase in the average collection period.
C. Decrease in the cash conversion cycle.
D. Decrease in purchase discounts taken.
Answer (C) is correct. If the cause of increased sales is an increase in the cash discount, it
can be inferred that the additional customers would pay during the discount period. Thus,
cash would be collected more quickly than previously and the cash conversion cycle
would be shortened.
1. Net credit sales = $500,000
2. Net sales = $250,000
A/R balance:
Jan 1: $75,000
Dec 31: $50,000
What is the A/R turnover for the year?
A/R turnover = $500,000 ÷ [($75,000 + $50,000) ÷ 2] = $500,000 ÷ $62,500 = 8 times
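The turnover arithmetic above can be checked with a short Python sketch (variable names are ours):

```python
# A/R turnover = net credit sales / average receivables. Note that net
# credit sales (not total net sales) is the numerator.
net_credit_sales = 500_000
avg_receivables = (75_000 + 50_000) / 2   # beginning and ending balances
turnover = net_credit_sales / avg_receivables
print(turnover)  # 8.0 times
```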
Projected sales collections:
1. 40% by the 15-day discount date
2. 40% by the 30-day due date
3. 20% 15 days late (day 45)
What is the projected days' sales outstanding?
Weight each collection day by its percentage, as in a weighted-average calculation:
(.40 × 15) + (.40 × 30) + (.20 × 45) = 27 days.
Yonder Motors sells 20,000 automobiles per year for $25,000 each. The firm's average
receivables are $30,000,000 and average inventory is $40,000,000. Yonder's average collection
period is closest to which one of the following? Assume a 365-day year.
Average collection period = Days in year ÷ Accounts receivable turnover
= 365 ÷ (Net credit sales ÷ Average net receivables)
= 365 ÷ [(20,000 × $25,000) ÷ $30,000,000]
= 365 ÷ ($500,000,000 ÷ $30,000,000)
= 365 ÷ 16.667
= 21.9 days
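The Yonder computation as a Python sketch (names are ours):

```python
# Average collection period = days in year / receivables turnover.
net_credit_sales = 20_000 * 25_000        # 20,000 cars at $25,000 each
avg_receivables = 30_000_000
turnover = net_credit_sales / avg_receivables
acp = 365 / turnover
print(round(acp, 1))  # 21.9 days
```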
Which of the following assumptions is associated with the economic order quantity formula?
The carrying cost per unit will vary with A. quantity ordered.
B. The cost of placing an order will vary with quantity ordered.
C. Periodic demand is known.
D. The purchase cost per unit will vary based on quantity discounts.
Answer (C) is correct. The economic order quantity (EOQ) model is a mathematical tool
for determining the order quantity that minimizes the sum of ordering costs and carrying
costs. The following assumptions underlie the EOQ model: (1) Demand is uniform;
(2) Order (setup) costs and carrying costs are constant; and (3) No quantity discounts are
allowed.
As a consequence of finding a more dependable supplier, Dee Co. reduced its safety stock of
raw materials by 80%. What is the effect of this safety stock reduction on Dee's economic order
quantity?
The variables in the EOQ formula are periodic demand, cost per
order, and the unit carrying cost for the period. Thus, safety stock does not affect the
EOQ. Although the total of the carrying costs changes with the safety stock, the
cost-minimizing order quantity is not affected.
A corporation has just instituted a just-in-time production system. The cost per order has been reduced from $28 to $2, and fixed facility and administrative costs have increased from $2 to $32. How does this affect lot size and relevant cost?
The economic lot size for a production system is similar to the
EOQ. For example, the cost per set-up is equivalent to the cost per order (a numerator
value in the EOQ model). Hence, a reduction in the setup costs reduces the economic lot
size as well as the relevant costs. The fixed facility and administrative costs, however, are
not relevant. The EOQ model includes variable costs only.
The carrying costs associated with inventory management include
Storage costs, handling costs, capital invested, and obsolescence.
The ordering costs associated with inventory management include
Ordering costs are costs incurred when placing and receiving
orders. Ordering costs include purchasing costs, shipping costs, setup costs for a
production run, and quantity discounts lost.
1. How is average weekly demand computed?
2. Explain the reorder point formula.
1. Average weekly demand = Sales ÷ weeks in year
2. Reorder point = (Average weekly demand × Lead time) + Safety stock
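A sketch of the reorder-point formula; the demand, lead-time, and safety-stock figures below are invented for illustration:

```python
# Reorder point = (average weekly demand x lead time) + safety stock.
annual_sales_units = 5_200                  # hypothetical annual demand
avg_weekly_demand = annual_sales_units / 52  # sales / weeks in year
lead_time_weeks = 3                          # hypothetical lead time
safety_stock = 50                            # hypothetical safety stock
reorder_point = avg_weekly_demand * lead_time_weeks + safety_stock
print(reorder_point)  # 350.0 units
```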
The level of safety stock in inventory management depends on all of the following except the
A. Level of uncertainty of the sales forecast.
B. Level of customer dissatisfaction for back orders.
C. Cost of running out of inventory.
D. Cost to reorder stock.
Answer (D) is correct. Determining the appropriate level of safety stock involves a
complex probabilistic calculation that balances (1) the variability of demand for the good,
(2) the variability in lead time, and (3) the level of risk the firm is willing to accept of
having to incur stockout costs. Thus, the only one of the items listed that does not affect
the level of safety stock is reorder costs.
The result of the economic order quantity (EOQ) formula indicates the
The EOQ model is a deterministic model that calculates the ideal
order (or production lot) quantity given specified demand, ordering or setup costs, and
carrying costs. The model minimizes the sum of inventory carrying costs and either
ordering or production setup costs.
Key Co. changed from a traditional manufacturing operation with a job-order costing system to
a just-in-time operation with a backflush costing system. What are the expected effects of these
changes on Key's inspection costs and recording detail of costs tracked to jobs in process?
In a JIT system, materials go directly into production without
being inspected. The assumption is that the vendor has already performed all necessary
inspections. The minimization of inventory reduces the number of suppliers, storage costs,
transaction costs, etc. Backflush costing eliminates the traditional sequential tracking of
costs. Instead, entries to inventory may be delayed until as late as the end of the period.
For example, all product costs may be charged initially to cost of sales, and costs may be
flushed back to the inventory accounts only at the end of the period. Thus, the detail of
cost accounting is decreased.
To determine the inventory reorder point, calculations normally include the
A. Ordering cost.
B. Carrying cost.
C. Average daily usage.
D. Economic order quantity.
Answer (C) is correct. The reorder point is the amount of inventory on hand indicating
that a new order should be placed. It equals the sales per unit of time multiplied by the
time required to receive the new order (lead time).
Accounts receivable turnover ratio will normally decrease as a result of
A. The write-off of an uncollectible account (assume the use of the allowance for doubtful
accounts method).
B. A significant sales volume decrease near the end of the accounting period.
C. An increase in cash sales in proportion to credit sales.
D. A change in credit policy to lengthen the period for cash discounts.
Answer (D) is correct. The accounts receivable turnover ratio equals net credit sales
divided by average receivables. Hence, it will decrease if a company lengthens the credit
period or the discount period because the denominator will increase as receivables are
held for longer times.
Inventory turnover ratio formula?
Cost of goods sold ÷ average inventory, where average inventory = (current year inventory + prior year inventory) ÷ 2
Yr 1 A/R = $60
Yr 2 A/R = $90
Sales = $600
1. What is the A/R turnover?
2. What is it in days?
$600 ÷ [($90 + $60) ÷ 2] = 8 times
360 ÷ 8 = 45 days
What ratios do you need to find the operating cycle?
1. Time from purchase of inventory to collection of cash
Operating cycle = number of days' sales in inventory + number of days' sales in receivables.
The theory underlying the cost of capital is primarily concerned with the cost of:
a. Long-term funds and old funds.
b. Short-term funds and new funds.
c. Long-term funds and new funds.
d. Any combination of old or new, short-term or long-term funds.
Choice "d" is correct. The cost of capital considers the cost of all funds - whether they are short-term,
long-term, new or old.
Sylvan Corporation has the following capital structure:
Debenture bonds   $10,000,000
Preferred equity    1,000,000
Common equity      39,000,000
The financial leverage of Sylvan Corp. would increase as a result of:
a. Issuing common stock and using the proceeds to retire preferred stock.
b. Issuing common stock and using the proceeds to retire debenture bonds.
c. Financing its future investments with a higher percentage of bonds.
d. Financing its future investments with a higher percentage of equity funds.
Choice "c" is correct. Financial leverage increases when the debt to equity ratio increases. Using a higher
percentage of debt (bonds) for future investments would increase financial leverage.
Residual income is a better measure for performance evaluation of an investment center manager than return
on investment because:
a. The problems associated with measuring the asset base are eliminated.
b. Desirable investment decisions will not be neglected by high-return divisions.
c. Only the gross book value of assets needs to be calculated.
d. The arguments about the implicit cost of interest are eliminated.
Choice "b" is correct. Residual income measures actual dollars that an investment earns over its required
return rate. Performance evaluation on this basis will mean that desirable investment decisions will not be
rejected by high-return divisions.
The basic objective of the residual income approach of performance measurement and evaluation is to have
a division maximize its:
a. Return on investment rate.
b. Imputed interest rate charge.
c. Cash flows in excess of a desired minimum amount.
d. Income in excess of a desired minimum amount.
Choice "d" is correct. Residual income is defined as income in excess of a desired minimum return on invested capital.
Capital investments require balancing risk and return. Managers have a responsibility to ensure that the investments that they make in their own firms increase shareholder value. Managers have met that responsibility if the return on the capital investment:
a. Exceeds the rate of return associated with the firm's beta factor.
b. Is less than the rate of return associated with the firm's beta factor.
c. Is greater than the prime rate of return.
d. Is less than the prime rate of return
Choice "a" is correct. A capital investment whose rate of return exceeds the rate of return associated with the
firm's beta factor will increase the value of the firm.
The Stewart Co. uses the Economic Order Quantity (EOQ) model for inventory management. A decrease in
which one of the following variables would increase the EOQ?
a. Cost per order.
b. Safety stock level.
c. Carrying costs.
d. Quantity demanded.
Choice "c" is correct. A decrease in carrying costs would increase the Economic Order Quantity (EOQ).
E = Order size (EOQ)
S = Annual sales quantity in units
O = Cost per purchase order
C = Annual cost of carrying one unit in stock for one year
E = √(2SO ÷ C); order size gets larger as "S" or "O" gets bigger (numerator) or as "C" gets smaller (denominator).
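The EOQ relationship can be sketched in Python; the S, O, and C values below are illustrative only:

```python
# EOQ grows with annual demand (S) and order cost (O), and shrinks as
# the per-unit carrying cost (C) grows.
from math import sqrt

def eoq(annual_units, cost_per_order, carrying_cost_per_unit):
    return sqrt(2 * annual_units * cost_per_order / carrying_cost_per_unit)

print(eoq(annual_units=10_000, cost_per_order=25, carrying_cost_per_unit=2))  # 500.0
```

Lowering the carrying cost raises the result, consistent with the Stewart Co. answer above.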
The working capital financing policy that subjects the firm to the greatest risk of being unable to meet the
firm's maturing obligations is the policy that finances :
a. Fluctuating current assets with long-term debt.
b. Permanent current assets with long-term debt.
c. Permanent current assets with short-term debt.
d. Fluctuating current assets with short-term debt.
Choice "c" is correct. The working capital financing policy that finances permanent current assets with
short-term debt subjects the firm to the greatest risk of being unable to meet the firm's maturing obligations.
Calculate ROI:
          Yr 2     Yr 3
Revenue    900k   1,100k
Expense    650k     700k
Assets   1,200k   2,000k
Yr 3 income = 1,100k - 700k = 400k
Average assets = (Yr 2 + Yr 3 assets) ÷ 2 = 1,600k
ROI = 400k ÷ 1,600k = .25
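The ROI computation above, restated as a Python sketch:

```python
# Year 3 ROI on average assets: income / average invested assets.
income_yr3 = 1_100_000 - 700_000           # revenue minus expense = $400k
avg_assets = (1_200_000 + 2_000_000) / 2   # average of Yr 2 and Yr 3 assets
roi = income_yr3 / avg_assets
print(roi)  # 0.25
```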
Which of the following inventory management approaches orders at the point where carrying costs equate
nearest to restocking costs in order to minimize total inventory cost?
a. Economic order quantity.
b. Just-in-time.
c. Materials requirements planning.
d. ABC.
Choice "a" is correct. The economic order quantity (EOQ) method of inventory control anticipates orders at
the point where carrying costs are nearest to restocking costs. The objective of EOQ is to minimize total
inventory costs. The formula for EOQ is E = √(2SO ÷ C).
What is the primary disadvantage of using return on investment (ROI) rather than residual income (RI) to
evaluate the performance of investment center managers?
a. ROI is a percentage, while RI is a dollar amount.
b. ROI may lead to rejecting projects that yield positive cash flows .
c. ROI does not necessarily reflect the company's cost of capital.
d. ROI does not reflect all economic gains.
Choice "b" is correct. The primary disadvantage of using return on investment (ROI) rather than residual
income (RI) to evaluate the performance of investment center managers is that ROI may lead to rejecting
projects that yield positive cash flows. Profitable investment center managers might be reluctant to invest in
projects that might lower their ROI (especially if their bonuses are based only on their investment center's
ROI), even though those projects might generate positive cash flows for the company as a whole. This
characteristic is often known as the "disincentive to invest."
Amicable Wireless, Inc. offers credit terms of 2/10, net 30 for its customers. Sixty percent of Amicable's
customers take the 2% discount and pay on day 10. The remainder of Amicable's customers pay on day 30.
How many days' sales are in Amicable's accounts receivable?
a. 6
b. 12
c. 18
d. 20
Choice "c" is correct. Days' sales in accounts receivable is normally calculated as Days' sales = Ending
accounts receivable ÷ Average daily sales. That formula will not work in this case, however, because the
necessary information is not provided. Enough information about payments is provided so that the
total days' sales can be determined on a weighted-average basis. In this question, nobody pays before the
10th day and 60% of the customers pay on the 10th day, so there are 10 × .60, or 6 days' sales there. The
other 40% of the customers pay on the 30th day, so there are 30 × .40, or 12 days' sales there. The total is 18
days' sales.
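The weighted-average computation can be sketched in Python:

```python
# Weight each payment day by the share of customers paying on that day.
collections = [(0.60, 10), (0.40, 30)]   # (share of customers, day paid)
days_sales = sum(share * day for share, day in collections)
print(days_sales)  # 18.0 days' sales in receivables
```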
Why would a firm want to finance temporary assets with short-term debt?
Which of the following rates is most commonly compared to the internal rate of return to evaluate whether to
make an investment?
a. Short-term rate on U.S. Treasury bonds.
b. Prime rate of interest.
c. Weighted-average cost of capital.
d. Long-term rate on U.S. Treasury bonds.
Choice "c" is correct. The weighted-average cost of capital is frequently used as the hurdle rate within capital
budgeting techniques. Investments that provide a return that exceeds the weighted-average cost of capital
should continuously add to the value of the firm.
Which of the following assumptions is associated with the economic order quantity formula?
a. The carrying cost per unit will vary with quantity ordered.
b. The cost of placing an order will vary with quantity ordered.
c. Periodic demand is known.
d. The purchase cost per unit will vary based on quantity discounts.
Choice "c" is correct. The economic order quantity formula (EOQ) assumes that periodic demand is known.
Annual sales volume is a crucial variable in the EOQ formula.
Which of the following types of bonds is most likely to maintain a constant market value?
a. Zero-coupon.
b. Floating-rate.
c. Callable.
d. Convertible.
Choice "b" is correct. Floating-rate bonds would automatically adjust the return on a financial instrument to
produce a constant market value for that instrument. No premium or discount would be required since market
changes would be accounted for through the interest rate.
Capital budgeting decisions include all but which of the following?
a. Selecting among long-term investment alternatives.
b. Financing short-term working capital needs.
c. Making investments that produce returns over a long period of time.
d. Financing large expenditures.
Choice "b" is correct. Capital budgeting decisions do not include the financing of short-term working capital
needs, which are more operational in nature.
Which one of the following is most relevant to a manufacturing equipment replacement decision?
a. Original cost of the old equipment.
b. Disposal price of the old equipment.
c. Gain or loss on the disposal of the old equipment.
d. A lump-sum write-off amount from the disposal of the old equipment.
Choice "b" is correct. The disposal price of the old equipment is most relevant because it is an expected
future inflow that will differ among alternatives. If this old equipment is replaced, there will be a cash inflow
from the sale of the old equipment. If the old equipment is kept, there will be no cash inflow from the sale of
the old equipment.
All of the following items are included in discounted cash flow analysis, except:
a. Future operating cash savings.
b. The current asset disposal price.
c. The future asset depreciation expense.
d. The tax effects of future asset depreciation.
Choice "c" is correct. The future asset depreciation expense is not included in discounted cash flow analysis.
• Future operating cash savings
• Current asset disposal price
• Tax effects of future asset depreciation
• Future asset disposal price
All of the following are the rates used in net present value analysis, except for the:
a. Cost of capital.
b. Hurdle rate.
c. Discount rate.
d. Accounting rate of return.
Choice "d" is correct. The accounting rate of return is a capital budgeting technique, not a rate.
• Cost of capital
• Hurdle rate
• Discount rate
• Required rate of return
The net present value (NPV) of a project has been calculated to be $215,000. Which one of the following
changes in assumptions would decrease the NPV?
a. Decrease the estimated effective income tax rate.
b. Extend the project life and associated cash inflows.
c. Increase the estimated salvage value.
d. Increase the discount rate.
Choice "d" is correct. An increase in the discount rate will decrease the present value of future cash inflows
and, therefore, decrease the net present value of the project.
Andrew Corporation is evaluating a capital investment that would result in a $30,000 higher contribution
margin benefit and increased annual personnel costs of $20,000. The effects of income taxes on the net
present value computation on these benefits and costs for the project are to:
a. Decrease both benefits and costs.
b. Decrease benefits but increase costs.
c. Increase benefits but decrease costs.
d. Increase both benefits and costs.
Choice "a" is correct. The effects of income taxes on the net present value computations will decrease both
benefits and costs for the project. Net present value computations focus on the present value of cash flows.
Income taxes decrease both the benefit and the cost of cash flows.
The internal rate of return for a project can be determined:
a. Only if the project cash flows are constant.
b. By finding the discount rate that yields a net present value of zero for the project.
c. By subtracting the firm's cost of capital from the project's profitability index.
d. Only if the project's profitability index is greater than one.
Choice "b" is correct. The internal rate of return (IRR) is the discount rate that produces a NPV of zero.
The internal rate of return is the:
a. Rate of interest that equates the present value of cash outflows and the present value of cash inflows.
b. Risk-adjusted rate of return.
c. Required rate of return.
d. Weighted average rate of return generated by internal funds.
Choice "a" is correct. The internal rate of return is defined as the technique that determines the present value
factor such that the present value of the after-tax cash flows equals the initial investment on the project.
Alternately, the internal rate of return (IRR) is the discount rate that produces a NPV of zero.
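Because the IRR is defined as the rate producing an NPV of zero, it can be found numerically. This sketch uses bisection and hypothetical cash flows (the function names are ours):

```python
# NPV of a cash-flow stream at a given rate; flows[0] is the initial outlay.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Bisection: NPV falls as the rate rises for a conventional project,
# so narrow the bracket until NPV is effectively zero.
def irr(cash_flows, lo=0.0, hi=1.0):
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid   # NPV still positive: the IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1_000, 400, 400, 400]   # hypothetical outlay, then three inflows
rate = irr(flows)
print(round(rate, 4))  # NPV at this rate is ~0
```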
Do you use NPV in calculating the payback period? NO, DUMB ASS.
Do you use salvage value in factoring the payback period? NO, DUMB ASS.
When evaluating capital budgeting analysis techniques, the payback period emphasizes:
a. Liquidity.
b. Profitability.
c. Net income.
d. The accounting period .
Choice "a" is correct. The payback period is the time period required for cash inflows to recover the initial
investment. The emphasis of the technique is on liquidity (i.e., cash flow).
The term underwriting spread refers to the:
a. Commission percentage an investment banker receives for underwriting a security lease.
b. Discount investment bankers receive on securities they purchase from the issuing company.
c. Difference between the price the investment banker pays for a new security issue and the price at which
the securities are resold.
d. Commission a broker receives for either buying or selling a security on behalf of an investor.
Choice "c" is correct. Investment bankers are paid their fees partly by being allowed to purchase the new
securities they are underwriting for a discount and then reselling those securities on the market. This is
known as the underwriting spread.
A firm with a higher degree of operating leverage when compared to the industry average implies that the:
a. Firm has higher variable costs.
b. Firm's profits are more sensitive to changes in sales volume.
c. Firm is more profitable.
d. Firm uses a significant amount of debt financing.
Choice "b" is correct. A firm with a higher degree of operating leverage when compared to the industry
average implies that the firm's profits are more sensitive to changes in sales volume.
Rule: Operating leverage is the presence of fixed costs in operations, which allows a small change in sales to
produce a larger relative change in profits.
Which of the following transactions does not change the current ratio and does not change the total current
assets?
a. A cash advance is made to a divisional office.
b. A cash dividend is declared.
c. Short-term notes payable are retired with cash.
d. Equipment is purchased with a three-year note and a 10 percent cash down payment.
Choice "a" is correct. A cash advance to a divisional office does not change the current ratio or total
current assets because the reduction of cash is offset by an increase in receivables.
An increase in sales collections resulting from an increased cash discount for prompt payment would be
expected to cause a (n):
a. Increase in the operating cycle.
b. Increase in the average collection period.
c. Decrease in the cash conversion cycle.
d. Increase in bad debt losses.
Choice "c" is correct. If the increased cash discount speeds up customer payments, cash is collected more
quickly and the cash conversion cycle decreases.
Which one of the following represents methods for converting accounts receivable to cash?
a. Trade discounts, collection agencies, and credit approval.
b. Factoring, pledging, and electronic funds transfers.
c. Cash discounts, collection agencies, and electronic funds transfers.
d. Trade discounts, cash discounts, and electronic funds transfers.
Choice "c" is correct. The following are methods of converting accounts receivable (AR) into cash:
1. Collection agencies - used to collect overdue AR.
2. Factoring AR - selling AR to a factor for cash.
3. Cash discounts - offering cash discounts to customers for paying AR quickly (or paying at all). For
example: 2/10, net 30.
4. Electronic fund transfers - a method of payment, which electronically transfers funds between banks.
Which one of the following statements concerning cash discounts is correct?
a. The cost of not taking a 2/10, net 30 cash discount is usually less than the prime rate.
b. With trade terms of 2/15, net 60, if the discount is not taken, the buyer receives 45 days of free credit.
c. The cost of not taking the discount is higher for terms of 2/10, net 60 than for 2/10, net 30.
d. The cost of not taking a cash discount is generally higher than the cost of a bank loan.
Choice "d" is correct. The cost of not taking a cash discount is generally higher than the cost of a bank loan.
Which one of the following is not a characteristic of a negotiable certificate of deposit? Negotiable certificates
of deposit:
a. Have a secondary market for investors.
b. Are regulated by the Federal Reserve System.
c. Are usually sold in denominations of a minimum of $100,000.
d. Have yields considerably greater than bankers' acceptances and commercial paper.
Choice "d" is correct. Negotiable CDs generally carry interest rates slightly lower than bankers' acceptances
(which are drafts drawn on deposits at a bank) or commercial paper (which is unsecured debt issued by
creditworthy customers).
All of the following are alternative marketable securities suitable for investment, except:
a. Eurodollars.
b. Commercial paper.
c. Bankers' acceptances.
d. Convertible bonds.
Choice "d" is correct. Convertible bonds. Temporarily idle cash should be invested in very liquid, low-risk
short-term investments only. U.S. T-bills are basically risk-free. Bankers' acceptances and Eurodollars are
only slightly more risky. Commercial paper, the short-term unsecured notes of the most creditworthy large
U.S. corporations, is a little riskier, but still relatively low risk. However, convertible bonds are subject to
default risk, liquidity risk, and maturity (interest rate) risk, and as such are inappropriate securities for
short-term marketable security investment.
Which one of the following responses is not an advantage to a corporation that uses the commercial paper
market for short-term financing?
a. The borrower avoids the expense of maintaining a compensating balance with a commercial bank.
b. There are no restrictions as to the type of corporation that can enter into this market.
c. This market provides a broad distribution for borrowing.
d. A benefit accrues to the borrower because its name becomes more widely known.
Choice "b" is correct. There are restrictions as to the type of corporation that can enter into the commercial
paper market for short-term financing, since the use of the open market is restricted to a comparatively small
number of the most credit-worthy large corporations.
The commercial paper market:
a. Avoids the expense of maintaining a compensating balance with a commercial bank.
c. Provides a broad distribution for borrowing.
d. Accrues a benefit to the borrower because its name becomes more widely known.
Which of the following represents a firm's average gross receivable balance?
I. Days' sales in receivables x accounts receivable turnover.
II. Average daily sales x average collection period.
III. Net sales + average gross receivables.
a. I only.
b. I and II only.
c. II only.
d. II and III only.
Choice "c" is correct. II only. Average daily sales ($27,397) x average collection period (36.5 days) = $1,000,000
average gross accounts receivable (A/R).
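The arithmetic behind choice "c" can be sketched in code. The $10,000,000 annual sales figure below is an assumption of mine, chosen so that average daily sales come out to the flashcard's $27,397 on a 365-day year; the class and method names are also hypothetical.

```java
public class AverageReceivables {

    // Average gross A/R = average daily sales x average collection period
    static double averageGrossReceivables(double annualSales, double collectionPeriodDays) {
        double averageDailySales = annualSales / 365.0; // $10,000,000 / 365 ≈ $27,397 per day
        return averageDailySales * collectionPeriodDays;
    }

    public static void main(String[] args) {
        // $27,397/day x 36.5 days = $1,000,000 average gross A/R
        System.out.printf("%.0f%n", averageGrossReceivables(10_000_000, 36.5));
    }
}
```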
Which one of the following statements is most correct if a seller extends credit to a purchaser for a period of
time longer than the purchaser's operating cycle? The seller:
a. Will have a lower level of accounts receivable than those companies whose credit period is shorter than
the purchaser's operating cycle.
b. Is, in effect, financing more than just the purchaser's inventory needs.
c. Is, in effect, financing the purchaser's long-term assets.
d. Has no need for a stated discount rate or credit period.
Choice "b" is correct. If a seller extends credit to a purchaser for a period of time longer than the purchaser's
operating cycle, the seller is, in effect, financing more than just the purchaser's inventory needs.
Calculate reorder point:
50-week year
Sales: 10,000 units per year
Order quantity: 2,000 units
Safety stock: 1,300 units
Lead time: 4 weeks
- A 50-week year means that 200 units are sold per week (10,000 / 50).
- Therefore 800 units are sold during the 4-week lead time (4 x 200).
- Required safety stock is 1,300 units.
Therefore: 1,300 + 800 = 2,100 units is the reorder point.
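The same computation expressed in code (class and method names are mine):

```java
public class ReorderPoint {

    // Reorder point = demand during lead time + safety stock
    static int reorderPoint(int annualSalesUnits, int weeksPerYear,
                            int leadTimeWeeks, int safetyStockUnits) {
        int weeklySales = annualSalesUnits / weeksPerYear; // 10,000 / 50 = 200 units per week
        int leadTimeDemand = weeklySales * leadTimeWeeks;  // 200 * 4 = 800 units during lead time
        return leadTimeDemand + safetyStockUnits;          // 800 + 1,300 = 2,100 units
    }

    public static void main(String[] args) {
        System.out.println(reorderPoint(10_000, 50, 4, 1_300)); // prints 2100
    }
}
```

Note that the order quantity (2,000 units) never enters the calculation; it only determines how much is ordered once stock falls to the reorder point.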
Amit is a technical architect in Software Engineering and Technology Labs at Infosys Technologies and can be reached at chaturvedi_a@infosys.com.
Error detection in software is usually done via code reviews, unit testing, system testing, integration testing, and user-acceptance testing. The first possible error-detection stage subsequent to the build phase is the code review, which helps in early defect detection by enforcing language-specific programming standards and best practices. The probability of errors during the build phase is therefore reduced. These rules are typically verified manually as part of the code review process. However, the process can become cumbersome and unmanageable, especially when undertaken at the end of the build phase by senior programmers or architects. In short, manual code review of large projects is neither efficient nor always possible, as the effort required from senior staff is huge.
Static analysis provides a mechanism for tool-based automated code reviews to find code defects early in the build phase. Static analysis at early stages not only keeps bugs out of the code, but helps in locating bugs even before programs run. Static analysis ensures early bug detection and remediation by comparing source code with predefined language patterns, improving development productivity and end-product reliability. Static analysis also helps in enforcing coding conventions, thus making maintainability easier. Among the common methodologies used by static analysis are:
- Syntactic analysis is done by determining the structure of the input Java code and comparing it with predefined patterns. Common defects found using this methodology are violations of coding conventions such as naming standards, or of rules like always having a default clause in switch statements. For instance, the absence of a default clause in Example 1 would hide potential bugs that the default clause could have caught.
- Data-flow analysis tracks objects (variables) and their states (data values) in a particular execution segment of a program or method. This methodology monitors the state of variables in all possible flows, thus predicting things such as null pointer exceptions or database objects that aren't closed in all possible flows. Example 2 shows the database connection object con2 that is not closed in all flows of the method process. This is critical, as there is no guarantee that garbage collection will happen in a timely manner. Moreover, if the database connections are a limited resource, running short of them could trigger more problems.
- Flow-graph analysis, also known as "Cyclomatic Complexity," checks the complexity of an operation's body. Cyclomatic complexity counts every decision point (if, while, for, or case statements) in a method. Cyclomatic complexity's strict academic definition from graph theory is: CC=E-N+P, where E represents the number of edges on a graph, N the number of nodes, and P the number of connected components.
- Another way of calculating cyclomatic complexity is CC = number of decision points + 1. Example 3, for instance, shows the method checkCC with a cyclomatic complexity of 3: two decision points, plus 1 for the method entry point.
- Reflection gives Java classes the ability to look inside dynamically loaded classes. The Java Reflection API provides a mechanism to fetch class information (super classes, method names, and the like) that is used to implement rules like inheritance hierarchy, object coupling, and the maximum number of methods in a class.
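Examples 1 through 3 referenced above are not reproduced in this copy of the article. The following self-contained sketch (all names are hypothetical, not the article's) illustrates the kinds of defects the first three methodologies target: a switch without a default clause, a resource that is not closed on every flow, and a method whose decision points give a cyclomatic complexity of 3.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class StaticAnalysisExamples {

    // Syntactic analysis (Example 1 style): no default clause, so
    // unexpected codes are silently ignored by the switch.
    static String describe(int code) {
        String label = "unknown";
        switch (code) {
            case 1: label = "one"; break;
            case 2: label = "two"; break;
            // a syntactic rule comparing this switch against a predefined
            // pattern would flag the missing default clause here
        }
        return label;
    }

    // Data-flow analysis (Example 2 style): con2 is never closed on the
    // early-return flow, which a data-flow rule would report.
    static void process(String url, boolean skip) throws SQLException {
        Connection con2 = DriverManager.getConnection(url);
        if (skip) {
            return; // leak: con2 is not closed on this flow
        }
        con2.close();
    }

    // Flow-graph analysis (Example 3 style): two decision points (the if
    // and the for) plus 1 for the method entry give CC = 3.
    static int checkCC(int n) {
        int sum = 0;
        if (n < 0) {
            n = -n;
        }
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(describe(3)); // prints "unknown"
        System.out.println(checkCC(4));  // prints 6
    }
}
```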
Static Analysis
In general, large projects spend more than half of their lifecycle effort on code reviews and defect prevention. This effort can be significantly reduced by automating the code review process. Automation also helps in achieving consistency in terms of coding standards and best practices across project teams and organizations. In this article, I refer to coding standards and best practices enforced by static-analysis tools as "rules."
Here, I focus on Java in exploring the different techniques used to achieve code review automation. In the process, I also examine parameters and techniques required for Java static analysis. Finally, I showcase and analyze the quantitative data from real-world projects to underscore the benefits of static analysis.
Where does static analysis fit in the software lifecycle for maximum benefit? Figure 1 illustrates the role of static analysis in the software development lifecycle and the participants involved in the process:
- Developers are responsible for writing code and performing static analysis to identify and fix defects early.
- Architects are responsible for the selection of static-analysis tools and configuration of rules.
- Software Quality Advisors are responsible for defect analysis and prevention.
In Figure 1, static analysis does not undermine the importance of senior programmers and architects: the selection of reliable and appropriate static-analysis tools remains critical, and it requires project managers who understand organizational quality standards and architects who understand architecture and language requirements.
Additionally, the automation of static analysis involves developers who ensure defect detection and remediation in the development stage. The process illustrates how automated static analysis involves programmers whose code has to go through the tool before any testing is done on their programs. This ensures that program bugs are caught even before testing begins.
The process lifecycle clearly illustrates how software quality advisors (SQA) collect defect reports after several rounds of feedback and correction in the source code.
In short, all participants want highly reliable, automated review processes that let project team members concentrate on other important tasks to fulfill functional requirements. Before getting into the details of various techniques used to automate code review, it is important to understand the basic parameters required for automated code review tools from a Java standpoint. Tools should be capable of scanning Java source and Java bytecode in a structured way and should provide APIs to simplify the implementation of rules.
In this article, I look at various techniques commonly used for Java static analysis. These techniques fall into two categories: analysis performed on Java source (Java files) and analysis performed on compiler-generated bytecode (class files).
Scanning Java Source
Tools taking Java source as input first scan through Java source files using parsers, then execute predefined rules on that source code. A thorough understanding of the language is a must for any tool to be able to identify bugs in programs written in that language. There are numerous Java language parsers that adhere to the Java Language Specification (JLS), including JavaCC and ANTLR.
Java parsers and tools also simplify Java source by converting source code into a tree-like structure known as an "Abstract Syntax Tree" (AST). Listing One shows an abstract syntax tree generated for Java code. I used the JavaCC and JJTree parser generators to build a program that recognizes matches to grammar specifications and, in addition, generates an AST of the Java source file. I use the PMD rule designer in Figure 2 to show an AST generated by JavaCC and JJTree from Java source. The parsed and structured source helps you implement rules that involve not only syntactic errors, but also errors requiring data-flow analysis of source code.
The generated AST is scanned for the predefined patterns applied in rules, using parser-generated AST APIs. I illustrate this technique by implementing a rule to catch a common mistake and potential performance issue committed by inexperienced developers: doing string concatenation in a loop instead of using StringBuffer. The StringConcatInForRule class in Listing Two shows the core implementation of this rule. The Visitor design pattern can be used to execute individual rules, with the AST passed as an argument. All the classes imported for AST blocks are generated using the JavaCC parser and JJTree tool. The complete source code of various predefined rules, with details of the Visitor pattern and AST blocks, can be found in the PMD SourceForge repository.
Scanning Java Bytecode
The scanning Java bytecode technique scans through the Java bytecode in class files. This approach utilizes bytecode libraries to access Java bytecode and implements predefined patterns and rules using these libraries. Java bytecode libraries help you access bytecode by providing interfaces for source-level abstraction; you can read a class file without detailed knowledge of bytecode. For rules requiring pattern matching and an understanding of Java bytecode, the javap -c JavaClassName command helps in analyzing the bytecode and subsequently implementing rules using bytecode engineering libraries.
Listing Three is a sample Java program and the bytecode generated using the javap -c (JDK1.4) command. Rules can be implemented using Java bytecode libraries that provide APIs to scan bytecode in a class.
Listing Four presents a rule implementation to catch a design defect using the BCEL Java bytecode library. The rule counts the depth of the inheritance hierarchy for a particular class, meaning the number of classes and interfaces the class inherits from. The program raises an alert whenever it finds a class inherited from more than MAX_INHERETANCE_DEPTH classes and interfaces, which is set to "5" in Listing Four. A high value indicates that the class is quite specialized and tightly coupled to other classes. Most tools provide a configuration mechanism to set this limit per project requirements; for instance, Java Swing classes have a high inheritance depth.
FindClassInheritanceDepth takes a jar file as an argument and scans through all the class files present in that jar file. It gets the hierarchical superclass and interface information for these classes, compares the total hierarchy count (superclasses plus interfaces) with the predefined inheritance depth limit, and raises an alert if the hierarchy exceeds this limit. The Java classpath should be set to include all jar files used to compile the input classes, in order to avoid exceptions. For instance, if the classes zipped in an input jar file use the testing framework JUnit, the JUnit jar file should be added to the classpath before running FindClassInheritanceDepth. I have caught ClassNotFoundException and intentionally added printStackTrace() in the catch block so that the missing class can be identified and added to the classpath.
Guidelines for Tool Selection
There are no hard-and-fast rules for selecting static-analysis tools and techniques. Tool selection primarily depends on the requirements of the project and organization. The reason I have implemented two different varieties of rules in this article is to demonstrate the strength of each technique. Tools using Java source code for analysis are good at checking code conventions, like the Sun Code Conventions. These tools use techniques such as syntactic pattern matching and data-flow analysis to implement these rules. These rules are also implemented in tools using the bytecode approach, but extending or modifying those tools to write your own rules is cumbersome. Tools using the AST technique are more extensible when it comes to new organizational standards, which helps ensure that the tool does not become obsolete.
Bytecode-analysis tools are good for implementing rules related to class-level and class-relationship checks, such as coupling between objects, inheritance depth (Listing Four), and Halstead's software science metrics. The disadvantage of scanning bytecode is that the process misses defects that compiler optimization has already masked, so those defects never get fixed in the original source code.
Tools using the bytecode technique are more useful in large maintenance projects where class files are already available for review, while Java code scanner tools are ideal for developers during the build phase in development projects.
Open-source tools such as PMD and CheckStyle are good examples of Java code scanning using the AST technique. FindBugs and JLint use the bytecode-scanning technique for static analysis. Most of these tools provide plug-ins for Eclipse, JBuilder, and Emacs, thus providing a uniform development environment.
Each approach and tool has some pluses and minuses. For instance, some tools tend to miss instances of incorrect code in their analysis that are caught by other tools. I recommend that projects go with multiple tools, based on their requirements: executing these tools requires little effort, and the benefits justify it.
Static-Analysis Benefits
Qualitative benefits from static analysis include a significant improvement in the overall reliability and quality of the software. For instance, Figure 3 illustrates the defects found in a development project using static-analysis tools. The automation tools primarily catch defects in categories such as coding standards, potential performance issues, potential bugs, and design defects. It is obvious from Figure 3 that more than 50 percent of the defects caught during static analysis are cosmetic (coding standards) in nature. However, don't forget that without static analysis, detecting these cosmetic defects would require a cumbersome manual code review process. It is hard to ignore the rest of the defects that are caught, although they are small in number compared to cosmetic defects. Any potential bugs and performance issues caught before the application is run reduce testing and bug fixing at later stages. As shown in Figure 3, the second highest category of defects falls into the irrelevant category; these are also known as "false positives." Such defects are either not applicable to the particular project or are the result of too much automation. These false positives can be suppressed to an extent by the right tool selection and rules configuration.
Figure 4 shows the comparison of automated static analysis with a manual code review process. I took quality statistical data from five live projects using static-analysis techniques, running at CMM Level 4 with average function points around 700. The same data was collected from projects relying only on manual code review. The comparative study of the collected data implies that projects using static-analysis techniques are distinctly ahead of the other projects: the number of delivered defects is significantly lower compared to projects doing manual code review. Figure 4 also points out productivity gains, as overall defects per person-hour and per line of code have come down.
Conclusion
Making the review process more manageable and predictable by using static analysis improves software development productivity and end-product reliability. However, the biggest benefit of all is the ability to identify defects at the coding stage, thus directly impacting the reliability of software systems.
There are plenty of tools on the market for Java static analysis, both commercial and freely available, open source and proprietary. In this article, I examined four open-source tools as part of another research project and concluded that none could supplement or supersede the others in terms of functionality. It is imperative that static analysis be applied with the right tools, rules configuration, and well-timed usage to be a potent mechanism for overall quantitative, measurable benefits in projects and organizations.
DDJ
Listing One

public class GenerateAST {
    private String printFuncName() {
        System.out.println(funcName + "Generate AST");
    }
}

CompilationUnit
 TypeDeclaration
  ClassDeclaration:(public)
   UnmodifiedClassDeclaration(GenerateAST)
    ClassBody
     ClassBodyDeclaration
      MethodDeclaration:(private)
       ResultType
        Type
         Name:String
       MethodDeclarator(printFuncName)
        FormalParameters
       Block
        BlockStatement
         Statement
          StatementExpression
           PrimaryExpression
            PrimaryPrefix
             Name:System.out.println
            PrimarySuffix
             Arguments
              ArgumentList
               Expression
                AdditiveExpression:+
                 PrimaryExpression
                  PrimaryPrefix
                   Name:funcName
                 PrimaryExpression
                  PrimaryPrefix
                   Literal:"Generate AST"
Listing Two
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import com.ddj.ast.ASTAdditiveExpression;
import com.ddj.ast.ASTBlockStatement;
import com.ddj.ast.ASTExpression;
import com.ddj.ast.ASTForStatement;
import com.ddj.ast.ASTLocalVariableDeclaration;
import com.ddj.ast.ASTMethodDeclaration;
import com.ddj.ast.ASTName;
import com.ddj.ast.ASTVariableDeclarator;
import com.ddj.ast.ASTVariableDeclaratorId;

public class StringConcatInForRule {
    static int lineNo = 0;

    public Object visitRule(ASTForStatement node, Object data) {
        ASTMethodDeclaration method = (ASTMethodDeclaration) node
                .getFirstParentOfType(ASTMethodDeclaration.class);
        List localvars = new ArrayList();
        if (method != null) {
            method.findChildrenOfType(ASTLocalVariableDeclaration.class, localvars, true);
            for (Iterator lvars = localvars.iterator(); lvars.hasNext();) {
                ASTLocalVariableDeclaration localvar =
                        (ASTLocalVariableDeclaration) lvars.next();
                if (localvar.jjtGetChild(0).jjtGetChild(0) instanceof ASTName) {
                    ASTName localVarName = (ASTName) localvar.jjtGetChild(0)
                            .jjtGetChild(0);
                    if (localVarName.getImage().equals("String")) {
                        if (localvar.jjtGetChild(1) instanceof ASTVariableDeclarator) {
                            ASTVariableDeclaratorId varId = (ASTVariableDeclaratorId) localvar
                                    .jjtGetChild(1).jjtGetChild(0);
                            checkForConcat(node, varId.getImage(), data);
                        }
                    }
                }
            }
        }
        return data;
    }

    public boolean checkForConcat(ASTForStatement node, String varName, Object data) {
        boolean closed = false;
        List blocks = new ArrayList();
        int oldLineNo = 0;
        node.findChildrenOfType(ASTBlockStatement.class, blocks, true);
        for (Iterator it2 = blocks.iterator(); it2.hasNext();) {
            ASTBlockStatement block = (ASTBlockStatement) it2.next();
            List exps = new ArrayList();
            block.findChildrenOfType(ASTExpression.class, exps, true);
            for (Iterator it = exps.iterator(); it.hasNext();) {
                ASTExpression exp = (ASTExpression) it.next();
                if (exp.jjtGetChild(0) instanceof ASTAdditiveExpression) {
                    List names = new ArrayList();
                    exp.findChildrenOfType(ASTName.class, names, true);
                    for (Iterator it1 = names.iterator(); it1.hasNext();) {
                        ASTName name = (ASTName) it1.next();
                        if (name.getImage().equals(varName)) {
                            lineNo = block.getBeginLine();
                            if (oldLineNo != lineNo) {
                                System.out.println("String concatenation in loop at line no. "
                                        + lineNo + "; use StringBuffer");
                            }
                            closed = true;
                            oldLineNo = lineNo;
                        }
                    }
                }
            }
        }
        return closed;
    }
}
Listing Three
Compiled from CreateObjects.java
public class CreateObjects extends SimpleObjects {
    public CreateObjects();
    public java.lang.String create();
}

Method CreateObjects()
   0 aload_0
   1 invokespecial #1 <Method SimpleObjects()>
   4 return

Method java.lang.String create()
   0 new #2 <Class java.lang.String>
   3 dup
   4 invokespecial #3 <Method java.lang.String()>
   7 areturn
Listing Four
// FindClassInheritanceDepth.java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

import org.apache.bcel.classfile.ClassParser;
import org.apache.bcel.classfile.JavaClass;

public class FindClassInheritanceDepth {
    private static ZipInputStream zipInputStream;
    private static ClassParser classParser;
    private static FileInputStream fileInputStream;
    private static final int MAX_INHERETANCE_DEPTH = 5;

    static String getFileExtension(String fileName) {
        int lastDot = fileName.lastIndexOf('.');
        return (lastDot >= 0) ? fileName.substring(lastDot) : null;
    }

    static int CheckClassDepth(JavaClass javaClass) {
        int inheritanceDepth = 0;
        try {
            JavaClass[] aJavaClass = javaClass.getSuperClasses();
            JavaClass[] aJavaInterfaces = javaClass.getAllInterfaces();
            inheritanceDepth = aJavaClass.length + aJavaInterfaces.length;
        } catch (ClassNotFoundException cnfe) {
            System.out.println("Base Class: " + javaClass.getClassName());
            cnfe.printStackTrace();
        }
        return inheritanceDepth;
    }

    public static void main(String args[]) {
        if (args.length == 0) {
            System.out.println("Usage: java FindClassInheritanceDepth jarFile");
            return;
        }
        try {
            fileInputStream = new FileInputStream(args[0]);
        } catch (FileNotFoundException fe) {
            fe.printStackTrace();
        }
        zipInputStream = new ZipInputStream(fileInputStream);
        for (;;) {
            try {
                ZipEntry zipEntry = zipInputStream.getNextEntry();
                if (zipEntry == null) {
                    return;
                }
                String fileExtension = getFileExtension(zipEntry.getName());
                if (fileExtension != null) {
                    if (fileExtension.equals(".class")) {
                        classParser = new ClassParser(args[0], zipEntry.getName());
                        JavaClass jClass = classParser.parse();
                        if (CheckClassDepth(jClass) > MAX_INHERETANCE_DEPTH) {
                            System.out.println("Class: " + jClass.getClassName()
                                    + " has exceeded the inheritance hierarchy limit (max 5 is allowed)\n");
                        }
                    }
                }
            } catch (IOException ie) {
                ie.printStackTrace();
            }
        }
    }
}
Vincent has directly contributed to Maven's core, as well as to various Maven plugins. In addition to his work on Maven, he founded the Jakarta Cactus project, a simple testing framework for server-side Java code, and the Cargo project, a J2EE container manipulation framework. Vincent lives and works in Paris, where he is the technical director of Pivolis, a company which specializes in collaborative offshore software development using Agile methodologies. This is Vincent's third book: he is a co-author of JUnit in Action, published by Manning in 2003 (ISBN 1-930-11099-5), and Maven: A Developer's Notebook, published by O'Reilly in 2005 (ISBN 0-596-00750-7).
Jason van Zyl: Jason van Zyl focuses on improving the Software Development Infrastructure associated with medium to large scale projects, which has led to the founding of the Apache Maven project. He continues to work directly on Maven and serves as the Chair of the Apache Maven Project Management Committee.
Brett Porter has been involved in the Apache Maven project since early 2003, discovering Maven while searching for a simpler way to define a common build process across projects. Immediately hooked, Brett became increasingly involved in the project's development, joining the Maven Project Management Committee (PMC) and directing traffic for both the 1.0 and 2.0 major releases. Brett is a co-founder and the Vice President of Engineering at Exist Global. Brett has become involved in a variety of other open source projects, and is a Member of the Apache Software Foundation. He is grateful to work and live in the suburbs of Sydney, Australia.
John Casey became involved in the Maven community in early 2002, when he began looking for something to make his job as Ant "buildmeister" simpler. In 2005, John was elected to the Maven Project Management Committee (PMC). Build management and open source involvement have been common threads throughout his professional career, and today a large part of John's job focus is to continue the advancement of Maven as a premier software development tool. When he's not working on Maven, John enjoys amateur astrophotography, roasting coffee, and working on his house. John lives in Gainesville, Florida with his wife, Emily.
Carlos Sanchez received his Computer Engineering degree in the University of Coruña, Spain, and started early in the open source technology world. He created his own company, CSSC, specializing in open source consulting, supporting both European and American companies to deliver pragmatic solutions for a variety of business problems in areas like e-commerce, financial, telecommunications and, of course, software development. He was invited to become a Maven committer in 2004, and since then his focus in the Maven project has been the development of Maven 2, where he hopes to be able to make the lives of other developers easier. He enjoys cycling and raced competitively when he was younger.
Table of Contents

Preface

1. Introducing Maven
   1.1. Maven Overview
      1.1.1. What is Maven?
      1.1.2. Maven's Origins
      1.1.3. What Does Maven Provide?
   1.2. Maven's Principles
      1.2.1. Convention Over Configuration
         Standard directory layout for projects
         One primary output per project
         Standard naming conventions
      1.2.2. Reuse of Build Logic
      1.2.3. Declarative Execution
         Maven's project object model (POM)
         Maven's build life cycle
      1.2.4. Coherent Organization of Dependencies
         Local Maven repository
         Locating dependency artifacts
   1.3. Maven's Benefits
2. Getting Started with Maven
   2.1. Preparing to Use Maven
   2.2. Creating Your First Maven Project
   2.3. Compiling Application Sources
   2.4. Compiling Test Sources and Running Unit Tests
   2.5. Packaging and Installation to Your Local Repository
   2.6. Handling Classpath Resources
      2.6.1. Handling Test Classpath Resources
      2.6.2. Filtering Classpath Resources
      2.6.3. Preventing Filtering of Binary Resources
   2.7. Using Maven Plugins
   2.8. Summary
3. Creating Applications with Maven
   3.1. Introduction
   3.2. Setting Up an Application Directory Structure
   3.3. Using Project Inheritance
   3.4. Managing Dependencies
   3.5. Using Snapshots
   3.6. Resolving Dependency Conflicts and Using Version Ranges
   3.7. Utilizing the Build Life Cycle
   3.8. Using Profiles
   3.9. Deploying your Application
      3.9.1. Deploying to the File System
      3.9.2. Deploying with SSH2
      3.9.3. Deploying with SFTP
      3.9.4. Deploying with an External SSH
      3.9.5. Deploying with FTP
   3.10. Creating a Web Site for your Application
   3.11. Summary
4. Building J2EE Applications
   4.1. Introduction
   4.2. Introducing the DayTrader Application
   4.3. Organizing the DayTrader Directory Structure
   4.4. Building a Web Services Client Project
   4.5. Building an EJB Project
   4.6. Building an EJB Module With Xdoclet
   4.7. Deploying EJBs
   4.8. Building a Web Application Project
   4.9. Improving Web Development Productivity
   4.10. Deploying Web Applications
   4.11. Building an EAR Project
   4.12. Deploying a J2EE Application
   4.13. Testing J2EE Application
   4.14. Summary
5. Developing Custom Maven Plugins
   5.1. Introduction
   5.2. A Review of Plugin Terminology
   5.3. Bootstrapping into Plugin Development
      5.3.1. The Plugin Framework
         Participation in the build life cycle
         Accessing build information
         The plugin descriptor
      5.3.2. Plugin Development Tools
         Choose your mojo implementation language
      5.3.3. A Note on the Examples in this Chapter
   5.4. Developing Your First Mojo
      5.4.1. BuildInfo Example: Capturing Information with a Java Mojo
         Prerequisite: Building the buildinfo generator project
         Using the archetype plugin to generate a stub plugin project
         The mojo
         The Plugin POM
         Binding to the life cycle
         The output
      5.4.2. BuildInfo Example: Notifying Other Developers with an Ant Mojo
         The Ant target
         The Mojo Metadata file
         Modifying the Plugin POM for Ant Mojos
         Binding the Notify Mojo to the life cycle
   5.5. Advanced Mojo Development
      5.5.1. Gaining Access to Maven APIs
      5.5.2. Accessing Project Dependencies
         Injecting the project dependency set
         Requiring dependency resolution
         BuildInfo example: logging dependency versions
      5.5.3. Accessing Project Sources and Resources
         Adding a source directory to the build
         Adding a resource to the build
         Accessing the source-root list
         Accessing the resource list
         Note on testing source-roots and resources
      5.5.4. Attaching Artifacts for Installation and Deployment
   5.6. Summary
6. Assessing Project Health with Maven
   6.1. What Does Maven Have to Do with Project Health?
   6.2. Adding Reports to the Project Web site
   6.3. Configuration of Reports
   6.4. Separating Developer Reports From User Documentation
   6.5. Choosing Which Reports to Include
   6.6. Creating Reference Material
   6.7. Monitoring and Improving the Health of Your Source Code
   6.8. Monitoring and Improving the Health of Your Tests
   6.9. Monitoring and Improving the Health of Your Dependencies
   6.10. Monitoring and Improving the Health of Your Releases
   6.11. Viewing Overall Project Health
   6.12. Summary
7. Team Collaboration with Maven
   7.1. The Issues Facing Teams
   7.2. How to Set up a Consistent Developer Environment
   7.3. Creating a Shared Repository
   7.4. Creating an Organization POM
   7.5. Continuous Integration with Maestro
   7.6. Team Dependency Management Using Snapshots
   7.7. Creating a Standard Project Archetype
   7.8. Cutting a Release
   7.9. Summary
8. Migrating to Maven
   8.1. Introduction
      8.1.1. Introducing the Spring Framework
   8.2. Where to Begin?
   8.3. Creating POM files
   8.4. Compiling
   8.5. Testing
      8.5.1. Compiling Tests
      8.5.2. Running Tests
   8.6. Other Modules
      8.6.1. Avoiding Duplication
      8.6.2. Referring to Test Classes from Other Modules
      8.6.3. Building Java 5 Classes
      8.6.4. Using Ant Tasks From Inside Maven
      8.6.5. Non-redistributable Jars
      8.6.6. Some Special Cases
   8.7. Restructuring the Code
   8.8. Summary
Appendix A: Resources for Plugin Developers
   A.1. Maven's Life Cycles
      A.1.1. The default Life Cycle
         Life-cycle phases
         Bindings for the jar packaging
         Bindings for the maven-plugin packaging
      A.1.2. The clean Life Cycle
         Life-cycle phases
         Default life-cycle bindings
      A.1.3. The site Life Cycle
         Life-cycle phases
         Default Life Cycle Bindings
   A.2. Mojo Parameter Expressions
      A.2.1. Simple Expressions
      A.2.2. Complex Expression Roots
      A.2.3. The Expression Resolution Algorithm
         Plugin metadata
         Plugin descriptor syntax
   A.3. Java Mojo Metadata: Supported Javadoc Annotations
      Class-level annotations
      Field-level annotations
   A.4. Ant Metadata Syntax
Appendix B: Standard Conventions
   B.1. Standard Directory Structure
   B.2. Maven's Super POM
   B.3. Maven's Default Build Life Cycle
Bibliography
Index
[List of Figures: Figures 1-1 through 8-5]
Preface

Welcome to Better Builds with Maven, an indispensable guide to understand and use Maven 2.0. Maven 2 is a product that offers immediate value to many users and organizations, and as you will soon find, it does not take long to realize these benefits. Maven works equally well for small and large projects, but Maven shines in helping teams operate more effectively by allowing team members to focus on what the stakeholders of a project require - leaving the build infrastructure to Maven!

This guide is intended for Java developers who wish to implement the project management and comprehension capabilities of Maven 2 and use it to make their day-to-day work easier and to get help with the comprehension of any Java-based project. We hope that this book will be useful for Java project managers as well.

This guide is not meant to be an in-depth and comprehensive resource, but rather an introduction, which provides a wide range of topics from understanding Maven's build platform to programming nuances. For first time users, it is recommended that you step through the material in a sequential fashion, even if reading this book will take you longer. For users more familiar with Maven (including Maven 1.x), this guide is written to provide a quick solution for the need at hand.
Organization

The first two chapters of the book are geared toward a new user of Maven 2: they discuss what Maven is and get you started with your first Maven project. From there, Chapter 3 builds on that and shows you how to build a real-world project. Chapter 4 shows you how to build and deploy a J2EE application, and Chapter 5 focuses on developing plugins for Maven. Chapter 6 discusses project monitoring issues and reporting, and Chapter 7 discusses using Maven in a team development environment. Finally, Chapter 8 shows you how to migrate Ant builds to Maven.

Chapter 1, Introducing Maven, goes through the background and philosophy behind Maven and defines what Maven is.

Chapter 2, Getting Started with Maven, gives detailed instructions on creating, compiling and packaging your first project. After reading this second chapter, you should be up and running with Maven.

Chapter 3, Creating Applications with Maven, illustrates Maven's best practices and advanced uses by working on a real-world example application. In this chapter you will learn to set up the directory structure for a typical application and the basics of managing an application's development with Maven.

Chapter 4, Building J2EE Applications, shows how to create the build for a full-fledged J2EE application. In this chapter, you will be revisiting the Proficio application that was developed in Chapter 3, and you will learn how to use Maven to build J2EE archives (JAR, WAR, EAR, EJB, Web Services) and how to use Maven to deploy J2EE archives to a container. At this stage you'll pretty much become an expert Maven user.

Chapter 5, Developing Custom Maven Plugins, focuses on the task of writing custom plugins. It starts by describing fundamentals, including a review of plugin terminology and the basic mechanics of the Maven plugin framework. Then, it discusses the various ways that a plugin can interact with the Maven build environment and explores some examples. At the same time, the chapter covers the tools available to simplify the life of the plugin developer.

Chapter 6, Assessing Project Health with Maven, discusses Maven's monitoring tools, reporting tools, how to use Maven to generate a Web site for your project, and learning more about the health of the project.

Chapter 7, Team Collaboration with Maven, looks at Maven as a set of practices and tools that enable effective team communication and collaboration. These tools aid the team to organize, visualize, and document for reuse the artifacts that result from a software project. You will learn how to use Maven to ensure successful team development.

Chapter 8, Migrating to Maven, explains a migration path from an existing build in Ant to Maven. After reading this chapter, you will be able to take an existing Ant-based build, split it into modular components if needed, compile and test the code, create JARs, and install those JARs in your local repository using Maven. At the same time, you will be able to keep your current build working.
Errata

We have made every effort to ensure that there are no errors in the text or in the code. However, we are human, so occasionally something will come up that none of us caught prior to publication. To send an errata for this book, go to www.exist.com/?q=node/151 and locate the Submit Errata link to notify us of any errors that you might have found.

How to Download the Source Code

All of the source code used in this book is available for download at www.exist.com/?q=node/151. Once at the site, click the chapter link to obtain the source code for the book. We offer source code for download, errata, and technical support from the Exist Global Web site. So if you have Maven 2 installed, then you're ready to go.

Visit the Exist Global Forums for information about the latest happenings in the Apache Maven community, starting with the new Maestro Support Forum and the With Maven Support Forum for additional content on Better Builds with Maven; Maestro users will find additional content here for them. The forums also cover Q for Eclipse, Eclipse Kepler, and other activities at Exist Global.
1. Introducing Maven

"Everything should be made as simple as possible, but not any simpler."

- Albert Einstein
1.1. Maven Overview

Maven provides a comprehensive approach to managing software projects. From compilation, to distribution, to documentation, to team collaboration, Maven provides the necessary abstractions that encourage reuse and take much of the work out of project builds. In addition to solving straightforward, first-order problems such as simplifying builds, documentation, distribution, and the deployment process, Maven also brings with it some compelling second-order benefits: it simultaneously reduces your duplication effort and leads to higher code quality.

1.1.1. What is Maven?

Maven is a project management framework, but this doesn't tell you much about Maven. It's the most obvious three-word definition of Maven the authors could come up with, but the term project management framework is a meaningless abstraction that doesn't do justice to the richness and complexity of Maven. Too often technologists rely on abstract phrases to capture complex topics in three or four words, and with repetition, phrases such as project management and enterprise software start to lose concrete meaning. When someone wants to know what Maven is, they expect a short, sound-bite answer: "Well, it is a build tool or a scripting framework." Maven is more than three boring, uninspiring words. It is a combination of ideas, standards, and software, and it is impossible to distill the definition of Maven to simply digested sound-bites. Revolutionary ideas are often difficult to convey with words. If you are reading this introduction just to find something to tell your manager1, you can stop reading now and skip to Chapter 2. Don't worry: if you are interested in a fuller, richer definition of Maven, read this introduction; it will prime you for the concepts that are to follow.

Perhaps you picked up this book because someone told you that Maven is a build tool, and you may have been expecting a more straightforward answer. Maven can be the build tool you need, and many developers who have approached Maven as another build tool have come away with a finely tuned build system. While you are free to use Maven as "just another build tool", to view it in such limited terms is akin to saying that a web browser is nothing more than a tool that reads hypertext.

So, what exactly is Maven? Maven encompasses a set of build standards, an artifact repository model, and a software engine that manages and describes projects. It defines a standard life cycle for building, testing, and deploying project artifacts. It provides a framework that enables easy reuse of common build logic for all projects following Maven's standards. The Maven project at the Apache Software Foundation is an open source community which produces software tools that understand a common declarative Project Object Model (POM). This book focuses on the core tool produced by the Maven project, Maven 2, a framework that greatly simplifies the process of managing a software project, and the technologies related to the Maven project, which are beginning to have a transformative effect on the Java community.

1 You can tell your manager: "Maven is a declarative project management tool that decreases your overall time to market by effectively leveraging cross-project intelligence."
So, to answer the original question: Maven is many things to many people. It is a set of standards and an approach to project development, as much as it is a piece of software. Maven is a way of approaching a set of software as a collection of highly-interdependent components, which can be described in a common format. It is the next step in the evolution of how individuals and organizations collaborate to create software systems. Once you get up to speed on the fundamentals of Maven, you will wonder how you ever developed without it.

Many people come to Maven familiar with Ant, so it's a natural association, but Maven is an entirely different creature from Ant. Maven is not just a build tool, and not necessarily a replacement for Ant. Whereas Ant provides a toolbox for scripting builds, Maven provides standards and a set of patterns in order to facilitate project management through reusable, common build strategies.

1.1.2. Maven's Origins

Maven was borne of the practical desire to make several projects at the Apache Software Foundation (ASF) work in the same, predictable way. Prior to Maven, every project at the ASF had a different approach to compilation, distribution, and Web site generation. The build process for Tomcat was different than the build process for Struts, and the Turbine developers had a different site generation process than the Jakarta Commons developers. While there were some common themes across the separate builds, each community was creating its own build systems and there was no reuse of build logic across projects. The ASF was effectively a series of isolated islands of innovation. This lack of a common approach to building software meant that every new project tended to copy and paste another project's build system. Ultimately, this copy and paste approach to build reuse reached a critical tipping point, at which the amount of work required to maintain the collection of build systems was distracting from the central task of developing high-quality software. Instead of focusing on creating good component libraries or MVC frameworks, developers were building yet another build system. In addition, for a project with a difficult build system, the barrier to entry was extremely high; projects such as Jakarta Taglibs had (and continue to have) a tough time attracting developer interest because it could take an hour to configure everything in just the right way.

Maven entered the scene by way of the Turbine project, and it immediately sparked interest as a sort of Rosetta Stone for software project management. Developers within the Turbine project could freely move between subcomponents, knowing clearly how they all worked just by understanding how one of the components worked. Developers at the ASF stopped figuring out creative ways to compile, test, and package software, and instead started focusing on component development. If you followed the Maven Build Life Cycle, your project gained a build by default. The same standards extended to testing, generating documentation, generating metrics and reports, and deploying. Once developers spent time learning how one project was built, they did not have to go through the process again when they moved on to the next project.

Soon after the creation of Maven, other projects, such as Jakarta Commons, and the Codehaus community started to adopt Maven 1 as a foundation for project management. Using Maven has made it easier to add external dependencies and publish your own project components. Given the highly inter-dependent nature of projects in open source, Maven's standards and centralized repository model offer an easy-to-use naming system for projects. Maven's standard formats enable a sort of "Semantic Web" for programming projects: as more and more projects and products adopt Maven as a foundation for project management, it becomes easier to understand the relationships between projects and to establish a system that navigates and reports on these relationships.
1.1.3. What Does Maven Provide?

Maven provides a useful abstraction for building software in the same way an automobile provides an abstraction for driving. When you purchase a new car, the car provides a known interface: if you've learned how to drive a Jeep, you can easily drive a Camry. Maven takes a similar approach to software projects: if you can build one Maven project you can build them all, and if you can apply a testing plugin to one project, you can apply it to all projects.

Maven provides you with:

• A comprehensive model for software projects
• Tools that interact with this declarative model

Maven provides a comprehensive model that can be applied to all software projects. The model uses a common project "language", and the software tool (named Maven) is just a supporting element within this model. An individual Maven project's structure and contents are declared in a Project Object Model (POM), which forms the basis of the entire Maven system. You describe your project using Maven's model, and you gain access to expertise and best-practices of an entire industry.

The key value to developers from Maven is that it takes a declarative approach rather than requiring developers to create the build process themselves, referred to as "building the build". Maven allows developers to declare life-cycle goals and project dependencies that rely on Maven's default structures and plugin capabilities, in order to perform the build. Much of the project management and build orchestration (compile, test, assemble, install) is effectively delegated to the POM and the appropriate plugins. Developers can build any given project without having to understand how the individual plugins work (scripts in the Ant world).

Maven's ability to standardize locations for source files, documentation, and output, to provide a common layout for project documentation, and to retrieve project dependencies from a shared storage area makes the building process much less time consuming, and much more transparent. Projects and systems that use Maven's standard, declarative build approach tend to be more transparent, more reusable, more maintainable, and easier to comprehend. However, if your project currently relies on an existing Ant build script that must be maintained, existing Ant scripts (or Make files) can be complementary to Maven and used through Maven's plugin architecture. Plugins allow developers to call existing Ant scripts and Make files and incorporate those existing functions into the Maven build life cycle.
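As a sketch of what this declarative model looks like in practice, a minimal Maven 2 POM for a JAR project is just a few lines (the com.example coordinates below are placeholder values for illustration, not taken from this book):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <!-- The version of the POM model this file conforms to -->
  <modelVersion>4.0.0</modelVersion>
  <!-- Placeholder coordinates that identify the project in a repository -->
  <groupId>com.example</groupId>
  <artifactId>sample-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- The single primary output this project produces -->
  <packaging>jar</packaging>
</project>
```

With only this file and the standard directory layout in place, running mvn package is enough to compile, test, and package the project, because everything else comes from Maven's defaults.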
Organizations and projects that adopt Maven benefit from:

• Coherence - Maven allows organizations to standardize on a set of best practices. The definition of this term from the American Heritage dictionary captures the meaning perfectly: "Marked by an orderly, logical, and aesthetically consistent relation of parts."
• Reusability - Maven is built upon a foundation of reuse. When you adopt Maven you are effectively reusing the best practices of an entire industry.
• Agility - Maven lowers the barrier to reuse not only for build logic, but also for software components. Maven makes it easier to create a component and then integrate it into a multi-project build. Developers can jump between different projects without the steep learning curve that accompanies custom, home-grown build systems.
• Maintainability - Organizations that adopt Maven can stop "building the build", and focus on building the application. Maven projects are more maintainable because they follow a common, publicly-defined model. Because Maven projects adhere to a standard model, they are less opaque.

Without these advantages, it is improbable that multiple individuals can work productively together on a project. Without visibility it is unlikely one individual will know what another has accomplished, and it is likely that useful code will not be reused. When everyone is constantly searching to find all the different bits and pieces that make up a project, there is little chance anyone is going to comprehend the project as a whole. As a result you end up with a lack of shared knowledge, along with a commensurate degree of frustration among team members. This is a natural effect when processes don't work the same way for everyone. Further, as mentioned earlier, when code is not reused it is very hard to create a maintainable system.

1.2. Maven's Principles

According to Christopher Alexander, "patterns help create a shared language for communicating insight and experience about problems and their solutions". The following Maven principles were inspired by Christopher Alexander's idea of creating a shared language:

• Convention over configuration
• Declarative execution
• Reuse of build logic
• Coherent organization of dependencies

Maven provides a shared language for software development projects. Each of the principles above enables developers to describe their projects at a higher level of abstraction, allowing more effective communication and freeing team members to get on with the important work of creating value at the application level. Maven provides a structured build life cycle so that problems can be approached in terms of this structure. This chapter will examine each of these principles in detail, and you will see these principles in action in the following chapter, when you create your first Maven project.
1.2.1. Convention Over Configuration

One of the central tenets of Maven is to provide sensible default strategies for the most common tasks, so that you don't have to think about the mundane details. You don't want to spend time fiddling with building, generating documentation, or deploying. All of these things should simply work, and this is what Maven provides. With Maven you slot the various pieces in where it asks, and Maven will take care of almost all of the mundane aspects for you. Using standard conventions saves time, makes it easier to communicate to others, and allows you to create value in your applications faster with less effort.

This "convention over configuration" tenet has been popularized by the Ruby on Rails (ROR) community and specifically encouraged by ROR's creator David Heinemeier Hansson, who summarizes the notion as follows:

"Rails is opinionated software. It eschews placing the old ideals of software in a primary position. One of those ideals is flexibility, the notion that we should try to accommodate as many approaches as possible, that we shouldn't pass judgment on one form of development over another. Well, Rails does, and I believe that's why it works. One characteristic of opinionated software is the notion of 'convention over configuration'. If you follow basic conventions, such as classes are singular and tables are plural (a person class relates to a people table), you're rewarded by not having to configure that link. The class automatically knows which table to use for persistence. We have a ton of examples like that, which all add up to make a huge difference in daily use. If you are happy to work along the golden path that I've embedded in Rails, you gain an immense reward in terms of productivity that allows you to do more, sooner, and better at the application level."2

David Heinemeier Hansson articulates very well what Maven has aimed to accomplish since its inception (note that David Heinemeier Hansson in no way endorses the use of Maven; he probably doesn't even know what Maven is and wouldn't like it if he did, because it's not written in Ruby yet!): that is, that you shouldn't need to spend a lot of time getting your development infrastructure functioning. With Maven, you trade flexibility at the infrastructure level to gain flexibility at the application level. This is not to say that you can't override Maven's defaults, but the use of sensible default strategies is highly encouraged, so stray from these defaults only when absolutely necessary.

2 O'Reilly interview with DHH
Standard directory layout for projects

The first convention used by Maven is a standard directory layout for project sources, project resources, configuration files, generated output, and documentation. These components are generally referred to as project content. Maven encourages a common arrangement of project content so that once you are familiar with these standard, default locations, you will be able to navigate within any Maven project you build in the future. You will be able to look at other projects and immediately understand the project layout, and you will make it easier to communicate about your project. It is a very simple idea, but it can save you a lot of time: if this saves you 30 minutes for each new project you look at, even if you only look at a few new projects a year, that's time better spent on your application.

First time users often complain about Maven forcing you to do things a certain way, and the formalization of the directory structure is the source of most of the complaints. You can override any of Maven's defaults to create a directory layout of your choosing, but, when you do this, you need to ask yourself if the extra configuration that comes with customization is really worth it. If you have no choice in the matter due to organizational policy or integration issues with existing systems, you might be forced to use a directory structure that diverges from Maven's defaults. In this scenario, you will be able to adapt your project to your customized layout at a cost: increased complexity of your project's POM. If you do have a choice, then why not harness the collective knowledge that has built up as a result of using this convention? Follow the standard directory layout. You will see clear examples of the standard directory structure in the next chapter, but you can also take a look in Appendix B for a full listing of the standard conventions.

One primary output per project

The second convention used by Maven is the concept that a single Maven project produces only one primary output. To illustrate, consider a set of sources for a client/server-based application that contains client code, server code, and shared utility code. You could produce a single JAR file which includes all the compiled classes, but Maven would encourage you to have three separate projects: a project for the client portion of the application, a project for the server portion of the application, and a project for the shared utility code portion. The separation of concerns (SoC) principle states that a given problem involves different kinds of concerns, which should be identified and separated to cope with complexity and to achieve the required engineering quality factors such as adaptability, maintainability, extendibility and reusability. In this case, the code contained in each project has a different concern (role to play), and they should be separated. If you have placed all the sources together in a single project, the boundaries between our three separate concerns can easily become blurred and the ability to reuse the utility code could prove to be difficult. Having the utility code in a separate project (a separate JAR file) makes it much easier to reuse. Maven pushes you to think clearly about the separation of concerns when setting up your projects, because modularity leads to reuse.
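For reference, the standard layout that the text refers to places sources, resources, and tests in well-known locations. This sketch shows the most common directories (the project name is a placeholder; Appendix B lists the full set of conventions):

```
sample-app/
    pom.xml                  <-- the Project Object Model
    src/main/java/           <-- application sources
    src/main/resources/      <-- application resources (config files, etc.)
    src/test/java/           <-- test sources
    src/test/resources/      <-- test resources
    target/                  <-- generated output (classes, the JAR, reports)
```

Because every Maven project uses these same locations by default, none of them need to be declared in the POM.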
Better Builds with Maven

Standard naming conventions

The third convention in Maven, a set of conventions really, is the use of a standard naming convention for directories and for the primary output of each project. The naming conventions provide clarity and immediate comprehension. This is important if there are multiple sub-projects involved in a build process, because the naming convention keeps each one separate in a logical, easily comprehensible manner. A simple example of a standard naming convention might be commons-logging-1.2.jar. It is immediately obvious that this is version 1.2 of Commons Logging. If the JAR were named commonslogging.jar, you would not really have any idea of the version of Commons Logging and, in a lot of cases, you would not even be able to get the information from the JAR's manifest. Systems that cannot cope with information-rich artifacts like commons-logging-1.2.jar are inherently flawed because eventually, when something is misplaced, you'll track it down to a ClassNotFound exception, which results because the wrong version of a JAR file was used. It's happened to all of us, but with Maven, it doesn't have to happen again. The intent behind the standard naming conventions employed by Maven is that it lets you understand exactly what you are looking at by, well, looking at it. It doesn't make much sense to exclude pertinent information when you can have it at hand to use.

Reuse of Build Logic

As you have already learned, Maven promotes reuse by encouraging a separation of concerns (SoC). Maven puts this SoC principle into practice by encapsulating build logic into coherent modules called plugins. In Maven there is a plugin for compiling source code, a plugin for running tests, a plugin for creating JARs, a plugin for creating Javadocs, and many other functions. Even from this short list of examples you can see that a plugin in Maven has a very specific role to play in the grand scheme of things. Plugins are the key building blocks for everything in Maven, and one important concept to keep in mind is that everything accomplished in Maven is the result of a plugin executing. Maven can be thought of as a framework that coordinates the execution of plugins in a well-defined way. The execution of Maven's plugins is coordinated by Maven's build life cycle in a declarative fashion, with instructions from Maven's POM. This is illustrated in the Coherent Organization of Dependencies section, later in this chapter.

Maven's project object model (POM)

Maven is project-centric by design, and the POM is Maven's description of a single project. Without the POM, Maven is useless: the POM is Maven's currency.

Declarative Execution

Everything in Maven is driven in a declarative fashion using Maven's Project Object Model (POM) and, specifically, the plugin configurations contained in the POM. It is the POM that drives execution in Maven, and this approach can be described as model-driven or declarative execution.
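The naming convention is regular enough that tooling (and people) can rely on it mechanically. As a rough illustration only, and not Maven's actual implementation, a helper might split an artifact file name on its last hyphen. Note the stated simplification: this sketch assumes the version itself contains no hyphen, so it would mishandle versions such as 1.0-SNAPSHOT.

```java
public class ArtifactName {
    // Illustrative helper: splits "<artifactId>-<version>.jar" into its two
    // parts. Simplification: assumes the version contains no hyphen.
    static String[] parse(String fileName) {
        String base = fileName.substring(0, fileName.length() - ".jar".length());
        int dash = base.lastIndexOf('-');
        return new String[] { base.substring(0, dash), base.substring(dash + 1) };
    }

    public static void main(String[] args) {
        String[] parts = parse("commons-logging-1.2.jar");
        System.out.println(parts[0] + " / " + parts[1]);
    }
}
```

The point is not the code itself, but that an information-rich name makes the artifactId and version recoverable at a glance, where commonslogging.jar would tell you nothing.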
The POM below is an example of what you could use to build and test a project. The POM is an XML document and looks like the following (very) simplified example:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

This POM will allow you to compile, test, and generate basic documentation. You, being the observant reader, will ask "How is this possible using a 15-line file?" The answer lies in Maven's implicit use of its Super POM. In Java, all objects have the implicit parent of java.lang.Object. Likewise, in Maven all POMs have an implicit parent in Maven's Super POM, which is the analog of the Java language's java.lang.Object class. Maven's Super POM carries with it all the default conventions that Maven encourages. The key feature to remember is that the Super POM contains important default information, so you don't have to repeat this information in the POMs you create. The Super POM can be rather intimidating at first glance, so if you wish to find out more about it you can refer to Appendix B.

The POM shown previously is a very simple POM, but it still displays the key elements that every POM contains. The POM contains every important piece of information about your project:

• project - This is the top-level element in all Maven pom.xml files.
• modelVersion - This required element indicates the version of the object model that the POM is using. The version of the model itself changes very infrequently, but it is mandatory in order to ensure stability when Maven introduces new features or other model changes.
• groupId - This element indicates the unique identifier of the organization or group that created the project. The groupId is one of the key identifiers of a project and is typically based on the fully qualified domain name of your organization. For example, org.apache.maven.plugins is the designated groupId for all Maven plugins.
• artifactId - This element indicates the unique base name of the primary artifact being generated by this project. A typical artifact produced by Maven would have the form <artifactId>-<version>.<extension> (for example, myapp-1.0.jar). Additional artifacts such as source bundles also use the artifactId as part of their file name.
• packaging - This element indicates the package type to be used by this artifact (JAR, WAR, EAR, etc.). This not only means that the artifact produced is a JAR, WAR, or EAR, but also indicates a specific life cycle to use as part of the build process. The life cycle is a topic dealt with later in this chapter; for now, just keep in mind that the selected packaging of a project plays a part in customizing the build life cycle. The default value for the packaging element is jar, so you do not have to specify this in most cases.
• version - This element indicates the version of the artifact generated by the project. Maven goes a long way to help you with version management, and you will often see the SNAPSHOT designator in a version, which indicates that a project is in a state of development.
• name - This element indicates the display name used for the project. This is often used in Maven's generated documentation, and during the build process for your project or other projects that use it as a dependency.
• url - This element indicates where the project's site can be found.
• description - This element provides a basic description of your project.

For a complete reference of the elements available for use in the POM, please refer to the POM reference at http://maven.apache.org/maven-model/maven.html.

Maven's build life cycle

Software projects generally follow similar, well-trodden build paths: preparation, compilation, testing, packaging, installation, and so on. The path that Maven moves along to accommodate an infinite variety of projects is called the build life cycle. In Maven, the build life cycle consists of a series of phases, where each phase can perform one or more actions, or goals, related to that phase. For example, the compile phase invokes a certain set of goals to compile a set of classes. In Maven, you do day-to-day work by invoking particular phases in this standard build life cycle: you tell Maven that you want to compile, or test, or package, or install. The actions that have to be performed are stated at a high level, and Maven deals with the details behind the scenes. It is important to note that each phase in the life cycle will be executed up to and including the phase you specify. For example, if you tell Maven to compile, Maven will automatically execute the phases that precede compile (validate, initialize, generate-sources, process-sources, and generate-resources) before running the compile phase itself.

The standard build life cycle consists of many phases, and these can be thought of as extension points. When you need to add some functionality to the build life cycle, you do so with a plugin. Maven plugins provide reusable build logic that can be slotted into the standard build life cycle. Any time you need to customize the way your project builds, you either use an existing plugin or create a custom plugin for the task at hand. See Chapter 2, Using Maven Plugins, and Chapter 5, Developing Custom Maven Plugins, for examples and details on how to customize the Maven build.
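To make the "slotted into the life cycle" idea concrete, a POM can bind a plugin goal to a named phase. The fragment below is a sketch, not something this chapter configures for the my-app project; it uses the real maven-antrun-plugin purely as an illustration of the execution/phase/goal structure:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <executions>
        <execution>
          <!-- run this plugin's "run" goal during the generate-sources phase -->
          <phase>generate-sources</phase>
          <goals>
            <goal>run</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

Because the phase is named declaratively, Maven (not you) decides when the goal executes relative to the rest of the build.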
Coherent Organization of Dependencies

We are now going to delve into how Maven resolves dependencies and discuss the intimately connected concepts of dependencies, artifacts, and repositories. Dependency Management is one of the most powerful features in Maven. If you recall, our example POM has a single dependency listed, for JUnit:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

This POM states that your project has a dependency on JUnit, which is straightforward, but you may be asking yourself "Where does that dependency come from?" and "Where is the JAR?" The answers to those questions are not readily apparent without some explanation of how Maven's dependencies, artifacts, and repositories work. In the POM you are not specifically telling Maven where the dependencies are physically located; you are simply telling Maven what a specific project expects. With Maven, you stop focusing on a collection of JAR files; instead, you deal with logical dependencies. Your project doesn't require junit-3.8.1.jar; instead, it depends on version 3.8.1 of the junit artifact produced by the junit group.

A dependency is a reference to a specific artifact that resides in a repository, and is uniquely identified by the following identifiers: groupId, artifactId, and version. In "Maven-speak" an artifact is a specific piece of software. In Java, the most common artifact is a JAR file, but a Java artifact could also be a WAR, SAR, or EAR file. In order for Maven to attempt to satisfy a dependency, Maven needs to know what repository to search as well as the dependency's coordinates. When a dependency is declared within the context of your project, Maven tries to satisfy that dependency by looking in all of the remote repositories to which it has access, in order to find the artifacts that most closely match the dependency request. If a matching artifact is located, Maven transports it from that remote repository to your local repository for project use. At a basic level, we can describe the process of dependency management as Maven reaching out into the world, grabbing a dependency, and providing this dependency to your software project. There is more going on behind the scenes, but the key concept is that Maven dependencies are declarative: Maven takes the dependency coordinates you provide in the POM, and it supplies these coordinates to its own internal dependency mechanisms.
Maven has two types of repositories: local and remote. Maven usually interacts with your local repository, but when a declared dependency is not present in your local repository, Maven searches all the remote repositories to which it has access to find what's missing. Read the following sections for specific details regarding where Maven searches for these dependencies.

Local Maven repository

When you install and run Maven for the first time, it will create your local repository and populate it with artifacts as a result of dependency requests. By default, Maven creates your local repository in <user_home>/.m2/repository. You must have a local repository in order for Maven to work. The following folder structure shows the layout of a local Maven repository that has a few locally installed dependency artifacts, such as junit-3.8.1.jar.

Figure 1-1: Artifact movement from remote to local repository

In theory, a repository is just an abstract storage mechanism, but in practice the repository is a directory structure in your file system. So you understand how the layout works, take a closer look at one of the artifacts that appeared in your local repository. We'll stick with our JUnit example and examine the junit-3.8.1.jar artifact that is now in your local repository. Above you can see the directory structure that is created when the JUnit dependency is resolved. On the next page is the general pattern used to create the repository layout:
Figure 1-2: General pattern for the repository layout

If the groupId is a fully qualified domain name (something Maven encourages) such as x.y.z, then you will end up with a directory structure like the following:

Figure 1-3: Sample directory structure

In the first directory listing you can see that Maven artifacts are stored in a directory structure that corresponds to Maven's groupId of org.apache.maven.

Locating dependency artifacts

When satisfying dependencies, Maven attempts to locate a dependency's artifact using the following process: first, Maven will generate a path to the artifact in your local repository. For example, Maven will attempt to find the artifact with a groupId of "junit", artifactId of "junit", and a version of "3.8.1" in <user_home>/.m2/repository/junit/junit/3.8.1/junit-3.8.1.jar. If this file is not present, Maven will fetch it from a remote repository.
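The mapping from coordinates to a repository path is regular enough to express in a few lines. The following sketch is an illustration of the layout pattern described above, not Maven's internal code:

```java
public class RepoLayout {
    // Illustrative only: maps Maven coordinates to the relative path used
    // by the local repository layout (groupId dots become directories).
    static String path(String groupId, String artifactId, String version,
                       String ext) {
        return groupId.replace('.', '/') + "/" + artifactId + "/" + version
                + "/" + artifactId + "-" + version + "." + ext;
    }

    public static void main(String[] args) {
        // Matches the JUnit example above:
        System.out.println(path("junit", "junit", "3.8.1", "jar"));
    }
}
```

For the JUnit example this produces junit/junit/3.8.1/junit-3.8.1.jar, which, appended to <user_home>/.m2/repository/, is exactly the local path Maven checks first.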
By default, Maven will attempt to fetch an artifact from the central Maven repository at http://www.ibiblio.org/maven2. If your project's POM contains more than one remote repository, Maven will attempt to download an artifact from each remote repository in the order defined in your POM. Once the dependency is satisfied, the artifact is downloaded and installed in your local repository. From this point forward, every project with a POM that references the same dependency will use this single copy installed in your local repository. In other words, all projects referencing this dependency share a single copy of this JAR. Your local repository is one-stop shopping for all artifacts that you need, regardless of how many projects you are building; each project relies upon a specific artifact via the dependencies listed in its POM.

Before Maven, the common pattern in most projects was to store JAR files in a project's subdirectory. If you were coding a web application, you would check the 10-20 JAR files upon which your project relies into a lib directory, and you would add these dependencies to your classpath. While this approach works for a few projects, it doesn't scale easily to support an application with a great number of small components, and it is incompatible with the concept of small, modular project arrangements. Storing artifacts in your SCM along with your project may seem appealing, but dependencies are not your project's code, and they shouldn't be versioned in an SCM. With Maven, there is no need to store the various Spring JAR files in your project, and you don't store a copy of junit-3.8.1.jar for each project that needs it. Instead, declare your dependencies and let Maven take care of details like compilation and testing classpaths. For example, if your project has ten web applications which all depend on version 1.2.6 of the Spring Framework, then instead of adding the Spring 2.0 JARs to every project, you simply change your dependency declarations, and it is a trivial process to upgrade all ten web applications to Spring 2.0.

3. Alternatively, artifacts can be downloaded from a secure, internal Maven repository, which can be managed by Exist Global Maestro. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. For more information on Maestro please see http://www.exist.com/. If you are using the Maestro Developer Client from Exist Global, the Maven Super POM sets the central repository to mergere.com/maven2.
4. The history of how Maven communicates with the central repository has changed over time, based on the Maven client release version. From Maven version 2.0 through 2.0.6 there have been three central repository URLs, and a fourth URL is under discussion at this time. The following repositories have been the central/default repository in the Maven Super POM:
1. http://www.ibiblio.org/maven2/
2. http://www.ibiblio.org/pub/mirrors/maven2/
3. http://repo1.maven.org/maven2/
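If you need Maven to consult an additional remote repository beyond central, you can declare it in the POM. The following fragment is a sketch; the id, name, and URL are placeholders for illustration, not a real repository from this book:

```xml
<repositories>
  <repository>
    <id>internal-repo</id>
    <name>Example internal repository</name>
    <url>http://repo.example.com/maven2</url>
  </repository>
</repositories>
```

With more than one repository declared, Maven tries each one in the order defined in your POM until the requested artifact is found, then caches it in your local repository.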
Maven's Benefits

A successful technology takes away burden, rather than imposing it. A useful technology just works: you don't have to jump through hoops trying to get it to work. Like the engine in your car or the processor in your laptop, it shields you from complexity and allows you to focus on your specific task. You don't have to worry about whether or not it's going to work; it should rarely, if ever, be a part of your thought process. Maven provides such a technology for project management and, in doing so, simplifies the process of development. To summarize: Maven is a set of standards, Maven is a repository, Maven is a framework, and Maven is software. Maven is also a vibrant, active open-source community that produces software focused on project management. Using Maven is more than just downloading another JAR file and a set of scripts; it is the adoption of a build life-cycle process that allows you to take your software development to the next level.

The terrible temptation to tweak should be resisted unless the payoff is really noticeable. - Jon Bentley and Doug McIlroy
Preparing to Use Maven

In this chapter, it is assumed that you are a first-time Maven user and have already set up Maven on your local system. If you have not set up Maven yet, then please refer to Maven's Download and Installation Instructions before continuing. Depending on where your machine is located, it may be necessary to make a few more preparations for Maven to function correctly. In its optimal mode, Maven requires network access, so if you are behind a firewall, then you will have to set up Maven to understand that. To do this, create a <user_home>/.m2/settings.xml file with the following content:

<settings>
  <proxies>
    <proxy>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.mycompany.com</host>
      <port>8080</port>
      <username>your-username</username>
      <password>your-password</password>
    </proxy>
  </proxies>
</settings>

If Maven is already in use at your workplace, ask your administrator if there is an internal Maven proxy. If there is an active Maven proxy running, then note the URL and let Maven know you will be using a proxy. Create a <user_home>/.m2/settings.xml file with the following content:

<settings>
  <mirrors>
    <mirror>
      <id>maven.mycompany.com</id>
      <name>My Company's Maven Proxy</name>
      <url>http://maven.mycompany.com/maven2</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>

The settings.xml file will be explained in more detail in the following chapter, and you can refer to the Maven Web site for the complete details on the settings.xml file, so for now simply assume that the above settings will work. Now you can perform the following basic check to ensure Maven is working correctly:

mvn -version

If Maven's version is displayed, then you should be all set to create your first Maven project.
Creating Your First Maven Project

To create your first project, you will use Maven's Archetype mechanism. An archetype is defined as an original pattern or model from which all other things of the same kind are made. In Maven, an archetype is a template of a project, which is combined with some user input to produce a fully-functional Maven project. This chapter will show you how the archetype mechanism works, but if you would like more information about archetypes, please refer to the Introduction to Archetypes. To create the Quick Start Maven project, execute the following:

C:\mvnbook> mvn archetype:create -DgroupId=com.mycompany.app \
    -DartifactId=my-app

You will notice a few things happened when you executed this command. First, you will notice that a directory named my-app has been created for the new project, and this directory contains your pom.xml file, which looks like the following:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

At the top level of every project is your pom.xml file. Whenever you see a directory structure which contains a pom.xml file, you know you are dealing with a Maven project. After the archetype generation has completed, you will notice that the following directory structure has been created, and that it in fact adheres to Maven's standard directory layout discussed in Chapter 1.
Figure 2-1: Directory structure after archetype generation

The src directory contains all of the inputs required for building, documenting, testing, and deploying the project (source files, configuration files, various descriptors such as assembly descriptors, the site, and so on). The <my-app> directory is the base directory, ${basedir}, for the my-app project. In this first stage you have Java source files only, but later in the chapter you will see how the standard directory layout is employed for other project content.

Compiling Application Sources

As mentioned in the introduction, with Maven you tell Maven what you need, in a declarative way, in order to accomplish the desired task. Now that you have a POM, some application sources, and some test sources, you are ready to build your project. Before you issue the command to compile the application sources, note that this one simple command encompasses Maven's four foundational principles:

• Convention over configuration
• Reuse of build logic
• Declarative execution
• Coherent organization of dependencies

These principles are ingrained in all aspects of Maven, but the following analysis of the simple compile command shows you the four principles in action and makes clear their fundamental importance in simplifying the development of a project. Change to the <my-app> directory. Then, compile your application sources using the following command:

C:\mvnbook\my-app> mvn compile
After executing this command you should see output similar to the following:

[INFO] ----------------------------------------------------------------------
[INFO] Building Maven Quick Start Archetype
[INFO]    task-segment: [compile]
[INFO] ----------------------------------------------------------------------
[INFO] artifact org.apache.maven.plugins:maven-resources-plugin: checking for updates from central
...
[INFO] artifact org.apache.maven.plugins:maven-compiler-plugin: checking for updates from central
...
[INFO] [resources:resources]
...

How did Maven know where to look for sources in order to compile them? And how did Maven know where to put the compiled classes? This is where Maven's principle of "convention over configuration" comes into play. By default, application sources are placed in src/main/java. This default value (though not visible in the POM above) was, in fact, inherited from the Super POM. Even the simplest of POMs knows the default location for application sources, which means you don't have to state this location at all in any of your POMs if you use the default. The same holds true for the location of the compiled classes, which, by default, is target/classes. You can, of course, override these default locations, but there is very little reason to do so.

So, now you know how Maven finds application sources. The next question is, what actually compiled them? This is where Maven's second principle of "reusable build logic" comes into play. The standard compiler plugin, along with its default configuration, is the tool used to compile your application sources, and the same build logic encapsulated in the compiler plugin will be executed consistently across any number of projects. Although you now know that the compiler plugin was used, how was Maven able to decide to use the compiler plugin in the first place? You might be guessing that there is some background process that maps a simple command to a particular plugin; in fact, there is a form of mapping, and it is called Maven's default build life cycle. And how was Maven able to retrieve the compiler plugin? After all, if you poke around the standard Maven installation, you won't find the compiler plugin, since it is not shipped with the Maven distribution. Instead, Maven downloads plugins as they are needed.
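Should you ever need to override the conventional source and output locations, the build section of the POM is where that configuration lives. This fragment is a sketch of such an override; the directory names are examples, not recommendations:

```xml
<build>
  <!-- override src/main/java and target/classes, at the cost of a
       less conventional, harder-to-communicate layout -->
  <sourceDirectory>src/java</sourceDirectory>
  <outputDirectory>target/my-classes</outputDirectory>
</build>
```

As the chapter argues, the extra POM configuration is usually not worth it unless organizational policy forces a non-standard layout on you.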
The first time you execute this (or any other) command, Maven will download all the plugins and related dependencies it needs to fulfill the command. From a clean installation of Maven, this can take quite a while (in the output above, it took almost 4 minutes with a broadband connection). The next time you execute the same command, Maven will execute the command much quicker; it won't download anything new, because Maven already has what it needs. As you can see from the output, the compiled classes were placed in target/classes, which is specified by the standard directory layout. If you're a keen observer, you'll notice that using the standard conventions makes the POM above very small, and eliminates the requirement for you to explicitly tell Maven where any of your sources are, or where your output should go. By following the standard Maven conventions you can get a lot done with very little effort!

5. Alternatively, artifacts can be downloaded from a secure, high-performance Maven repository that is internal to your organization. This internal repository can be managed by Exist Global Maestro. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. For more information on Maestro please see http://www.exist.com/.

Compiling Test Sources and Running Unit Tests

Now that you're successfully compiling your application's sources, you probably have unit tests that you want to compile and execute as well (after all, programmers always write and execute their own unit tests *nudge nudge, wink wink*). Simply tell Maven you want to test your sources. This implies that all prerequisite phases in the life cycle will be performed to ensure that testing will be successful. Use the following simple command to test:

C:\mvnbook\my-app> mvn test

After executing this command you should see output similar to the following:

[INFO] ----------------------------------------------------------------------
[INFO] Building Maven Quick Start Archetype
[INFO]    task-segment: [test]
[INFO] ----------------------------------------------------------------------
[INFO] artifact org.apache.maven.plugins:maven-surefire-plugin: checking for updates from central
...
[INFO] [resources:resources]
[INFO] [compiler:compile]
[INFO] Nothing to compile - all classes are up to date
[INFO] [resources:testResources]
[INFO] [compiler:testCompile]
Compiling 1 source file to C:\Test\Maven2\test\my-app\target\test-classes
...
[INFO] [surefire:test]
[INFO] Setting reports dir: C:\Test\Maven2\test\my-app\target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
[surefire] Running com.mycompany.app.AppTest
[surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0 sec

Results :
[surefire] Tests run: 1, Failures: 0, Errors: 0

[INFO] ----------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ----------------------------------------------------------------------
[INFO] Total time: 15 seconds
[INFO] Finished at: Thu Oct 06 08:12:17 MDT 2005
[INFO] Final Memory: 2M/8M
[INFO] ----------------------------------------------------------------------

Some things to notice about the output:

• Maven downloads more dependencies this time. These are the dependencies and plugins necessary for executing the tests (recall that it already has the dependencies it needs for compiling, and won't download them again).
• Before compiling and executing the tests, Maven compiles the main code (all these classes are up-to-date, since we haven't changed anything since we compiled last).
• mvn test will always run the compile and test-compile phases first, as well as all the others defined before it.

If you simply want to compile your test sources (but not execute the tests), you can execute the following command:

C:\mvnbook\my-app> mvn test-compile

However, remember that it isn't necessary to run this every time; mvn test will compile the tests and execute them in any case. Now that you can compile the application sources, compile the tests, and execute the tests, you'll want to move on to the next logical step: how to package your application.
Packaging and Installation to Your Local Repository

Making a JAR file is straightforward, and can be accomplished by executing the following command:

C:\mvnbook\my-app> mvn package

If you take a look at the POM for your project, you will notice the packaging element is set to jar. This is how Maven knows to produce a JAR file from the above command (you'll read more about this later). Take a look in the target directory and you will see the generated JAR file. Now you'll want to install the artifact (the JAR file) you've generated into your local repository, where it can then be used by other projects as a dependency. The directory <user_home>/.m2/repository is the default location of the repository. To install, execute the following command:

C:\mvnbook\my-app> mvn install

After executing this command you should see output similar to the following:

...
[surefire] Running com.mycompany.app.AppTest
[surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.001 sec

Results :
[surefire] Tests run: 1, Failures: 0, Errors: 0

[INFO] [jar:jar]
[INFO] Building jar: <dir>/my-app/target/my-app-1.0-SNAPSHOT.jar
[INFO] [install:install]
[INFO] Installing c:\mvnbook\my-app\target\my-app.jar to <localrepository>\com\mycompany\app\my-app\1.0-SNAPSHOT\my-app-1.0-SNAPSHOT.jar
[INFO] ----------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ----------------------------------------------------------------------
[INFO] Total time: 5 seconds
[INFO] Finished at: Tue Oct 04 13:20:32 GMT-05:00 2005
[INFO] Final Memory: 3M/8M
[INFO] ----------------------------------------------------------------------
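Once my-app has been installed in the local repository, another project on the same machine could consume it like any other dependency, using the coordinates from its POM:

```xml
<dependency>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
```

Maven resolves this against the local repository first, so no remote download is needed for an artifact you have just installed yourself.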
Note that the Surefire plugin (which executes the tests) looks for tests contained in files with a particular naming convention. By default, the following tests are included:

• **/*Test.java
• **/Test*.java
• **/*TestCase.java

Conversely, the following tests are excluded:

• **/Abstract*Test.java
• **/Abstract*TestCase.java

You have now completed the process for setting up, building, testing, packaging, and installing a typical Maven project. For the majority of projects, this covers most of the tasks users perform, and if you've noticed, everything done up to this point has been driven by an 18-line POM. So, what other functionality can you leverage, given Maven's re-usable build logic? With even the simplest POM, there are a great number of Maven plugins that work out-of-the-box, without requiring any additions to the POM. In contrast, to get any more functionality out of an Ant build script, you must keep making error-prone additions. This chapter will cover one such feature in particular, as it is one of the highly-prized features in Maven. But first: without any work on your part, this POM has enough information to generate a Web site for your project! Though you will typically want to customize your Maven site, if you're pressed for time and just need to create a basic Web site for your project, simply execute the following command:

C:\mvnbook\my-app> mvn site

There are plenty of other stand-alone goals that can be executed as well, for example:

C:\mvnbook\my-app> mvn clean

This will remove the target directory with the old build data before starting, so it is fresh. Perhaps you'd like to generate an IntelliJ IDEA descriptor for the project:

C:\mvnbook\my-app> mvn idea:idea

This can be run over the top of a previous IDEA project; it will update the settings rather than starting fresh. Alternatively, you might like to generate an Eclipse descriptor:

C:\mvnbook\my-app> mvn eclipse:eclipse
2.6. Handling Classpath Resources

Another common use case, which requires no changes to the POM shown previously, is the packaging of resources into a JAR file. For this common task, Maven again uses the standard directory layout. This means that by adopting Maven's standard conventions, you can package resources within JARs, simply by placing those resources in a standard directory structure.

In the following example, you need to add the directory src/main/resources. That is where you place any resources you wish to package in the JAR. The rule employed by Maven is that all directories or files placed within the src/main/resources directory are packaged in your JAR with the exact same structure, starting at the base of the JAR.

Figure 2-2: Directory structure after adding the resources directory

You can see in the preceding example that there is a META-INF directory with an application.properties file within that directory. If you unpacked the JAR that Maven created you would see the following:
Figure 2-3: Directory structure of the JAR file created by Maven

The original contents of src/main/resources can be found starting at the base of the JAR, and the application.properties file is there in the META-INF directory. You will also notice some other files like META-INF/MANIFEST.MF, as well as a pom.xml and pom.properties file. These come standard with the creation of a JAR in Maven. You can create your own manifest if you choose, but Maven will generate one by default if you don't.

The pom.xml and pom.properties files are packaged up in the JAR so that each artifact produced by Maven is self-describing, and also allows you to utilize the metadata in your own application, should the need arise. One simple use might be to retrieve the version of your application. Operating on the POM file would require you to use Maven utilities, but the properties can be utilized using the standard Java APIs.

If you would like to try this example, simply create the resources and META-INF directories and create an empty file called application.properties inside. Then run mvn install and examine the jar file in the target directory.
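To make the "retrieve the version of your application" idea concrete, here is a small sketch using only the standard java.util.Properties API. The class and method names are my own, not from this book; the Reader parameter stands in for the stream you would normally obtain from the classpath.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Illustrative sketch: read the version from a pom.properties-style file
// using only the standard java.util.Properties API (no Maven utilities).
public class SelfDescribingJar {
    public static String readVersion(Reader source) {
        Properties props = new Properties();
        try {
            props.load(source);              // parses key=value lines
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return props.getProperty("version"); // pom.properties stores groupId, artifactId, version
    }
}
```

In a real application, the Reader would wrap the stream returned by getClass().getResourceAsStream(...) pointed at the pom.properties entry that Maven places under META-INF/maven inside the JAR.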
2.6.1. Handling Test Classpath Resources

To add resources to the classpath for your unit tests, follow the same pattern as you do for adding resources to the JAR, except place resources in the src/test/resources directory. At this point you have a project directory structure that should look like the following:

Figure 2-4: Directory structure after adding test resources

In a unit test, you could use a simple snippet of code like the following for access to the resource required for testing:

[...]
// Retrieve resource
InputStream is = getClass().getResourceAsStream( "/test.properties" );

// Do something with the resource
[...]
To override the manifest file yourself, you can use the following configuration for the maven-jar-plugin:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifestFile>META-INF/MANIFEST.MF</manifestFile>
    </archive>
  </configuration>
</plugin>

2.6.2. Filtering Classpath Resources

Sometimes a resource file will need to contain a value that can be supplied at build time only. To accomplish this in Maven, you can filter your resource files dynamically by putting a reference to the property that will contain the value into your resource file, using the syntax ${<property name>}. The property can be either one of the values defined in your pom.xml, a value defined in the user's settings.xml, a property defined in an external properties file, or a system property.

To have Maven filter resources when copying, simply set filtering to true for the resource directory in your pom.xml:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
      </resource>
    </resources>
  </build>
</project>
You'll notice that the build, resources, and resource elements (which weren't there before) have been added. All of this information was previously provided as default values, and now must be added to the pom.xml. First, the POM has to explicitly state that the resources are located in the src/main/resources directory. In addition, the pom.xml has to override the default value for filtering and set it to true.

To reference a property defined in your pom.xml, the property name uses the names of the XML elements that define the value. So ${project.name} refers to the name of the project, ${project.version} refers to the version of the project, and ${project.build.finalName} refers to the final name of the file created when the built project is packaged. In fact, any element in your POM is available when filtering resources.

To continue the example, create an src/main/resources/META-INF/application.properties file, whose values will be supplied when the resource is filtered, as follows:

# application.properties
application.name=${project.name}
application.version=${project.version}

With that in place, you can execute the following command (process-resources is the build life cycle phase where the resources are copied and filtered):

mvn process-resources

The application.properties file under target/classes, which will eventually go into the JAR, looks like this:

# application.properties
application.name=Maven Quick Start Archetype
application.version=1.0-SNAPSHOT

To reference a property defined in an external file, all you need to do is add a reference to this external file in your pom.xml. First, create an external properties file and call it src/main/filters/filter.properties:

# filter.properties
my.filter.value=hello!

Next, add a reference to this new file in the pom.xml file:

<build>
  <filters>
    <filter>src/main/filters/filter.properties</filter>
  </filters>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
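As a rough mental model of what the filtering step does (an illustrative sketch, not Maven's actual implementation), each ${name} reference in a resource file is replaced with the corresponding value from a property map assembled from the POM, filter files, and system properties; for simplicity, this sketch leaves unresolved references untouched:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Conceptual sketch of resource filtering: substitute ${name} references
// with values from a property map; leave unresolved references as-is.
public class ResourceFilterSketch {
    private static final Pattern REF = Pattern.compile("\\$\\{([^}]+)\\}");

    public static String filter(String text, Map<String, String> properties) {
        Matcher m = REF.matcher(text);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // m.group(1) is the property name inside ${...}; fall back to the raw reference
            String value = properties.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

Running this over application.name=${project.name} with a map containing project.name would produce the same substituted line that mvn process-resources writes into target/classes.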
Then, add a reference to this property in the application.properties file:

# application.properties
application.name=${project.name}
application.version=${project.version}
message=${my.filter.value}

The next execution of the mvn process-resources command will put the new property value into application.properties.

As an alternative to defining the my.filter.value property in an external file, you could have defined it in the properties section of your pom.xml, and you'd get the same effect (notice you don't need the references to src/main/filters/filter.properties either):

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
      </resource>
    </resources>
  </build>
  <properties>
    <my.filter.value>hello</my.filter.value>
  </properties>
</project>

Filtering resources can also retrieve values from system properties: either the system properties built into Java (like java.version or user.home), or properties defined on the command line using the standard Java -D parameter. To continue the example, change the application.properties file to look like the following:

# application.properties
java.version=${java.version}
command.line.prop=${command.line.prop}
Now, when you execute the following command (note the definition of the command.line.prop property on the command line), the application.properties file will contain the values from the system properties:

mvn process-resources "-Dcommand.line.prop=hello again"

2.6.3. Preventing Filtering of Binary Resources

Sometimes there are classpath resources that you want to include in your JAR, but you do not want them filtered. This is most often the case with binary resources, for example image files. If you had a src/main/resources/images directory that you didn't want to be filtered, then you would create a resource entry to handle the filtering of resources, with an exclusion for the resources you wanted unfiltered. In addition you would add another resource entry, with filtering disabled, and an inclusion of your images directory. The build element would look like the following:

<project>
  [...]
  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
        <excludes>
          <exclude>images/**</exclude>
        </excludes>
      </resource>
      <resource>
        <directory>src/main/resources</directory>
        <includes>
          <include>images/**</include>
        </includes>
      </resource>
    </resources>
  </build>
  [...]
</project>
2.7. Using Maven Plugins

As noted earlier in the chapter, to customize the build for a Maven project, you must include additional Maven plugins, or configure parameters for the plugins already included in the build. For example, you may want to configure the Java compiler to allow JDK 5.0 sources. This is as simple as adding the following to your POM:

<project>
  [...]
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0</version>
        <configuration>
          <source>1.5</source>
          <target>1.5</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
  [...]
</project>

You'll notice that all plugins in Maven 2 look very similar to a dependency, and in some ways they are. If it is not present on your local system, this plugin will be downloaded and installed automatically, in much the same way that a dependency would be handled.

To illustrate the similarity between plugins and dependencies, the groupId and version elements have been shown, but in most cases these elements are not required. If you do not specify a groupId, then Maven will default to looking for the plugin with the org.apache.maven.plugins or the org.codehaus.mojo groupId label. You can specify an additional groupId to search within your POM, or settings.xml. If you do not specify a version, then Maven will attempt to use the latest released version of the specified plugin. This is often the most convenient way to use a plugin, but you may want to specify the version of a plugin to ensure reproducibility. For the most part, plugin developers take care to ensure that new versions of plugins are backward compatible, so you are usually OK with the latest release, but if you find something has changed, you can lock down a specific version.

The configuration element applies the given parameters to every goal from the compiler plugin. In the above case, the compiler plugin is already used as part of the build process and this just changes the configuration.
If you want to find out what a plugin's configuration options are, use the mvn help:describe command. If you want to see the options for the maven-compiler-plugin shown previously, use the following command:

mvn help:describe -DgroupId=org.apache.maven.plugins \
    -DartifactId=maven-compiler-plugin -Dfull=true

You can also find out what plugin configuration is available by using the Maven Plugin Reference section at http://maven.apache.org/plugins/ and navigating to the plugin and goal you are using.

2.8. Summary

After reading Chapter 2, you should be up and running with Maven. You've learned a new language and you've taken Maven for a test drive. In eighteen pages, you've seen how you can use Maven to build your project, and you know how to use the basic features of Maven: creating a project, compiling a project, testing a project, and packaging a project. You should also have some insight into how Maven handles dependencies and provides an avenue for customization using Maven plugins. By learning how to build a Maven project, you have gained access to every single project using Maven: if someone throws a Maven project at you, you'll know how to build it.

If you were looking for just a build tool, you could stop reading this book now, although you might want to refer to the next chapter for more information about customizing your build to fit your project's unique needs. If you are interested in learning how Maven builds upon the concepts described in the Introduction and obtaining a deeper working knowledge of the tools introduced in Chapter 2, read on. The next few chapters provide you with the how-to guidelines to customize Maven's behavior and use Maven to manage interdependent software projects.

3. Creating Applications with Maven

Walking on water and developing software from a specification are easy if both are frozen.
- Edward V. Berard
3.1. Introduction

In the second chapter you stepped though the basics of setting up a simple project. Now you will delve in a little deeper, using a real-world example. In this chapter, you are going to learn about some of Maven's best practices and advanced uses by working on a small application to manage frequently asked questions (FAQ). The application that you are going to create is called Proficio, which is Latin for "help". In doing so, you will be guided through the specifics of setting up an application and managing that application's Maven structure.

3.2. Setting Up an Application Directory Structure

In setting up Proficio's directory structure, it is important to keep in mind that Maven emphasizes the practice of standardized and modular builds. The guiding principle in determining how best to decompose your application is called the Separation of Concerns (SoC). SoC refers to the ability to identify, encapsulate, and operate on the pieces of software that are relevant to a particular concept, goal, task, or purpose. Concerns are the primary motivation for organizing and decomposing software into smaller, more manageable and comprehensible parts, each of which addresses one or more specific concerns. The natural outcome of this practice is the generation of discrete and coherent components, which enable code reusability, a key goal for every software development project.

So, let's start by discussing the ideal directory structure for Proficio. In examining the top-level POM for Proficio, you will see that the Proficio sample application is made up of several Maven modules:

• Proficio API: The application programming interface for Proficio, which consists of a set of interfaces. The interfaces for the APIs of major components, like the store, are also kept here.
• Proficio CLI: The code which provides a command line interface to Proficio.
• Proficio Core: The implementation of the API.
• Proficio Model: The data model for the Proficio application, which consists of all the classes that will be used by Proficio as a whole.
• Proficio Stores: The module which itself houses all the store modules. Proficio has a very simple memory-based store and a simple XStream-based store.

These are default naming conventions that Maven uses, but you are free to name your modules in any fashion your team decides. The only real criterion to which to adhere is that your team agrees to and uses a single naming convention. As such, everyone on the team needs to clearly understand the convention, and be able to easily identify what a particular module does simply by looking at its name.

A module is a reference to another Maven project, which really means a reference to another POM. In the top-level POM for Proficio, you can see in the modules element all the sub-modules that make up the Proficio application. This setup is typically referred to as a multi-module build, and this is how it looks in the top-level Proficio POM:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.exist.mvnbook.proficio</groupId>
  <artifactId>proficio</artifactId>
  <packaging>pom</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Proficio</name>
  <url>...</url>
  [...]
</project>

An important feature to note in the POM above is the value of the version element, which you can see is 1.0-SNAPSHOT. For an application that has multiple modules, it is very common to release all the sub-modules together, so it makes sense that all the modules have a common application version. It is recommended that you specify the application version in the top-level POM and use that version across all the modules that make up your application. You should also take note of the packaging element, which in this case has a value of pom.

Currently there is some variance on the Maven Web site when referring to directory structures that contain more than one Maven project. In Maven 1.x these were commonly referred to as multi-project builds, and some of this vestigial terminology carried over to the Maven 2.x documentation, but the Maven team is trying to consistently refer to these setups as multi-module builds now.

If you were to look at Proficio's directory structure you would see the following:

Figure 3-1: Proficio directory structure
Each of the sub-modules declares the top-level Proficio POM as its parent. For example, a sub-module POM begins like this:

<parent>
  <groupId>com.exist.mvnbook.proficio</groupId>
  <artifactId>proficio</artifactId>
  <version>1.0-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
...

Looking at the module names is how Maven steps into the right directory to process the respective POMs located there. The interesting thing here is that we have another project with a packaging type of pom: the proficio-stores module, which itself houses all the store modules.
Whenever Maven sees a POM with a packaging of type pom, Maven knows to look for a set of related sub-modules and then process each of those modules. You can nest sets of projects like this to any level, organizing your projects in groups according to concern, just as has been done with Proficio's multiple storage mechanisms, which are all placed in one directory.

3.3. Using Project Inheritance

One of the most powerful features in Maven is project inheritance. Using project inheritance allows you to do things like state your organizational information, state your deployment information, or state your common dependencies, all in a single place. Being the observant user, you have probably taken a peek at all the POMs in each of the projects that make up the Proficio project and noticed the following at the top of each of the POMs:

[...]
<parent>
  <groupId>com.exist.mvnbook.proficio</groupId>
  <artifactId>proficio</artifactId>
  <version>1.0-SNAPSHOT</version>
</parent>
[...]

This is the snippet in each of the POMs that lets you draw on the resources stated in the specified top-level POM, and from which you can inherit down to the level required, enabling you to add resources where it makes sense in the hierarchy of your projects.

Let's examine a case where it makes sense to put a resource in the top-level POM. If you look at the top-level POM for Proficio, you will see that in the dependencies section there is a declaration for JUnit version 3.8.1. In this case the assumption being made is that JUnit will be used for testing in all our child projects. So, by stating the dependency in the top-level POM once, you never have to declare this dependency again, in any of your child POMs. The dependency is stated as follows:

<project>
  [...]
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  [...]
</project>
What specifically happens for each child POM is that each one inherits the dependencies section of the top-level POM. So, if you take a look at the POM for the proficio-core module, you will see the following (note that there is no visible dependency declaration for JUnit):

<project>
  <parent>
    <groupId>com.exist.mvnbook.proficio</groupId>
    <artifactId>proficio</artifactId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <modelVersion>4.0.0</modelVersion>
  <artifactId>proficio-core</artifactId>
  <packaging>jar</packaging>
  <name>Maven Proficio Core</name>
  <dependencies>
    <dependency>
      <groupId>com.exist.mvnbook.proficio</groupId>
      <artifactId>proficio-api</artifactId>
    </dependency>
    <dependency>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-container-default</artifactId>
    </dependency>
  </dependencies>
</project>

In order for you to see what happens during the inheritance process, you will need to use the handy mvn help:effective-pom command, using our top-level POM for the sample Proficio application. This command will show you the final result for a target POM. After you move into the proficio-core module directory and run the command, take a look at the resulting POM, where you will see the JUnit version 3.8.1 dependency:

<project>
  [...]
  <dependencies>
    [...]
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    [...]
  </dependencies>
  [...]
</project>
You will have noticed that the POM that you see when using mvn help:effective-pom is bigger than you expected. Remember from Chapter 2 that the Super POM sits at the top of the inheritance hierarchy: the proficio-core project inherits from the top-level Proficio project, which in turn inherits from the Super POM. Looking at the effective POM includes everything, and is useful to view when trying to figure out what is going on when you are having problems.

3.4. Managing Dependencies

When you are building applications you typically have a number of dependencies to manage, and that number only increases over time, making dependency management difficult to say the least. When you write applications which consist of multiple, individual projects, it is likely that some of those projects will share common dependencies. When this happens it is critical that the same version of a given dependency is used for all your projects, so that the final application works correctly. You don't want, for example, to end up with multiple versions of a dependency on the classpath when your application executes, as the results can be far from desirable. You want to make sure that all the versions, of all your dependencies, across all of your projects are in alignment, so that your testing accurately reflects what you will deploy as your final result.

Maven's strategy for dealing with this problem is to combine the power of project inheritance with specific dependency management elements in the POM. In order to manage, or align, versions of dependencies across several projects, you use the dependency management section in the top-level POM of an application. To illustrate how this mechanism works, let's look at the dependency management section of the Proficio top-level POM:

<project>
  [...]
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-api</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-core</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-model</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-store-memory</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-store-xstream</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>org.codehaus.plexus</groupId>
        <artifactId>plexus-container-default</artifactId>
        <version>1.0-alpha-9</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
  [...]
</project>

Note that the ${project.version} specification is the version specified by the top-level POM's version element, which is the application version. As you can see within the dependency management section, we have several Proficio dependencies and a dependency for the Plexus IoC container.

There is an important distinction to be made between the dependencies element contained within the dependencyManagement element and the top-level dependencies element in the POM. The dependencies element contained within the dependencyManagement element is used only to state the preference for a version, and by itself does not affect a project's dependency graph, whereas the top-level dependencies element does affect the dependency graph. The dependencies stated in the dependencyManagement only come into play when a dependency is declared without a version.

If you take a look at the POM for the proficio-api module, you will see a single dependency declaration and that it does not specify a version:

<project>
  [...]
  <dependencies>
    <dependency>
      <groupId>com.exist.mvnbook.proficio</groupId>
      <artifactId>proficio-model</artifactId>
    </dependency>
  </dependencies>
</project>

The version for this dependency is derived from the dependencyManagement element which is inherited from the Proficio top-level POM. The dependencyManagement declares a stated preference for the 1.0-SNAPSHOT version (stated as ${project.version}) of proficio-model, so that version is injected into the dependency above, to make it complete.
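The injection rule can be summarized in a few lines of illustrative Java (hypothetical names, not Maven code): a dependency that declares its own version keeps it, while one declared without a version receives the managed version inherited through dependencyManagement.

```java
import java.util.Map;

// Illustrative sketch of dependencyManagement semantics: an explicitly
// declared version always wins; a missing version falls back to the
// managed version inherited from a parent POM.
public class DependencyManagementSketch {
    public static String resolveVersion(String artifactId,
                                        String declaredVersion,
                                        Map<String, String> managedVersions) {
        if (declaredVersion != null) {
            return declaredVersion;              // top-level dependencies element wins
        }
        return managedVersions.get(artifactId);  // preference stated in dependencyManagement
    }
}
```

For proficio-api above, the proficio-model dependency declares no version, so the managed 1.0-SNAPSHOT value is what ends up in the effective POM.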
3.5. Using Snapshots

While you are developing an application with multiple modules, it is usually the case that each of the modules is in flux. Your APIs might be undergoing some change, or your implementations are undergoing change and are being fleshed out, or you may be doing some refactoring. Your build system needs to be able to deal easily with this real-time flux, and this is where Maven's concept of a snapshot comes into play. A snapshot in Maven is an artifact that has been prepared using the most recent sources available. If you look at the top-level POM for Proficio you will see a snapshot version specified:

<project>
  [...]
  <version>1.0-SNAPSHOT</version>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-api</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>com.exist.mvnbook.proficio</groupId>
        <artifactId>proficio-model</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>org.codehaus.plexus</groupId>
        <artifactId>plexus-container-default</artifactId>
        <version>1.0-alpha-9</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
  [...]
</project>

Specifying a snapshot version for a dependency means that Maven will look for new versions of that dependency without you having to manually specify a new version. Snapshot dependencies are assumed to be changing, so Maven will attempt to update them. By default Maven will look for snapshots on a daily basis, but you can use the -U command line option to force the search for updates. When you specify a non-snapshot version of a dependency, Maven will download that dependency once and never attempt to retrieve it again. Controlling how snapshots work will be explained in detail in Chapter 7.
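The default update behavior described above can be sketched in a few lines (hypothetical code, not Maven's implementation): a release version is fetched once and never re-checked, while a snapshot is re-checked whenever the last check happened on an earlier day.

```java
import java.time.LocalDate;

// Illustrative sketch of the default (daily) snapshot update policy.
public class UpdatePolicySketch {
    public static boolean shouldCheckForUpdate(String version,
                                               LocalDate lastChecked,
                                               LocalDate now) {
        if (!version.endsWith("-SNAPSHOT")) {
            return false;                  // releases are downloaded once, never re-checked
        }
        return lastChecked.isBefore(now);  // daily policy: re-check on a new day
    }
}
```

The -U flag corresponds to bypassing this check entirely and always looking for a newer snapshot.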
3.6. Resolving Dependency Conflicts and Using Version Ranges

With the introduction of transitive dependencies in Maven 2.0, it became possible to simplify a POM by including only the dependencies you need directly, and allowing Maven to calculate the full dependency graph. However, as the graph grows, it is inevitable that two or more artifacts will require different versions of a particular dependency, and Maven must choose which version to provide. In Maven, the version selected is the one declared “nearest” to the top of the tree; that is, Maven selects the version that requires the least number of dependencies to be traversed. A dependency in the POM being built will be used over anything else.

For example, if you run mvn -X test on the proficio-core module, the output will contain something similar to:

proficio-core:1.0-SNAPSHOT
  junit:3.8.1 (selected for test)
  plexus-container-default:1.0-alpha-9 (selected for compile)
    plexus-utils:1.0.4 (selected for compile)
    classworlds:1.1-alpha-2 (selected for compile)
    junit:3.8.1 (not setting scope to compile, local scope test wins)
  proficio-api:1.0-SNAPSHOT (selected for compile)
    proficio-model:1.0-SNAPSHOT (selected for compile)
  plexus-utils:1.1 (selected for compile)

It should be noted that running mvn -X test depends on other parts of the build having been executed beforehand, so it is useful to run mvn install at the top level of the project (in the proficio directory) to ensure that needed components are installed into the local repository.

In this example, plexus-utils occurs twice, and Proficio requires that version 1.1 be used. Maven resolves this with the nearest declaration, but this technique has limitations:

• The version chosen may not have all the features required by the other dependencies.
• If multiple versions are selected at the same depth, then the result is undefined.

While further dependency management features are scheduled for the next release of Maven at the time of writing, there are ways to manually resolve these conflicts as the end user of a dependency, and more importantly, ways to avoid them as the author of a reusable library.

To manually resolve conflicts, you can either remove the incorrect version from the tree, or override both declarations with the correct version. Removing the incorrect version requires identifying the source of the incorrect version by running Maven with the -X flag (for more information on how to do this, see section 6.9 in Chapter 6). Once the source of the incorrect version has been identified, you can exclude the dependency from the graph by adding an exclusion to the dependency that introduced it. In this case, modify the plexus-container-default dependency in the proficio-core/pom.xml file as follows:

<dependency>
  <groupId>org.codehaus.plexus</groupId>
  <artifactId>plexus-container-default</artifactId>
  <version>1.0-alpha-9</version>
  <exclusions>
    <exclusion>
Once the path to the version has been identified.0. and more importantly ways to avoid it as the author of a reusable library.8.4 (selected for compile) classworlds:1. Removing the incorrect version requires identifying the source of the incorrect version by running Maven with the -X flag (for more information on how to do this.xml file as follows: <dependency> <groupId>org.0-alpha-9</version> <exclusions> <exclusion> 64 . Maven must choose which version to provide.9 in Chapter 6). it is inevitable that two or more artifacts will require different versions of a particular dependency. To manually resolve conflicts.1 be used.1 (not setting scope to compile. see section 6. A dependency in the POM being built will be used over anything else. In this case.0-SNAPSHOT (selected for compile) plexus-utils:1. plexus-utils occurs twice.1-alpha-2 (selected for compile) junit:3. However.
1.1.0.1. Neither of these solutions is ideal. Maven has no knowledge regarding which versions will work. you may require a feature that was introduced in plexus-utils version 1. This is because. for a library or framework. so that the 1. not for compilation.1</version> <scope>runtime</scope> </dependency> </dependencies> However.1. for stability it would always be declared in the current POM as a dependency . In fact. The alternate way to ensure that a particular version of a dependency is used. in this situation. a WAR file).plexus</groupId> <artifactId>plexus-utils</artifactId> <version>1.codehaus. if the dependency were required for compilation. which will accumulate if this project is reused as a dependency itself. as shown above for plexus-utils. In this case. the dependency is used only for packaging. When a version is declared as 1. use version ranges instead.Creating Applications with Maven <groupId>org. the dependency should be specified as follows: <dependency> <groupId>org. as follows: <dependencies> <dependency> <groupId>org. However.regardless of whether another dependency introduces it.1 version is used instead. but it is possible to improve the quality of your own dependencies to reduce the risk of these issues occurring with your own build artifacts. To accomplish this.codehaus.4 version of plexus-utils in the dependency graph.)</version> </dependency> 65 .plexus</groupId> <artifactId>plexus-utils</artifactId> </exclusion> </exclusions> </dependency> This ensures that Maven ignores the 1. this indicates that the preferred version of the dependency is 1.codehaus. that will be used widely by others. The reason for this is that it distorts the true dependency graph. You'll notice that the runtime scope is used here. This is extremely important if you are publishing a build. 
The direct-inclusion approach is not recommended unless you are producing an artifact that is bundling its dependencies and is not used as a dependency itself (for example, a WAR file). This is extremely important if you are publishing a build, for a library or framework, that will be used widely by others.

To ensure that version 1.1 or later of plexus-utils is used, the dependency should be specified as follows:

  <dependency>
    <groupId>org.codehaus.plexus</groupId>
    <artifactId>plexus-utils</artifactId>
    <version>[1.1,)</version>
  </dependency>

What this means is that, while the nearest-dependency technique will still be used in the case of a conflict, the version that is used must fit the range given. If the nearest version does not match, then the next nearest will be tested, and so on; finally, if none of them match, the build will fail. The notation used above is set notation: the range [1.1,) means any version greater than or equal to 1.1. Table 3-2 shows some of the values that can be used.

Table 3-2: Examples of Version Ranges

  Range            Meaning
  (,1.0]           Less than or equal to 1.0
  [1.2,1.3]        Between 1.2 and 1.3 (inclusive)
  [1.0,2.0)        Greater than or equal to 1.0, but less than 2.0
  [1.5,)           Greater than or equal to 1.5
  (,1.1),(1.1,)    Any version, except 1.1

To understand how version ranges work, it is first necessary to understand how versions are parsed. In figure 3-1, you can see how a version is partitioned by Maven.

Figure 3-3: Version parsing

As you can see, a version is broken down into five parts: the major, minor and bug fix releases, then the qualifier, and finally a build number. In a regular version, you can provide only the qualifier, or only the build number. It is intended that the qualifier indicates a version prior to release (for example, alpha-1, beta-1, rc1), while the build number is an increment after release to indicate patched builds. The snapshot is a special case where the qualifier and build number are both allowed; for a qualifier to be a snapshot, it must be the text "snapshot" or a time stamp. The time stamp in figure 3-1 was generated on 11-02-2006 at 13:11:41.

By being more specific through the use of version ranges, it is possible to make the dependency mechanism more reliable for your builds and to reduce the number of exception cases that will be required. However, you need to avoid being overly specific as well: if two version ranges in a dependency graph do not intersect at all, the build will fail.
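To make the set notation concrete, here is a small Python sketch (our own illustration, not Maven code) that checks a plain dotted numeric version against one range expression such as [1.0,2.0) or [1.5,). It deliberately ignores qualifiers, build numbers, and multi-set expressions like (,1.1),(1.1,), so it is a simplified model of the table above.

```python
def parse_bound(text):
    """Return a version tuple, or None for an open (missing) bound."""
    return tuple(int(p) for p in text.split(".")) if text else None

def in_range(version, range_expr):
    """Check a dotted numeric version against one Maven-style range,
    e.g. '[1.0,2.0)', '[1.5,)', '(,1.0]'. Multi-set expressions such
    as '(,1.1),(1.1,)' are not handled in this sketch."""
    v = parse_bound(version)
    lo_inc = range_expr[0] == "["    # '[' means inclusive lower bound
    hi_inc = range_expr[-1] == "]"   # ']' means inclusive upper bound
    lo_text, hi_text = range_expr[1:-1].split(",")
    lo, hi = parse_bound(lo_text), parse_bound(hi_text)
    if lo is not None and (v < lo or (v == lo and not lo_inc)):
        return False
    if hi is not None and (v > hi or (v == hi and not hi_inc)):
        return False
    return True
```

With this sketch, in_range("1.0.4", "[1.1,)") is False while in_range("1.1", "[1.1,)") is True, which mirrors how the plexus-utils range above rules out the 1.0.4 version. Real Maven additionally compares qualifiers and build numbers, as described next.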
To understand how version ranges are resolved, it is also necessary to understand how versions are compared. With regard to ordering, the elements are considered in sequence to determine which is newer: first by major version; second, if the major versions were equal, by minor version; third by bug fix version; fourth by qualifier (using string comparison); and finally by build number. A version that contains a qualifier is older than a version without a qualifier; for example, version 1.2-beta is older than version 1.2. A version that also contains a build number is considered newer than a version without a build number. In some cases the versions will not match this syntax; in those cases, the two versions are compared entirely as strings. Please see the figure below for more examples of the ordering of version parsing schemes.

Figure 3-4: Version Parsing

The use of version parsing in Maven as defined here is considered the best practice, and based on Maven's version parsing rules you may also define your own version practices.

A final note relates to how version updates are determined when a range is in use. By default, the repository is checked once a day for updates to the versions of artifacts in use. This can be configured per-repository to be on a more regular interval, or forced from the command line using the -U option for a particular Maven execution. If you use the range [1.1,), and the versions 1.1 and 1.2-beta-1 exist in a referenced repository, then 1.2-beta-1 will be selected, since 1.2-beta-1 is newer than 1.1. Often this is not desired, so to avoid such a situation you must structure your releases accordingly, either avoiding the naming convention that would result in that behavior, or through the use of a separate repository containing only the artifacts and versions you strictly desire. Whether you use snapshots until the final release, or release betas as milestones along the way, all of these elements are considered part of the version, and as such the ranges do not differentiate between them. If you use snapshots, you should deploy them to a snapshot repository, as is discussed in Chapter 7 of this book. This mechanism is identical to that of the snapshots that you learned about in section 3.6: it will ensure that the development versions are used in a range only if the project has declared the snapshot (or development) repository explicitly.
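The ordering rules above can be sketched in a few lines of Python. This is our own simplified illustration of the five-part scheme described in the text (major, minor, bug fix, qualifier, build number), not Maven's actual parser; in particular, the regular expression and the ranking of an empty qualifier are assumptions made for the sketch.

```python
import re

# major.minor.bugfix with an optional -qualifier and optional -build
VERSION_RE = re.compile(
    r"^(\d+)(?:\.(\d+))?(?:\.(\d+))?(?:-([a-zA-Z][\w.]*))?(?:-(\d+))?$"
)

def parse(version):
    """Split a version into (major, minor, bugfix, qualifier, build).
    Returns None when the string does not fit the scheme; Maven then
    falls back to comparing the two versions entirely as strings."""
    m = VERSION_RE.match(version)
    if not m:
        return None
    major, minor, bug, qualifier, build = m.groups()
    return (int(major), int(minor or 0), int(bug or 0),
            qualifier or "", int(build or 0))

def sort_key(version):
    major, minor, bug, qualifier, build = parse(version)
    # A version without a qualifier is newer than one with a qualifier,
    # so rank the empty qualifier highest; qualifiers themselves are
    # compared as plain strings.
    qual_rank = (1, "") if qualifier == "" else (0, qualifier)
    return (major, minor, bug, qual_rank, build)
```

Sorting with this key reproduces the examples in the text: 1.2-beta sorts before 1.2, 1.2-beta-1 sorts after 1.2-beta (the build number makes it newer), and 1.2-beta-1 sorts after 1.1, which is why the range [1.1,) would pick it up.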
If the update check needs to be configured for a particular repository, the updatePolicy value (which is specified in minutes) is changed for releases. For example:

  <repository>
    [...]
    <releases>
      <updatePolicy>interval:60</updatePolicy>
    </releases>
  </repository>

3.7. Utilizing the Build Life Cycle

In Chapter 2, Maven was described as a framework that coordinates the execution of its plugins in a well-defined way or process, which is actually Maven's default build life cycle. Maven's default build life cycle will suffice for a great number of projects without any augmentation, but, of course, projects will have different requirements, and it is sometimes necessary to augment the default Maven life cycle to satisfy these requirements.

Plugins in Maven are created with a specific task in mind, which typically means the plugin is bound to a specific phase in the default life cycle. For example, Proficio has a requirement to generate Java sources from a model. Maven accommodates this requirement by allowing the declaration of a plugin which binds itself to a standard phase in Maven's default life cycle, the generate-sources phase. In Proficio, the Modello plugin is used to generate the Java sources for Proficio's data model. If you look at the POM for the proficio-model you will see the plugins element with a configuration for the Modello plugin:

  <project>
    <parent>
      <groupId>com.mvnbook.proficio</groupId>
      <artifactId>proficio</artifactId>
      <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>proficio-model</artifactId>
    <packaging>jar</packaging>
    <name>Proficio Model</name>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.modello</groupId>
          <artifactId>modello-maven-plugin</artifactId>
          <version>1.0-alpha-5</version>
          <executions>
            <execution>
              <goals>
                <goal>java</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <version>1.0.0</version>
            <packageWithVersion>false</packageWithVersion>
            <model>src/main/mdo/proficio.mdo</model>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </project>

This is very similar to the declaration for the maven-compiler-plugin that you saw in Chapter 2, but here you see an additional executions element. A plugin in Maven may have several goals, so you need to specify which goal in the plugin you wish to run, by specifying the goal in the executions element.
3.8. Using Profiles

Profiles are Maven's way of letting you create environmental variations in the build life cycle to accommodate things like building on different platforms, building with different JVMs, testing with different databases, or referencing the local file system. Typically, you try to encapsulate as much as possible in the POM to ensure that builds are portable, but sometimes you simply have to take into consideration variation across systems, and this is why profiles were introduced in Maven.

Profiles modify the POM at build time, and are meant to be used in complementary sets to give equivalent-but-different parameters for a set of target environments (providing, for example, the path of the application server root in the development, testing, and production environments). Profiles are specified using a subset of the elements available in the POM itself (plus one extra section), and can be activated in several ways. You can define profiles in one of the following three places:

  • The Maven settings file (typically <user_home>/.m2/settings.xml)
  • A file in the same directory as the POM, called profiles.xml
  • The POM itself

In terms of which profile takes precedence, the local-most profile wins: POM-specified profiles override those in profiles.xml, and those in profiles.xml override those in settings.xml. This is because the more local profile is assumed to be a modification of a more general case; this is a pattern that is repeated throughout Maven.

Each location has different implications. Profiles in settings.xml have the potential to affect all builds, so they are a sort of "global" location for profiles. A profiles.xml file allows you to augment a single project's build without altering the POM. The POM-based profiles are preferred, since these profiles are portable (they will be distributed to the repository on deploy, and are available for subsequent builds originating from the repository or as transitive dependencies). Because of the portability implications, any files which are not distributed to the repository are NOT allowed to change the fundamental build in any way. As such, profiles in profiles.xml and settings.xml are only allowed to define:

  • repositories
  • pluginRepositories
  • properties

Everything else must be specified in a POM profile, or not at all.

Used improperly, profiles can easily lead to differing build results from different members of your team. For example, suppose you had a profile in settings.xml that was able to inject a new dependency, and the project you were working on actually did depend on that settings-injected dependency in order to run. Once that project was deployed to the repository, it would never fully resolve its dependencies transitively when asked to do so. That's because it left one of its dependencies sitting in a profile inside your settings.xml. Used properly, however, you can still preserve build portability with profiles.
Note: repositories, pluginRepositories, and properties can also be specified in profiles within the POM. You can define the following elements in the POM profile:

  • repositories
  • pluginRepositories
  • dependencies
  • plugins
  • properties (not actually available in the main POM, but used behind the scenes)
  • modules
  • reporting
  • dependencyManagement
  • distributionManagement
  • A subset of the build element

Profiles can be activated in several ways:

• Explicitly, from the command line using the -P option, which takes a comma-delimited list of profile-ids. For example:

  mvn -Pprofile1,profile2 install

When this option is specified, no profiles other than those specified in the option argument will be activated.

• In the Maven settings, via the activeProfiles section. This section takes a list of activeProfile elements, each containing a profile-id. For example:

  <settings>
    [...]
    <profiles>
      <profile>
        <id>profile1</id>
        [...]
      </profile>
    </profiles>
    <activeProfiles>
      <activeProfile>profile1</activeProfile>
    </activeProfiles>
    [...]
  </settings>

Note that you must have defined the profiles in your settings.xml file as well.

• Automatically, based on the detected state of the build environment: the value of the JDK version, the presence of a system property, or the value of a system property. These activators are specified via an activation section in the profile itself. Here are some examples:

  <profile>
    <id>profile1</id>
    [...]
    <activation>
      <jdk>1.4</jdk>
    </activation>
  </profile>

This activator will trigger the profile when the JDK's version starts with "1.4" (e.g. "1.4.0_08", "1.4.2_07", "1.4"). Currently, this detection is limited to prefix-matching of the JDK version.

  <profile>
    <id>profile1</id>
    [...]
    <activation>
      <property>
        <name>debug</name>
      </property>
    </activation>
  </profile>

This will activate the profile when the system property "debug" is specified with any value.

  <profile>
    <id>profile1</id>
    [...]
    <activation>
      <property>
        <name>environment</name>
        <value>test</value>
      </property>
    </activation>
  </profile>

This last example will activate the profile when the system property "environment" is specified with the value "test".

Now that you are familiar with profiles, you are going to use them to create tailored assemblies: an assembly of Proficio which uses the memory-based store, and an assembly of Proficio which uses the XStream-based store. These assemblies will be created in the proficio-cli module, and the profiles used to control the creation of our tailored assemblies are defined there as well.
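Before moving on, the activation rules just described can be summarized as two simple tests: a prefix match on the JDK version and a name/value check on a system property. The following Python sketch is our own illustration of that decision logic, not Maven code; the dictionary shape mirroring the activation XML is an assumption made for the sketch.

```python
def profile_active(activation, jdk_version, sys_props):
    """Decide whether one profile's activation block matches.
    `activation` mirrors the XML: {"jdk": "1.4"} or
    {"property": {"name": "environment", "value": "test"}}."""
    if "jdk" in activation:
        # JDK detection is limited to prefix-matching of the version.
        if not jdk_version.startswith(activation["jdk"]):
            return False
    if "property" in activation:
        prop = activation["property"]
        if prop["name"] not in sys_props:
            return False
        # With no <value> element, any value for the property activates.
        if "value" in prop and sys_props[prop["name"]] != prop["value"]:
            return False
    return True
```

For example, profile_active({"jdk": "1.4"}, "1.4.2_07", {}) is True, matching the "1.4" prefix example above, while a property activator with a value only fires when the property is set to exactly that value.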
If you take a look at the POM for the proficio-cli module, you will see the following profile definitions:

  <project>
    [...]
    <!-- Profiles for the two assemblies to create for deployment -->
    <profiles>
      <!-- Profile which creates an assembly using the memory-based store -->
      <profile>
        <build>
          <plugins>
            <plugin>
              <artifactId>maven-assembly-plugin</artifactId>
              <configuration>
                <descriptors>
                  <descriptor>[...].xml</descriptor>
                </descriptors>
              </configuration>
            </plugin>
          </plugins>
        </build>
        <activation>
          <property>
            <name>memory</name>
          </property>
        </activation>
      </profile>
      <!-- Profile which creates an assembly using the XStream-based store -->
      [...]
    </profiles>
  </project>

You can see there are two profiles: one with an id of memory and another with an id of xstream. In each of these profiles you are configuring the assembly plugin to point at the assembly descriptor that will create a tailored assembly. You will also notice that the profiles are activated using a system property. If you wanted to create the assembly using the memory-based store, you would execute the following:

  mvn -Dmemory clean assembly:assembly

If you wanted to create the assembly using the XStream-based store, you would execute the following:

  mvn -Dxstream clean assembly:assembly

Both of the assemblies are created in the target directory, and if you use the jar tvf command on the resulting assemblies, you will see that the memory-based assembly contains the proficio-store-memory-1.0-SNAPSHOT.jar file only, while the XStream-based assembly contains the proficio-store-xstream-1.0-SNAPSHOT.jar file only. This is a very simple example, but it illustrates how you can customize the execution of the life cycle using profiles to suit any requirement you might have.

3.9. Deploying your Application

Now that you have an application assembly, you'll want to share it with as many people as possible! So, it is now time to deploy your application assembly. Currently, Maven supports several methods of deployment, including simple file-based deployment, SSH2 deployment, SFTP deployment, FTP deployment, and external SSH deployment. In order to deploy, you need to correctly configure your distributionManagement element in your POM, which would typically be your top-level POM, so that all child POMs can inherit this information. It should be noted that the examples below depend on other parts of the build having been executed beforehand, so it might be useful to run mvn install at the top level of the project to ensure that needed components are installed into the local repository. Here are some examples of how to configure your POM for the various deployment mechanisms.

3.9.1. Deploying to the File System

To deploy to the file system you would use something like the following:

  <project>
    [...]
    <distributionManagement>
      <repository>
        <id>proficio-repository</id>
        <name>Proficio Repository</name>
        <url>file://${basedir}/target/deploy</url>
      </repository>
    </distributionManagement>
    [...]
  </project>
Even though the standard reports are useful, often you will want to customize the project reports that are created and displayed in your Web site. The reports created and displayed are controlled in the reporting element of the POM. You may want to be more selective about the reports that you generate, and to do so, you need to list each report that you want to include as part of the site generation. You do so by configuring the reporting plugins as follows:

  <project>
    [...]
    <reporting>
      [...]
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          [...]
        </plugin>
      </plugins>
      [...]
    </reporting>
    [...]
  </project>

Now that you have a good grasp of what formats are supported, how the site descriptor works, and how to configure reports, it's time to generate your project's web site. You can do so by executing the following command:

  mvn site

After executing this command, you will end up with a directory structure (generated inside the target directory) with the generated content that looks like this:

Figure 3-6: The target directory

You will have noticed the src/site/resources directory, which contains an images directory. Resources placed there are copied into the generated site; as you can see in the directory listing above, the images directory's content is located within the images directory of the generated site. Keeping this simple rule in mind, you can add any resources you wish to your site.
3.11. Summary

In this chapter, you have learned how to set up a directory structure for a typical application and learned the basics of managing the application's development with Maven. You should now have a grasp of how project inheritance works, how to manage your application's dependencies, how to make small modifications to Maven's build life cycle, how to deploy your application, and how to create a simple web site for your application. You are now prepared to move on and learn about more advanced application directory structures, like the J2EE example you will see in Chapter 4, as well as more advanced uses of Maven, like creating your own plugins, augmenting your site to view quality metrics, and using Maven in a collaborative environment.

4. Building J2EE Applications

[...] - Helen Keller
4.1. Introduction

J2EE (or Java EE as it is now called) applications are everywhere. Whether you are using the full J2EE stack with EJBs, or only using Web applications with frameworks such as Spring or Hibernate, it's likely that you are using J2EE in some of your projects. As a consequence, the Maven community has developed plugins to cover every aspect of building J2EE applications. This chapter demonstrates how to use Maven on a real application, to show how to address the complex issues related to automated builds. It will take you through the journey of creating the build for a full-fledged J2EE application called DayTrader. Through this example, you'll learn how to build EJBs, EARs, Web services, and Web applications. You'll learn not only how to create a J2EE build, but also how to create a productive development environment (especially for Web application development) and how to deploy J2EE modules into your container. As importantly, you'll learn how to automate configuration and deployment of J2EE application servers.

4.2. Introducing the DayTrader Application

DayTrader is a real world application developed by IBM and then donated to the Apache Geronimo project. Its goal is to serve as both a functional example of a full-stack J2EE 1.4 application and as a test bed for running performance tests. The functional goal of the DayTrader application is to buy and sell stock, and its architecture is shown in Figure 4-1.

Figure 4-1: Architecture of the DayTrader application
There are 4 layers in the architecture:

• The Client layer offers 3 ways to access the application: using a browser, using Web services, and using the Quote Streamer. The Quote Streamer is a Swing GUI application that monitors quote information about stocks in real-time as the price changes.
• The Web layer offers a view of the application for both the Web client and the Web services client. It uses servlets and JSPs.
• The EJB layer is where the business logic is. The Trade Session is a stateless session bean that offers the business services such as login, logout, get a stock quote, buy or sell a stock, cancel an order, and so on. It uses container-managed persistence (CMP) entity beans for storing the business objects (Order, Account, Holding, Quote and AccountProfile), and Message-Driven Beans (MDB) to send purchase orders and get quote changes.
• The Data layer consists of a database used for storing the business objects and the status of each purchase, and a JMS Server for interacting with the outside world.

A typical "buy stock" use case consists of the following steps that were shown in Figure 4-1:

1. The user gives a buy order (by using the Web client or the Web services client). This request is handled by the Trade Session bean.
2. A new "open" order is saved in the database using the CMP Entity Beans.
3. The order is then queued for processing in the JMS Message Server.
4. The creation of the "open" order is confirmed for the user.
5. Asynchronously, the order that was placed on the queue is processed and the purchase completed. Once this happens, the Trade Broker MDB is notified.
6. The Trade Broker calls the Trade Session bean, which in turn calls the CMP entity beans to mark the order as "completed". The user is notified of the completed order on a subsequent request.

4.3. Organizing the DayTrader Directory Structure

The first step to organizing the directory structure is deciding what build modules are required. The easy answer is to follow Maven's artifact guideline: one module = one main artifact. Thus you simply need to figure out what artifacts you need. Looking again at Figure 4-1, you can see that the following modules will be needed:

• A module producing an EJB which will contain all of the server-side EJBs.
• A module producing a WAR which will contain the Web application.
• A module producing a JAR that will contain the Quote Streamer client application.
• A module producing another JAR that will contain the Web services client application.
• In addition, you may need another module producing an EAR which will contain the EJB and WAR produced from the other modules. This EAR will be used to easily deploy the server code into a J2EE container.
Note that this is the minimal number of modules required; it is possible to come up with more. For example, if you needed to physically locate the WARs in separate servlet containers to distribute the load, you may want to split the WAR module into 2 WAR modules: one for the browser client and one for the Web services client. As a general rule, it is important to split the modules when it is appropriate for flexibility; however, best practices suggest to do this only when the need arises. If there isn't a strong need, you may find that managing several modules can be more cumbersome than useful.

The next step is to give these modules names and map them to a directory structure. It is better to find functional names for modules; however, it is usually easier to choose names that represent a technology instead. For the DayTrader application, the following names were chosen:

• ejb - the module containing the EJBs
• web - the module containing the Web application
• streamer - the module containing the client side streamer application
• wsappclient - the module containing the Web services client application
• ear - the module producing the EAR which packages the EJBs and the Web application

There are two possible layouts that you can use to organize these modules: a flat directory structure and a nested one. Let's discuss the pros and cons of each layout. Figure 4-2 shows these modules in a flat directory structure; it is flat because you're locating all the modules in the same directory.

Figure 4-2: Module names and a simple flat directory structure

The top-level daytrader/pom.xml file contains the POM elements that are shared between all of the modules. This file also contains the list of modules that Maven will build when executed from this directory (see Chapter 3, Creating Applications with Maven, for more details):

  [...]
  <modules>
    <module>ejb</module>
    <module>web</module>
    <module>streamer</module>
    <module>wsappclient</module>
    <module>ear</module>
  </modules>
  [...]
This is the easiest and most flexible structure to use, and is the structure used in this chapter.

The other alternative is to use a nested directory structure. If you have many modules in the same directory, you may consider finding commonalities between them and creating subdirectories to partition them. For example, you might separate the client side modules from the server side modules in the way shown in Figure 4-3. Note that in this case the modules are still separate, not nested within each other.

Figure 4-3: Modules split according to a server-side vs client-side directory organization

As before, each directory level containing several modules contains a pom.xml file containing the shared POM elements and the list of modules underneath. Alternatively, modules can be truly nested: the ejb and web modules could be nested in the ear module, as shown in Figure 4-4. This makes sense, as the EAR artifact is composed of the EJB and WAR artifacts produced by the ejb and web modules, and having this nested structure clearly shows how nested modules are linked to their parent.

Figure 4-4: Nested directory structure for the EAR, EJB and Web modules

However, even though the nested directory structure seems to work quite well here, it has several drawbacks:

• Eclipse users will have issues with this structure, as Eclipse doesn't yet support nested projects. You'd need to consider the three modules as one project, but then you'll be restricted in several ways; for example, the three modules wouldn't be able to have different natures (Web application project, EJB project, EAR project).
• It doesn't allow flexible packaging. The nested strategy doesn't fit very well with the Assembler role as described in the J2EE specification. An Assembler has a pool of modules, and its role is to package those modules for deployment. Depending on the target deployment environment, the Assembler may package things differently: one EAR for one environment, or two EARs for another environment where a different set of machines are used. In addition, the ejb or web modules might depend on a utility JAR, and this JAR may also be required for some other EAR. Or the ejb module might be producing a client EJB JAR which is not used by the EAR, but by some client-side application.

These examples show that there are times when there is not a clear parent for a module. In those cases, using a nested directory structure should be avoided. A flat layout is more neutral with regard to assembly and should thus be preferred.

Now that you have decided on the directory structure for the DayTrader application, you're going to create the Maven build for each module, starting with the wsappclient module, after we take care of one more matter of business. The modules we will work with from here on will each be referring to the parent pom.xml of the project, so before we move on to developing these sub-projects, we need to install the parent POM into our local repository so it can be further built on:

  [INFO] --------------------------------------------------------------------
  [INFO] Building DayTrader :: Performance Benchmark Sample
  [INFO]    task-segment: [install]
  [INFO] --------------------------------------------------------------------
  [INFO] [site:attach-descriptor]
  [INFO] [install:install]
  [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\pom.xml to
  C:\[...]\.m2\repository\org\apache\geronimo\samples\daytrader\daytrader\1.0\daytrader-1.0.pom

We are now ready to continue on with developing the sub-projects!
4.4. Building a Web Services Client Project

We start our building process off by visiting the Web services portion of the build, since it is a dependency of later build stages. Web services are a part of many J2EE applications, and Maven's ability to integrate toolkits can make them easier to add to the build process. For example, the Maven plugin called Axis Tools takes WSDL files and generates the Java files needed to interact with the Web services they define. As the name suggests, the plugin uses the Axis framework (see http://ws.apache.org/axis/java/), and this will be used from DayTrader's wsappclient module.

Figure 4-5 shows the directory structure of the wsappclient module. As you may notice, the WSDL files are located in src/main/wsdl, which is the default used by the Axis Tools plugin:

Figure 4-5: Directory structure of the wsappclient module

The location of WSDL source can be customized using the sourceDirectory property. For example:

  [...]
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>axistools-maven-plugin</artifactId>
    <configuration>
      <sourceDirectory>
        src/main/resources/META-INF/wsdl
      </sourceDirectory>
    </configuration>
  [...]

In order to generate the Java source files from the TradeServices.wsdl file, the wsappclient/pom.xml file must declare and configure the Axis Tools plugin, binding its wsdl2java goal into the build:

  <project>
    [...]
    <build>
      <plugins>
        [...]
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>axistools-maven-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>wsdl2java</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  [...]
As before, you need to add the J2EE specifications JAR to compile the project's Java sources. In addition, you will require a dependency on Axis and Axis JAXRPC in your pom.xml. While you might expect the Axis Tools plugin to define this for you, declaring it yourself is required for two reasons: it allows you to control what version of the dependency to use, regardless of what the Axis Tools plugin was built against, and, more importantly, it allows users of your project to automatically get the dependency transitively. In addition, any tools that report on the POM will be able to recognize the dependency. Thus the following three dependencies have been added to your POM:

  <dependencies>
    <dependency>
      <groupId>axis</groupId>
      <artifactId>axis</artifactId>
      <version>1.2</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>axis</groupId>
      <artifactId>axis-jaxrpc</artifactId>
      <version>1.2</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-j2ee_1.4_spec</artifactId>
      <version>1.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>

The Axis JAR depends on the Mail and Activation Sun JARs, which cannot be redistributed by Maven. They are not present on ibiblio, and you'll need to install them manually; artifacts such as these can also be obtained from other public Maven 2 repositories. Run mvn install, and Maven will fail and print the installation instructions.

After manually installing Mail and Activation, running the build with mvn install leads to:

  C:\dev\m2book\code\j2ee\daytrader\wsappclient>mvn install
  [...]
  [INFO] [axistools:wsdl2java {execution: default}]
  [INFO] about to add compile source root
  [INFO] processing wsdl: C:\dev\m2book\code\j2ee\daytrader\wsappclient\
  src\main\wsdl\TradeServices.wsdl
  [INFO] [resources:resources]
  [INFO] Using default encoding to copy filtered resources.
  [INFO] [compiler:compile]
  [INFO] Compiling 13 source files to
  C:\dev\m2book\code\j2ee\daytrader\wsappclient\target\classes
  [INFO] [resources:testResources]
  [INFO] Using default encoding to copy filtered resources.
  [INFO] [compiler:testCompile]
  [INFO] No sources to compile
  [INFO] [surefire:test]
  [INFO] No tests to run.
  [INFO] [jar:jar]
  [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\wsappclient\
  target\daytrader-wsappclient-1.0.jar
  [...]

The resulting JAR is installed into the local repository as daytrader-wsappclient-1.0.jar. Note that the sources generated from the WSDL are compiled in addition to the sources from the standard source directory. The Axis Tools plugin boasts several other goals, including java2wsdl, which is useful for generating the server-side WSDL file from handcrafted Java classes; the generated WSDL file could then be injected into the Web Services client module to generate client-side Java files. But that's another story. The Axis Tools reference documentation can be found at http://mojo.codehaus.org/axistools-maven-plugin/.

Now that we have discussed and built the Web services portion, let's visit EJBs next.
4.5. Building an EJB Project

The module's sources are organized following the standard Maven conventions:

• Runtime classpath resources in src/main/resources. More specifically, the standard ejb-jar.xml deployment descriptor is in src/main/resources/META-INF/ejb-jar.xml. Any container-specific deployment descriptor should also be placed in this directory.
• Unit tests in src/test/java and classpath resources for the unit tests in src/test/resources. Unit tests are tests that execute in isolation from the container. Tests that require the container to run are called integration tests and are covered at the end of this chapter.
Now, take a look at the content of this project's pom.xml file:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader</artifactId>
    <version>1.0</version>
  </parent>
  <artifactId>daytrader-ejb</artifactId>
  <name>Apache Geronimo DayTrader EJB Module</name>
  <packaging>ejb</packaging>
  <description>DayTrader EJBs</description>
  <dependencies>
    <dependency>
      <groupId>org.apache.geronimo.samples.daytrader</groupId>
      <artifactId>daytrader-wsappclient</artifactId>
      <version>1.0</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-j2ee_1.4_spec</artifactId>
      <version>1.0</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.0.3</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-ejb-plugin</artifactId>
        <configuration>
          <generateClient>true</generateClient>
          <clientExcludes>
            <clientExclude>**/ejb/*Bean.class</clientExclude>
          </clientExcludes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

As you can see, you're extending a parent POM using the parent element. This is because the DayTrader build is a multi-module build and you are gathering common POM elements in a parent daytrader/pom.xml file. If you look through all the dependencies, you should see that we are ready to continue with building and installing this portion of the build.
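For reference, the parent daytrader/pom.xml mentioned above gathers the common elements and lists the child modules. A minimal sketch could look like this (the module names are assumptions based on the directory layout used in this chapter):

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.apache.geronimo.samples.daytrader</groupId>
  <artifactId>daytrader</artifactId>
  <version>1.0</version>
  <!-- A parent that aggregates several modules uses pom packaging -->
  <packaging>pom</packaging>
  <modules>
    <!-- Assumed module list, from the chapter's directory structure -->
    <module>wsappclient</module>
    <module>ejb</module>
    <module>web</module>
    <module>ear</module>
  </modules>
</project>
```

Children then inherit the groupId, version and any shared dependencies or plugin configuration declared here.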
There are several things to note in this POM:

• This is an EJB project, so you must tell Maven by specifying:

<packaging>ejb</packaging>

• As you're compiling J2EE code, you need to have the J2EE specifications JAR in the project's build classpath. This is achieved by specifying a dependency element on the J2EE JAR. You should note that you're using a provided scope instead of the default compile scope. The reason is that this dependency will already be present in the environment (being the J2EE application server) where your EJB will execute. Even though this dependency is provided at runtime, it still needs to be listed in the POM so that the code can be compiled. You make this clear to Maven by using the provided scope; this also prevents the EAR module from including the J2EE JAR when it is packaged. You could instead specify a dependency on Sun's J2EE JAR. However, this JAR is not redistributable and as such cannot be found on ibiblio. Fortunately, the Geronimo project has made the J2EE JAR available under an Apache license and this JAR can be found on ibiblio.

• Lastly, the pom.xml contains a configuration to tell the Maven EJB plugin to generate a client EJB JAR file when mvn install is called. The client JAR will be used in later examples when building the web module. By default the EJB plugin does not generate the client JAR, so you must explicitly tell it to do so:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-ejb-plugin</artifactId>
  <configuration>
    <generateClient>true</generateClient>
    <clientExcludes>
      <clientExclude>**/ejb/*Bean.class</clientExclude>
    </clientExcludes>
  </configuration>
</plugin>

The EJB plugin has a default set of files to exclude from the client EJB JAR: **/*Bean.class, **/*CMP.class, **/*Session.class and **/package.html. In this example, you need to override the defaults using a clientExclude element because it happens that there are some required non-EJB files matching the default **/*Bean.class pattern which need to be present in the generated client EJB JAR. Thus you're specifying a pattern that only excludes from the generated client EJB JAR the EJB implementation classes located in the ejb package (**/ejb/*Bean.class). Note that it's also possible to specify a list of files to include using clientInclude elements.
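As a sketch of that include-based alternative (the patterns below are hypothetical, not DayTrader's actual file layout), the configuration would look like:

```xml
<configuration>
  <generateClient>true</generateClient>
  <clientIncludes>
    <!-- Hypothetical patterns: ship only the interface and utility
         classes in the client EJB JAR, excluding everything else -->
    <clientInclude>**/interfaces/*.class</clientInclude>
    <clientInclude>**/util/*.class</clientInclude>
  </clientIncludes>
</configuration>
```

Includes are handy when the set of client-safe classes is small; excludes read better when almost everything belongs in the client JAR.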
You're now ready to execute the build. Relax and type mvn install:

C:\dev\m2book\code\j2ee\daytrader\ejb>mvn install
[INFO] Scanning for projects...
[INFO] ----------------------------------------------------------
[INFO] Building DayTrader :: EJBs
[INFO]    task-segment: [install]
[INFO] ----------------------------------------------------------
[INFO] [resources:resources]
[INFO] Using default encoding to copy filtered resources.
[INFO] [compiler:compile]
Compiling 49 source files to C:\dev\m2book\code\j2ee\daytrader\ejb\target\classes
[INFO] [resources:testResources]
[INFO] Using default encoding to copy filtered resources.
[...]
[surefire] Running org.apache.geronimo.samples.daytrader.FinancialUtilsTest
[surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.02 sec

Results :
[surefire] Tests run: 1, Failures: 0, Errors: 0

[INFO] [ejb:ejb]
[INFO] Building ejb daytrader-ejb-1.0
[INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\
  target\daytrader-ejb-1.0.jar
[INFO] Building ejb client daytrader-ejb-1.0-client
[INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ejb\
  target\daytrader-ejb-1.0-client.jar
[INFO] [install:install]
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\target\daytrader-ejb-1.0.jar
  to C:\[...]\.m2\repository\org\apache\geronimo\samples\daytrader\
  daytrader-ejb\1.0\daytrader-ejb-1.0.jar
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ejb\
  target\daytrader-ejb-1.0-client.jar to C:\[...]\.m2\repository\org\apache\
  geronimo\samples\daytrader\daytrader-ejb\1.0\daytrader-ejb-1.0-client.jar
Maven has created both the EJB JAR and the client EJB JAR and installed them in your local repository. The EJB plugin has several other configuration elements that you can use to suit your exact needs; please refer to the EJB plugin documentation for the full list.

Early adopters of EJB3 may be interested to know how Maven supports EJB3. At the time of writing, the EJB3 specification is still not final. There is a working prototype of an EJB3 Maven plugin; in the future it will be added to the main EJB plugin, after the specification is finalized. Stay tuned!
4.6. Building an EJB Module With XDoclet

If you don't want to write the Home interface, the Remote and Local interfaces, the container-specific deployment descriptors, and the ejb-jar.xml descriptor by hand, you can run the XDoclet processor to generate those files for you. When writing EJBs, it means you simply have to write your EJB implementation class. Note that if you're an EJB3 user, you can safely skip this section – you won't need it! Here's an extract of the TradeBean session EJB using XDoclet:

/**
 * Trade Session EJB manages all Trading services
 *
 * @ejb.bean
 *      display-name="TradeEJB"
 *      name="TradeEJB"
 *      view-type="remote"
 *      impl-class-name=
 *          "org.apache.geronimo.samples.daytrader.ejb.TradeBean"
 * @ejb.home
 *      generate="remote"
 *      remote-class=
 *          "org.apache.geronimo.samples.daytrader.ejb.TradeHome"
 * @ejb.interface
 *      generate="remote"
 *      remote-class=
 *          "org.apache.geronimo.samples.daytrader.ejb.Trade"
 * […]
 */
public class TradeBean implements SessionBean {
[…]
    /**
     * Queue the Order identified by orderID to be processed in a
     * One Phase commit
     * […]
     *
     * @ejb.interface-method
     *      view-type="remote"
     * @ejb.transaction
     *      type="RequiresNew"
     * […]
     */
    public void queueOrderOnePhase(Integer orderID)
        throws javax.jms.JMSException, Exception
[…]
To demonstrate XDoclet, keep only the *Bean.java classes and remove all of the Home, Local and Remote interfaces, as they'll also get generated. As you can see in Figure 4-7, the project's directory structure is the same as in Figure 4-6, but you don't need the ejb-jar.xml file anymore as it's going to be generated by XDoclet.

Now you need to tell Maven to run XDoclet on your project. This is achieved by using the Maven XDoclet plugin and binding it to the generate-sources life cycle phase. Since XDoclet generates source files, this has to be run before the compilation phase occurs. Here's the portion of the pom.xml that configures the plugin:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>xdoclet-maven-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>xdoclet</goal>
      </goals>
      <configuration>
        <tasks>
          <ejbdoclet verbose="true" force="true" ejbSpec="2.1"
              destDir="${project.build.directory}/generated-sources/xdoclet">
            <fileset dir="${project.build.sourceDirectory}">
              <include name="**/*Bean.java"></include>
              <include name="**/*MDB.java"></include>
            </fileset>
            <homeinterface/>
            <remoteinterface/>
            <localhomeinterface/>
            <localinterface/>
            <deploymentdescriptor
                destDir="${project.build.outputDirectory}/META-INF"/>
          </ejbdoclet>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>
The XDoclet plugin is configured within an execution element. This is required by Maven to bind the xdoclet goal to a phase. In the tasks element you use the ejbdoclet Ant task provided by the XDoclet project (for reference documentation see http://xdoclet.sourceforge.net/xdoclet/ant/xdoclet/modules/ejb/EjbDocletTask.html). In practice you can use any XDoclet task (or more generally any Ant task) within the tasks element, but here the need is to use the ejbdoclet task to instrument the EJB class files.

The plugin generates sources by default in ${project.build.directory}/generated-sources/xdoclet (you can configure this using the generatedSourcesDirectory configuration element). It also tells Maven that this directory contains sources that will need to be compiled when the compile phase executes. In addition, the XDoclet plugin will trigger Maven to download the XDoclet libraries from Maven's remote repository and add them to the execution classpath. Running the build now shows XDoclet generating the interfaces and the deployment descriptor:

[...]
10 janv. 2006 16:53:50 xdoclet.XDocletMain start
INFO: Running <deploymentdescriptor/>
Generating EJB deployment descriptor (ejb-jar.xml).
10 janv. 2006 16:53:50 xdoclet.XDocletMain start
INFO: Running <homeinterface/>
Generating Home interface for
'org.apache.geronimo.samples.daytrader.ejb.TradeBean'.
[…]
INFO: Running <remoteinterface/>
Generating Remote interface for
'org.apache.geronimo.samples.daytrader.ejb.TradeBean'.
[…]
10 janv. 2006 16:53:51 xdoclet.XDocletMain start
INFO: Running <localhomeinterface/>
Generating Local Home interface for
'org.apache.geronimo.samples.daytrader.ejb.AccountBean'.
[…]
10 janv. 2006 16:53:51 xdoclet.XDocletMain start
INFO: Running <localinterface/>
Generating Local interface for
'org.apache.geronimo.samples.daytrader.ejb.AccountBean'.
[…]
[INFO] [ejb:ejb]
[INFO] Building ejb daytrader-ejb-1.0
[…]

You might also want to try XDoclet2. It's based on a new architecture but the tag syntax is backward-compatible in most cases. However, it should be noted that XDoclet2 is a work in progress: it is not yet fully mature, nor does it boast all the plugins that XDoclet1 has. There's also a Maven 2 plugin for XDoclet2 at http://xdoclet.codehaus.org/Maven2+Plugin.
4.7. Deploying EJBs

Now that you know how to build an EJB project, you will learn how to deploy it. Later, in the Testing J2EE Applications section of this chapter, you will also learn how to test it automatically. Let's discover how you can automatically start a container and deploy your EJBs into it.

To do so you're going to use the Maven plugin for Cargo. Cargo is a framework for manipulating containers. It offers generic APIs (Java, Ant, Maven 1, Maven 2, IntelliJ IDEA, Netbeans, etc.) for performing various actions on containers such as starting, stopping, configuring them and deploying modules to them. In this example, the JBoss container will be used. The ejb/pom.xml file has been edited, adding the following Cargo plugin configuration:

<build>
  <plugins>
    [...]
    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <configuration>
        <container>
          <containerId>jboss4x</containerId>
          <zipUrlInstaller>
            <url>http://ovh.dl.sourceforge.net/sourceforge/jboss/jboss-4.0.2.zip</url>
            <installDir>${installDir}</installDir>
          </zipUrlInstaller>
        </container>
      </configuration>
    </plugin>
  </plugins>
</build>

In the container element you tell the Cargo plugin that you want to use JBoss 4.x (containerId element) and that you want Cargo to download the JBoss 4.0.2 distribution from the specified URL and install it in ${installDir}.

If you want to debug Cargo's execution, you can use the log element to specify a file where Cargo logs will go and you can also use the output element to specify a file where the container's output will be dumped. For example:

<container>
  <containerId>jboss4x</containerId>
  <output>${project.build.directory}/jboss4x.log</output>
  <log>${project.build.directory}/cargo.log</log>
  [...]

See http://cargo.codehaus.org/Debugging for full details.

The location where Cargo should install JBoss is a user-dependent choice and this is why the ${installDir} property was introduced. In order to build this project you need to create a Profile where you define the ${installDir} property's value. As explained in Chapter 3, you can define a profile in the POM, in a profiles.xml file, or in a settings.xml file. Of course, as the content of the Profile is user-dependent you wouldn't want to define it in the POM, nor should the content be shared with other Maven projects at large. Thus the best place is a profiles.xml file. In this case, the profiles.xml file defines a profile named vmassol, activated by default and in which the ${installDir} property points to c:/apps/cargo-installs.

It's also possible to tell Cargo that you already have JBoss installed locally. In that case replace the zipUrlInstaller element with a home element. For example:

<home>c:/apps/jboss-4.0.2</home>

That's all you need to have a working build and to deploy the EJB JAR into JBoss. Before starting the container, the EJB JAR should first be created, so run mvn package to generate it. Then run mvn cargo:start:

[INFO] Searching repository for plugin with prefix: 'cargo'.
[INFO] ----------------------------------------------------------------------
[INFO] Building DayTrader :: EJBs
[INFO]    task-segment: [cargo:start]
[INFO] ----------------------------------------------------------------------
[INFO] [cargo:start]
[INFO] [talledLocalContainer] Parsed JBoss version = [4.0.2]
[INFO] [talledLocalContainer] JBoss 4.0.2 starting...
[INFO] [talledLocalContainer] JBoss 4.0.2 started on port [8080]
[INFO] Press Ctrl-C to stop the container...

That's it! JBoss is running, and the EJB JAR has been deployed. The Cargo plugin does all the work: it provides a default JBoss configuration (using port 8080, for example), it detects that the Maven project is producing an EJB from the packaging element, and it automatically deploys it when the container is started.
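As a sketch, a profiles.xml file matching that description (a vmassol profile, active by default, pointing installDir at c:/apps/cargo-installs) could look like the following; the root element shown assumes the Maven 2 profiles.xml format:

```xml
<profilesXml>
  <profiles>
    <profile>
      <id>vmassol</id>
      <activation>
        <!-- Make this user's profile active without requiring -P -->
        <activeByDefault>true</activeByDefault>
      </activation>
      <properties>
        <!-- Where Cargo will unpack the downloaded JBoss distribution -->
        <installDir>c:/apps/cargo-installs</installDir>
      </properties>
    </profile>
  </profiles>
</profilesXml>
```

Because the file lives next to the POM but is not committed with a shared value, each developer can point installDir wherever suits their machine.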
As you have told Cargo to download and install JBoss, the first time you execute cargo:start it will take some time, especially if you are on a slow connection. Subsequent calls will be fast as Cargo will not download JBoss again. If the container was already started and you wanted to just deploy the EJB, you would run the cargo:deploy goal. Finally, to stop the container call mvn cargo:stop.

Cargo has many other configuration options, such as the possibility of using an existing container installation, modifying various container parameters, deploying on a remote machine, and more. Check the documentation at http://cargo.codehaus.org/Maven2+plugin.

4.8. Building a Web Application Project

Now, let's focus on building the DayTrader web module. The layout is the same as for a JAR module (see the first two chapters of this book), except that there is an additional src/main/webapp directory for locating Web application resources such as HTML pages, JSPs and WEB-INF configuration files (see Figure 4-8).

Figure 4-8: Directory structure for the DayTrader web module showing some Web application resources
As usual everything is specified in the pom.xml file:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader</artifactId>
    <version>1.0</version>
  </parent>
  <artifactId>daytrader-web</artifactId>
  <name>DayTrader :: Web Application</name>
  <packaging>war</packaging>
  <description>DayTrader Web</description>
  <dependencies>
    <dependency>
      <groupId>org.apache.geronimo.samples.daytrader</groupId>
      <artifactId>daytrader-ejb</artifactId>
      <version>1.0</version>
      <type>ejb-client</type>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.specs</groupId>
      <artifactId>geronimo-j2ee_1.4_spec</artifactId>
      <version>1.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>

You start by telling Maven that it's building a project generating a WAR:

<packaging>war</packaging>

Next, you specify the required dependencies. The reason you are building this web module after the ejb module is because the web module's servlets call the EJBs. Therefore, a dependency has been added on the ejb module in web/pom.xml:

<dependency>
  <groupId>org.apache.geronimo.samples.daytrader</groupId>
  <artifactId>daytrader-ejb</artifactId>
  <version>1.0</version>
  <type>ejb-client</type>
</dependency>

Note that you're specifying a type of ejb-client and not ejb. This is because the servlets are a client of the EJBs; therefore, the servlets only need the EJB client JAR in their classpath to be able to call the EJBs. This is why you told the EJB plugin to generate a client JAR earlier on in ejb/pom.xml. Depending on the main EJB JAR would also work, but it's not necessary and would increase the size of the WAR file. It's always cleaner to depend on the minimum set of required classes, for example to prevent coupling.
The final dependency listed is the J2EE JAR, as your web module uses servlets and calls EJBs. As seen previously when building the EJB, the Geronimo J2EE specifications JAR is used with a provided scope. Otherwise it would have surfaced in the WEB-INF/lib directory of the generated WAR. As you know, Maven 2 supports transitive dependencies: when it generates your WAR, it recursively adds your module's dependencies, unless their scope is test or provided. This is why we defined the J2EE JAR using a provided scope in the web module's pom.xml.

If you add a dependency on a WAR, then the WAR you generate will be overlaid with the content of that dependent WAR. Only files not in the existing Web application will be added, and files such as web.xml won't be merged. An alternative is to use the uberwar goal from the Cargo Maven Plugin, allowing the aggregation of multiple WAR files (see http://cargo.codehaus.org/Merging+WAR+files).

The configuration is very simple because the defaults from the WAR plugin are being used. As seen in the introduction, it's a good practice to use the default conventions as much as possible, as it reduces the size of the pom.xml file and reduces maintenance. Running mvn install generates the WAR and installs it in your local repository:

C:\dev\m2book\code\j2ee\daytrader\web>mvn install
[...]
[INFO] [war:war]
[INFO] Exploding webapp...
[INFO] Copy webapp resources to
  C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
[INFO] Assembling webapp daytrader-web in
  C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
[INFO] Generating war
  C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
[INFO] Building war:
  C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
[INFO] [install:install]
[INFO] Installing C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
  to C:\[...]\.m2\repository\org\apache\geronimo\samples\daytrader\
  daytrader-web\1.0\daytrader-web-1.0.war
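As a sketch of the WAR overlay behavior described above (the artifact names here are hypothetical, not part of DayTrader), declaring a dependency of type war is all that is needed:

```xml
<dependency>
  <!-- Hypothetical WAR: its files are copied into the generated WAR,
       except for files that already exist in this module (and
       web.xml files are not merged) -->
  <groupId>com.example</groupId>
  <artifactId>common-theme</artifactId>
  <version>1.0</version>
  <type>war</type>
</dependency>
```

This is a simple way to share static resources, such as images or stylesheets, across several Web applications.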
Table 4-2 lists some other parameters of the WAR plugin that you may wish to configure.

Table 4-2: WAR plugin configuration properties

• warSourceDirectory (default: ${basedir}/src/main/webapp): location of Web application resources to include in the WAR.
• webXml (default: the web.xml file found in ${warSourceDirectory}/WEB-INF/web.xml): specify where to find the web.xml file.
• warSourceIncludes/warSourceExcludes (default: all files are included): specify the files to include/exclude from the generated WAR.
• warName (default: ${project.build.finalName}): name of the generated WAR.

For the full list, see the reference documentation for the WAR plugin at http://maven.apache.org/plugins/maven-war-plugin/.

4.9. Improving Web Development Productivity

If you're doing Web development you know how painful it is to have to package your code in a WAR and redeploy it every time you want to try out a change you made to your HTML, JSP or servlet code. Fortunately, Maven can help. There are two plugins that can alleviate this problem: the Cargo plugin and the Jetty plugin. You'll discover how to use the Jetty plugin in this section, as you've already seen how to use the Cargo plugin in a previous section.

The Jetty plugin creates a custom Jetty configuration that is wired to your source tree: it is configured by default to look for resource files in src/main/webapp, uses the web.xml file found there, and adds the project dependencies and the compiled classes and classpath resources in target/classes to its execution classpath. The plugin monitors the source tree for changes, including the pom.xml file, the web.xml file, the src/main/webapp tree, the project dependencies, and the compiled classes and classpath resources in target/classes. If any change is detected, the plugin reloads the Web application in Jetty, providing an extremely fast turnaround time for development.

A typical usage for this plugin is to develop the source code in your IDE and have the IDE configured to compile classes in target/classes (this is the default when the Maven IDE plugins are used to set up your IDE project). Thus any recompilation in your IDE will trigger a redeploy of your Web application in Jetty.
Let's try the Jetty plugin on the DayTrader web module. The following has been added to the web/pom.xml file:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>maven-jetty-plugin</artifactId>
      <configuration>
        <scanIntervalSeconds>10</scanIntervalSeconds>
      </configuration>
      <dependencies>
        <dependency>
          <groupId>org.apache.geronimo.specs</groupId>
          <artifactId>geronimo-j2ee_1.4_spec</artifactId>
          <version>1.0</version>
          <scope>provided</scope>
        </dependency>
      </dependencies>
    </plugin>
[...]

The scanIntervalSeconds configuration property tells the plugin to monitor for changes every 10 seconds. The reason for the dependency on the J2EE specification JAR is that Jetty is a servlet engine and doesn't provide the EJB specification JAR. Since the Web application earlier declared that the specification must be provided through the provided scope, adding this dependency to the plugin adds it to the classpath for Jetty.
Start Jetty by running mvn jetty:run:

[...]
[INFO] Context path = /daytrader-web
[INFO] Webapp directory = C:\dev\m2book\code\j2ee\daytrader\web\src\main\webapp
[INFO] Setting up classpath ...
[INFO] Finished setting up classpath
[INFO] Started configuring web.xml, resource base=
  C:\dev\m2book\code\j2ee\daytrader\web\src\main\webapp
[INFO] Finished configuring web.xml file located at:
  C:\dev\m2book\code\j2ee\daytrader\web\src\main\webapp\WEB-INF\web.xml
[INFO] No connectors configured, using defaults:
  org.mortbay.jetty.nio.SelectChannelConnector listening on 8080
  with maxIdleTime 30000
0 [main] INFO org.mortbay.log - Logging to
  org.slf4j.impl.SimpleLogger@1242b11 via org.mortbay.log.Slf4jLog
681 [main] INFO org.mortbay.log - Started SelectChannelConnector @ 0.0.0.0:8080
[INFO] Starting scanner at interval of 10 seconds...

Maven pauses, as Jetty is now started; it may be stopped at any time by simply typing Ctrl-C, but then the fun examples won't work! Your Web application has been deployed and the plugin is listening for changes. Open a browser on the register.jsp page, as shown in Figure 4-9, to see the Web application running.
Figure 4-9: DayTrader JSP registration page served by the Jetty plugin

Note that the application will fail if you open a page that calls EJBs. The reason is that we have only deployed the Web application here, but the EJBs and all the back end code have not been deployed. In order to make it work you'd need to have your EJB container started with the DayTrader code deployed in it. In practice it's easier to deploy a full EAR, as you'll see below.

Now let's try to modify the content of this JSP by changing the opening account balance. Edit web/src/main/webapp/register.jsp, change the opening balance, save the file, and refresh the page in your browser.
That's nifty, isn't it? What happened is that the Jetty plugin realized the page was changed and it redeployed the Web application automatically. The Jetty container automatically recompiled the JSP when the page was refreshed.

There are various configuration parameters available for the Jetty plugin, such as the ability to define Connectors and Security realms. For example, if you wanted to run Jetty on port 9090 with a user realm defined in etc/realm.properties, you would use:

<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <configuration>
    [...]
    <connectors>
      <connector implementation=
          "org.mortbay.jetty.nio.SelectChannelConnector">
        <port>9090</port>
        <maxIdleTime>60000</maxIdleTime>
      </connector>
    </connectors>
    <userRealms>
      <userRealm implementation=
          "org.mortbay.jetty.security.HashUserRealm">
        <name>Test Realm</name>
        <config>etc/realm.properties</config>
      </userRealm>
    </userRealms>
  </configuration>
</plugin>

You can also configure the context under which your Web application is deployed by using the contextPath configuration element. By default the plugin uses the module's artifactId from the POM. It's also possible to pass in a jetty.xml configuration file using the jettyConfig configuration element; in that case anything in the jetty.xml file will be applied first. For a reference of all configuration options see the Jetty plugin documentation.

Now imagine that you have an awfully complex Web application generation process: you have custom plugins that do all sorts of transformations to Web application resource files, possibly generating some files, and so on. The strategy above would not work, as the Jetty plugin would not know about the custom actions that need to be executed to generate a valid Web application. Fortunately there's a solution.
The WAR plugin has an exploded goal which produces an expanded Web application in the target directory. Calling this goal ensures that the generated Web application is the correct one, including the result of any custom actions (when some files are generated, for example) or when the pom.xml file is modified. The Jetty plugin builds on this with two goals:

• jetty:run-war: The plugin first runs the package phase which generates the WAR file. Then it deploys and watches the WAR file; any change to it results in a hot redeployment.
• jetty:run-exploded: The plugin runs the package phase as with the jetty:run-war goal. Then it deploys the unpacked Web application located in target/ (whereas the jetty:run-war goal deploys the WAR file). The plugin then watches the following files: WEB-INF/lib, WEB-INF/classes, WEB-INF/web.xml and pom.xml; any change to those files results in a hot redeployment.

To demonstrate, execute the mvn jetty:run-exploded goal on the web module:

C:\dev\m2book\code\j2ee\daytrader\web>mvn jetty:run-exploded
[...]
[INFO] [war:war]
[INFO] Exploding webapp...
[INFO] Copy webapp resources to
  C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
[INFO] Assembling webapp daytrader-web in
  C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0
[INFO] Generating war
  C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
[INFO] Building war:
  C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war
[INFO] [jetty:run-exploded]
[INFO] Configuring Jetty for project: DayTrader :: Web Application
[INFO] Starting Jetty Server ...
0 [main] INFO org.mortbay.log - Logging to
  org.slf4j.impl.SimpleLogger@78bc3b via org.mortbay.log.Slf4jLog
[INFO] Context path = /daytrader-web
2214 [main] INFO org.mortbay.log - Started SelectChannelConnector @ 0.0.0.0:8080
[INFO] Scanning ...
[INFO] Scan complete at Wed Feb 15 11:59:00 CET 2006
[INFO] Starting scanner at interval of 10 seconds...

As you can see, the WAR is first assembled in the target directory and the Jetty plugin is now waiting for changes to happen. If you open another shell and run mvn package, you'll see the following in the first shell's console:

[INFO] Scan complete at Wed Feb 15 12:02:31 CET 2006
[INFO] Calling scanner listeners ...
[INFO] Stopping webapp ...
[INFO] Reconfiguring webapp ...
[INFO] Restarting webapp ...
[INFO] Listeners completed.
[INFO] Restart completed.

You're now ready for productive web development. No more excuses!

4.10. Deploying Web Applications

You have already seen how to deploy a Web application for in-place Web development in the previous section, so now the focus will be on deploying a packaged WAR to your target container. This example uses the Cargo Maven plugin to deploy to any container supported by Cargo (see http://cargo.codehaus.org/Containers). This is very useful when you're developing an application and you want to verify it works on several containers. The web module's pom.xml file has the following added for Cargo configuration:

<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <configuration>
    <container>
      <containerId>${containerId}</containerId>
      <zipUrlInstaller>
        <url>${url}</url>
        <installDir>${installDir}</installDir>
      </zipUrlInstaller>
    </container>
    <configuration>
      <properties>
        <cargo.servlet.port>8280</cargo.servlet.port>
      </properties>
    </configuration>
  </configuration>
</plugin>
This is a configuration similar to the one you have used to deploy your EJBs in the Deploying EJBs section of this chapter. There are two differences though:

• Two new properties have been introduced (containerId and url) in order to make this build snippet generic. Those properties will be defined in a Profile. As seen in the Deploying EJBs section, the installDir property is user-dependent and should be defined in a profiles.xml file. However, the containerId and url properties should be shared by all users of the build.
• A cargo.servlet.port element has been introduced to show how to configure the containers to start on port 8280 instead of the default 8080 port. This is very useful if you have containers already running on your machine and you don't want to interfere with them.

Thus the following profiles have been added to the web/pom.xml file:

[...]
</build>
<profiles>
  <profile>
    <id>jboss4x</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <containerId>jboss4x</containerId>
      <url>http://ovh.dl.sourceforge.net/sourceforge/jboss/jboss-4.0.2.zip</url>
    </properties>
  </profile>
  <profile>
    <id>tomcat5x</id>
    <properties>
      <containerId>tomcat5x</containerId>
      <url>http://www.apache.org/dist/jakarta/tomcat-5/v5.0.30/bin/
        jakarta-tomcat-5.0.30.zip</url>
    </properties>
  </profile>
</profiles>
</project>

You have defined two profiles: one for JBoss and one for Tomcat, and the JBoss profile is defined as active by default (using the activation element). You could add as many profiles as there are containers you want to execute your Web application on.
Executing mvn install cargo:start generates the WAR, starts the JBoss container and deploys the WAR into it:

C:\dev\m2book\code\j2ee\daytrader\web>mvn install cargo:start
[...]
[INFO] [cargo:start]
[INFO] [talledLocalContainer] JBoss 4.0.2 starting...
[INFO] [talledLocalContainer] JBoss 4.0.2 started on port [8280]
[INFO] [CopyingLocalDeployer] Deploying
  [C:\dev\m2book\code\j2ee\daytrader\web\target\daytrader-web-1.0.war]
  to [C:\[...]\Temp\cargo\50866\webapps]
[INFO] Press Ctrl-C to stop the container...

Running it with the tomcat5x profile activated instead gives:

[...]
[INFO] [cargo:start]
[INFO] [talledLocalContainer] Tomcat 5.0.30 starting...
[INFO] [talledLocalContainer] Tomcat 5.0.30 started on port [8280]
[INFO] Press Ctrl-C to stop the container...

This is useful for development and to test that your code deploys and works. However, once this is verified you'll want a solution to deploy your WAR into an integration platform. One solution is to have your container running on that integration platform and to perform a remote deployment of your WAR to it. To deploy the DayTrader's WAR to a running JBoss server on machine remoteserver and executing on port 80, you would need the following Cargo plugin configuration in web/pom.xml:

<plugin>
  <groupId>org.codehaus.cargo</groupId>
  <artifactId>cargo-maven2-plugin</artifactId>
  <configuration>
    <container>
      <containerId>jboss4x</containerId>
      <type>remote</type>
    </container>
    <configuration>
      <type>runtime</type>
      <properties>
        <cargo.hostname>${remoteServer}</cargo.hostname>
        <cargo.servlet.port>${remotePort}</cargo.servlet.port>
        <cargo.remote.username>${remoteUsername}</cargo.remote.username>
        <cargo.remote.password>${remotePassword}</cargo.remote.password>
      </properties>
    </configuration>
  </configuration>
</plugin>
When compared to the configuration for a local deployment above, the changes are:

• A remote container and configuration type, to tell Cargo that the container is remote and not under Cargo's management.
• Several configuration properties (especially a user name and password allowed to deploy on the remote JBoss container), to specify all the details required to perform the remote deployment.

Note that there was no need to specify a deployment URL, as it is computed automatically by Cargo. All the properties introduced need to be declared inside the POM for those shared with other users, and in the profiles.xml file (or the settings.xml file) for those that are user-dependent. Check the Cargo reference documentation for all details on deployments.

4.11. Building an EAR Project

You have now built all the individual modules. It's time to package the server module artifacts (EJB and WAR) into an EAR for convenient deployment. The ear module's directory structure can't be any simpler: it solely consists of a pom.xml file (see Figure 4-11).

Figure 4-11: Directory structure of the ear module

As usual the magic happens in the pom.xml file. The POM has defined that this is an EAR project by using the packaging element:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.geronimo.samples.daytrader</groupId>
    <artifactId>daytrader</artifactId>
    <version>1.0</version>
  </parent>
  <artifactId>daytrader-ear</artifactId>
  <name>DayTrader :: Enterprise Application</name>
  <packaging>ear</packaging>
  <description>DayTrader EAR</description>
the description to use. 118 .daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <version>1.daytrader</groupId> <artifactId>daytrader-ejb</artifactId> <version>1. Web modules. At the time of writing.samples.samples.0</version> </dependency> <dependency> <groupId>org.daytrader</groupId> <artifactId>daytrader-wsappclient</artifactId> <version>1.apache. the pom. jar.daytrader</groupId> <artifactId>daytrader-web</artifactId> <version>1.apache.geronimo.0</version> <type>war</type> </dependency> <dependency> <groupId>org.0</version> </dependency> </dependencies> Finally. ejb3.apache. ejb-client.geronimo. par. the EAR plugin supports the following module types: ejb. war. rar.samples.samples.xml file defines all of the dependencies that need to be included in the generated EAR: <dependencies> <dependency> <groupId>org.Better Builds with Maven Next. you need to configure the Maven EAR plugin by giving it the information it needs to automatically generate the application.apache.geronimo.xml deployment descriptor file. and EJB modules. This includes the display name to use. and the J2EE version to use.0</version> <type>ejb</type> </dependency> <dependency> <groupId>org. It is also necessary to tell the EAR plugin which of the dependencies are Java modules.geronimo. sar and wsr.
geronimo.samples. with the exception of those that are optional. the contextRoot element is used for the daytrader-web module definition to tell the EAR plugin to use that context root in the generated application.daytrader</groupId> <artifactId>daytrader-web</artifactId> <contextRoot>/daytrader</contextRoot> </webModule> </modules> </configuration> </plugin> </plugins> </build> </project> Here.4</version> <modules> <javaModule> <groupId>org.geronimo.Building J2EE Applications By default. all dependencies are included.apache. 119 . You should also notice that you have to specify the includeInApplicationXml element in order to include the streamer and wsappclient libraries into the EAR. By default. However.plugins</groupId> <artifactId>maven-ear-plugin</artifactId> <configuration> <displayName>Trade</displayName> <description> DayTrader Stock Trading Performance Benchmark Sample </description> <version>1.apache.apache.samples. only EJB client JARs are included when specified in the Java modules list.daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <includeInApplicationXml>true</includeInApplicationXml> </javaModule> <javaModule> <groupId>org.xml file.geronimo.apache.daytrader</groupId> <artifactId>daytrader-wsappclient</artifactId> <includeInApplicationXml>true</includeInApplicationXml> </javaModule> <webModule> <groupId>org.maven. or those with a scope of test or provided. it is often necessary to customize the inclusion of some dependencies such as shown in this example: <build> <plugins> <plugin> <groupId>org.samples.
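The reverse customization is possible too: a dependency that would otherwise be bundled can be kept out of the EAR. The sketch below assumes an excluded flag on module entries, which should be verified against the maven-ear-plugin documentation for the plugin version in use.

```xml
<!-- Hypothetical exclusion sketch; verify the <excluded> element
     against the maven-ear-plugin documentation before relying on it. -->
<javaModule>
  <groupId>org.apache.geronimo.samples.daytrader</groupId>
  <artifactId>daytrader-streamer</artifactId>
  <excluded>true</excluded>
</javaModule>
```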
. The streamer module's build is not described in this chapter because it's a standard build generating a JAR: [.apache.daytrader</groupId> <artifactId>daytrader-streamer</artifactId> <includeInApplicationXml>true</includeInApplicationXml> <bundleDir>lib</bundleDir> </javaModule> <javaModule> <groupId>org.geronimo.Better Builds with Maven It is also possible to configure where the JARs' Java modules will be located inside the generated EAR...apache.geronimo.samples.org/plugins/maven-ear-plugin.apache.samples.. </javaModule> [.. 120 . Run mvn install in daytrader/streamer.] <defaultBundleDir>lib</defaultBundleDir> <modules> <javaModule> ... if you wanted to put the libraries inside a lib subdirectory of the EAR you would use the bundleDir element: <javaModule> <groupId>org. For example.
ear to C:\[.samples.geronimo.samples.ear [INFO] [install:install] [INFO] Installing C:\dev\m2book\code\j2ee\daytrader\ear\ target\daytrader-ear-1.war] [INFO] Copying artifact [ejb:org.daytrader: daytrader-ejb:1.0-client.geronimo.0.0] to[daytrader-ejb-1.geronimo.m2\repository\org\apache\geronimo\samples\ daytrader\daytrader-ear\1.jar] [INFO] Copying artifact [war:org.Building J2EE Applications To generate the EAR.ear 121 .]\.0\daytrader-ear-1.geronimo.0] to [daytrader-ejb-1.0] to [daytrader-web-1.daytrader: daytrader-ejb:1.apache.0] to [daytrader-streamer-1.daytrader: daytrader-wsappclient:1.jar] [INFO] Copying artifact [jar:org.apache.geronimo.0.apache. run mvn install: C:\dev\m2book\code\j2ee\daytrader\ear>mvn install […] [INFO] [ear:generate-application-xml] [INFO] Generating application.samples.samples.0] to [daytrader-wsappclient-1.apache.daytrader: daytrader-web:1.samples.xml [INFO] [resources:resources] [INFO] Using default encoding to copy filtered resources..jar] [INFO] Could not find manifest file: C:\dev\m2book\code\j2ee\daytrader\ear\src\main\application\ META-INF\MANIFEST.0.Generating one [INFO] Building jar: C:\dev\m2book\code\j2ee\daytrader\ear\ target\daytrader-ear-1.0.daytrader: daytrader-streamer:1.0..0. [INFO] [ear:ear] [INFO] Copying artifact [jar:org.MF .jar] [INFO] Copying artifact [ejb-client:org.apache.0.
You should review the generated application.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<application xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
        http://java.sun.com/xml/ns/j2ee/application_1_4.xsd"
    version="1.4">
  <description>
    DayTrader Stock Trading Performance Benchmark Sample
  </description>
  <display-name>Trade</display-name>
  <module>
    <java>daytrader-streamer-1.0.jar</java>
  </module>
  <module>
    <java>daytrader-wsappclient-1.0.jar</java>
  </module>
  <module>
    <web>
      <web-uri>daytrader-web-1.0.war</web-uri>
      <context-root>/daytrader</context-root>
    </web>
  </module>
  <module>
    <ejb>daytrader-ejb-1.0.jar</ejb>
  </module>
</application>

This looks good. The next section will demonstrate how to deploy this EAR into a container.

The DayTrader application does not deploy correctly when using the JDK 5 or newer. You'll need to use the JDK 1.4 for this section and the following.

4.12. Deploying a J2EE Application

You have already learned how to deploy EJBs and WARs into a container individually, and deploying EARs follows the same principle. Geronimo is somewhat special among J2EE containers in that deploying requires calling the Deployer tool with a deployment plan: an XML file that tells Geronimo, among other things, how to map J2EE resources in the container. Like any other container, Geronimo also supports having this deployment descriptor located within the J2EE archives you are deploying. However, it is recommended that you use an external plan file so that the deployment configuration is independent from the archives getting deployed, enabling the Geronimo plan to be modified to suit the deployment environment. In this example, you will use an external plan file.
Figure 4-12: Directory structure of the ear module showing the Geronimo deployment plan How do you perform the deployment with Maven? One option would be to use Cargo as demonstrated earlier in the chapter.cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <container> <containerId>geronimo1x</containerId> <zipUrlInstaller> <url> configuration snippet: <plugin> <groupId>org. You would need the following pom.xml</plan> </properties> </deployable> </deployables> </deployer> </configuration> </plugin> 123 .apache.org/dist/geronimo/1.Building J2EE Applications To get started. as shown on Figure 4-12.0.0/ geronimo-tomcat-j2ee-1.codehaus..
home>c:/apps/geronimo-1.build. or when Cargo doesn't support the container you want to deploy into.Better Builds with Maven However.build.ear </argument> <argument> ${basedir}/src/main/deployment/geronimo/plan.home}/bin/deployer.home> </properties> </profile> </profiles> At execution time. put the following profile in a profiles. learning how to use the Exec plugin is useful in situations where you want to do something slightly different.finalName}.jar –user system –password manager deploy C:\dev\m2book\code\j2ee\daytrader\ear\target/daytrader-ear-1.xml </argument> </arguments> </configuration> </plugin> You may have noticed that you're using a geronimo.xml or settings.directory}/${project. Even though it's recommended to use a specific plugin like the Cargo plugin (as described in 4. As the location where Geronimo is installed varies depending on the user.xml to configure the Exec plugin: <plugin> <groupId>org.0-tomcat</geronimo.xml 124 . As you've seen in the EJB and WAR deployment sections above and in previous chapters it's possible to create properties that are defined either in a properties section of the POM or in a Profile.xml file: <profiles> <profile> <id>vmassol</id> <properties> <geronimo.13 Testing J2EE Applications).codehaus.home property that has not been defined anywhere.0-tomcat/bin/deployer.jar</argument> <argument>--user</argument> <argument>system</argument> <argument>--password</argument> <argument>manager</argument> <argument>deploy</argument> <argument> ${project.ear C:\dev\m2book\code\j2ee\daytrader\ear/src/main/deployment/geronimo/plan. Modify the ear/pom.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <configuration> <executable>java</executable> <arguments> <argument>-jar</argument> <argument>${geronimo.0. This plugin can execute any process. You'll use it to run the Geronimo Deployer tool to deploy your EAR into a running Geronimo container. in this section you'll learn how to use the Maven Exec plugin. 
the Exec plugin will transform the executable and arguments elements above into the following command line:

java -jar c:/apps/geronimo-1.0-tomcat/bin/deployer.jar --user system --password manager deploy C:\dev\m2book\code\j2ee\daytrader\ear\target\daytrader-ear-1.0.ear C:\dev\m2book\code\j2ee\daytrader\ear\src\main\deployment\geronimo\plan.xml
First, start your pre-installed version of Geronimo and run mvn exec:exec:

C:\dev\m2book\code\j2ee\daytrader\ear>mvn exec:exec
[...]
[INFO] [exec:exec]
[INFO] Deployed Trade
[INFO]   `-> daytrader-web-1.0-SNAPSHOT.war
[INFO]   `-> daytrader-ejb-1.0-SNAPSHOT.jar
[INFO]   `-> daytrader-streamer-1.0-SNAPSHOT.jar
[INFO]   `-> daytrader-wsappclient-1.0-SNAPSHOT.jar
[INFO]   `-> TradeDataSource
[INFO]   `-> TradeJMS

You can now access the DayTrader application by opening it in your browser.

You will need to make sure that the DayTrader application is not already deployed before running the exec:exec goal or it will fail. Since Geronimo 1.0 comes with the DayTrader application bundled, you should first stop it, by creating a new execution of the Exec plugin or by running the following:

C:\apps\geronimo-1.0-tomcat\bin>deploy stop geronimo/daytrader-derby-tomcat/1.0/car

If you need to undeploy the DayTrader version that you've built above you'll use the “Trade” identifier instead:

C:\apps\geronimo-1.0-tomcat\bin>deploy undeploy Trade
4.13. Testing J2EE Applications

In this last section you'll learn how to automate functional testing of the EAR built previously. At the time of writing, Maven only supports integration and functional testing by creating a separate module. To achieve this, create a functional-tests module as shown in Figure 4-13.

Figure 4-13: The new functional-tests module amongst the other DayTrader modules

This module has been added to the list of modules in the daytrader/pom.xml so that it's built along with the others. Functional tests can take a long time to execute, however, so you can define a profile to build the functional-tests module only on demand; to do so, modify the daytrader/pom.xml (for example, see Chapter 7 to learn more about profiles).
the compiler and Surefire plugins are not triggered during the build life cycle of projects with a pom packaging. so these need to be configured in the functional-tests/pom. take a look in the functional-tests module itself. • Classpath resources required for the tests are put in src/it/resources (this particular example doesn't have any resources). . • The Geronimo deployment Plan file is located in src/deployment/geronimo/plan. Now. However.xml. Figure 4-14: Directory structure for the functional-tests module As this module does not generate an artifact. the packaging should be defined as pom.
..samples..] </plugins> </build> </project> 128 .Better Builds with Maven <project> <modelVersion>4.apache.apache.geronimo.apache.0-SNAPSHOT</version> </parent> <artifactId>daytrader-tests</artifactId> <name>DayTrader :: Functional Tests</name> <packaging>pom</packaging> <description>DayTrader Functional Tests</description> <dependencies> <dependency> <groupId>org.daytrader</groupId> <artifactId>daytrader</artifactId> <version>1.samples.apache.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <executions> <execution> <phase>integration-test</phase> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> [..geronimo.0-SNAPSHOT</version> <type>ear</type> <scope>provided</scope> </dependency> [.] </dependencies> <build> <testSourceDirectory>src/it</testSourceDirectory> <plugins> <plugin> <groupId>org.0</modelVersion> <parent> <groupId>org.daytrader</groupId> <artifactId>daytrader-ear</artifactId> <version>1.0.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <executions> <execution> <goals> <goal>testCompile</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.maven.
] <dependency> <groupId>org.cargo</groupId> <artifactId>cargo-core-uberjar</artifactId> <version>0.8</version> <scope>test</scope> </dependency> </dependencies> 129 . in the case of the DayTrader application. This is because the EAR artifact is needed to execute the functional tests. you'll bind the Cargo plugin's start and deploy goals to the preintegration-test phase and the stop goal to the postintegration-test phase.sourceforge.. so DBUnit is not needed to perform any database operations. and it is started automatically by Geronimo. It also ensures that the daytrader-ear module is built before running the functional-tests build when the full DayTrader build is executed from the toplevel in daytrader/. To set up your database you can use the DBUnit Java API (see. As the Surefire plugin's test goal has been bound to the integration-test phase above.xml file: <project> [.codehaus.. Start by adding the Cargo dependencies to the functional-tests/pom. there's a DayTrader Web page that loads test data into the database. you will usually utilize a real database in a known state..] <dependencies> [. You may be asking how to start the container and deploy the DayTrader EAR into it.cargo</groupId> <artifactId>cargo-ant</artifactId> <version>0. Derby is the default database configured in the deployment plan.net/). For integration and functional tests. However. thus ensuring the proper order of execution.codehaus.8</version> <scope>test</scope> </dependency> <dependency> <groupId>org.. You're going to use the Cargo plugin to start Geronimo and deploy the EAR into it. In addition.Building J2EE Applications As you can see there is also a dependency on the daytrader-ear module.
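The bindings just described take the form of execution blocks on the Cargo plugin. A sketch of the start/deploy half (the execution id is arbitrary; the goal names are the Cargo plugin's own) looks like this:

```xml
<execution>
  <id>start-container</id>
  <phase>pre-integration-test</phase>
  <goals>
    <goal>start</goal>
    <goal>deploy</goal>
  </goals>
</execution>
```

The matching stop goal is bound the same way to the post-integration-test phase, as shown later in this section.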
] The deployer element is used to configure the Cargo plugin's deploy goal.0/ geronimo-tomcat-j2ee-1..codehaus.0. thus ensuring that the EAR is ready for servicing when the tests execute.org/dist/geronimo/1.xml </plan> </properties> <pingURL></pingURL> </deployable> </deployables> </deployer> </configuration> </execution> [.Better Builds with Maven Then create an execution element to bind the Cargo plugin's start and deploy goals: <build> <plugins> [... In addition.. a pingURL element is specified so that Cargo will ping the specified URL till it responds. 130 .apache.geronimo.cargo</groupId> <artifactId>cargo-maven2-plugin</artifactId> <configuration> <wait>false</wait> <container> <containerId>geronimo1x</containerId> <zipUrlInstaller> <url></groupId> <artifactId>daytrader-ear</artifactId> <type>ear</type> <properties> <plan> ${basedir}/src/deployment/geronimo/plan. It is configured to deploy the EAR using the Geronimo Plan file.apache.] <plugin> <groupId>org.samples.
Add the JUnit and HttpUnit dependencies. add an execution element to bind the Cargo plugin's stop goal to the post-integration-test phase: [. with both defined using a test scope.6.8.1</version> <scope>test</scope> </dependency> 131 .. as you're only using them for testing: <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.net/) to call a Web page from the DayTrader application and check that it's working..] <execution> <id>stop-container</id> <phase>post-integration-test</phase> <goals> <goal>stop</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> The functional test scaffolding is now ready.. You're going to use the HttpUnit testing framework (</version> <scope>test</scope> </dependency> <dependency> <groupId>httpunit</groupId> <artifactId>httpunit</artifactId> <version>1.sourceforge.Building J2EE Applications Last.
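HttpUnit exposes the fetched page's title through WebResponse.getTitle(). Purely as an illustration of the kind of check that assertion performs (this is not HttpUnit's implementation, and real pages need a full HTML parser), a dependency-free title extractor could look like this:

```java
// Illustrative sketch only: extracts the contents of the first
// <title> element from an HTML string, roughly what a title
// assertion ends up comparing against. Not HttpUnit code.
public class TitleCheck {

    public static String extractTitle(String html) {
        String lower = html.toLowerCase();
        int start = lower.indexOf("<title>");
        int end = lower.indexOf("</title>");
        if (start < 0 || end < 0 || end < start) {
            return null; // no well-formed <title> element found
        }
        return html.substring(start + "<title>".length(), end).trim();
    }

    public static void main(String[] args) {
        String page = "<html><head><title>DayTrader</title></head><body></body></html>";
        System.out.println(extractTitle(page)); // prints "DayTrader"
    }
}
```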
Next, add a JUnit test class called src/it/java/org/apache/geronimo/samples/daytrader/FunctionalTest.java. In the class, the URL is called to verify that the returned page has a title of “DayTrader”:

package org.apache.geronimo.samples.daytrader;

import junit.framework.*;
import com.meterware.httpunit.*;

public class FunctionalTest extends TestCase {
    public void testDisplayMainPage() throws Exception {
        WebConversation wc = new WebConversation();
        WebRequest request = new GetMethodWebRequest("");
        WebResponse response = wc.getResponse(request);
        assertEquals("DayTrader", response.getTitle());
    }
}

It's time to reap the benefits from your build. Change directory into functional-tests, type mvn install and relax:

C:\dev\m2book\code\j2ee\daytrader\functional-tests>mvn install
[...]
[surefire] Running org.apache.geronimo.samples.daytrader.FunctionalTest
[surefire] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.531 sec
[INFO] [cargo:stop {execution: stop-container}]

4.14. Summary

You have learned from chapters 1 and 2 how to build any type of application and this chapter has demonstrated how to build J2EE applications. In addition you've discovered how to automate starting and stopping containers, deploying J2EE archives and implementing functional tests. At this stage you've pretty much become an expert Maven user! The following chapters will show even more advanced topics such as how to write Maven plugins, how to effectively set up Maven in a team, how to gather project health information from your builds, and more.
5. Developing Custom Maven Plugins

This chapter covers:
• How plugins execute in the Maven life cycle
• Tools and languages available to aid plugin developers
• Implementing a basic plugin using Java and Ant
• Working with dependencies, source directories, and resources from a plugin
• Attaching an artifact to the project

For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.
- Richard Feynman
plugins provide a grouping mechanism for multiple mojos that serve similar functions within the build life cycle. injecting runtime parameter information. When a number of mojos perform related tasks. resolving dependencies.2. the maven-compiler-plugin incorporates two mojos: compile and testCompile. or even at the Web sites of third-party tools offering Maven integration by way of their own plugins (for a list of some additional plugins available for use. For example. Recall that a mojo represents a single task in the build process. However. Packaging these mojos inside a single plugin provides a consistent access mechanism for users.1. With most projects. and is defined as a set of task categories. but also extending a project's build to incorporate new functionality. Additionally. it will discuss the various ways that a plugin can interact with the Maven build environment and explore some examples. such as integration with external tools and systems. it enables these mojos to share common code more easily. In this case. 134 . A mojo is the basic unit of work in the Maven application. it traverses the phases of the life cycle in order. The actual functional tasks. in order to perform the tasks necessary to build a project. or work. Even if a project requires a special task to be performed. the plugins provided “out of the box” by Maven are enough to satisfy the needs of most build processes (see Appendix A for a list of default plugins used to build a typical project). This ordering is called the build life cycle. resolving project dependencies. It executes an atomic build task that represents a single step in the build process. When Maven executes a build. of the build process are executed by the set of plugins associated with the phases of a project's build life-cycle. it may be necessary to write a custom plugin to integrate these tasks into the build life cycle. the chapter will cover the tools available to simplify the life of the plugin developer. 
This association of mojos to phases is called binding and is described in detail below. the build process for a project comprises a set of mojos executing in a particular. A Review of Plugin Terminology Before delving into the details of how Maven plugins function and how they are written. 5. It starts by describing fundamentals. Maven's core APIs handle the “heavy lifting” associated with loading project definitions (POMs). the common theme for these tasks is the function of compiling code. From there. called phases. Just like Java packages. the loosely affiliated CodeHaus Mojo project. it is still likely that a plugin already exists to perform this task. they are packaged together into a plugin. Such supplemental plugins can be found at the Apache Maven project. Finally. and organizing and running plugins. This chapter will focus on the task of writing custom plugins. and more. refer to the Plugin Matrix. including a review of plugin terminology and the basic mechanics of the the Maven plugin framework. if your project requires tasks that have no corresponding plugin. This makes Maven's plugin framework extremely important as a means of not only building a project. allowing shared configuration to be added to a single section of the POM. Maven is actually a platform that executes plugins within a build life cycle. executing all the associated mojos at each phase of the build. Introduction As described in Chapter 2. Correspondingly. well-defined order. let's begin by reviewing the terminology used to describe a plugin and its role in the build.Better Builds with Maven 5. Each mojo can leverage the rich infrastructure provided by Maven for loading projects.
Think of these mojos as tangential to the the Maven build process. Together. to ensure compatibility with other plugins. they can be bound to any phase in the life cycle. before a mojo can execute. mojos have a natural phase binding which determines when a task should execute within the life cycle. and as such. As a result. since they often perform tasks for the POM maintainer. Using the life cycle. As a plugin developer. Indeed. which is used for the majority of build activities (the other two life cycles deal with cleaning a project's work directory and generating a project Web site). you must understand the mechanics of life-cycle phase binding and parameter injection. which correspond to the phases of the build life cycle. including a well-defined build life cycle. will not have a life-cycle phase binding at all since they don't fall into any natural category within a typical build process. Since phase bindings provide a grouping mechanism for mojos within the life cycle. the discussion in this chapter is restricted to the default life cycle. 5. Most mojos fall into a few general categories.Developing Custom Maven Plugins Together with phase binding. While mojos usually specify a default phase binding. Each execution can specify a separate phase binding for its declared set of mojos. In some cases. A discussion of all three build life cycles can be found in Appendix A. sequencing the various build operations. a given mojo can even be bound to the life cycle multiple times during a single build. and parameter resolution and injection. using the plugin executions section of the project's POM. 5. it may still require that certain activities have already been completed. Maven also provides a welldefined procedure for building a project's sources into a distributable archive. parameter injection and life-cycle binding form the cornerstone for all mojo development. 
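As mentioned above, these extra bindings are declared through the executions section of the POM. A hypothetical sketch, in which the plugin coordinates and goal name are placeholders, looks like this:

```xml
<!-- Hypothetical re-binding sketch; groupId, artifactId, and the
     goal name are placeholders for a real plugin's coordinates. -->
<plugin>
  <groupId>com.example.plugins</groupId>
  <artifactId>example-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>custom-binding</id>
      <!-- overrides the mojo's default phase binding -->
      <phase>process-classes</phase>
      <goals>
        <goal>some-goal</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

A second execution element with a different phase would bind the same mojo again later in the life cycle.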
Understanding this framework will enable you to extract the Maven build-state information that each mojo requires. Therefore. in addition to determining its appropriate phase binding.3.1. While Maven does in fact define three different lifecycles. a mojo can pick and choose what elements of the build state it requires in order to execute its task. The Plugin Framework Maven provides a rich framework for its plugins. Binding to a phase of the Maven life cycle allows a mojo to make assumptions based upon what has happened in the preceding phases. it is important to provide the appropriate phase binding for your mojos. the ordered execution of Maven's life cycle gives coherence to the build process. a mojo may be designed to work outside the context of the build life cycle. However. you will also need a good understanding of how plugins are structured and how they interact with their environment. so be sure to check the documentation for a mojo before you re-bind it. These mojos are meant to be used by way of direct invocation. 135 .3. Using Maven's parameter injection infrastructure. dependency management. plus much more. successive phases can make assumptions about what work has taken place in the previous phases. Such mojos may be meant to check out a project from version control. or aid integration with external development tools. or even create the directory structure for a new project. Bootstrapping into Plugin Development In addition to understanding Maven's plugin terminology.
These mojos were always present in the life-cycle definition. none of the mojos from the maven-resources-plugin will be executed. Instead. Indeed. determining when not to execute. each of the resource-related mojos will discover this lack of non-code resources and simply opt out without modifying the build in any way. the jar mojo from the maven-jarplugin will harvest these class files and archive them into a jar file. Only those mojos with tasks to perform are executed during this build. Maven will execute a default life cycle for the 'jar' packaging. did not execute.. then two additional mojos will be triggered to handle unit testing. generation of the project's Web site. is often as important as the modifications made during execution itself. The testCompile mojo from the maven-compiler-plugin will compile the test sources. Maven's plugin framework ensures that almost anything can be integrated into the build life cycle. the compile mojo from the maven-compiler-plugin will compile the source code into binary class files in the output directory. consider a very basic Maven build: a project with source code that should be compiled and archived into a jar file for redistribution. As a specific example of how plugins work together through the life cycle. Since our hypothetical project has no “non-code” resources. If this basic Maven project also includes source code for unit tests. then the test mojo from the maven-surefire-plugin will execute those compiled tests.Better Builds with Maven Participation in the build life cycle Most plugins consist entirely of mojos that are bound at various phases in the life cycle according to their function in the build process. This level of extensibility is part of what makes Maven so powerful. validation of project content. 136 . many more plugins can be used to augment the default life-cycle definition. Then. but until now they had nothing to do and therefore. and much more. at least two of the above mojos will be invoked. 
This is not a feature of the framework, but a requirement of a well-designed mojo.
and the resulting value is injected into the mojo. It contains information about the mojo's implementation class (or its path within the plugin jar). whether it is required for the mojo's execution. and more. along with any system properties that were provided when Maven was launched. the expression to retrieve that information might look as follows: ${patchDirectory} For more information about which mojo expressions are built into Maven. and once resolved. That is to say. the life-cycle phase to which the mojo should be bound. each declared mojo parameter includes information about the various expressions used to resolve its value.Developing Custom Maven Plugins Accessing build information In order for mojos to execute effectively. The Maven plugin descriptor is a file that is embedded in the plugin jar archive. The descriptor is an XML file that informs Maven about the set of mojos that are contained within the plugin. and consists of the user. a mojo that applies patches to the project source code will need to know where to find the project source and patch files. 137 . whether it is editable. For example. see Appendix A.and machinelevel Maven settings. how do you instruct Maven to inject those values into the mojo instance? Further. • To gain access to the current build state. thereby avoiding traversal of the entire build-state object graph.compileSourceRoots} Then. Using the correct parameter expressions. see Appendix A. under the path /META-INF/maven/plugin. a mojo can keep its dependency list to a bare minimum. in addition to any programmatic modifications made by previous mojo executions. using a language-appropriate mechanism. assuming the patch directory is specified as mojo configuration inside the POM. and what methods Maven uses to extract mojo parameters from the build state. they require information about the state of the current build. and the mechanism for injecting the parameter value into the mojo instance. 
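To make the expression mechanism more concrete, the following toy resolver (emphatically not Maven's actual implementation, which navigates the real build-state object graph) mimics the idea of evaluating ${...} expressions against a map of build state:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of expression-based parameter resolution.
// Maven's real resolver works against live build objects (the POM,
// settings, system properties); this sketch just uses a flat map.
public class ToyExpressionResolver {

    private final Map<String, Object> buildState = new HashMap<String, Object>();

    public void put(String key, Object value) {
        buildState.put(key, value);
    }

    // Resolve an expression of the form ${key}; anything else is
    // treated as a literal value and returned unchanged.
    public Object resolve(String expression) {
        if (expression.startsWith("${") && expression.endsWith("}")) {
            String key = expression.substring(2, expression.length() - 1);
            return buildState.get(key);
        }
        return expression;
    }

    public static void main(String[] args) {
        ToyExpressionResolver resolver = new ToyExpressionResolver();
        resolver.put("patchDirectory", "src/patches");
        System.out.println(resolver.resolve("${patchDirectory}")); // prints "src/patches"
    }
}
```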
The plugin descriptor Though you have learned about binding mojos to life-cycle phases and resolving parameter values using associated expressions. This information comes in two categories: • Project information – which is derived from the project POM. the expression associated with a parameter is resolved against the current build state. Within this descriptor. Environment information – which is more static. until now you have not seen exactly how a life-cycle binding occurs. At runtime. the set of parameters the mojo declares. For the complete plugin descriptor syntax. This mojo would retrieve the list of source directories from the current build information using the following expression: ${project. Maven allows mojos to specify parameters whose values are extracted from the build state using expressions. how do you associate mojo parameters with their expression counterparts.xml. how do you instruct Maven to instantiate a given mojo in the first place? The answers to these questions lie in the plugin descriptor.
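For orientation only, here is an abridged, hand-written sketch of what a descriptor fragment for the hypothetical patch mojo above might contain. The element names should be checked against Appendix A, and real descriptors are generated by the plugin tools rather than written by hand:

```xml
<!-- Abridged, illustrative plugin descriptor fragment; the mojo
     name, implementation class, and parameter are hypothetical. -->
<plugin>
  <goalPrefix>patch</goalPrefix>
  <mojos>
    <mojo>
      <goal>apply</goal>
      <phase>process-sources</phase>
      <implementation>com.example.ApplyPatchMojo</implementation>
      <parameters>
        <parameter>
          <name>patchDirectory</name>
          <required>true</required>
          <editable>true</editable>
        </parameter>
      </parameters>
      <configuration>
        <patchDirectory>${patchDirectory}</patchDirectory>
      </configuration>
    </mojo>
  </mojos>
</plugin>
```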
Better Builds with Maven

The plugin descriptor is very powerful in its ability to capture the wiring information for a wide variety of mojos. However, this flexibility comes at a price. To accommodate the extensive variability required from the plugin descriptor, it uses a complex syntax. Writing a plugin descriptor by hand demands that plugin developers understand low-level details about the Maven plugin framework – details that the developer will not use, except when configuring the descriptor. This is where Maven's plugin development tools come into play.

5.3.2. Plugin Development Tools

To simplify the creation of plugin descriptors, Maven provides plugin tools to parse mojo metadata from a variety of formats. This metadata is embedded directly in the mojo's source code where possible, and the format used to write a mojo's metadata is dependent upon the language in which the mojo is implemented. By abstracting many of these details away from the plugin developer, Maven's plugin-development tools remove the burden of maintaining mojo metadata by hand; Maven's development tools expose only relevant specifications in a format convenient for a given plugin's implementation language.

These plugin-development tools are divided into the following two categories:

• The plugin extractor framework – which knows how to parse the metadata formats for every language supported by Maven, and orchestrates the process of extracting metadata from mojo implementations. In short, it consists of a framework library which is complemented by a set of provider libraries (generally, one per supported mojo language).
• The maven-plugin-plugin – which uses the plugin extractor framework, adding any other plugin-level metadata through its own configuration (which can be modified in the plugin's POM). This framework generates both plugin documentation and the coveted plugin descriptor. Of course, the maven-plugin-plugin simply augments the standard jar life cycle mentioned previously as a resource-generating step (this means the standard process of turning project sources into a distributable jar archive is modified only slightly, to generate the plugin descriptor).

Using Java, it's a simple case of providing special javadoc annotations to identify the properties and parameters of the mojo. For example, the clean mojo in the maven-clean-plugin provides the following class-level javadoc annotation:

    /**
     * @goal clean
     */
    public class CleanMojo extends AbstractMojo

This annotation tells the plugin-development tools the mojo's name, so it can be referenced from life-cycle mappings, POM configurations, and direct invocations (as from the command line). The clean mojo also defines the following:
    /**
     * Be verbose in the debug log-level?
     *
     * @parameter expression="${clean.verbose}" default-value="false"
     */
    private boolean verbose;

Here, the annotation identifies this field as a mojo parameter. This parameter annotation also specifies two attributes, expression and default-value. The first specifies that this parameter's default value should be set to false. The second specifies that this parameter can also be configured from the command line as follows:

    -Dclean.verbose=false

Moreover, it specifies that this parameter can be configured from the POM using:

    <configuration>
      <verbose>false</verbose>
    </configuration>

You may notice that this configuration name isn't explicitly specified in the annotation; it's implicit when using the @parameter annotation.

At first, it might seem counter-intuitive to initialize the default value of a Java field using a javadoc annotation, especially when you could just declare the field as follows:

    private boolean verbose = false;

But consider what would happen if the default value you wanted to inject contained a parameter expression. For instance, consider the following field annotation from the resources mojo in the maven-resources-plugin:

    /**
     * Directory containing the classes.
     *
     * @parameter default-value="${project.build.outputDirectory}"
     */
    private File classesDirectory;

In this case, it's impossible to initialize the Java field with the value you need. Remember, the mojo is instantiated first; when the mojo is instantiated, this value is resolved based on the POM and injected into this field as a java.io.File instance, which references the output directory for the current project. Since the plugin tools can also generate documentation about plugins based on these annotations, it's a good idea to consistently specify the parameter's default value in the metadata, rather than in the Java field initialization code.

For a complete list of javadoc annotations available for specifying mojo metadata, see Appendix A. These annotations are specific to mojos written in Java. If you choose to write mojos in another language, like Ant, then the mechanism for specifying mojo metadata such as parameter definitions will be different. However, the underlying principles remain the same.
Choose your mojo implementation language

Through its flexible plugin descriptor format and invocation framework, Maven can accommodate mojos written in virtually any language. Maven currently supports mojos written in Java, Ant, and Beanshell.

Since Java is currently the easiest language for plugin development, this chapter will focus primarily on plugin development in this language. For many mojo developers, Java is the language of choice. It provides easy reuse of third-party APIs from within your mojo, and because many Maven-built projects are written in Java, it also provides good alignment of skill sets when developing mojos from scratch. Plugin parameters can be injected via either field reflection or setter methods. Maven lets you select pieces of the build state to inject as mojo parameters; this relieves you of the burden associated with traversing a large object graph in your code, and minimizes the number of dependencies you will have on Maven's core APIs.

However, in certain cases you may find it easier to use Ant scripts to perform build tasks. Maven can wrap an Ant build target and use it as if it were a mojo. This is especially important during migration, when translating a project build from Ant to Maven (refer to Chapter 8 for more discussion about migrating from Ant to Maven). During the early phases of such a migration, it is often simpler to wrap existing Ant build targets with Maven mojos and bind them to various phases in the life cycle. To make Ant scripts reusable, mojo mappings and parameter definitions are declared via an associated metadata file. This pairing of the build script and accompanying metadata file follows a naming convention that allows the maven-plugin-plugin to correlate the two files and create an appropriate plugin descriptor. Ant-based plugins can consist of multiple mojos mapped to a single build script, individual mojos each mapped to separate scripts, or any combination thereof. Therefore, due to the migration value of Ant-based mojos when converting a build to Maven, this chapter will also provide an example of basic plugin development using Ant.

Since Beanshell behaves in a similar way to standard Java, this technique also works well for Beanshell-based mojos. Whatever language you use, the underlying principles remain the same.

5.3.3. A Note on the Examples in this Chapter

When learning how to interact with the different aspects of Maven from within a mojo, it's important to keep the examples clean and relatively simple. Otherwise, you risk confusing the issue at hand – namely, the particular feature of the mojo framework currently under discussion. Therefore, the examples in this chapter will focus on a relatively simple problem space: gathering and publishing information about a particular build. Such information might include details about the system environment, the specific snapshot versions of dependencies used in the build, and so on.

To facilitate these examples, you will need to work with an external project, called buildinfo, which is used to read and write build information metadata files. This project can be found in the source code that accompanies this book. You can install it using the following simple command:

    mvn install
5.4. Developing Your First Mojo

For the purposes of this chapter, you will look at the development effort surrounding a sample project, called Guinea Pig, which will be deployed to the Maven repository system. This development effort will have the task of maintaining information about builds that are deployed to the development repository, eventually publishing it alongside the project's artifact in the repository for future reference (refer to Chapter 7 for more details on how teams use Maven).

5.4.1. BuildInfo Example: Capturing Information with a Java Mojo

To begin, this information should capture relevant details about the environment used to build the Guinea Pig artifacts. Capturing this information is key, since it can have a critical effect on the build process and the composition of the resulting Guinea Pig artifacts.

For example, consider a case where the POM contains a profile, which will be triggered by the value of a given system property – say, if the system property os.name is set to the value Linux (for more information on profiles, refer to Chapter 3). When triggered, this profile adds a new dependency on a Linux-specific library, which allows the build to succeed in that environment. When this profile is not triggered, a default profile injects a dependency on a Windows-specific library. For simplicity, this dependency is used only during testing, and has no impact on transitive dependencies for users of this project. If you have a test dependency which contains a defect, and this dependency is injected by one of the aforementioned profiles, then the value of the triggering system property – and the profile it triggers – could reasonably determine whether the build succeeds or fails. Therefore, the values of system properties used in the build are clearly very important. In this case, it makes sense to publish the value of this particular system property in a build information file, so that others can see the aspects of the environment that affected this build. In addition to simply capturing build-time information, you will need to disseminate the build to the rest of the development team, for the purposes of debugging.

Prerequisite: Building the buildinfo generator project

Before writing the buildinfo plugin, you must first install the buildinfo generator library into your Maven local repository. The buildinfo plugin is a simple wrapper around this generator, providing a thin adapter layer that allows the generator to be run from a Maven build. As a side note, this approach encapsulates an important best practice: by separating the generator from the Maven binding code, you are free to write any sort of adapter or front-end code you wish, and take advantage of a single, reusable utility in many different scenarios.

To build the buildinfo generator library, perform the following steps7:

    cd buildinfo
    mvn install

7 The README.txt file in the Code_Ch05.zip file provides sequential instructions for building the code.
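The profile scenario described above can be sketched in the POM as follows. This is an illustrative fragment only – the artifact coordinates are hypothetical, and serve merely to show Maven's property-based profile activation:

```xml
<profiles>
  <profile>
    <id>linux</id>
    <activation>
      <!-- Triggered when the build is launched with os.name=Linux -->
      <property>
        <name>os.name</name>
        <value>Linux</value>
      </property>
    </activation>
    <dependencies>
      <!-- Hypothetical Linux-specific test dependency -->
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>linux-native-utils</artifactId>
        <version>1.0</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```

Because the dependency is declared with test scope inside the profile, it affects only this project's test runs, not the transitive dependencies of its consumers – exactly the situation described above.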
Using the archetype plugin to generate a stub plugin project

Now that the buildinfo generator library has been installed, it's helpful to jump-start the plugin-writing process by using Maven's archetype plugin to create a simple stub project from a standard plugin-project template. To generate a stub plugin project for the buildinfo plugin, simply execute the following from the top level directory of the chapter 5 sample code:

    mvn archetype:create -DgroupId=com.exist.mvnbook.plugins \
        -DartifactId=maven-buildinfo-plugin \
        -DarchetypeArtifactId=maven-archetype-mojo

When you run this command, you're likely to see a warning message saying "${project.build.directory} is not a valid reference". This is a result of the Velocity template, used to generate the plugin source code, interacting with Maven's own plugin parameter annotations. This message does not indicate a problem.

This will create a project with the standard layout under a new subdirectory called maven-buildinfo-plugin within the current working directory. Inside, you'll find a basic POM and a sample mojo. The sample mojo can be found in the plugin's project directory, under the following path:

    src\main\java\com\exist\mvnbook\plugins\MyMojo.java

Once you have the plugin's project structure in place, you will need to modify the POM as follows:

• Change the name element to Maven BuildInfo Plugin.
• Remove the url element, since this plugin doesn't currently have an associated Web site.
• You will modify the POM again later, as you know more about your mojos' dependencies.

Finally, since you will be creating your own mojo from scratch, you should remove the sample mojo.

The mojo

You can handle this scenario using the following, fairly simple Java-based mojo. For the purposes of this plugin, this simple version will suffice for now.

    [...]
    /**
     * Write the environment information for the current build execution
     * to an XML file.
     *
     * @goal extract
     * @phase package
     * @requiresDependencyResolution test
     */
    public class WriteBuildInfoMojo extends AbstractMojo {
        /**
         * Determines which system properties are added to the buildinfo file.
         * @parameter
         */
        private String systemProperties;

        /**
         * The location to write the buildinfo file. Used to attach the buildinfo
         * to the project jar for installation and deployment.
         * @parameter expression="${buildinfo.outputFile}"
         *   default-value="${project.build.directory}/${project.artifactId}-${project.version}-buildinfo.xml"
         * @required
         */
        private File outputFile;

        public void execute() throws MojoExecutionException {
            BuildInfo buildInfo = new BuildInfo();

            addSystemProperties( buildInfo );

            try {
                BuildInfoUtils.writeXml( buildInfo, outputFile );
            } catch ( IOException e ) {
                throw new MojoExecutionException(
                    "Error writing buildinfo XML file. Reason: " + e.getMessage(), e );
            }
        }

        private void addSystemProperties( BuildInfo buildInfo ) {
            if ( systemProperties != null ) {
                String[] keys = systemProperties.split( "," );

                Properties sysprops = System.getProperties();

                for ( int i = 0; i < keys.length; i++ ) {
                    String key = keys[i].trim();

                    String value = sysprops.getProperty( key,
                        BuildInfoConstants.MISSING_INFO_PLACEHOLDER );

                    buildInfo.addSystemProperty( key, value );
                }
            }
        }
    }

While the code for this mojo is fairly straightforward, it's worthwhile to take a closer look at the javadoc annotations. In the class-level javadoc comment, there are two special annotations:

    /**
     * @goal extract
     * @phase package
     */

The first annotation, @goal, tells the plugin tools to treat this class as a mojo named extract. When you invoke this mojo, you will use this name. The second annotation tells Maven where in the build life cycle this mojo should be executed. In this case, you're collecting information from the environment with the intent of distributing it alongside the main project artifact in the repository. Therefore, it makes sense to execute this mojo in the package phase, so it will be ready to attach to the project artifact. In general, attaching to the package phase also gives you the best chance of capturing all of the modifications made to the build state before the jar is produced.
Aside from the class-level comment, you have several field-level javadoc comments, which are used to specify the mojo's parameters. Each offers a slightly different insight into parameter specification, so they will be considered separately. First, consider the parameter for the systemProperties variable:

    /**
     * @parameter expression="${buildinfo.systemProperties}"
     */

This is one of the simplest possible parameter specifications. Using the @parameter annotation by itself, with no attributes, will allow this mojo field to be configured using the plugin configuration specified in the POM. However, you may want to allow a user to specify which system properties to include in the build information file. In this case, the expression attribute allows you to specify a list of system properties on-the-fly. Using the expression attribute, you can specify the name of this parameter when it's referenced from the command line, as follows:

    localhost $ mvn buildinfo:extract \
        -Dbuildinfo.systemProperties=java.version,user.dir

The module where the command is executed should be bound to a plugin with a buildinfo goal prefix. In this case, the guinea-pig module is bound to the maven-buildinfo-plugin having the buildinfo goal prefix, so run the above command from the guinea-pig directory.

The outputFile parameter presents a slightly more complex example of parameter annotation. Take another look:

    /**
     * The location to write the buildinfo file. Used to attach the buildinfo
     * for installation and deployment.
     *
     * @parameter expression="${buildinfo.outputFile}"
     *   default-value="${project.build.directory}/${project.artifactId}-${project.version}-buildinfo.xml"
     *
     * @required
     */

In this case, since you have more specific requirements for this parameter, the complexity is justified. First, you want the mojo to use a certain value – calculated from the project's information – as a default value for this parameter. The default output path is constructed directly inside the annotation, using several expressions to extract project information on-demand. Here, you can see why the normal Java field initialization is not used. In addition, using the expression attribute, you can specify the name of this parameter when it's referenced from the command line. Finally, the mojo uses the @required annotation, to ensure that this parameter has a value: the mojo cannot function unless it knows where to write the build information file. If this parameter has no value when the mojo is configured, the build will fail with an error, as execution without an output file would be pointless.
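Because the @parameter annotation also makes these fields configurable from the POM, the same values could be supplied in a plugin configuration block rather than on the command line. The following is an illustrative sketch only – the custom file name shown is hypothetical:

```xml
<plugin>
  <groupId>com.exist.mvnbook.plugins</groupId>
  <artifactId>maven-buildinfo-plugin</artifactId>
  <configuration>
    <!-- Comma-separated list, parsed by the mojo's addSystemProperties() -->
    <systemProperties>java.version,user.dir</systemProperties>
    <!-- Overrides the default-value expression from the annotation -->
    <outputFile>${project.build.directory}/custom-buildinfo.xml</outputFile>
  </configuration>
</plugin>
```

Note that the element names (systemProperties, outputFile) are implied by the Java field names, just as described for the verbose parameter earlier.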
Developing Custom Maven Plugins

The Plugin POM

Once the mojo has been written, you can construct an equally simple POM which will allow you to build the plugin, as follows:

    <project>
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.exist.mvnbook.plugins</groupId>
      <artifactId>maven-buildinfo-plugin</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>maven-plugin</packaging>
      <dependencies>
        <dependency>
          <groupId>org.apache.maven</groupId>
          <artifactId>maven-plugin-api</artifactId>
          <version>2.0</version>
        </dependency>
        <dependency>
          <groupId>com.exist.mvnbook.shared</groupId>
          <artifactId>buildinfo</artifactId>
          <version>1.0-SNAPSHOT</version>
        </dependency>
        [...]
      </dependencies>
    </project>

This POM declares the project's identity and its two dependencies. Note the dependency on the buildinfo project, which provides the parsing and formatting utilities for the build information file. Also, note the packaging – specified as maven-plugin – which means that this plugin build will follow the maven-plugin life-cycle mapping. This mapping is a slightly modified version of the one used for the jar packaging, which simply adds plugin descriptor extraction and generation to the build process.
Binding to the life cycle

Now that you have a method of capturing build-time environmental information, you need to ensure that every build captures this information. The easiest way to guarantee this is to bind the extract mojo to the life cycle, so that every build triggers it. This involves modification of the standard jar life cycle, which you can do by adding the configuration of the new plugin to the Guinea Pig POM, to capture the os.name and java.version system properties, as follows:

    <build>
      [...]
      <plugins>
        <plugin>
          <groupId>com.exist.mvnbook.plugins</groupId>
          <artifactId>maven-buildinfo-plugin</artifactId>
          <executions>
            <execution>
              <id>extract</id>
              <configuration>
                <systemProperties>os.name,java.version</systemProperties>
              </configuration>
              <goals>
                <goal>extract</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        [...]
      </plugins>
      [...]
    </build>

The above binding will execute the extract mojo from your new maven-buildinfo-plugin during the package phase of the life cycle.
Developing Custom Maven Plugins

The output

Now that you have a mojo and a POM, you can build the plugin and try it out! First, build the buildinfo plugin with the following commands:

    cd C:\book-projects\maven-buildinfo-plugin
    mvn clean install

Next, test the plugin by building Guinea Pig with the buildinfo plugin bound to its life cycle as follows:

    cd C:\book-projects\guinea-pig
    mvn package

When the Guinea Pig build executes, you should see output similar to the following:

    [...]
    [INFO] [buildinfo:extract {execution: extract}]
    [...]
    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO] ------------------------------------------------------------------------
    [INFO] Guinea Pig Sample Application ..................... SUCCESS [0.468s]
    [INFO] Guinea Pig API .................................... SUCCESS [2.359s]
    [INFO] Guinea Pig Core ................................... SUCCESS [6.469s]
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------

Under the target directory, there should be a file named:

    guinea-pig-1.0-SNAPSHOT-buildinfo.xml

In the file, you will find information similar to the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <buildinfo>
      <systemProperties>
        <java.version>1.5.0_06</java.version>
        <os.name>Windows XP</os.name>
      </systemProperties>
      <sourceRoots>
        <sourceRoot>src\main\java</sourceRoot>
      </sourceRoots>
      <resourceRoots>
        <resourceRoot>src\main\resources</resourceRoot>
      </resourceRoots>
    </buildinfo>

While the name of the OS and the Java version may differ, the output of the generated build information is clear enough. Your mojo has captured the name of the operating system being used to execute the build and the version of the JVM, and both of these properties can have profound effects on binary compatibility.

5.4.2. BuildInfo Example: Notifying Other Developers with an Ant Mojo

Now that some important information has been captured, you need to share it with others in your team when the resulting project artifact is deployed, so that other team members have access to it. It's important to remember that in the Maven world, "deployment" is defined as injecting the project artifact into the Maven repository system. For now, it might be enough to send a notification e-mail to the project development mailing list.

Of course, such a task could be handled using a Java-based mojo and the JavaMail API from Sun. However, given the amount of setup and code required, it's simpler to use Ant, with its dozens of well-tested, mature tasks available for build script use (including one specifically for sending e-mails). From here, it's a simple matter of specifying where the e-mail should be sent, and how.

The Ant target

To leverage the output of the mojo from the previous example – the build information file – you can use that content as the body of the e-mail. Your new mojo will be in a file called notify.build.xml, and should look similar to the following:

    <project>
      <target name="notify-target">
        <mail from="maven@localhost"
              replyto="${listAddr}"
              subject="Build Info for Deployment of ${project.name}"
              mailhost="${mailHost}"
              mailport="${mailPort}"
              messagefile="${buildinfo.outputFile}">
          <to>${listAddr}</to>
        </mail>
      </target>
    </project>

If you're familiar with Ant, you'll notice that this mojo expects several project properties. Information like the to: address will have to be dynamic; therefore, it should be extracted directly from the POM for the project we're building. To ensure these project properties are in place within the Ant Project instance, simply declare mojo parameters for them. After writing the Ant target to send the notification e-mail, you just need to write a mojo definition to wire the new target into Maven's build process.
Developing Custom Maven Plugins

The mojo metadata file

Unlike the prior Java examples, metadata for an Ant mojo is stored in a separate file, which is associated to the build script using a naming convention. In this example, the build script was called notify.build.xml. The corresponding metadata file will be called notify.mojos.xml, and should appear as follows:

    <pluginMetadata>
      <mojos>
        <mojo>
          <call>notify-target</call>
          <goal>notify</goal>
          <phase>deploy</phase>
          <description><![CDATA[
            Email environment information from the current build to the
            development mailing list when the artifact is deployed.
          ]]></description>
          <parameters>
            <parameter>
              <name>buildinfo.outputFile</name>
              <defaultValue>
                ${project.build.directory}/${project.artifactId}-${project.version}-buildinfo.xml
              </defaultValue>
              <required>true</required>
              <readonly>false</readonly>
            </parameter>
            <parameter>
              <name>listAddr</name>
              <required>true</required>
            </parameter>
            <parameter>
              <name>project.name</name>
              <defaultValue>${project.
At first glance, the contents of this file may appear different than the metadata used in the Java mojo. However, upon closer examination, you will see many similarities; since you now have a good concept of the types of metadata used to describe a mojo, the overall structure of this file should be familiar. As with the Java example, mojo-level metadata describes details such as phase binding and mojo name, but expressed in XML. Likewise, the metadata specify a list of parameters for the mojo, each with its own information like name, expression, default value, and more, and parameter flags such as required are still present. In this example, all of the mojo's parameter types are java.lang.String; if one of the parameters were some other object type, you'd have to add a <type> element alongside the <name> element, in order to capture the parameter's type in the specification. The expression syntax used to extract information from the build state is exactly the same. A more in-depth discussion of the metadata file for Ant mojos is available in Appendix A.

As with the Java example, notice that this mojo is bound to the deploy phase of the life cycle. This is an important point in the case of this mojo, because you're going to be sending e-mails to the development mailing list. Any build that runs must be deployed for it to affect other development team members, so it's pointless to spam the mailing list with notification e-mails every time a jar is created for the project. Instead, by binding the mojo to the deploy phase of the life cycle, the notification e-mails will be sent only when a new artifact becomes available in the remote repository.

Whatever the language, Maven still must resolve and inject each of these parameters into the mojo; the difference here is the mechanism used for this injection. In Java, parameter injection takes place either through direct field assignment, or through JavaBeans-style setXXX() methods. In an Ant-based mojo, however, parameters are injected as properties and references into the Ant Project instance. The rule for parameter injection in Ant is as follows: if the parameter's type is java.lang.String (the default), then its value is injected as a property; otherwise, its value is injected as a project reference.

Modifying the Plugin POM for Ant Mojos

Maven 2.0 shipped without support for Ant-based mojos (support for Ant was added later in version 2.0.2), so some special configuration is required to allow the maven-plugin-plugin to recognize Ant mojos. To develop an Ant-based mojo, you will have to add support for Ant mojo extraction to the maven-plugin-plugin. Fortunately, Maven allows POM-specific injection of plugin-level dependencies, in order to accommodate plugins that take a framework approach to providing their functionality. The maven-plugin-plugin is a perfect example, with its use of the MojoDescriptorExtractor interface from the maven-plugin-tools-api library. This library defines a set of interfaces for parsing mojo descriptors from their native format and generating various output from those descriptors – including plugin descriptor files. The maven-plugin-plugin ships with the Java and Beanshell provider libraries which implement the above interface. This allows developers to generate descriptors for Java- or Beanshell-based mojos with no additional configuration.
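For instance, if a hypothetical Ant mojo parameter held a java.io.File rather than a String, its metadata entry could declare a <type> element, so that the resolved value is injected as a project reference instead of a property. This is an illustrative sketch only, based on the metadata syntax summarized in Appendix A; the parameter name here is invented:

```xml
<parameter>
  <!-- Hypothetical non-String parameter: injected as a project
       reference because its type is not java.lang.String -->
  <name>outputDirectory</name>
  <type>java.io.File</type>
  <defaultValue>${project.build.directory}</defaultValue>
  <required>true</required>
</parameter>
```

Inside the Ant script, a String parameter would then be read with the usual ${...} property syntax, while a typed parameter like this one would be retrieved as a project reference.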
Developing Custom Maven Plugins

To accomplish this, you will need to add a dependency on the maven-plugin-tools-ant library to the maven-plugin-plugin, using POM configuration as follows:

    <project>
      [...]
      <build>
        <plugins>
          <plugin>
            <groupId>com.exist.mvnbook.plugins</groupId>
            <artifactId>maven-plugin-plugin</artifactId>
            <dependencies>
              <dependency>
                <groupId>org.apache.maven</groupId>
                <artifactId>maven-plugin-tools-ant</artifactId>
                <version>2.0.2</version>
              </dependency>
            </dependencies>
          </plugin>
        </plugins>
      </build>
      [...]
    </project>

Additionally, since the plugin now contains an Ant-based mojo, it requires a couple of new dependencies, the specifications of which should appear as follows:

    <dependencies>
      [...]
      <dependency>
        <groupId>org.apache.maven</groupId>
        <artifactId>maven-script-ant</artifactId>
        <version>2.0.2</version>
      </dependency>
      <dependency>
        <groupId>ant</groupId>
        <artifactId>ant</artifactId>
        <version>1.6.5</version>
      </dependency>
      [...]
    </dependencies>

The first of these new dependencies is the mojo API wrapper for Ant build scripts, and it is always necessary for embedding Ant scripts as mojos in the Maven build process. The second new dependency is, quite simply, a dependency on the core Ant library (whose necessity should be obvious). If you don't have Ant in the plugin classpath, it will be quite difficult to execute an Ant-based plugin.
Binding the notify mojo to the life cycle

Once the plugin descriptor is generated for the Ant mojo, it behaves like any other type of mojo to Maven. Even its configuration is the same. Adding a life-cycle binding for the new Ant mojo in the Guinea Pig POM should appear as follows:

    <build>
      [...]
      <plugins>
        <plugin>
          <groupId>com.exist.mvnbook.plugins</groupId>
          <artifactId>maven-buildinfo-plugin</artifactId>
          <executions>
            <execution>
              <id>extract</id>
              [...]
            </execution>
            <execution>
              <id>notify</id>
              <goals>
                <goal>notify</goal>
              </goals>
              <configuration>
                <listAddr>dev@guineapig.codehaus.org</listAddr>
              </configuration>
            </execution>
          </executions>
        </plugin>
        [...]
      </plugins>
    </build>

The existing <execution> section – the one that binds the extract mojo to the build – is not modified. Instead, a new section for the notify mojo is created. This is because an execution section can address only one phase of the build life cycle, and these two mojos should not execute in the same phase (as mentioned previously). In order to tell the notify mojo where to send this e-mail, you should add a configuration section to the new execution section, which supplies the listAddr parameter value.

Now, execute the following command:

    mvn deploy

The build process executes the steps required to build and deploy a jar – except in this case, it will also extract the relevant environmental details during the package phase, and send them to the Guinea Pig development mailing list in the deploy phase. Again, notification happens in the deploy phase only, because non-deployed builds will have no effect on other team members.

Note: You have to configure distributionManagement and scm to successfully execute mvn deploy. See the section on Deploying your Application in Chapter 3.
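A minimal distributionManagement sketch for the deploy step might look as follows. The repository id, name, and URL below are placeholders only – substitute your own deployment target (see Chapter 3 for the full discussion):

```xml
<distributionManagement>
  <repository>
    <!-- Placeholder coordinates for an internal release repository -->
    <id>internal-repository</id>
    <name>Internal Release Repository</name>
    <url>scp://repo.example.com/var/maven/repository</url>
  </repository>
</distributionManagement>
```

The id chosen here must match a server entry in settings.xml if the repository requires credentials.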
Developing Custom Maven Plugins

Adding a resource to the build

Another common practice is for a mojo to generate some sort of non-code resource, which will be packaged up in the same jar as the project classes. Many different mojos package resources with their generated artifacts, such as web.xml files for servlet engines, or wsdl files for web services. This could also be a descriptor for binding the project artifact into an application framework, as in the case of Maven itself and the components.xml file found in all Maven artifacts. Whatever the purpose of the mojo, the process of adding a new resource directory to the current build is straightforward, and requires access to the MavenProject and MavenProjectHelper:

    /**
     * Project instance, used to add new source directory to the build.
     * @parameter default-value="${project}"
     * @required
     * @readonly
     */
    private MavenProject project;

This declaration will inject the current project instance into the mojo. However, the mojo also needs access to the MavenProjectHelper component, needed for attaching the buildinfo file. The project helper component can be injected as follows:

    /**
     * Helper class to assist in attaching artifacts to the project instance.
     * project-helper instance, used to make addition of resources simpler.
     * @component
     * @required
     * @readonly
     */
    private MavenProjectHelper projectHelper;

Right away, you should notice something very different about this parameter – namely, that it's not a parameter at all! In fact, this is what Maven calls a component requirement (it's a dependency on an internal component of the running Maven application). To be clear, the project helper is not a build state; it is a utility, provided to standardize the process of augmenting the project instance, and to abstract the associated complexities away from the mojo developer. It provides methods for attaching artifacts and adding new resource definitions to the current project.

Component requirements are simple to declare. This component is part of the Maven application itself, which means it's always present, so your mojo simply needs to ask for it. Component requirements are not available for configuration by users, and in most cases, the unadorned @component annotation – like the above code snippet – is adequate. Normally, the Maven application is well-hidden from the mojo developer. However, in some special cases, Maven components can make it much simpler to interact with the build process.
Better Builds with Maven

A complete discussion of Maven's architecture – and the components available – is beyond the scope of this chapter. However, the MavenProjectHelper component is worth mentioning here, as it is particularly useful to mojo developers. With these two objects at your disposal, adding a new resource couldn't be easier. Simply define the resources directory to add, along with inclusion and exclusion patterns for resources within that directory, and then call a utility method on the project helper. The code should look similar to the following:

String directory = "relative/path/to/some/directory";
List includes = Collections.singletonList("**/*");
List excludes = null;

projectHelper.addResource(project, directory, includes, excludes);

The prior example instantiates the resource's directory, inclusion patterns, and exclusion patterns as local variables, for the sake of brevity. In a typical case, however, these values would come from other mojo parameters, which may or may not be directly configurable.

If your mojo is meant to add resources to the eventual project artifact, it's important to understand where resources should be added during the build life cycle. Resources are copied to the classes directory of the build during the process-resources phase, so your mojo will need to execute ahead of this phase. The most common place for such activities is in the generate-resources life-cycle phase. Again, conforming with these standards improves the compatibility of your plugin with other plugins in the build.

Accessing the source-root list

Just as some mojos add new source directories to the build, others must read the list of active source directories, in order to perform some operation on the source code. The classic example is the compile mojo in the maven-compiler-plugin, which actually compiles the source code contained in these root directories into classes in the project output directory. Other examples include the javadoc mojo in the maven-javadoc-plugin, and the jar mojo in the maven-source-plugin.

Gaining access to the list of directories which contain source code for the project is easy; all you have to do is declare a single parameter to inject them, as in the following example:

/**
 * List of source roots containing non-test code.
 * @parameter default-value="${project.compileSourceRoots}"
 * @required
 * @readonly
 */
private List sourceRoots;

Similar to the parameter declarations from previous sections, this parameter declaration states that Maven does not allow users to configure this parameter directly; instead, they have to modify the sourceDirectory element in the POM, or else bind a mojo to the life-cycle phase that will add an additional source directory to the build. The parameter is also required for this mojo to execute; if it's missing, the entire build will fail.
Developing Custom Maven Plugins

Now that the mojo has access to the list of project source roots, it can iterate through them, applying whatever processing is necessary. Returning to the buildinfo example, it could be critically important to track the list of source directories used in a particular build, for eventual debugging purposes. If a certain profile injects a supplemental source directory into the build (most likely by way of a special mojo binding), then this profile would dramatically alter the resulting project artifact when activated. Therefore, in order to incorporate the list of source directories into the buildinfo object, you need to add the following code:

public void execute() throws MojoExecutionException
{
    [...]

    addSourceRoots( buildInfo );

    [...]
}

private void addSourceRoots( BuildInfo buildInfo )
{
    if ( sourceRoots != null && !sourceRoots.isEmpty() )
    {
        for ( Iterator it = sourceRoots.iterator(); it.hasNext(); )
        {
            String sourceRoot = (String) it.next();

            buildInfo.addSourceRoot( makeRelative( sourceRoot ) );
        }
    }
}

One thing to note about this code snippet is the makeRelative() method. By the time the mojo gains access to them, source roots are expressed as absolute file-system paths. In order to make this information more generally applicable, any reference to the path of the project directory in the local file system should be removed. This involves subtracting ${basedir} from the source-root paths. Remember, the ${basedir} expression refers to the location of the project directory in the local file system.

When you add this code to the extract mojo in the maven-buildinfo-plugin, it can be bound to any phase in the life cycle. However, binding this mojo to an early phase of the life cycle increases the risk of another mojo adding a new source root in a later phase, as in the case of the extract mojo. To be clear, since compile is the phase where source files are converted into classes, binding to any phase later than compile should be acceptable. In this case, however, it's better to bind it to a later phase like package if capturing a complete picture of the project is important.
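The makeRelative() method itself is not shown in the text. A minimal sketch of what such a helper might look like follows — the class name PathUtils and the exact signature are hypothetical, and it assumes the mojo has the project's basedir available as a java.io.File:

```java
import java.io.File;

class PathUtils
{
    /**
     * Hypothetical sketch of makeRelative(): strips the ${basedir}
     * prefix from an absolute path, yielding a project-relative path.
     * Paths that don't start with basedir are returned unchanged.
     */
    public static String makeRelative( File basedir, String path )
    {
        String prefix = basedir.getAbsolutePath() + File.separator;
        if ( path.startsWith( prefix ) )
        {
            return path.substring( prefix.length() );
        }
        // already relative, or outside the project directory
        return path;
    }
}
```

With a basedir of /home/user/project, for example, a source root of /home/user/project/src/main/java would be recorded in the buildinfo file as src/main/java.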
Better Builds with Maven

Accessing the resource list

Non-code resources complete the picture of the raw materials processed by a Maven build. Now, let's learn about how a mojo can access the list of resources used in a build. Much like the source-root list, capturing the list of resources used to produce a project artifact can yield information that is vital for debugging purposes. For instance, if an activated profile introduces a mojo that generates some sort of supplemental framework descriptor, it can mean the difference between an artifact that can be deployed into a server environment and an artifact that cannot. Therefore, it is important that the buildinfo file capture the resource root directories used in the build for future reference.

The list of resource definitions to be included in the project jar is easy to inject as a mojo parameter. The parameter appears as follows:

/**
 * List of Resource objects for the current build, containing
 * directory, includes, and excludes.
 * @parameter default-value="${project.resources}"
 * @required
 * @readonly
 */
private List resources;

Just like the source-root injection parameter, this parameter is declared as required for mojo execution and cannot be edited by the user. Instead, the user has the option of modifying the value of the list by configuring the resources section of the POM. You've already learned that mojos can modify the list of resources included in the project artifact; this is the mechanism used by the resources mojo in the maven-resources-plugin, which copies all non-code resources to the output directory for inclusion in the project artifact. As noted before with the dependencies parameter, since mojos can add new resources to the build programmatically, allowing direct configuration of this parameter could easily produce results that are inconsistent with other resource-consuming mojos.

It's also important to note that this list consists of Resource objects, which in fact contain information about a resource root, along with some matching rules for the resource files it contains. Since the resources list is an instance of java.util.List, and Maven mojos must be able to execute in a JDK 1.4 environment that doesn't support Java generics, mojos must be smart enough to cast list elements as org.apache.maven.model.Resource instances.

It's a simple task to add this capability, and it can be accomplished through the following code snippet:
Developing Custom Maven Plugins

public void execute() throws MojoExecutionException
{
    [...]

    addResourceRoots( buildInfo );

    [...]
}

private void addResourceRoots( BuildInfo buildInfo )
{
    if ( resources != null && !resources.isEmpty() )
    {
        for ( Iterator it = resources.iterator(); it.hasNext(); )
        {
            Resource resource = (Resource) it.next();

            String resourceRoot = resource.getDirectory();

            buildInfo.addResourceRoot( makeRelative( resourceRoot ) );
        }
    }
}

As with the prior source-root example, you'll notice the makeRelative() method. This method converts the absolute path of the resource directory into a relative path, by trimming the ${basedir} prefix. All POM paths injected into mojos are converted to their absolute form first, to avoid any ambiguity. It's necessary to revert resource directories to relative locations for the purposes of the buildinfo plugin, since the ${basedir} path won't have meaning outside the context of the local file system.

Adding this code snippet to the extract mojo in the maven-buildinfo-plugin will result in a resourceRoots section being added to the buildinfo file. That section should appear as follows:

<resourceRoots>
  <resourceRoot>src/main/resources</resourceRoot>
  <resourceRoot>target/generated-resources/xdoclet</resourceRoot>
</resourceRoots>

Once more, it's worthwhile to discuss the proper place for this type of activity within the build life cycle. Like the vast majority of activities, collecting the list of project resources has an appropriate place in the life cycle. Since all project resources are collected and copied to the project output directory in the process-resources phase, any mojo seeking to catalog the resources used in the build should execute at least as late as the process-resources phase. This ensures that any resource modifications introduced by mojos in the build process have been completed.

Note on testing source-roots and resources

All of the examples in this advanced development discussion have focused on the handling of source code and resources, which must be processed and included in the final project artifact. This chapter does not discuss test-time and compile-time source roots and resources as separate topics. However, due to the similarities, it's important to note that for every activity examined that relates to source-root directories or resource definitions, a corresponding activity can be written to work with their test-time counterparts, which may be executed during the build process. The concepts are the same; only the parameter expressions and method names are different. The key differences are summarized in the table below.
Better Builds with Maven

Table 6-2 below restates the key differences. Table 5-2: Key differences between compile-time and test-time mojo activities

Activity                   Change This                        To This
Add testing source root    project.addCompileSourceRoot()     project.addTestSourceRoot()
Get testing source roots   ${project.compileSourceRoots}      ${project.testSourceRoots}
Add testing resource       projectHelper.addResource()        projectHelper.addTestResource()
Get testing resources      ${project.resources}               ${project.testResources}

5.5. Attaching Artifacts for Installation and Deployment

Occasionally, mojos produce new artifacts that should be distributed alongside the main project artifact in the Maven repository system. Classic examples of attached artifacts are source archives, javadoc bundles, and even the buildinfo file produced in the examples throughout this chapter. These artifacts are typically a derivative action or side effect of the main build process. When a mojo, or set of mojos, produces a derivative artifact, an extra piece of code must be executed in order to attach that artifact to the project artifact. Maven treats these derivative artifacts as attachments to the main project artifact, in that they are never distributed without the project artifact being distributed. Doing so guarantees that the attachment will be distributed when the install or deploy phases are run.

While an e-mail describing the build environment is transient, and only serves to describe the latest build, the distribution of the buildinfo file via Maven's repository will provide a more permanent record of the build for each snapshot in the repository. This extra step, which is still missing from the maven-buildinfo-plugin example, can provide valuable information to the development team, for historical reference, since it provides information about how each snapshot of the project came into existence.

Usually, an artifact attachment will have a classifier, like sources or javadoc, which sets it apart from the main project artifact in the repository. Once an artifact attachment is deposited in the Maven repository, it can be referenced like any other artifact; however, this classifier must also be specified when declaring the dependency for such an artifact, by using the classifier element for that dependency section within the POM.

Including an artifact attachment involves adding two parameters and one line of code to your mojo. First, you'll need a parameter that references the current project instance as follows:

/**
 * Project instance, needed for attaching the buildinfo file.
 * @parameter default-value="${project}"
 * @required
 * @readonly
 */
private MavenProject project;
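A consumer that wanted to pull down an attached artifact could declare it with the classifier element. This is a sketch only — the coordinates follow the Guinea Pig example used in this section, and the type value of xml matches the extension used when attaching the buildinfo file:

```xml
<dependency>
  <groupId>com.exist.mvnbook.guineapig</groupId>
  <artifactId>guinea-pig-core</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- the classifier distinguishes the attachment from the main jar -->
  <classifier>buildinfo</classifier>
  <type>xml</type>
</dependency>
```

Without the classifier element, Maven would resolve the main jar artifact instead of the attachment.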
Developing Custom Maven Plugins

The MavenProject instance is the object with which your plugin will register the attachment, for use in later phases of the lifecycle. For convenience, you should also inject the following reference to MavenProjectHelper, which will make the process of attaching the buildinfo artifact a little easier. See Section 5.5.2 for a discussion about MavenProjectHelper and component requirements.

/**
 * Project-helper instance, used to make addition of resources simpler.
 * @component
 */
private MavenProjectHelper projectHelper;

Once you include these two fields in the extract mojo within the maven-buildinfo-plugin, the process of attaching the generated buildinfo file to the main project artifact can be accomplished by adding the following code snippet:

projectHelper.attachArtifact( project, "xml", "buildinfo", outputFile );

From the prior examples, the meaning and requirement of the project and outputFile references should be clear. However, there are also two somewhat cryptic string values being passed in: “xml” and “buildinfo”. These values represent the artifact extension and classifier, respectively. By specifying an extension of “xml”, you're telling Maven that the file in the repository should be named using a .xml extension. By specifying the “buildinfo” classifier, you're telling Maven that this artifact should be distinguished from other project artifacts by using this value in the classifier element of the dependency declaration. This serves to attach meaning beyond simply saying, “This is an XML file”: it identifies the file as being produced by the maven-buildinfo-plugin, as opposed to another plugin in the build process which might produce another XML file with different meaning.

Now that you've added code to distribute the buildinfo file, you can test it by re-building the plugin, then running Maven to the install life-cycle phase on our test project. If you build the Guinea Pig project using this modified version of the maven-buildinfo-plugin, you should see the buildinfo file appear in the local repository alongside the project jar, as follows:

mvn install
cd C:\Documents and Settings\[user_home]\.m2\repository
cd com\exist\mvnbook\guineapig\guinea-pig-core\1.0-SNAPSHOT
dir
Better Builds with Maven

guinea-pig-core-1.0-SNAPSHOT.jar
guinea-pig-core-1.0-SNAPSHOT.pom
guinea-pig-core-1.0-SNAPSHOT-buildinfo.xml

Finally, the maven-buildinfo-plugin is ready for action. It can extract relevant details from a running build and generate a buildinfo file based on these details. From there, it can attach the buildinfo file to the main project artifact so that it's distributed whenever Maven installs or deploys the project. Now, when the project is deployed, the maven-buildinfo-plugin can also generate an e-mail that contains the buildinfo file contents, and route that message to other development team members on the project development mailing list.
Developing Custom Maven Plugins

5.6. Summary

In its unadorned state, Maven represents an implementation of the 80/20 rule. Using the default lifecycle mapping, Maven can build a basic project with little or no modification – thus covering the 80% case. However, in certain circumstances, a project requires special tasks in order to build successfully. Whether they be code-generation, reporting, or verification steps, Maven can integrate these custom tasks into the build process through its extensible plugin framework.

Since the build process for a project is defined by the plugins – or more accurately, the mojos – that are bound to the build life cycle, there is a standardized way to inject new behavior into the build by binding new mojos at different life-cycle phases. Using the plugin mechanisms described in this chapter, you can integrate almost any tool into the build process. Mojo development can be as simple or as complex (to the point of embedding nested Maven processes within the build) as you need it to be. Working with project dependencies and resources is equally as simple. In this chapter, you've also learned how a plugin-generated file can be distributed alongside the project artifact in Maven's repository system, enabling you to attach custom artifacts for installation or deployment.

Many plugins already exist for Maven use, only a tiny fraction of which are a part of the default lifecycle mapping. If your project requires special handling, chances are good that you can find a plugin to address this need at the Apache Maven project, the Codehaus Mojo project, or the project web site of the tools with which your project's build must integrate. If not, it's unlikely to be a requirement unique to your project. So, developing a custom Maven plugin is an easy next step. Finally, if you have the means, please consider contributing back to the Maven community by providing access to your new plugin. It is in great part due to the re-usable nature of its plugins that Maven can offer such a powerful build platform.

6. Assessing Project Health with Maven
6.1. What Does Maven Have to Do with Project Health?

In the introduction, it was pointed out that Maven's application of patterns provides visibility and comprehensibility. Maven takes all of the information you need to know about your project and brings it together under the project Web site, which everyone can see at any time. Through the POM, Maven has access to the information that makes up a project, and because the POM is a declarative model of the project, new tools that can assess its health are easily integrated. Using a variety of tools, Maven can analyze, relate, and display that information in a single place.

When referring to health, there are two aspects to consider:

• Code quality – determining how well the code works, how well it is tested, and how well it adapts to change.
• Project vitality – finding out whether there is any activity on the project, and what the nature of that activity is.

It is these characteristics that assist you in assessing the health of your project. In this chapter, you'll learn how to use a number of these tools effectively. Many of the reports illustrated can be run as part of the regular build in the form of a “check” that will fail the build if a certain condition is not met. It is important not to get carried away with setting up a fancy Web site full of reports that nobody will ever use (especially when reports contain failures they don't want to know about!). This is important, because if the bar is set too high, there will be too many failed builds. This is unproductive, as minor changes are prioritized over more important tasks. Conversely, if the bar is set too low, the project will meet only the lowest standard and go no further. But why have a site, if the build fails its checks? The Web site also provides a permanent record of a project's health, and it provides additional information to help determine the reasons for a failed build, and whether the conditions for the checks are set correctly.

The next three sections demonstrate how to set up an effective project Web site. To begin, you will be revisiting the Proficio application that was developed in Chapter 3, and learning more about the health of the project. The code that concluded Chapter 3 is also included in Code_Ch06.zip for convenience as a starting point.8 Unzip the Code_Ch06.zip file into C:\mvnbook or your selected working directory, and then run mvn install from the proficio subdirectory to ensure everything is in place.

8 Please see the README.txt file included in the chapter 6 code samples zip file for additional details about building Proficio.
Assessing Project Health with Maven

6.2. Adding Reports to the Project Web site

This section builds on the information on project Web sites in Chapter 2 and Chapter 3, and now shows how to integrate project health information. To start, review the project Web site shown in figure 6-1.

Figure 6-1: The reports generated by Maven

You can see that the navigation on the left contains a number of reports. The Project Info menu lists the standard reports Maven includes with your site by default, unless you choose to disable them. These reports are useful for sharing information with others, and to reference as links in your mailing lists, issue tracker, SCM, and so on. For newcomers to the project, having these standard reports means that those familiar with Maven Web sites will always know where to find the information they need.

The second menu (shown opened in figure 6-1), Project Reports, contains reports that provide a variety of insights into the quality and vitality of the project. On a new project, this menu doesn't appear as there are no reports included. However, adding a new report is easy. For example, you can add the Surefire report to the sample application, by including the following section in proficio/pom.xml:

[...]
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>
[...]
</project>

This adds the report to the top level project, and as a result, it will be inherited by all of the child modules. You can now run the following site task in the proficio-core directory to regenerate the site:

C:\mvnbook\proficio\proficio-core> mvn site

The new report can be found in the file target/site/surefire-report.html and is shown in figure 6-2.

Figure 6-2: The Surefire report

As you may have noticed in the summary, the report shows the test results of the project. That's all there is to generating the report! This is possible thanks to key concepts of Maven discussed in Chapter 2: through a declarative project model, Maven knows where the tests and test results are, and due to using convention over configuration, the defaults are sufficient to get started with a useful report.

For a quicker turn around, the report can also be run individually using the following standalone goal:

C:\mvnbook\proficio\proficio-core> mvn surefire-report:report

Executing mvn surefire-report:report generates the surefire-report.html file without running the full site generation.

6.3. Configuration of Reports

Before stepping any further into using the project Web site, it is important to understand how the report configuration is handled in Maven. You might recall from Chapter 2 that a plugin is configured using the configuration element inside the plugin declaration in pom.xml, for example:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>
  </plugins>
</build>
[...]

Configuration for a reporting plugin is very similar; however, it is added to the reporting section of the POM. For example, the report can be modified to only show test failures by adding the following configuration in pom.xml:
[...]
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
      <configuration>
        <showSuccess>false</showSuccess>
      </configuration>
    </plugin>
  </plugins>
</reporting>
[...]

The addition of the plugin element triggers the inclusion of the report in the Web site, while the configuration can be used to modify its appearance or behavior.

To continue with the Surefire report, consider if you wanted to create a copy of the HTML report in the directory target/surefire-reports every time the build ran. This relates to the build, and not site generation, so the plugin would need to be configured in the build section instead of, or in addition to, the reporting section:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
      <configuration>
        <outputDirectory>
          ${project.build.directory}/surefire-reports
        </outputDirectory>
      </configuration>
      <executions>
        <execution>
          <phase>test</phase>
          <goals>
            <goal>report</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
[...]

“Executions” such as this were introduced in Chapter 3. The configuration is placed in the plugin element, rather than being specific to the execution, to ensure that it is applied during both the build and site generation.

Plugins and their associated configuration that are declared in the build section are not used during site generation. However, any plugin configuration declared in the reporting section is also applied to those declared in the build section. For example, what if the location of the Surefire XML reports that are used as input (and would be configured using the reportsDirectory parameter) were different to the default location? Initially, you might think that you'd need to configure the parameter in both sections. Fortunately, this isn't the case – adding the configuration to the reporting section is sufficient.

When you are configuring the plugins to be used in the reporting section, always place the configuration in the reporting section – unless one of the following is true:

1. The reports will not be included in the site.
2. The configuration value is specific to the build stage.

When you configure a reporting plugin, by default all reports available in the plugin are executed once. However, there are cases where only some of the reports that the plugin produces will be required, and cases where a particular report will be run more than once, each time with a different configuration. For example, consider if you had run Surefire twice in your build, once for unit tests and once for a set of performance tests, and that you had generated its XML results to target/surefire-reports/unit and target/surefire-reports/perf respectively.

Both of these cases can be achieved with the reportSets element, which is the reporting equivalent of the executions element in the build section. Each report set can contain configuration, and a list of reports to include. To generate two HTML reports for these results, you would include the following section in your pom.xml:

[...]
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
      <reportSets>
        <reportSet>
          <configuration>
            <reportsDirectory>
              ${project.build.directory}/surefire-reports/unit
            </reportsDirectory>
            <outputName>surefire-report-unit</outputName>
          </configuration>
          <reports>
            <report>report</report>
          </reports>
        </reportSet>
        <reportSet>
          <configuration>
            <reportsDirectory>
              ${project.build.directory}/surefire-reports/perf
            </reportsDirectory>
            <outputName>surefire-report-perf</outputName>
          </configuration>
          <reports>
            <report>report</report>
          </reports>
        </reportSet>
      </reportSets>
    </plugin>
  </plugins>
</reporting>
[...]

Running mvn site with this addition will generate two Surefire reports: target/site/surefire-report-unit.html and target/site/surefire-report-perf.html.

The reports element in the report set is a required element. The reports in this list are identified by the goal names that would be used if they were run from the command line. If you want all of the reports in a plugin to be generated, they must be enumerated in this list. When a report is executed individually, as with executions, running mvn surefire-report:report will not use either of these configurations; Maven will use only the configuration that is specified in the plugin element itself, outside of any report sets.
It is also possible to include only a subset of the reports in a plugin. For example, to generate only the mailing list and license pages of the standard reports, add the following to the reporting section of the pom.xml file:

[...]
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-project-info-reports-plugin</artifactId>
  <reportSets>
    <reportSet>
      <reports>
        <report>mailing-list</report>
        <report>license</report>
      </reports>
    </reportSet>
  </reportSets>
</plugin>
[...]

While the defaults are usually sufficient, this customization will allow you to configure reports in a way that is just as flexible as your build.

6.4. Separating Developer Reports From User Documentation

After adding a report, there's something subtly wrong with the project Web site. On the entrance page there are usage instructions for Proficio, which are targeted at an end user, but in the navigation there are reports about the health of the project, which are targeted at the developers. This may be confusing for the first time visitor, who isn't interested in the state of the source code, and an inconvenience to the developer who doesn't want to wade through end user documentation to find out the current state of a project's test coverage.

The approach to balancing these competing requirements will vary, depending on the project. Consider the following:

• The commercial product, where the end user documentation is on a completely different server than the developer information, and most likely doesn't use Maven to generate it.
• The open source reusable library, where much of the source code and Javadoc reference is of interest to the end user.
• The open source graphical application, where the developer information is available, but quite separate to the end user documentation.

To determine the correct balance, each section of the site needs to be considered, in some cases down to individual reports. Table 6-1 lists the content that a project Web site may contain, and the content's characteristics.
Better Builds with Maven

Table 6-1: Project Web site content types

News, FAQs and general Web site – This is the content that is considered part of the Web site rather than part of the documentation. (Updated: Yes. Distributed: No. Separated: Yes.)

End user documentation – This is documentation for the end user including usage instructions and guides. (Updated: Yes. Distributed: Yes. Separated: Yes.)

Source code reference material – This is reference material (for example, Javadoc) that in a library or framework is useful to the end user, but usually not distributed or displayed in an application. It refers to a particular version of the software. (Updated: No. Distributed: Yes. Separated: No.)

Project health and vitality reports – These are the reports discussed in this chapter that display the current state of the project to the developers. (Updated: Yes. Distributed: No. Separated: No.)

In the table, the Updated column indicates whether the content is regularly updated, regardless of releases. This is true of the news and FAQs, which are continuously published and not generally of interest for a particular release. It is also true of the project quality and vitality reports, which are based on time and the current state of the project. Some standard reports, like mailing list information and the location of the issue tracker and SCM, are updated also. While there are some exceptions, source code references should be given a version and remain unchanged after being released.

The situation is different for end user documentation. It is good to update the documentation on the Web site between releases; however, it is important not to include documentation for features that don't exist in the last release, as it is confusing for those reading the site who expect it to reflect the latest release. Features that are available only in more recent releases should be marked to say when they were introduced. The best compromise between not updating between releases, and not introducing incorrect documentation, is to branch the end user documentation in the same way as source code. You can maintain a stable branch, that can be updated between releases without risk of including new features, and a development branch where new features can be documented for when that version is released.

The Distributed column in the table indicates whether that form of documentation is typically distributed with the project. This is typically true for the end user documentation: sometimes these are included in the main bundle, and sometimes they are available for download separately. For libraries and frameworks, the Javadoc and other reference material are usually distributed for reference as well.

The Separated column indicates whether the documentation can be a separate module or project. The source code reference material and reports are usually generated from the modules that hold the source code and perform the build. For a single module library, including the end user documentation in the normal build is reasonable, as it is closely tied to the source code reference, and to maintain only one set of documentation.
Assessing Project Health with Maven

However, in most cases, the documentation and Web site should be kept in a separate module dedicated to generating a site. This avoids including inappropriate report information and navigation elements. This separated documentation may be a module of the main project, or maybe totally independent. You would make it a module when you wanted to distribute it with the rest of the project, but make it an independent project when it forms the overall site with news and FAQs. It is important to note that none of these are restrictions placed on a project by Maven. While these recommendations can help properly link or separate content according to how it will be used, you are free to place content wherever it best suits your project.

In the following example, you will learn how to separate the content and add an independent project for the news and information Web site. In Proficio, the site currently contains end user documentation and a simple report. The current structure of the project is shown in figure 6-3.

Figure 6-3: The initial setup

The first step is to create a module called user-guide for the end user documentation. In this case, a module is created since it is not related to the source code reference material. This is done using the site archetype:

  C:\mvnbook\proficio> mvn archetype:create -DartifactId=user-guide \
      -DgroupId=com.exist.mvnbook.proficio \
      -DarchetypeArtifactId=maven-archetype-site-simple
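For the new module to be built and deployed with the rest of the project, it must also appear in the parent POM's module list. The following is only a sketch — it assumes the archetype has not already registered the module, and the existing module entries are elided:

```xml
<!-- proficio/pom.xml (sketch): the user-guide module is listed alongside
     the existing source modules so it participates in the reactor build. -->
<modules>
  <!-- ... existing modules ... -->
  <module>user-guide</module>
</modules>
```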
This archetype creates a very basic site in the user-guide subdirectory, which you can later add content to. The resulting structure is shown in figure 6-4.

Figure 6-4: The directory layout with a user guide

The next step is to ensure the layout on the Web site is correct. Previously, the URL and deployment location were set to the root of the Web site, and under the current structure, the development documentation would go to that location. In this example, the development documentation will be moved to a /reference/version subdirectory, so that the top level directory is available for a user-facing Web site at http://exist.com/web/guest/products/resources, with the user guide at http://exist.com/web/guest/products/resources/user-guide. Adding the version to the development documentation, while optional, is useful if you are maintaining multiple public versions, whether to maintain history or to maintain a release and a development preview.

First, edit the top level pom.xml file to change the site deployment url:

  <distributionManagement>
    <site>
      [...]
      <url>
        scp://exist.com/www/library/mvnbook/proficio/reference/${pom.version}
      </url>
    </site>
  </distributionManagement>
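For an scp deployment like this to work, the site's id must match a server entry in the user's settings.xml that supplies credentials. The sketch below is illustrative only — the id value and credential details are assumptions, not taken from the example project:

```xml
<!-- ~/.m2/settings.xml (sketch): the <id> must match the <id> used in
     the POM's distributionManagement site element. -->
<settings>
  <servers>
    <server>
      <id>mvnbook.site</id>
      <username>deployer</username>
      <privateKey>${user.home}/.ssh/id_dsa</privateKey>
    </server>
  </servers>
</settings>
```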
Next, edit the user-guide/pom.xml file to set the site deployment url for the module:

  <distributionManagement>
    <site>
      <id>mvnbook.site</id>
      <url>
        scp://exist.com/www/library/mvnbook/proficio/user-guide
      </url>
    </site>
  </distributionManagement>

There are now two sub-sites ready to be deployed:

  • scp://exist.com/www/library/mvnbook/proficio/reference/${pom.version}
  • scp://exist.com/www/library/mvnbook/proficio/user-guide

You will not be able to deploy the Web site to these locations; they are included here only for illustrative purposes.

Now that the content has moved, a top level site for the project is required. This will include news and FAQs about the project that change regularly. As before, you can create a new site using the archetype. This time, run it one directory above the proficio directory:

  C:\mvnbook> mvn archetype:create -DartifactId=proficio-site \
      -DgroupId=com.exist.mvnbook.proficio \
      -DarchetypeArtifactId=maven-archetype-site-simple

The resulting structure is shown in figure 6-5.

Figure 6-5: The new Web site
] Next. add some menus to src/site/site. you will then be able to navigate through the links and see how they relate. Note that you haven't produced the apidocs directory yet.. like the following: ----Proficio ----Joe Blogs ----23 July 2007 ----Proficio Proficio is super.] You can now run mvn site in proficio-site to see how the separate site will look.. If you deploy both sites to a server using mvn site-deploy as you learned in Chapter 3.exist.xml that point to the other documentation as follows: [..] <url>. so that link won't work even if the site is deployed..0-SNAPSHOT/apidocs/" /> <item name="Developer Info" href="/reference/1.6 of this chapter.Proficio project started Finally.xml as follows: [...Better Builds with Maven You will need to add the same elements to the POM for the url and distributionManagement as were set originally for proficio-site/pom.com/web/guest/products/resources</url> [.. Generating reference documentation is covered in section 6..0-SNAPSHOT/" /> </menu> [.com/www/library/mvnbook/proficio</url> </site> </distributionManagement> [.. * News * <16 Jan 2006> .] <distributionManagement> <site> <id>mvnbook.apt file with a more interesting news page.website</id> <url>scp://exist. 182 .. replace the src/site/apt/index.] <menu name="Documentation"> <item name="User's Guide" href="/user-guide/" /> </menu> <menu name="Reference"> <item name="API" href="/reference/1.
The rest of this chapter will focus on using the developer section of the site effectively, and how to build in related conditions to regularly monitor and improve the quality of your project.

6.5. Choosing Which Reports to Include

Choosing which reports to include, and which checks to perform during the build, is an important decision that will determine the effectiveness of your build reports. Table 6-2 covers the reports discussed in this chapter and reasons to use them, in addition to the generic reports such as those for dependencies and change tracking. You can use this table to determine which reports apply to your project specifically and limit your reading to just those relevant sections of the chapter, or you can walk through all of the examples one by one, and look at the output to determine which reports to use. For each report, there is also a note about whether it has an associated visual report (for project site inclusion), and an applicable build check (for testing a certain condition and failing the build if it doesn't pass). While these aren't all the reports available for Maven, the guidelines should help you to determine whether you need to use other reports.

You may notice that many of these tools are Java-centric. While this is certainly the case at present, it is possible in the future that reports for other languages will be available.

Report results and checks performed should be accurate and conclusive – every developer should know what they mean and how to address them. In some instances, the performance of your build will be affected by this choice. In particular, the reports that utilize unit tests often have to re-run the tests with new parameters. While future versions of Maven will aim to streamline this, it is recommended that these checks be constrained to the continuous integration and release environments if they cause lengthy builds. See Chapter 7, Team Collaboration with Maven, for more information.
Table 6-2: Reports discussed in this chapter and reasons to use them

  • Surefire report – Shows the results of unit tests visually, and can also show any tests that are long running and slowing the build. The corresponding check is already performed by surefire:test. Recommended for easier browsing of results.
  • JXR – Produces a source cross reference for any Java code; a companion to Javadoc that shows the source code. Recommended to enhance readability of the code, and important to include when using other reports that can refer to it, such as Checkstyle.
  • Javadoc – Useful for most Java software, and important for any projects publishing a public API.
  • PMD – Checks your source code against known rules for code smells. Should be used to improve readability and identify simple and common bugs.
  • CPD (part of PMD) – Checks for duplicate source code blocks, which indicates code was copy/pasted. Avoids issues when one piece of code is fixed or updated and the other forgotten.
  • Checkstyle – Checks your source code against a standard descriptor for formatting issues. Use it to enforce a standard code style. Doesn't handle JDK 5.0 features. Not useful if there are a lot of errors to be fixed – it will be slow and the result unhelpful.
  • Tag List – Simple report on outstanding tasks or other markers in source code. Useful for tracking TODO items; a very simple, convenient set up. Can be implemented using Checkstyle rules instead.
  • Cobertura – Analyzes code statement coverage during unit tests or other code execution. Recommended for teams with a focus on tests; can help identify untested or even unused code. Doesn't identify all missing or inadequate tests, so additional tools may be required.
  • Dependency convergence – Recommended for multiple module builds where consistent versions are important. Can help find snapshots prior to release.
  • Changes – Produces release notes and road maps from issue tracking systems. Recommended for all publicly released projects. Should be used for keeping teams up to date on internal projects also.

6.6. Creating Reference Material

Source code reference materials are usually the first reports configured for a new project, because they are often of interest to the end user of a library or framework, as well as to the developer of the project itself. The two reports this section illustrates are:

  • JXR – the Java source cross reference
  • Javadoc – the Java API documentation
Figure 6-6: An example source code cross reference

Figure 6-6 shows an example of the cross reference. Those familiar with Javadoc will recognize the framed navigation layout; however, the content pane is now replaced with a syntax-highlighted, cross-referenced Java source file for the selected class. The hyperlinks in the content pane can be used to navigate to other classes and interfaces within the cross reference.

A useful way to leverage the cross reference is to use the links given for each line number in a source file to point team mates at a particular piece of code. Or, if you don't have the project open in your IDE, the links can be used to quickly find the source belonging to a particular exception.

Including JXR as a permanent fixture of the site for the project is simple, and can be done by adding the following to proficio/pom.xml:

  [...]
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jxr-plugin</artifactId>
      </plugin>
      [...]
    </plugins>
  </reporting>
  [...]
You can now run mvn site in proficio-core and see target/site/project-reports.html created. This page contains the Source Xref and the Test Source Xref items listed in the Project Reports menu of the generated site. In most cases, the default JXR configuration is sufficient; however, if you'd like a list of available configuration options, see the plugin reference at http://maven.apache.org/plugins/maven-jxr-plugin/.

Now that you have a source cross reference, many of the other reports demonstrated in this chapter will be able to link to the actual code to highlight an issue. However, browsing source code is too cumbersome for a developer who only wants to know how the API works, so an equally important piece of reference material is the Javadoc report.

Using Javadoc is very similar to the JXR report and most other reports in Maven. For example, you can run it on its own using the following command:

  C:\mvnbook\proficio\proficio-core> mvn javadoc:javadoc

Since it will be included as part of the project site, you should include it in proficio/pom.xml as a site report to ensure it is run every time the site is regenerated:

  [...]
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-javadoc-plugin</artifactId>
  </plugin>
  [...]

The end result is the familiar Javadoc output, in target/site/apidocs.

A Javadoc report is only as good as your Javadoc! Make sure you document the methods you intend to display in the report, and if possible use Checkstyle to ensure they are documented.

Again, the Javadoc report is quite configurable, with most of the command line options of the Javadoc tool available. One useful option to configure is links. Unlike JXR, this will link to an external Javadoc reference at a given URL. In the online mode, the following configuration, when added to proficio/pom.xml, will link both the JDK 1.5 API documentation and the Plexus container API documentation used by Proficio:
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-javadoc-plugin</artifactId>
    <configuration>
      <links>
        <link>http://java.sun.com/j2se/1.5.0/docs/api</link>
        <link>.../1.0-alpha-9/apidocs</link>
      </links>
    </configuration>
  </plugin>

If you regenerate the site in proficio-core with mvn site again, you'll see that all references to the standard JDK classes such as java.lang.String and java.lang.Object, as well as any references to classes in Plexus, are linked to the API documentation on the Sun Web site and the Plexus site respectively.

Setting up Javadoc has been very convenient, but it results in a separate set of API documentation for each library in a multi-module build. Since it is preferred to have discrete functional pieces separated into distinct modules, but conversely to have the Javadoc closely related, this is not sufficient. One option would be to introduce links to the other modules (automatically generated by Maven based on dependencies, of course!), but this would still limit the available classes in the navigation as you hop from module to module. Instead, the Javadoc plugin provides a way to produce a single set of API documentation for the entire project. Edit the configuration of the existing Javadoc plugin in proficio/pom.xml by adding the following line:

  [...]
  <configuration>
    <aggregate>true</aggregate>
    [...]
  </configuration>
  [...]

When built from the top level project, this simple change will produce an aggregated Javadoc and ignore the Javadoc report in the individual modules. Try running mvn clean javadoc:javadoc in the proficio directory to produce the aggregated Javadoc in target/site/apidocs/index.html. This setting must go into the reporting section so that it is used both for reports and if the command is executed separately. However, this setting is always ignored by the javadoc:jar goal, ensuring that the deployed Javadoc corresponds directly to the artifact with which it is deployed for use in an IDE.

Now that the sample application has a complete reference for the source code, the next section will allow you to start monitoring and improving its health.
6.7. Monitoring and Improving the Health of Your Source Code

Source code should be kept consistent, comprehensible, and free of duplication (which in turn reduces the risk that its accuracy will be affected by change); this is important both for the efficiency of other team members and to increase the overall level of code comprehension. Maven has reports that can help with each of these health factors, and this section will look at three:

  • PMD (http://pmd.sf.net/)
  • Checkstyle (http://checkstyle.sf.net/)
  • Tag List

PMD takes a set of either predefined or user-defined rule sets and evaluates the rules across your Java source code. The result can help identify bugs, such as unused methods and variables, copy-and-pasted code, and violations of a coding standard. Figure 6-7 shows the output of a PMD report on proficio-core, which is obtained by running mvn pmd:pmd.

Figure 6-7: An example PMD report

As you can see, some source files are identified as having problems that could be addressed. Also, since the JXR report was included earlier, the line numbers in the report are linked to the actual source code so you can browse the issues.
Adding the default PMD report to the site is just like adding any other report – you can include it in the reporting section in the proficio/pom.xml file:

  [...]
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-pmd-plugin</artifactId>
  </plugin>
  [...]

The default PMD report includes the basic, unused code, and imports rule sets. The “basic” rule set includes checks on empty blocks, unnecessary statements and possible bugs – such as incorrect loop variables. The “unused code” rule set will locate unused private fields, methods, variables and parameters. The “imports” rule set will detect duplicate, redundant or unused import declarations. Adding new rule sets is easy, by passing the rulesets configuration to the plugin. However, if you configure these, you must configure all of them – including the defaults explicitly. For example, to include the default rules and the finalizer rule sets, add the following to the plugin configuration you declared earlier:

  [...]
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-pmd-plugin</artifactId>
    <configuration>
      <rulesets>
        <ruleset>/rulesets/basic.xml</ruleset>
        <ruleset>/rulesets/unusedcode.xml</ruleset>
        <ruleset>/rulesets/imports.xml</ruleset>
        <ruleset>/rulesets/finalizers.xml</ruleset>
      </rulesets>
    </configuration>
  </plugin>
  [...]
You may find that you like some rules in a rule set, but not others. In either case, you can choose to create a custom rule set. For example, you could create a rule set with all the default rules, but exclude the “unused private field” rule. To try this, create a file in the proficio-core directory of the sample application called src/main/pmd/custom.xml, with the following content:

  <?xml version="1.0"?>
  <ruleset name="custom">
    <description>
      Default rules, no unused private field warning
    </description>
    <rule ref="/rulesets/basic.xml" />
    <rule ref="/rulesets/imports.xml" />
    <rule ref="/rulesets/unusedcode.xml">
      <exclude name="UnusedPrivateField" />
    </rule>
  </ruleset>

To use this rule set, override the configuration in the proficio-core/pom.xml file by adding:

  [...]
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-pmd-plugin</artifactId>
        <configuration>
          <rulesets>
            <ruleset>${basedir}/src/main/pmd/custom.xml</ruleset>
          </rulesets>
        </configuration>
      </plugin>
    </plugins>
  </reporting>
  [...]

You may use the same rule sets in a number of projects. It is also possible to write your own rules if you find that existing ones do not cover recurring problems in your source code. For more examples on customizing the rule sets, see the instructions on the PMD Web site at http://pmd.sf.net/howtomakearuleset.html.

One important question is how to select appropriate rules. Try the following guidelines from the Web site at http://pmd.sf.net/bestpractices.html:

  • Pick the rules that are right for you. There is no point having hundreds of violations you won't fix. For PMD, basic, unusedcode, and imports are useful in most scenarios and easily fixed.
  • Start small, and add more as needed. From this starting point, select the rules that apply to your own project.

If you've done all the work to select the right rules and are correcting all the issues being discovered, you need to make sure it stays that way.
Try this now by running mvn pmd:check on proficio-core. You will see that the build fails:

  [INFO] ---------------------------------------------------------------------------

Before correcting these errors, you should include the check in the build, so that it is regularly tested. This is done by binding the goal to the build life cycle. To do so, add the following section to the proficio/pom.xml file:

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-pmd-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>check</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      [...]
    </plugins>
  </build>

You may have noticed that there is no configuration here, but recall from the Configuring Reports and Checks section of this chapter that the reporting configuration is applied to the build as well. By default, the pmd:check goal is run in the verify phase, which occurs after the packaging phase. If you need to run checks earlier, you could add the following to the execution block to ensure that the check runs just after all sources exist:

  <phase>process-sources</phase>

To test this new setting, try running mvn verify in the proficio-core directory.
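Combining the life cycle binding with the earlier phase suggestion, the execution block would look like the following sketch, which runs the PMD check as soon as the sources are available rather than waiting until verify:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <executions>
    <execution>
      <!-- run earlier than the default verify phase -->
      <phase>process-sources</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```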
To correct this, fix the errors in the src/main/java/com/exist/mvnbook/proficio/DefaultProficio.java file by adding a //NOPMD comment to the unused variables and method:

  [...]
  // Trigger PMD and checkstyle
  int i; // NOPMD
  int j; // NOPMD
  [...]
  private void testMethod() // NOPMD
  {
  }
  [...]

If you run mvn verify again, the build will succeed.

While this check is very useful, it can be slow and obtrusive during general development. For that reason, adding the check to a profile, which is executed only in an appropriate environment, can make the check optional for developers, but mandatory in an integration environment. See the Continuous Integration with Continuum section in the next chapter for information on using profiles and continuous integration.

While the PMD report allows you to run a number of different rules, there is one that is in a separate report. This is the CPD, or copy/paste detection report, and it includes a list of duplicate code fragments discovered across your entire source base. This report is included by default when you enable the PMD plugin in your reporting section, and will appear as “CPD report” in the Project Reports menu. An example report is shown in figure 6-8.

Figure 6-8: An example CPD report
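To enforce a failure when duplicate code is found, the pmd:cpd-check goal can be bound to the build, and the size of the duplicates detected can be tuned with minimumTokenCount. The following is a sketch only – the value 50 is an arbitrary example, not a recommendation:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <configuration>
    <!-- flag smaller duplicates than the default of 100 tokens -->
    <minimumTokenCount>50</minimumTokenCount>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>cpd-check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```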
In a similar way to the main check, pmd:cpd-check can be used to enforce a failure if duplicate source code is found. However, the CPD report contains only one variable to configure: minimumTokenCount, which defaults to 100. With this setting you can fine tune the size of the copies detected, but it may not give you enough control to effectively set a rule for the source code, resulting in developers attempting to avoid detection by making only slight modifications, rather than identifying a possible factoring of the source code. There are other alternatives for copy and paste detection, such as Checkstyle, and a commercial product called Simian (http://www.redhillconsulting.com.au/products/simian/). Simian can also be used through Checkstyle and has a larger variety of configuration options for detecting duplicate source code.

Checkstyle is a tool that is, in many ways, similar to PMD. It was originally designed to address issues of format and style, but has more recently added checks for other code issues. If you need to learn more about the available modules in Checkstyle, refer to the list on the Web site at http://checkstyle.sf.net/availablechecks.html. Depending on your environment, you may choose to use it in one of the following ways:

  • Use it to check code formatting only, and rely on other tools for detecting other problems.
  • Use it to check code formatting and selected other problems, and still rely on other tools for greater coverage.
  • Use it to check code formatting and to detect other problems exclusively.

This section focuses on the first usage scenario. Whether to use the report only, or to enforce a check, will depend on the environment in which you are working.

Figure 6-9 shows the Checkstyle report obtained by running mvn checkstyle:checkstyle from the proficio-core directory. Some of the extra summary information for the overall number of errors and the list of checks used has been trimmed from this display.
Figure 6-9: An example Checkstyle report

You'll see that each file with notices, warnings or errors is listed in a summary, and then the errors are shown, with a link to the corresponding source line – if the JXR report was enabled. That's a lot of errors! By default, the rules used are those of the Sun Java coding conventions, but Proficio is using the Maven team's code style. This style is also bundled with the Checkstyle plugin, so to include the report in the site and configure it to use the Maven style, add the following to the reporting section of proficio/pom.xml:

  [...]
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <configuration>
      <configLocation>config/maven_checks.xml</configLocation>
    </configuration>
  </plugin>
Table 6-3 shows the configurations that are built into the Checkstyle plugin.

Table 6-3: Built-in Checkstyle configurations

  Configuration              Description                                   Reference
  config/sun_checks.xml      Sun Java Coding Conventions                   http://java.sun.com/docs/codeconv/
  config/maven_checks.xml    Maven team's coding conventions               http://maven.apache.org/guides/development/guide-m2-development.html#Maven%20Code%20Style
  config/turbine_checks.xml  Conventions from the Jakarta Turbine project  http://jakarta.apache.org/turbine/common/codestandards.html
  config/avalon_checks.xml   Conventions from the Apache Avalon project    No longer online – the Avalon project has closed. These checks are for backwards compatibility only.

The built-in Sun and Maven standards are quite different, and typically, one or the other will be suitable for most people. It is a good idea to reuse an existing Checkstyle configuration for your project if possible – if the style you use is common, then it is likely to be more readable and easily learned by people joining your project.
However, if you have developed a standard that differs from these, or would like to use the additional checks introduced in Checkstyle 3.0 and above, you will need to create a Checkstyle configuration. While this chapter will not go into an example of how to do this, the Checkstyle documentation provides an excellent reference. The Checkstyle plugin itself has a large number of configuration options that allow you to customize the appearance of the report, filter the results, and parameterize the Checkstyle configuration for creating a baseline organizational standard that can be customized by individual projects. It is also possible to share a Checkstyle configuration among multiple projects, as explained at http://maven.apache.org/plugins/maven-checkstyle-plugin/tips.html. The configLocation parameter can be set to a file within your build, a URL, or a resource within a special dependency also.
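For example, since configLocation accepts a URL, an organization-wide standard could be referenced directly from a shared server. The URL below is hypothetical, used only to illustrate the mechanism:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <configuration>
    <!-- hypothetical location of a shared organizational standard -->
    <configLocation>http://intranet.example.com/build/company_checks.xml</configLocation>
  </configuration>
</plugin>
```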
Before completing this section, it is worth mentioning the Tag List plugin. This report, known as “Task List” in Maven 1.0, will look through your source code for known tags and provide a report on those it finds. By default, this will identify the tags TODO and @todo in the comments of your source code. It is actually possible to achieve this using Checkstyle or PMD rules; however, this plugin is a more convenient way to get a simple report of items that need to be addressed at some point later in time. To try this plugin, add the following to the reporting section of proficio/pom.xml:

  [...]
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>taglist-maven-plugin</artifactId>
    <configuration>
      <tags>
        <tag>TODO</tag>
        <tag>@todo</tag>
        <tag>FIXME</tag>
        <tag>XXX</tag>
      </tags>
    </configuration>
  </plugin>
  [...]

This configuration will locate any instances of TODO, @todo, FIXME, or XXX in your source code.

PMD, Checkstyle, and Tag List are just three of the many tools available for assessing the health of your project's source code. Some other similar tools, such as FindBugs, JavaNCSS and JDepend, have beta versions of plugins available from the http://mojo.codehaus.org/ project at the time of this writing, and more plugins are being added every day.

6.8. Monitoring and Improving the Health of Your Tests

One of the important (and often controversial) features of Maven is the emphasis on testing as part of the production of your code.
In the build life cycle defined in Chapter 2, you saw that tests are run before the packaging of the library or application for distribution, based on the theory that you shouldn't even try to use something before it has been tested. There are additional testing stages that can occur after the packaging step to verify that the assembled package works under other circumstances.

Knowing whether your tests pass is an obvious and important assessment of their health. As you learned in section 6.2, Setting Up the Project Web Site, it is easy to add a report to the Web site that shows the results of the tests that have been run. While the default Surefire configuration fails the build if the tests fail, the report (run either on its own, or as part of the site) will ignore these failures when generated, to show the current test state. Failing the build is still recommended – but the report allows you to provide a better visual representation of the results. In addition, it can be a useful report for demonstrating the number of tests available and the time it takes to run certain tests for a package. While you are writing your tests, using this report on a regular basis can be very helpful in spotting any holes in the test plan.
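A sketch of adding the test results report to the site follows; it assumes the Surefire report plugin with its default configuration is sufficient:

```xml
<reporting>
  <plugins>
    <!-- renders the Surefire test results as an HTML report on the site -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>
```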
Another critical technique is to determine how much of your source code is covered by the test execution. At the time of writing, Cobertura (http://cobertura.sf.net) is the open source tool best integrated with Maven. To see what Cobertura is able to report, run mvn cobertura:cobertura in the proficio-core directory of the sample application. Figure 6-10 shows the output that you can view in target/site/cobertura/index.html.

The report contains both an overall summary, and a line-by-line coverage analysis of each source file, in the familiar Javadoc style framed layout. For a source file, you'll notice the following markings:

  • Unmarked lines are those that do not have any executable code associated with them. This includes method and class declarations, comments and white space.
  • Unmarked lines with a green number in the second column are those that have been completely covered by the test execution. Each line with an executable statement has a number in the second column that indicates how many times a particular statement was run during the test run.
  • Lines in red are statements that were not executed (if the count is 0), or for which all possible branches were not executed. For example, a branch is an if statement that can behave differently depending on whether the condition is true or false.
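To make the branch marking concrete, consider the small illustrative class below (it is not part of Proficio). A test suite that only ever calls max with a greater than b executes every line, yet covers only one of the two branches of the if statement, so Cobertura would mark that line as partially covered until the other argument order is tested as well:

```java
public class BranchExample {
    // The if statement creates two branches: a >= b (true) and a < b (false).
    static int max(int a, int b) {
        if (a >= b) {
            return a;
        }
        return b;
    }

    public static void main(String[] args) {
        // Exercising both argument orders covers both branches.
        System.out.println(max(2, 1)); // takes the true branch
        System.out.println(max(1, 2)); // takes the false branch
    }
}
```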
Figure 6-10: An example Cobertura report

The complexity indicated in the top right is the cyclomatic complexity of the methods in the class, which measures the number of branches that occur in a particular method. High numbers (for example, over 10) might indicate that a method should be re-factored into simpler pieces, as it can be hard to visualize and test the large number of alternate code paths. If this is a metric of interest, you might consider having PMD monitor it.

The Cobertura report doesn't have any notable configuration, so including it in the site is simple. Add the following to the reporting section of proficio/pom.xml:

  [...]
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>cobertura-maven-plugin</artifactId>
  </plugin>
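As noted above, PMD can be asked to monitor cyclomatic complexity. A sketch of doing so with the PMD plugin is shown below; it assumes the codesize rule set (which contains the CyclomaticComplexity rule in PMD releases of this era), and remember that listing rule sets explicitly replaces the defaults, so they are restated here:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-pmd-plugin</artifactId>
  <configuration>
    <rulesets>
      <!-- the defaults must be restated when customizing -->
      <ruleset>/rulesets/basic.xml</ruleset>
      <ruleset>/rulesets/unusedcode.xml</ruleset>
      <ruleset>/rulesets/imports.xml</ruleset>
      <!-- adds CyclomaticComplexity and related size metrics -->
      <ruleset>/rulesets/codesize.xml</ruleset>
    </rulesets>
  </configuration>
</plugin>
```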
If you now run mvn site under proficio-core, the report will be generated in target/site/cobertura/index.html.

While not required, there is another useful setting to add to the build section. Due to a hard-coded path in Cobertura, the database used is stored in the project directory as cobertura.ser, and is not cleaned with the rest of the project. To ensure that this happens, add the following to the build section of proficio/pom.xml:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <executions>
        <execution>
          <id>clean</id>
          <goals>
            <goal>clean</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
[...]

If you now run mvn clean in proficio-core, you'll see that the cobertura.ser file is deleted, as well as the target directory.

The Cobertura plugin also contains a goal called cobertura:check that is used to ensure that the coverage of your source code is maintained at a certain percentage.
To configure this goal for Proficio, add a configuration and another execution to the build plugin definition you added above when cleaning the Cobertura database:

[...]
<configuration>
  <check>
    <totalLineRate>100</totalLineRate>
    <totalBranchRate>100</totalBranchRate>
  </check>
</configuration>
<executions>
  [...]
  <execution>
    <id>check</id>
    <goals>
      <goal>check</goal>
    </goals>
  </execution>
</executions>
[...]

Note that the configuration element is outside of the executions. This ensures that if you run mvn cobertura:check from the command line, the configuration will be applied. This wouldn't be the case if it were associated with the life-cycle bound check execution.

The rules that are being used in this configuration are 100% overall line coverage rate, and 100% branch coverage rate. If you now run mvn verify under proficio-core, the check will be performed. You'll notice that your tests are run twice. This is because Cobertura needs to instrument your class files, and the tests are re-run using those class files instead of the normal ones (however, these are instrumented in a separate directory, so are not packaged in your application). The Surefire report may also re-run tests if they were already run – both of these are due to a limitation in the way the life cycle is constructed that will be improved in future versions of Maven.

You would have seen in the previous examples that there were some lines not covered, so running the check fails. Normally, you would add unit tests for the functions that are missing tests. However, looking through the report, you may decide that only some exceptional cases are untested, and decide to reduce the overall average required. You can do this for Proficio to have the tests pass by changing the setting in proficio/pom.xml:

[...]
<configuration>
  <check>
    <totalLineRate>80</totalLineRate>
[...]

If you run mvn verify again, the check passes.
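While the team is still tuning its thresholds, it can be useful to see coverage violations reported without failing the build. A sketch of this, assuming the plugin's haltOnFailure parameter (verify the parameter name against the cobertura-maven-plugin documentation for your version):

```xml
<!-- Sketch: report coverage violations without failing the build.
     haltOnFailure is assumed from the plugin's documented parameters. -->
<configuration>
  <check>
    <haltOnFailure>false</haltOnFailure>
    <totalLineRate>80</totalLineRate>
  </check>
</configuration>
```

Once the team agrees on the final rates, removing the element restores the default behavior of failing the build.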
These settings remain quite demanding though. Choosing appropriate settings is the most difficult part of configuring any of the reporting metrics in Maven. Some helpful hints for determining the right code coverage settings are:

• Like all metrics, involve the whole development team in the decision, so that they understand and agree with the choice.
• Set some known guidelines for what type of code can remain untested, such as handling checked exceptions that are unexpected in a properly configured system and difficult to test. This will allow for some constructs to remain untested. It is just as important to allow these exceptions, as it is to require that the other code be tested. Remember, the easiest way to increase coverage is to remove code that handles untested, exceptional cases – and that's certainly not something you want!
• Don't set it too low, as it will become a minimum benchmark to attain and rarely more.
• Don't set it too high, as it will discourage writing code to handle exceptional cases that aren't being tested.
• Consider setting any package rates higher than the per-class rate, and setting the total rate higher than both.
• Choose to reduce coverage requirements on particular classes or packages rather than lowering them globally.
• Remain flexible – consider changes over time rather than hard and fast rules.

The settings above are requirements for averages across the entire source tree. You may want to enforce this for each file individually as well, only allowing a small number of lines to be untested, using lineRate and branchRate, or as the average across each package, using packageLineRate and packageBranchRate. It is also possible to set requirements on individual packages or classes using the regexes parameter. For more information, refer to the Cobertura plugin configuration reference.

Cobertura is not the only solution available for assessing test coverage. The best known commercial offering is Clover, which is very well integrated with Maven as well, and you can evaluate it for 30 days when used in conjunction with Maven. For more information, see the Clover plugin reference on the Maven Web site at http://maven.apache.org/plugins/maven-clover-plugin/.

To conclude this section on testing, it is worth noting that one of the benefits of Maven's use of the Surefire abstraction is that the tools above will work for any type of runner introduced. Surefire supports tests written with TestNG, and at the time of writing, experimental JUnit 4.0 support is also available. In both cases, these reports work unmodified with those test types. If you have another tool that can operate under the Surefire framework, it is possible for you to write a provider to use the new tool, and get integration with these other tools for free.

Of course, there is more to assessing the health of tests than success and coverage. These reports won't tell you if all the features have been implemented – this requires functional or acceptance testing. It also won't tell you whether the results of untested input values produce the correct results. Tools like Jester (http://jester.sf.net), although not yet integrated with Maven directly, may be of assistance there. Jester mutates the code that you've already determined is covered and checks that it causes the test to fail when run a second time with the wrong code.
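As a sketch, the per-file, per-package and regex-based settings mentioned above sit alongside the total rates in the check configuration. The rates below are illustrative, and the package pattern is a placeholder; consult the cobertura-maven-plugin configuration reference for the exact parameter set in your version:

```xml
<!-- Sketch: layered coverage requirements. Values are illustrative;
     the com.mycompany.util pattern is a hypothetical example. -->
<configuration>
  <check>
    <totalLineRate>80</totalLineRate>
    <lineRate>60</lineRate>               <!-- minimum per file -->
    <branchRate>60</branchRate>           <!-- minimum per file -->
    <packageLineRate>70</packageLineRate>
    <packageBranchRate>70</packageBranchRate>
    <regexes>
      <regex>
        <pattern>com.mycompany.util.*</pattern>
        <lineRate>90</lineRate>
        <branchRate>90</branchRate>
      </regex>
    </regexes>
  </check>
</configuration>
```

Note how the total rate is higher than the package rate, which is in turn higher than the per-file rate, following the hints above.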
6.9. Monitoring and Improving the Health of Your Dependencies

Many people use Maven primarily as a dependency manager. While this is only one of Maven's features, used well it is a significant time saver. Maven 2.0 introduced transitive dependencies, where the dependencies of dependencies are included in a build, and a number of other features such as scoping and version selection. This brought much more power to Maven's dependency mechanism, but does introduce a drawback: left unchecked, the full graph of a project's dependencies can quickly balloon in size and start to introduce conflicts. Poor dependency maintenance or poor scope and version selection affects not only your own project, but any projects that depend on your project.

The first step to effectively maintaining your dependencies is to review the standard report included with the Maven site. If you haven't done so already, run mvn site in the proficio-core directory, and browse to the file generated in target/site/dependencies.html. The result is shown in figure 6-11.

Figure 6-11: An example dependency report
This report shows detailed information about your direct dependencies, but more importantly, in the second section it will list all of the transitive dependencies included through those dependencies. It's here that you might see something that you didn't expect – an extra dependency, an incorrect version, or an incorrect scope – and choose to investigate its inclusion.

Currently, this requires running your build with debug turned on, such as mvn -X package. This will output the dependency tree as it is calculated, as well as comments about what versions and scopes are selected, and why. For example, here is the resolution process of the dependencies of proficio-core (some fields have been omitted for brevity):

proficio-core:1.0-SNAPSHOT
  junit:3.8.1 (selected for test)
  plexus-container-default:1.0-alpha-9 (selected for compile)
    plexus-utils:1.0.4 (selected for compile)
    classworlds:1.1-alpha-2 (selected for compile)
    junit:3.8.1 (not setting scope to: compile, local scope test wins)
  proficio-api:1.0-SNAPSHOT (selected for compile)
    proficio-model:1.0-SNAPSHOT (selected for compile)

Here you can see that, for example, proficio-model is introduced by proficio-api, and that plexus-container-default attempts to introduce junit as a compile dependency, but that it is overridden by the test scoped dependency in proficio-core.

This can be quite difficult to read, so at the time of this writing there are two features that are aimed at helping in this area:
• The Repository Manager (Archiva) will allow you to navigate the dependency tree through the metadata stored in the Ibiblio repository [9].
• A dependency graphing plugin that will render a graphical representation of the information.

Another report that is available is the “Dependency Convergence Report”. This report is also a standard report, but appears in a multi-module build only. To see the report for the Proficio project, run mvn site from the base proficio directory. The file target/site/dependencyconvergence.html will be created, and is shown in figure 6-12. The report shows all of the dependencies included in all of the modules within the project. It also includes some statistics and reports on two important factors:
• Whether the versions of dependencies used for each module is in alignment. This helps ensure your build is consistent and reduces the probability of introducing an accidental incompatibility.
• Whether there are outstanding SNAPSHOT dependencies in the build, which indicates dependencies that are in development, and must be updated before the project can be released.

[9] Artifacts can also be obtained from …exist.com/maven2/ and …maven.org/maven2/.
However, these reports are passive – they can provide basic help in identifying the state of your dependencies once you know what to find. To improve your project's health and the ability to reuse it as a dependency itself, consider the following:
• Declare the most appropriate scope for each dependency (for example, runtime if it is needed to bundle with or run the application but not for compiling your source code).
• Use a range of supported dependency versions, declaring the absolute minimum supported as the lower boundary, rather than using the latest available. You can control what version is actually used by declaring the dependency version in a project that packages or runs the application.
• Add exclusions to dependencies to remove poorly defined dependencies from the tree.
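The last two recommendations can be sketched in a POM as follows. The group and artifact identifiers here are illustrative placeholders, not artifacts from Proficio:

```xml
<!-- Sketch: a version range with a declared minimum supported version,
     plus an exclusion trimming a poorly defined transitive dependency.
     com.example coordinates are hypothetical. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>example-library</artifactId>
  <!-- any version from 1.1 (inclusive) upward is acceptable -->
  <version>[1.1,)</version>
  <exclusions>
    <exclusion>
      <groupId>com.example</groupId>
      <artifactId>unwanted-transitive</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

A project that packages the final application can then pin the exact version to use, while libraries in the middle of the tree only declare the range they support.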
Given the importance of this task, more tools are needed in Maven. Two that are in progress were listed above, but there are plans for more:
• A class analysis plugin that helps identify dependencies that are unused in your current project.
• Improved dependency management features, including different mechanisms for selecting versions that will allow you to deal with conflicting versions, specification dependencies that let you depend on an API and manage the implementation at runtime, and more.

6.10. Monitoring and Improving the Health of Your Releases

Releasing a project is one of the most important procedures you will perform, but it is often tedious and error prone. While the next chapter will go into more detail about how Maven can help automate that task and make it more reliable, this section will focus on improving the quality of the code released, and the information released with it.

An important tool in determining whether a project is ready to be released is Clirr (http://clirr.sf.net/). Clirr detects whether the current version of a library has introduced any binary incompatibilities with the previous release. This is particularly important if you are building a library or framework that will be consumed by developers outside of your own project. Libraries will often be substituted by newer versions to obtain new features or bug fixes, but then expected to continue working as they always have. Because existing libraries are not recompiled every time a version is changed, there is no verification that a library is binary-compatible – incompatibility will be discovered only when there's a failure. Catching these before a release can eliminate problems that are quite difficult to resolve once the code is “in the wild”. An example Clirr report is shown in figure 6-13.

Figure 6-13: An example Clirr report
By default, the Clirr report shows only errors and warnings. However, you can configure the plugin to show all informational messages, even if they are binary compatible, by setting the minSeverity parameter. This gives you an overview of all the changes since the last release. To see this in action, add the following to the reporting section of proficio-api/pom.xml:

[...]
<reporting>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>clirr-maven-plugin</artifactId>
      <configuration>
        <minSeverity>info</minSeverity>
      </configuration>
    </plugin>
  </plugins>
</reporting>
[...]

If you run mvn clirr:clirr in proficio-api, you'll notice that Maven reports that it is using version 0.9 of proficio-api against which to compare (and that it is downloaded if you don't have it already):

[INFO] [clirr:clirr] Comparing to version: 0.9
[INFO] ------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------

This version is determined by looking for the newest release in the repository that is before the current development version. The report itself will be generated in target/site/clirr-report.html.

Note: The older versions of proficio-api are retrieved from the repository. If you are running these examples, you may need to install the artifacts in your local repository yourself, which you can do by issuing the mvn install command from each sub-directory: proficio-0.8 and proficio-0.9.

But does binary compatibility apply if you are not developing a library for external consumption? While it may be of less importance, the answer here is clearly – yes. This is particularly true in a Maven-based environment, where the dependency mechanism is based on the assumption of binary compatibility between versions. As a project grows, the interactions between the project's own components will start behaving as if they were externally-linked. Different modules may use different versions, or a quick patch may need to be made and a new version deployed into an existing application. While methods of marking incompatibility are planned for future versions, Maven currently works best if any version of an artifact is backwards compatible.
You can change the version used with the comparisonVersion parameter. For example, to compare the current code to the 0.8 release, run the following command:

mvn clirr:clirr -DcomparisonVersion=0.8

You'll notice there are more errors in the report, since this early development version had a different API, and later was redesigned to make sure that version 1.0 would be more stable in the long run.

The Clirr plugin is also capable of automatically checking for introduced incompatibilities through the clirr:check goal. To add the check to the proficio-api/pom.xml file, add the following to the build section:

[...]
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>clirr-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    [...]
  </plugins>
</build>
[...]

If you now run mvn verify, you will see that the build fails due to the binary incompatibility introduced between the 0.9 preview release and the final 1.0 version.

In this instance, you are monitoring the proficio-api component for binary compatibility changes only. This is the most important one to check, as it will be used as the interface into the implementation by other applications. If it is the only one that the development team will worry about breaking, then there is no point in checking the others – it will create noise that devalues the report's content in relation to the important components. However, it is a good idea to monitor as many components as possible, if the team is prepared to do so. Even if they are designed only for use inside the project, there is nothing in Java preventing them from being used elsewhere.

Like all of the quality metrics, it is important to agree up front on the acceptable incompatibilities, to discuss and document the practices that will be used, and to check them automatically. It is best to make changes earlier in the development cycle, so that fewer people are affected. The longer poor choices remain, the harder they are to change as adoption increases. Once a version has been released, it is almost always preferable to deprecate an old API and add a new one, delegating the code, rather than removing or changing the original API and breaking binary compatibility.

Since this was an acceptable incompatibility due to the preview nature of the 0.9 release, you can choose to exclude that from the report by adding the following configuration to the plugin:
[...]
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>clirr-maven-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/Proficio</exclude>
    </excludes>
  </configuration>
  [...]
</plugin>

This will prevent failures in the Proficio class from breaking the build in the future. Note that in this instance, it is listed only in the build configuration, so the report still lists the incompatibility – not just the one acceptable failure. A limitation of this feature is that it will eliminate a class entirely; it will not pinpoint potential problems for you. Hopefully a future version of Clirr will allow acceptable incompatibilities to be documented in the source code, and ignored in the same way that PMD does. This allows the results to be collected over time to form documentation about known incompatibilities for applications using the library.

With this simple setup, you can create a very useful mechanism for identifying potential release disasters much earlier in the development process.

A similar tool to Clirr that can be used for analyzing changes between releases is JDiff. Built as a Javadoc doclet, it takes a very different approach, taking two source trees and comparing the differences in method signatures and Javadoc annotations. This can be useful in getting a greater level of detail than Clirr on specific class changes, and so is most useful for browsing. It has a functional Maven 2 plugin, which is available at …codehaus.org/jdiff-maven-plugin.

While the topic of designing a strong public API and maintaining binary compatibility is beyond the scope of this book, the following articles and books can be recommended:
• Evolving Java-based APIs contains a description of the problem of maintaining binary compatibility, as well as strategies for evolving an API without breaking it.
• Effective Java describes a number of practical rules that are generally helpful to writing code in Java, and particularly so if you are designing a public API.
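The deprecate-and-delegate approach recommended earlier can be sketched as follows. The class and method names are hypothetical, not part of Proficio; in a real library the class would be public in its own source file:

```java
// Hypothetical sketch of evolving an API without breaking binary
// compatibility: the old method is kept, marked deprecated, and
// delegates to its replacement instead of being removed or changed.
class ProficioStore {

    String lastPath;
    boolean lastOverwrite;

    /** @deprecated Use {@link #store(String, boolean)} instead. */
    public void store(String path) {
        // Old signature preserved; behavior supplied by delegation,
        // so existing compiled callers continue to link and work.
        store(path, false);
    }

    /** Replacement API: adds an overwrite flag for new callers. */
    public void store(String path, boolean overwrite) {
        this.lastPath = path;
        this.lastOverwrite = overwrite;
    }
}
```

Because the one-argument method still exists with the same signature, Clirr would report no error for this change, while new code can migrate to the richer method at its own pace.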
6.11. Viewing Overall Project Health

In the previous sections of this chapter, a large amount of information was presented about a project, each in discrete reports. Some of the reports linked to one another, but none related information from another report to itself. None of the reports presented how the information changes over time other than the release announcements, and few of the reports aggregated information across a multiple module build. A report that aggregates and visualizes this information will reduce the need to gather information from various sources about the health of the project. These are all important features to have to get an overall view of the health of a project. While some attempts were made to address this in Maven 1.0 (for example, the Dashboard plugin), they did not address all of these requirements, and have not yet been implemented for Maven 2.0. However, it should be noted that the Maven reporting API was written with these requirements in mind specifically, and as the report set stabilizes, summary reports will start to appear.

6.12. Summary

The power of Maven's declarative project model is that with a very simple setup (often only 4 lines in pom.xml), a new set of information about your project can be added to a shared Web site to help your team visualize the health of the project. Best of all, the model remains flexible enough to make it easy to extend and customize the information published on your project web site.

However, it is important that your project information not remain passive. Most Maven plugins allow you to integrate rules into the build that check certain constraints on that piece of information once it is well understood. The purpose, then, of the visual display is to aid in deriving the appropriate constraints to use, along with techniques to ensure that the build checks are automated, regularly scheduled, and run in the appropriate environment. Enforcing good, individual checks that fail the build when they're not met provides a constant background monitor that ensures the health of the project is being maintained.

How well this works in your own projects will depend on the development culture of your team. In some cases, it requires a shift from a focus on time and deadlines, to a focus on quality. It is important that developers are involved in the decision making process regarding build constraints, so that they feel that they are achievable. Once established, this focus and automated monitoring will have the natural effect of improving productivity and reducing time of delivery again.

The next chapter examines team development and collaboration, and incorporates the concepts learned in this chapter. The additions and changes to Proficio made in this chapter can be found in the Code_Ch06.zip source archive, and will be used as the basis for the next chapter.
– Tom Clancy
it's just as important that they don't waste valuable time researching and reading through too many information sources simply to find what they need.

7.1. The Issues Facing Teams

Software development as part of a team, whether it is 2 people or 200 people, faces a number of challenges to the success of the effort. Many of these challenges are out of any given technology's control – for instance, finding the right people for the team, and dealing with differences in opinions. However, one of the biggest challenges relates to the sharing and management of development information. This problem is particularly relevant to those working as part of a team that is distributed across different physical locations and timezones. As each member retains project information that isn't shared or commonly accessible, every other member (and particularly new members) will inevitably have to spend time obtaining this localized information, repeating errors previously solved or duplicating efforts already made. This problem gets exponentially larger as the size of the team increases. Even when it is not localized, project information can still be misplaced, misinterpreted, or forgotten, further contributing to the problem.

While it's essential that team members receive all of the project information required to be productive, it is obvious that trying to publish and disseminate all of the available information about a project would create a near impossible learning curve and generate a barrier to productivity. Although a distributed team has a higher communication overhead than a team working in a single location, the key to the information issue in both situations is to reduce the amount of communication necessary to obtain the required information in the first place.

A Community-oriented Real-time Engineering (CoRE) process excels with this information challenge. CoRE is based on accumulated learnings from open source projects that have achieved successful, rapid development, working on complex, component-based projects despite large, widely-distributed teams. Using the model of a community, CoRE emphasizes the relationship between project information and project members. An organizational and technology-based framework, CoRE enables globally distributed development teams to cohesively contribute to high-quality software, in rapid, iterative cycles. This value is delivered to development teams by supporting project transparency, real-time stakeholder participation, and asynchronous engineering, which is enabled by the accessibility of consistently structured and organized information such as centralized code repositories, web-based communication channels and web-based project management tools. These tools aid the team to organize, visualize, and document for reuse the artifacts that result from a software project.

Even though teams may be widely distributed, the fact that everyone has direct access to the other team members through the CoRE framework reduces the time required to not only share information, but also to incorporate feedback, resulting in shortened development cycles. The CoRE approach to development also means that new team members are able to become productive quickly, and that existing team members become more productive and effective. While Maven is not tied directly to the CoRE framework, it does encompass a set of practices and tools that enable effective team communication and collaboration.
Team Collaboration with Maven

As described in Chapter 6, Maven can gather and share the knowledge about the health of a project. In this chapter, this is taken a step further, demonstrating how Maven provides teams with real-time information on the builds and health of a project, through the practice of continuous integration. This chapter also looks at the adoption and use of a consistent development environment, and the use of archetypes to ensure consistency in the creation of new projects.

7.2. How to Set up a Consistent Developer Environment

Consistency is important when establishing a shared development environment. Without it, the set up process for a new developer can be slow, error-prone and full of omissions, and it will be the source of time-consuming development problems in the future. While one of Maven's objectives is to provide suitable conventions to reduce the introduction of inconsistencies in the build environment, there are unavoidable variables that remain, such as different installation locations for software, multiple JDK versions, varying operating systems, and other discrete settings such as user names and passwords. The key is to minimize the configuration required by each individual developer, and to effectively define and declare those variables, while still allowing for this natural variability.

In Maven, these variables relate to the user and installation settings files, and to user-specific profiles. To maintain build consistency, it's a good idea to leverage Maven's two different settings files to separately manage shared and user-specific settings. In Chapter 2, you learned how to create your own settings.xml file. The settings.xml file contains a number of settings that are user-specific, such as proxy settings, but also several that are typically common across users in a shared environment. In a shared development environment, common configuration settings are included in the installation directory, while an individual developer's settings are stored in their home directory. This file can be stored in the conf directory of your Maven installation, or in the .m2 subdirectory of your home directory (settings in this location take precedence over those in the Maven installation directory).
The following is an example configuration file that you might use in the installation directory:

<settings>
  <proxies>
    <proxy>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy</host>
      <port>8080</port>
    </proxy>
  </proxies>
  <servers>
    <server>
      <id>website</id>
      <username>${website.username}</username>
    </server>
  </servers>
  <pluginGroups>
    <pluginGroup>com.mycompany.plugins</pluginGroup>
  </pluginGroups>
</settings>

There are a number of reasons to include these settings in a shared configuration:
• If a proxy server is allowed, it would usually be set consistently across the organization or department.
• The server settings will typically be common among a set of developers, with only specific properties such as the user name defined in the user's settings. By placing the common configuration in the shared settings, issues with inconsistently-defined identifiers and permissions are avoided.
• The profile defines those common, internal repositories that contain a given organization's or department's released artifacts. These repositories are independent of the central repository in this configuration. See section 7.3 for more information on setting up an internal repository.
• The mirror element can be used to specify a mirror of a repository that is closer to you, which is typically one that has been set up within your own organization or department. See section 7.3 of this chapter for more information on creating a mirror of the central repository within your own organization.
• The active profiles listed enable the profile defined previously in every environment. Another profile, property-overrides, is also enabled by default. This profile will be defined in the user's settings file to set the properties used in the shared file, such as ${website.username}.
• The plugin groups are necessary only if an organization has plugins, which are run from the command line and not defined in the POM.

You'll notice that the local repository is omitted in the prior example. In Maven, the local repository is defined as the repository of a single user. While you may define a standard location that differs from Maven's default (for example, ${user.home}/maven-repo), it is important that you do not configure this setting in a way that shares a local repository across users, at a single physical location.

The previous example forms a basic template that is a good starting point for the settings file in the Maven installation. Using the basic template, you can easily add and consistently roll out any new server and repository settings, without having to worry about integrating local changes made by individual developers.

The user-specific configuration, stored in <user_home>/.m2/settings.xml, is also much simpler, as shown below:

<settings>
  <profiles>
    <profile>
      <id>property-overrides</id>
      <properties>
        <website.username>myuser</website.username>
      </properties>
    </profile>
  </profiles>
</settings>

To confirm that the settings are installed correctly, you can view the merged result by using the following help plugin command:

C:\mvnbook> mvn help:effective-settings
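The mirror element mentioned in the list above is not shown in the shared example; a minimal sketch looks like the following, where the URL is a placeholder for your own organization's repository host:

```xml
<!-- Sketch: redirect requests for the central repository to an
     internal mirror. The host name is a hypothetical placeholder. -->
<mirrors>
  <mirror>
    <id>internal-mirror</id>
    <mirrorOf>central</mirrorOf>
    <url>http://repo.mycompany.com/maven2</url>
  </mirror>
</mirrors>
```

Because mirrorOf refers to the repository identifier central, every developer using the shared settings will download from the internal server without any change to project POMs.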
Separating the shared settings from the user-specific settings is helpful, but it is also important to ensure that the shared settings are easily and reliably installed with Maven. The following are a few methods to achieve this:
• Rebuild the Maven release distribution to include the shared configuration file and distribute it internally. A new release will be required each time the configuration is changed.
• Place the Maven installation on a read-only shared or network drive from which each developer runs the application. If this infrastructure is available, each execution will immediately be up-to-date. However, doing so will prevent Maven from being available off-line, or if there are network problems.
• Check the Maven installation into CVS, Subversion, or other source control management (SCM) system. Each developer can check out the installation into their own machine and run it from there. Retrieving an update from an SCM will easily update the configuration and/or installation.
• Use an existing desktop management solution, or other custom solution.

If necessary, it is possible to maintain multiple Maven installations, by one of the following methods:
• Using the M2_HOME environment variable to force the use of a particular installation.
• Adjusting the path or creating symbolic links (or shortcuts) to the desired Maven executable, if M2_HOME is not set.

Configuring the settings.xml file covers the majority of use cases for individual developer customization. In some circumstances however, an individual will need to customize the build of an individual project. To do this, developers must use profiles in the profiles.xml file, located in the project directory; this applies to a single project, not to all projects that are built in the developer's environment. For more information on profiles, see Chapter 3.

Now that each individual developer on the team has a consistent set up that can be customized as needed, the next step is to establish a repository to and from which artifacts can be published and dependencies downloaded.

7.3. Creating a Shared Repository

Most organizations will need to set up one or more shared repositories, since not everyone can deploy to the central Maven repository. To publish releases for use across different environments within their network, organizations will typically want to set up what is referred to as an internal repository. For an explanation of the different types of repositories, see Chapter 2. This internal repository is still treated as a remote repository in Maven, just as any other external repository would be, so that multiple developers and teams can collaborate effectively.

Setting up an internal repository is simple. You can use an existing HTTP server for this, or create a new server using Apache HTTPd, Apache Tomcat, Jetty, or any number of other servers. While any of the available transport protocols can be used, the most popular is HTTP.

To set up your organization's internal repository using Jetty, first create a new directory in which to store the files. While it can be stored anywhere you have permissions, in this example C:\mvnbook\repository will be used. Download the Jetty 5.1.10 server bundle from the book's Web site and copy it to the repository directory. Change to that directory, and run:

C:\mvnbook\repository> java -jar jetty-5.1.10-bundle.jar 8081
You can now navigate to the server's address and find that there is a Web server running, displaying that directory. For the first repository, create a subdirectory called internal, which will be available from that server:

C:\mvnbook\repository> mkdir internal

This creates an empty repository, and is all that is needed to get started. Your repository is now set up. The server is set up on your own workstation for simplicity in this example; however, you will want to set up or use an existing HTTP server that is in a shared, accessible location, configured securely and monitored to ensure it remains running at all times. While this isn't required, it is possible to use a repository on another server with any combination of supported protocols, including http, ftp, scp, sftp and more. This chapter will assume the repositories are running on this server, and that artifacts are deployed to the repositories using the file system.

Later in this chapter you will learn that there are good reasons to run multiple, separate repositories; but rather than set up multiple Web servers, you can store the repositories on this single server. It is also possible to set up another repository (or use the same one) to mirror content from the Maven central repository. You can create a separate repository under the same server, using the following command:

C:\mvnbook\repository> mkdir central

Mirroring the central repository isn't required, but it is common in many organizations, as it eliminates the requirement for Internet access or proxy configuration. It also provides faster performance (as most downloads to individual developers come from within their own network), avoids any reliance on Maven's relatively open central repository, and gives full control over the set of artifacts with which your software is built. At the time of writing, the size of the Maven repository was 5.8G. To populate the repository you just created, there are a number of methods available:

• Manually add content as desired, using mvn deploy:deploy-file.
• Set up the Repository Manager (Archiva) as a proxy to the central repository. This will download anything that is not already present, and keep a copy in your internal repository for others on your team to reuse.
• Use rsync to take a copy of the central repository and regularly update it; this works, but requires a manual procedure.

The Repository Manager (Archiva)¹⁰ is a recent addition to the Maven build platform that is designed to administer your internal repository. It is deployed to your Jetty server (or any other servlet container) and provides remote repository proxies, as well as friendly repository browsing, searching, and reporting. The repository manager can be downloaded from the Maven Web site. For more information, refer to Chapter 3.

¹⁰ Repository Manager (Archiva) is a component of Exist Global Maestro Project Server. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. For more information on Maestro, please see the Exist Global Web site.
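To publish your own artifacts into the internal repository, a project can point its deployment location at the directory served above. The following is only a sketch — the file URL assumes the C:\mvnbook\repository\internal directory created in this example, and should be adjusted to your own layout:

```xml
<distributionManagement>
  <repository>
    <id>internal</id>
    <name>Internal Repository</name>
    <!-- assumes the example directory created above; adjust to your server -->
    <url>file://localhost/c:/mvnbook/repository/internal</url>
  </repository>
</distributionManagement>
```

With this in place, running mvn deploy copies the project's artifacts into the directory that the Jetty server is serving, making them available to the rest of the team.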
When using this repository for your projects, there are two choices: use it as a mirror, or have it override the central repository. You would use it as a mirror if it is intended to be a copy of the central repository exclusively, and if it's acceptable to have developers configure this in their settings, as demonstrated in section 7.2. In that case, developers may choose to use a different mirror, or the original central repository directly, without consequence to the outcome of the build. On the other hand, if you want to prevent access to the central repository for greater control, to configure the repository from the project level instead of in each user's settings, or to include your own artifacts in the same repository, you should override the central repository. To override the central repository with your internal repository, you must define a repository in a settings file and/or POM that uses the identifier central. Usually, this must be defined as both a regular repository and a plugin repository to ensure all access is consistent. Note that unless you have mirrored the central repository using one of the techniques discussed previously, Maven will fail to download any dependencies that are not in your local repository.

Repositories such as the one above are usually configured in the POM (with one exception that will be discussed next), so that a project can add repositories itself for dependencies located outside of those configured initially. However, there is a problem: when a POM inherits from another POM that is not in the central repository, it must retrieve the parent from the repository, which makes it impossible to define that repository in the parent. As a result, it would need to be declared in every POM. Not only is this very inconvenient, it would be a nightmare to change should the repository location change! The solution is to declare your internal repository (or central replacement) in the shared settings, as shown in section 7.2. If you have multiple repositories, it is necessary to declare only those that contain an inherited POM. It is still important to declare the repositories that will be used in the top-most POM itself, for a situation where a developer might not have configured their settings and instead manually installed the POM, or had it in their source code check out.

The next section discusses how to set up an “organization POM”, or hierarchy, that declares shared settings within an organization and its departments.
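As a sketch of the override described above (the URL is a placeholder for your own internal server), the repository is declared twice under the reserved identifier central — once as a regular repository and once as a plugin repository — so that both dependency and plugin downloads are redirected consistently:

```xml
<repositories>
  <repository>
    <id>central</id>
    <name>Internal Repository</name>
    <!-- placeholder URL: point this at your internal repository -->
    <url>http://your-internal-host:8081/internal/</url>
  </repository>
</repositories>
<pluginRepositories>
  <pluginRepository>
    <id>central</id>
    <name>Internal Plugin Repository</name>
    <url>http://your-internal-host:8081/internal/</url>
  </pluginRepository>
</pluginRepositories>
```

Because the identifier central matches Maven's built-in repository definition, this declaration replaces the default rather than adding a second repository alongside it.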
7.4. Creating an Organization POM

As previously mentioned in this chapter, consistency is important when setting up your build infrastructure, and project inheritance can be used to assist in ensuring project consistency. By declaring shared elements in a common parent POM, it's possible to have one or more parents that define elements common to several projects, depending on the information that needs to be shared. Any number of levels (parents) can be used; these parents (levels) may be used to define departments, or the organization as a whole. This project structure can be related to a company structure, wherein there's the organization, its departments, and then the teams within those departments.

As an example, consider the Maven project itself. It is a part of the Apache Software Foundation, and is a project that, itself, has a number of sub-projects (Maven, Maven SCM, Maven Continuum, etc.). As a result, there are three levels to consider when working with any individual module that makes up the Maven project. While project inheritance was limited by the extent of a developer's checkout in Maven 1.0, Maven 2 now retrieves parent projects from the repository. It is important to recall, however, that if your inherited projects reside in an internal repository, then that repository will need to be added to the settings, as discussed in section 7.3.

To continue the Maven example, consider the POM for Maven SCM:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache.maven</groupId>
    <artifactId>maven-parent</artifactId>
    <version>1</version>
  </parent>
  <groupId>org.apache.maven.scm</groupId>
  <artifactId>maven-scm</artifactId>
  <url>http://maven.apache.org/maven-scm/</url>
  [...]
  <modules>
    <module>maven-scm-api</module>
    <module>maven-scm-providers</module>
    [...]
  </modules>
</project>

If you were to review the entire POM, you'd find that there is very little deployment or repository-related information, as this is consistent information, which is shared across all Maven projects through inheritance. You may have noticed the unusual version declaration for the parent project. Since the version of the POM usually bears no resemblance to the software, the easiest way to version a POM is through sequential numbering. Future versions of Maven plan to automate the numbering of these types of parent projects to make this easier.

If you look at the Maven project's parent POM, you'd see it looks like the following:

<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>org.apache</groupId>
    <artifactId>apache</artifactId>
    <version>1</version>
  </parent>
  <groupId>org.apache.maven</groupId>
  <artifactId>maven-parent</artifactId>
  <version>5</version>
  <url>http://maven.apache.org/</url>
  [...]
  <mailingLists>
    <mailingList>
      <name>Maven Announcements List</name>
      <post>announce@maven.apache.org</post>
      [...]
    </mailingList>
  </mailingLists>
  <developers>
    <developer>
      [...]
    </developer>
  </developers>
</project>
The Maven parent POM includes shared elements, such as the announcements mailing list and the list of developers that work across the whole project. Again, most of the elements are inherited from the organization-wide parent project, in this case the Apache Software Foundation:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.apache</groupId>
  <artifactId>apache</artifactId>
  <version>1</version>
  <organization>
    <name>Apache Software Foundation</name>
    <url>http://www.apache.org/</url>
  </organization>
  <url></url>
  [...]
  <repositories>
    <repository>
      <id>apache.snapshots</id>
      <name>Apache Snapshot Repository</name>
      <url>http://people.apache.org/maven-snapshot-repository</url>
      <releases>
        <enabled>false</enabled>
      </releases>
    </repository>
  </repositories>
  [...]
  <distributionManagement>
    <repository>
      [...]
    </repository>
    <snapshotRepository>
      [...]
    </snapshotRepository>
  </distributionManagement>
</project>

The Maven project declares the elements that are common to all of its sub-projects – the snapshot repository (which will be discussed further in section 7.6) – and the deployment locations.

An issue that can arise when working with this type of hierarchy is the storage location of the source POM files. These parent POM files are likely to be updated on a different, and less frequent, schedule than the projects themselves. For this reason, it is best to store the parent POM files in a separate area of the source control tree, where they can be checked out, modified, and deployed with their new version as appropriate. In fact, there is no best practice requirement to even store these files in your source control management system; you can retain the historical versions in the repository if it is backed up (in the future, the Maestro Repository Manager will allow POM updates from a Web interface).
Source control management systems like CVS and SVN (with the traditional intervening trunk directory at the individual project level) do not make it easy to store and check out such a structure.
7.5. Continuous Integration with Maestro

If you are not already familiar with it, continuous integration enables automated builds of your project on a regular interval, ensuring that conflicts are detected earlier in a project's release life cycle, rather than close to a release. More than just nightly builds, continuous integration can enable a better development culture, where team members can make smaller, iterative changes that can more easily support concurrent development processes. As such, continuous integration is a key element of effective collaboration.

Continuum¹¹ is Maven's continuous integration and build server. It is also a component of Exist Global's Maestro, in which it is referred to as Build Management. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. The examples discussed are based on Maestro 1.3; however, newer versions should be similar. For more information on Maestro, please see the Exist Global Web site.

In this chapter, you will pick up the Proficio example from earlier in the book, and learn how to use Maestro Build Management to build this project on a regular basis. The examples assume you have Subversion installed, which you can obtain for your operating system from subversion.tigris.org.

First, you will need to install the Maestro Project Server. This is very simple – once you have downloaded it from the Exist Global Web site and unpacked it, you can run it. Starting up the Maestro Project Server will also start a http server and servlet engine. There are scripts for most major platforms, as well as the generic bin/plexus/plexus.sh for use on other Unix-based platforms. For example, on Windows, use the following command:

[maestro_home]/project-server/bin/windows-x86-32/run.bat

For Linux, use:

./[maestro_home]/project-server/bin/linux-x86-32/run.sh start

You need to make the file executable in Linux by running a command from the directory where the file is located. For example:

chmod +x run.sh

Refer to the Maestro Getting Started Guide for more instructions on starting the Maestro Project Server. This document can be found in the Documentation link of the Maestro user interface. You can verify the installation by viewing the web site in a browser once the server has started.

¹¹ Alternatively, continuous integration can be done from the Exist Global Maestro Project Server.
The first screen to appear will be the one-time setup page shown in figure 7-1.

Figure 7-1: The Administrator account screen

For most installations, the configuration on this screen is straightforward – all you should need to enter are the details of the administration account you'd like to use. If you are running Maestro on your desktop and want to try the examples in this section, some additional steps are required. As of Maestro 1.3, these additional configuration requirements can be set only after the previous step has been completed, and you must stop the server to make the changes (to stop the server, press Ctrl-C in the window that is running Maestro). Figure 7-2 shows all the configuration that's required when Build Management is accessed for the first time.

Figure 7-2: Build Management general configuration screen
To complete the Build Management setup page, you can cut and paste field values from the following list:

Field Name               Value
Working Directory        working-directory
Build Output Directory   build-output-directory
Base URL                 (the address of your Build Management instance)

In the following examples, POM files will be read from the local hard disk where the server is running. By default, this is disabled as a security measure, since paths can be entered from the Web interface. To enable this setting, edit the file application.xml found in the [maestro_home]/project-server/apps/continuum/webapp/WEB-INF/classes/META-INF/plexus/ directory, and verify that the following lines are present and are not commented out:

[...]
<implementation>
  org.codehaus.plexus.formica.validation.UrlValidator
</implementation>
<configuration>
  <allowedSchemes>
    [...]
    <allowedScheme>file</allowedScheme>
  </allowedSchemes>
</configuration>
[...]

To have Build Management send you e-mail notifications, you will also need an SMTP server to which to send the messages. The default is to use localhost:25. If you do not have this set up on your machine, edit the file above to change the smtp-host setting. For instructions, refer to the Configuring mail, servers, ports, and directories section of the Maestro Project Server User Guide, found in the Documentation link of the Maestro user interface.

After these steps are completed, you can start Maestro again.

The next step is to set up the Subversion repository for the examples. This requires obtaining the Code_Ch07.zip archive and unpacking it in your environment. You can then check out Proficio from that location, substituting the path where the code was unpacked:

C:\mvnbook\proficio> svn co file://[path_to_svn_code]/proficio/trunk

For example, if the code was unpacked in C:\mvnbook\svn, the file URL in the command should point at that directory's proficio/trunk.
The POM in this repository is not completely configured yet, since not all of the required details were known at the time of its creation. Edit proficio/trunk/pom.xml to correct the e-mail address to which notifications will be sent, and to set the location of the Subversion repository, by uncommenting and modifying the following lines:

[...]
<ciManagement>
  <system>continuum</system>
  <url>[...]</url>
  <notifiers>
    <notifier>
      <type>mail</type>
      <configuration>
        <address>youremail@yourdomain.com</address>
      </configuration>
    </notifier>
  </notifiers>
</ciManagement>
[...]
<scm>
  <connection>
    scm:svn:[...]
  </connection>
  <developerConnection>
    scm:svn:[...]
  </developerConnection>
</scm>
[...]
<distributionManagement>
  <site>
    <id>website</id>
    <url>
      [...]/reference/${project.version}
    </url>
  </site>
</distributionManagement>
[...]

The ciManagement section is where the project's continuous integration is defined; in the above example, it has been configured to use Maestro Build Management locally on port 8080. The distributionManagement setting will be used in a later example to deploy the site from your continuous integration environment. This assumes that you are still running the repository Web server on localhost:8081, from the directory C:\mvnbook\repository. If you haven't done so already, refer to section 7.3 for information on how to set this up.

Once these settings have been edited to reflect your setup, commit the file with the following command:

C:\mvnbook\proficio\trunk> svn ci -m "my settings" pom.xml
You should build all these modules to ensure everything is in order, with the following command:

C:\mvnbook\proficio\trunk> mvn install

You are now ready to start using Continuum or Maestro Build Management. If you return to the location that was set up previously, you will see an empty project list. Before you can add a project to the list, you must either log in with the administrator account you created during installation, or with another account you have since created with appropriate permissions. The login link is at the top-left of the screen, under the Maestro logo.

Once you have logged in, you can select Maven 2.0+ Project from the Add Project menu. This will present the screen shown in figure 7-3.

Figure 7-3: Add project screen shot

You have two options: you can provide the URL for a POM, or upload one from your local drive. While uploading is a convenient way to configure from your existing check out, this does not work when the POM contains modules, as in the Proficio example. Instead, you will enter either a HTTP URL to a POM in the repository – a ViewCVS installation or a Subversion HTTP server, for example – or a file:// URL. When using the file:// protocol for the URL, in Maestro 1.3 and newer versions, the File protocol permission should be selected. To configure:

1. Go to the Project Server page.
2. Under Configure > Build Management, check the box before FILE.
3. Click the Save button to make the settings take effect.

When you set up your own system later, enter the file:// URL as shown.
This is all that is required to add a Maven 2 project to Build Management. After submitting the URL, Maestro Build Management will return to the project summary page, and each of the modules will be added to the list of projects. Initially, the builds will be marked as New and their checkouts will be queued. The result is shown in figure 7-4.

Figure 7-4: Summary page after projects have built

Build Management will now build the project hourly, and send an e-mail notification if there are any problems. If you want to put this to the test, go to your earlier checkout and introduce an error into Proficio.java – for example, remove the interface keyword:

[...]
public Proficio
[...]

Now, check the file in:

C:\mvnbook\proficio\trunk\proficio-api> svn ci -m "introduce error" \
    src/main/java/com/exist/mvnbook/proficio/Proficio.java
Finally, press the Build Now icon on the Build Management user interface, next to the Proficio API module. First, the build will show an “In progress” status, and then fail, marking the left column with an “!” to indicate a failed build (you will need to refresh the page using the Show Projects link in the navigation to see these changes). The Build History link can be used to identify the failed build and to obtain a full output log. In addition, you should receive an e-mail at the address you configured earlier. To avoid receiving this error every hour, restore the file above to its previous state and commit it again. The build in Build Management will return to the successful state.

This chapter will not discuss all of the features available in Maestro Build Management. Build Management also has preliminary support for system profiles and distributed testing – enhancements that are planned for future versions – but you may wish to go ahead and try them. Regardless of which continuous integration server you use, there are a few tips for getting the most out of the system:

• Commit early, commit often. Continuous integration is most effective when developers commit regularly. This doesn't mean committing incomplete code, but rather keeping changes small and well tested.
• Run builds as often as possible. This will be constrained by the length of the build and the available resources on the build machine, but it is best to detect a failure as soon as possible, before the developer moves on or loses focus. If the source control repository supports post-commit hooks, Build Management can be configured to trigger a build whenever a commit occurs.
• Fix builds as soon as possible. While this seems obvious, it is often ignored. Continuous integration will be pointless if developers repetitively ignore or delete broken build notifications, and your team will become desensitized to the notifications in the future. In addition to e-mail, you might want to set up a notification to your favorite instant messenger – IRC, Jabber, MSN and Google Talk are all supported.
• Run clean builds. While rapid, iterative builds are helpful in some situations, it is also important that failures don't occur due to old build state. Continuum currently defaults to doing a clean build, and a future version will allow developers to request a fresh checkout. Consider a regular, periodic, completely clean build.
• Run comprehensive tests. Continuous integration is most beneficial when tests are validating that the code is working as it always has, not just that the project still compiles after one or more changes occur. This also means that builds should be fast – long integration and performance tests should be reserved for periodic builds.
• Build all of a project's active branches. If multiple branches are in development, the continuous integration environment should be set up for all of the active branches.
• Establish a stable environment. When a failure occurs in the continuous integration environment, it is important that it can be isolated to the change that caused it, and be independent of the environment being used. This will make it much easier to detect the source of an error when the build does break. Avoid customizing the JDK or local settings, if it isn't something already in use in other development, test and production environments. On the other hand, it is beneficial to test against all different versions of the JDK, operating system and other variables.
• Run a copy of the application continuously. If the application is a web application, for example, run a servlet container to which the application can be deployed from the continuous integration environment. This can be helpful for non-developers who need visibility into the state of the application, separate from QA and production releases. This is another way continuous integration can help with project collaboration and communication.

In addition to the above best practices, there are two additional topics that deserve special attention: automated updates to the developer web site, and profile usage.

In Chapter 6, you learned how to create an effective site containing project information and reports about the project's health and vitality. For these reports to be of value, they need to be kept up-to-date. Though it would be overkill to regenerate the site on every commit, it is recommended that a separate, but regular, schedule is established for site generation.

Verify that you are still logged into your Maestro instance. Next, from the Administration menu on the left-hand side, select Schedules. You will see that currently, only the default schedule is available. Click the Add button to add a new schedule, which will be configured to run every hour during business hours (8am – 4pm weekdays). The appropriate configuration is shown in figure 7-5.

Figure 7-5: Schedule configuration

To complete the schedule configuration, you can cut and paste field values from the following list:

Field Name    Value
Name          Site Generation
Description   Redeploy the site to the development project site
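The schedule itself is defined with a cron expression. As a sketch — assuming the Quartz cron syntax that Continuum uses, with fields for seconds, minutes, hours, day-of-month, month, and day-of-week — an expression matching the business-hours schedule would be:

```
0 0 8-16 ? * MON-FRI
```

This fires at the top of each hour from 8:00 through 16:00, Monday through Friday.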
The example above runs at 8:00:00, 9:00:00, ..., 16:00:00, from Monday to Friday; the cron expression format is described at http://www.opensymphony.com/quartz/api/org/quartz/CronTrigger.html. The “quiet period” is a setting that delays the build if there has been a commit in the defined number of seconds prior. This is useful when using CVS, since commits are not atomic and a developer might be committing midway through an update; it is not typically needed if using Subversion.

Once you add this schedule, return to the project list, and select the top-most project, Maven Proficio. The project information shows just one build on the default schedule, which installs the parent POM, but does not recurse into the modules (the -N or --non-recursive argument). In this example, you will add a new build definition to run the site deployment for the entirety of the multi-module build. Since this is the root of the multi-module build – and it will also detect changes to any of the modules – this is the best place from which to build the site. In addition to building the sites for each module, it can aggregate changes into the top-level site as required. The downside to this approach is that Build Management (Continuum) will build any unchanged modules as well – if this is a concern, use the non-recursive mode instead, and add the same build definition to all of the modules. In Maestro Build Management, there is no way to make bulk changes to build definitions, so you will need to add the definition to each module individually.

To add a new build definition, click the Add button below the default build definition.
Figure 7-6: Adding a build definition for site deployment

To complete the Add Build Definition screen, you can cut and paste field values from the following list:

Field Name     Value
POM filename   pom.xml
Goals          clean site-deploy
Arguments      --batch-mode -Pci
Schedule       Site Generation
Type           maven2

The goals to run are clean and site-deploy. The --non-recursive option is omitted, so the build definition applies to all of the modules. The arguments provided are --batch-mode, which is essential for all builds to ensure they don't block for user input, and -Pci, which enables the profile named ci; the meaning of this profile will be explained shortly. You can see also that the schedule is set to use the site generation schedule created earlier, and that it is not the default build definition, which means that Build Now from the project summary page will not trigger this build. However, each build definition on the project information page (to which you would have been returned after adding the build definition) has its own Build Now icon. Click this icon for the site generation build definition. The site will be deployed to the file system location you specified in the POM when you first set up the Subversion repository earlier in this chapter, and you can view the generated site through the repository Web server.

It is rare that the site build will fail, since most reports continue under failure conditions. However, if you want to fail the build based on these checks as well, you can add the test, verify or integration-test goal to the list of goals, to ensure these checks are run. Any of these test goals should be listed after the site-deploy goal, so that if the build fails because of a failed check, the generated site can be used as a reference for what caused the failure.
In the previous example, the ci profile was enabled. In Chapter 6, a number of plugins were set up to fail the build if certain project health checks failed – for example, the percentage of code covered in the unit tests dropping below a certain value. As you saw before, these checks delayed the build for all developers. If you compare the example proficio/trunk/pom.xml file in your Subversion checkout to that used in Chapter 6, you'll see that these checks have now been moved to a profile. Profiles are a means for selectively enabling portions of the build, and if you haven't previously encountered them, please refer to Chapter 3.

[...]
<profiles>
  <profile>
    <id>ci</id>
    <activation>
      <property>
        <name>ci</name>
        <value>true</value>
      </property>
    </activation>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-pmd-plugin</artifactId>
        <executions>
          [...]
        </executions>
      </plugin>
      [...]
    </plugins>
  </profile>
</profiles>
[...]

You'll find that when you run the build from the command line (as was done in Continuum originally), none of the checks added in the previous chapter are executed, because the profile is enabled only when the ci system property is set to true. The checks will be run when you enable the profile, using mvn -Pci.

There are two ways to ensure that all of the builds added in Maestro Build Management use this profile. The first is to adjust the default build definition for each module, by going to the module information page and clicking Edit next to the default build definition. Since there is no way to make bulk changes to build definitions, at least in the version of Continuum current at the time of writing, it is necessary to do this for each module individually. The other alternative is to set this profile globally, for all projects in Build Management. As Maven 2 is still executed as normal, it reads the ${user.home}/.m2/settings.xml file for the user under which it is running, as well as the settings in the Maven installation. To enable this profile by default from these settings, add the following configuration to the settings.xml file in <user_home>/.m2/settings.xml:
[...]
<activeProfiles>
  [...]
  <activeProfile>ci</activeProfile>
</activeProfiles>
[...]

In this case the identifier of the profile itself, rather than the property used to enable it, indicates that the profile is always active when these settings are read.

Additionally, if the additional checks take too much time for frequent continuous integration builds, it may be necessary to schedule them separately. For example, the verify goal may need to be added to the site deployment build definition for each module, or for the entire multi-module project, to run the additional checks after the site has been generated.

How you configure your continuous integration depends on the culture of your development team and other environmental factors, such as the size of your projects and the time it takes to build and test them. The guidelines discussed in this chapter will help point your team in the right direction, but the timing and configuration can be changed depending upon your circumstances.

7.6. Team Dependency Management Using Snapshots

Chapter 3 of this book discussed how to manage your dependencies in a multi-module build, and while dependency management is fundamental to any Maven build, in a team environment the team dynamic makes it critical. In this section, you will learn about using snapshots more effectively, and how to enable this within your continuous integration environment.

So far in this book, snapshots have been used to refer to the development version of an individual module. Projects in Maven stay in the snapshot state until they are released, which is discussed in section 7.8 of this chapter. The generated artifacts of the snapshot are stored in the local repository, and in contrast to regular dependencies, these artifacts will be updated frequently. Snapshots were designed to be used in a team environment as a means for sharing development versions of artifacts that have already been built.

Usually, in an environment where a number of modules are undergoing concurrent development and projects are closely related, you must build all of the modules simultaneously from a master build; the build involves checking out all of the dependent projects and building them yourself. While building all of the modules from source can work well and is handled by Maven inherently, it can lead to a number of problems:

• It relies on manual updates from developers, which will result in local inconsistencies that can produce non-working builds
• There is no common baseline against which to measure progress
• Building can be slower, as multiple dependencies must also be rebuilt
• Changes developed against outdated code can make integration more difficult
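Before looking at how binary snapshots address these problems, note how a module consumes one: it simply declares a dependency on the snapshot version. The sketch below is illustrative, using the proficio-api module from this chapter; the groupId shown is an assumption, not taken from the example source.

```xml
<!-- Depend on the latest development build of proficio-api.
     Maven resolves 1.0-SNAPSHOT to the newest timestamped build
     found in the configured snapshot repositories.
     The groupId here is hypothetical. -->
<dependency>
  <groupId>com.exist.mvnbook.proficio</groupId>
  <artifactId>proficio-api</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
```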
As you can see from these issues, building from source doesn't fit well with an environment that promotes continuous integration. Instead, use binary snapshots that have been already built and tested. In Maven, this is achieved by regularly deploying snapshots to a shared repository, such as the internal repository set up in section 7.3. You'll see that the deployment repository was defined in proficio/trunk/pom.xml, though it may have been configured as part of your settings files:

[...]
<distributionManagement>
  <repository>
    <id>internal</id>
    <url></url>
  </repository>
  [...]
</distributionManagement>

Considering that example, deploy proficio-api to the repository with the following command:

C:\mvnbook\proficio\trunk\proficio-api> mvn deploy

You'll see that it is treated differently than when it was installed in the local repository. The filename that is used is similar to proficio-api-1.0-20070726.120139-1.jar. In this case, the version used is the time that it was deployed (in the UTC timezone) and the build number. If you were to deploy again, the time stamp would change and the build number would increment to 2. This technique allows you to continue using the latest version by declaring a dependency on 1.0-SNAPSHOT, or to lock down a stable version by declaring the dependency version to be the specific equivalent, such as 1.0-20070726.120139-1. While this is not usually the case, locking the version in this way may be important if there are recent changes to the repository that need to be ignored temporarily.

Currently, the Proficio project itself is not looking in the internal repository for dependencies. To add the internal repository to the list of repositories used by Proficio regardless of settings, add the following to proficio/trunk/pom.xml:

[...]
<repositories>
  <repository>
    <id>internal</id>
    <url></url>
  </repository>
</repositories>
[...]

If you are developing plugins, you may also want to add this as a pluginRepository element as well.

Now, to see the updated version downloaded, build proficio-core with the following command:

C:\mvnbook\proficio\trunk\proficio-core> mvn -U install

During the build, you will see that some of the dependencies are checked for updates, similar to the example below (note that this output has been abbreviated):
[...]
proficio-api:1.0-SNAPSHOT: checking for updates from internal
[...]

The -U argument in the prior command is required to force Maven to update all of the snapshots in the build. This is because the default policy is to update snapshots daily; that is, to check for an update the first time that particular dependency is used after midnight local time. Whenever you use the -U argument, it updates both releases and snapshots. This causes many plugins to be checked for updates, as well as updating any version ranges.

You can always force the update using the -U command; if it were omitted, no update would be performed until the update policy next triggered. You can also change the interval by changing the repository configuration. The settings that can be used for the update policy are never, always, daily (the default), and interval:minutes. For example, to check for updates once an hour, add the following configuration to the repository configuration you defined above in proficio/trunk/pom.xml:

[...]
<repository>
  [...]
  <snapshots>
    <updatePolicy>interval:60</updatePolicy>
  </snapshots>
</repository>
[...]

In this example, any snapshot dependencies will be checked once an hour to determine if there are updates in the remote repository. However, the updates will still occur only as frequently as new versions are deployed to the repository. This technique can ensure that developers get regular updates, without having to manually intervene, and without slowing down the build by checking on every access (as would be the case if the policy were set to always).

It is possible to establish a policy where developers do an update from the source control management (SCM) system before committing, and then deploy the snapshot to share with the other team members. However, this introduces a risk that the snapshot will not be deployed at all, deployed with uncommitted code, or deployed without all the updates from the SCM, making it out-of-date. Several of the problems mentioned earlier still exist, so at this point all that is being saved is some time, assuming that the other developers have remembered to follow the process.

A much better way to use snapshots is to automate their creation. Since the continuous integration server regularly rebuilds the code from a known state, it makes sense to have it build snapshots, as well.
How you implement this will depend on the continuous integration server that you use. However, as you saw earlier, Continuum can be configured to deploy its builds to a Maven snapshot repository automatically. If there is a repository configured to which to deploy them, this feature is enabled by default in a build definition. So far in this section, you have not been asked to apply this setting, so let's go ahead and do it now.

To deploy from your server, you must ensure that the distributionManagement section of the POM is correctly configured. Log in as an administrator and go to the following Configuration screen, shown in figure 7-7.

Figure 7-7: Build Management configuration

To complete the Continuum configuration page, you can cut and paste field values from the following list:

Field Name                Value
Working Directory         working-directory
Build Output Directory    build-output-directory
Base URL

The Deployment Repository Directory field entry relies on your internal repository and Continuum server being in the same location. If this is not the case, you can enter a full repository URL such as scp://repositoryhost/www/repository/internal.

To try this feature, follow the Show Projects link, and click Build Now on the Proficio API project. Once the build completes, return to your console and build proficio-core again using the following command:

C:\mvnbook\proficio\trunk\proficio-core> mvn -U install
You'll notice that a new version of proficio-api is downloaded, with an updated time stamp and build number. Given this configuration, you can avoid all of the problems discussed previously. With this setup, while you get regular updates from published binary dependencies, you can either lock a dependency to a particular build, or build from source, when necessary.

Another point to note about snapshots is that it is possible to store them in a separate repository from the rest of your released artifacts. This can be useful if you need to clean up snapshots on a regular interval, but still keep a full archive of releases. Better yet, you can make the snapshot update process more efficient by not checking the repository that has only releases for updates.

If you are using the regular deployment mechanism (instead of using Maestro Build Management or Continuum), this separation is achieved by adding an additional repository to the distributionManagement section of your POM. For example, if you had a snapshot-only repository in /www/repository/snapshots, you would add the following:

[...]
<distributionManagement>
  [...]
  <snapshotRepository>
    <id>internal.snapshots</id>
    <url></url>
  </snapshotRepository>
</distributionManagement>
[...]

This will deploy to that repository whenever the version contains SNAPSHOT, and deploy to the regular repository you listed earlier when it doesn't. The replacement repository declarations in your POM would look like this:

[...]
<repositories>
  <repository>
    <id>internal</id>
    <url></url>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
  <repository>
    <id>internal.snapshots</id>
    <url></url>
    <snapshots>
      <updatePolicy>interval:60</updatePolicy>
    </snapshots>
  </repository>
</repositories>
[...]
7.7. Creating a Standard Project Archetype

Throughout this book, you have seen the archetypes that were introduced in Chapter 2 used to quickly lay down a project structure. As you saw in this chapter, the requirement of achieving consistency is a key issue facing teams. Beyond the convenience of laying out a project structure instantly, archetypes give you the opportunity to start a project in the right way; that is, in a way that is consistent with other projects in your environment. While this is convenient, there is always some additional configuration required, either in adding or removing content from that generated by the archetypes. To avoid this, you can create one or more of your own archetypes.

There are two ways to create an archetype: one based on an existing project using mvn archetype:create-from-project, and the other, by hand, using an archetype and replacing the specific values with parameters. To get started with the archetype, run the following command:

C:\mvnbook\proficio\trunk> mvn archetype:create \
    -DgroupId=com.exist.mvnbook \
    -DartifactId=proficio-archetype \
    -DarchetypeArtifactId=maven-archetype-archetype
The layout of the resulting archetype is shown in figure 7-8.

Figure 7-8: Archetype directory layout

If you look at pom.xml at the top level, you'll see that the archetype is just a normal JAR project; there is no special build configuration required. The JAR that is built is composed only of resources, so everything else is contained under src/main/resources. There are two pieces of information required: the archetype descriptor in META-INF/maven/archetype.xml, and the template project in archetype-resources.

The archetype descriptor describes how to construct a new project from the archetype-resources provided. The example descriptor looks like the following:

<archetype>
  <id>proficio-archetype</id>
  <sources>
    <source>src/main/java/App.java</source>
  </sources>
  <testSources>
    <source>src/test/java/AppTest.java</source>
  </testSources>
</archetype>

Each tag is a list of files to process and generate in the created project. The example above shows the sources and test sources, but it is also possible to specify files for resources, testResources, and siteResources.
These files will be used to generate the template files when the archetype is run. Once you have completed the content in the archetype, install and deploy it like any other JAR. Continuing from the example in section 7.3 of this chapter, you will use the “internal” repository. Since the archetype inherits the Proficio parent, it has the correct deployment settings already, so you can run the following command:

C:\mvnbook\proficio\trunk\proficio-archetype> mvn deploy

The archetype is now ready to be used. To do so, go to an empty directory and run the following command, specifying the archetype version 1.0-SNAPSHOT explicitly:

C:\mvnbook> mvn archetype:create -DgroupId=com.exist...

Normally the version could be omitted; however, since the archetype has not yet been released, if omitted, the required version would not be known (or if this was later development, a previous release would be used instead).

Maven will build a project that looks very similar to the content of the archetype-resources directory you created earlier; now, however, the content of the files will be populated with the values that you provided on the command line, in place of the template parameters such as the groupId:

<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
        http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>$groupId</groupId>
  <artifactId>$artifactId</artifactId>
  <version>$version</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

For more information on creating an archetype, refer to the documentation on the Maven Web site.
7.8. Cutting a Release

Releasing software is difficult. It is usually tedious and error prone, full of manual steps that need to be completed in a particular order, and it happens at the end of a long period of development when all everyone on the team wants to do is get it out there, which often leads to omissions or short cuts. Worse, once a release has been made, it is usually difficult or impossible to correct mistakes other than to make another, new release.

For these reasons, releases should be consistent every time they are built. Once the definition for a release has been set by a team, Maven provides a release plugin that provides the basic functions of a standard release process, allowing them to be highly automated.12

The release plugin operates in two steps: prepare and perform. The prepare step is run once for a release, and does all of the project and source control manipulation that results in a tagged version. The release plugin takes care of a number of manual steps in updating the project POM, updating the source control management system to check and commit release related changes, and creating tags (or equivalent for your SCM). The perform step could potentially be run multiple times, to rebuild a release from a clean checkout of the tagged version and to perform standard tasks, such as deployment to the remote repository.

To demonstrate how the release plugin works, the Proficio example will be revisited and released as 1.0. You can continue using the code that you have been working on in the previous sections, or check out the following:

C:\mvnbook\proficio> svn co \
    ...

To start the release process, run the following command:

C:\mvnbook\proficio\trunk> mvn release:prepare -DdryRun=true

This simulates a normal release preparation, without making any modifications to your project. As the command runs, you will be prompted for values. Accept the defaults in this instance (note that running Maven in “batch mode” avoids these prompts and will accept all of the defaults). You'll notice that each of the modules in the project is considered.

12 Exist Global Maestro provides an automated feature for performing releases. Maestro is an Apache License 2.0 distribution based on a pre-integrated Maven, Continuum and Archiva build platform. For more information on Maestro, please see the Exist Global Web site.
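For release:prepare to commit the version changes and create the tag, the POM must declare where the project lives in source control. The following is a minimal sketch of the scm section the plugin relies on; the repository URLs are placeholders, not the actual Proficio locations.

```xml
<!-- Read by the release plugin when checking in POM changes and
     tagging the release. URLs are illustrative placeholders. -->
<scm>
  <connection>scm:svn:http://svnhost/repos/proficio/trunk</connection>
  <developerConnection>scm:svn:https://svnhost/repos/proficio/trunk</developerConnection>
  <url>http://svnhost/viewvc/proficio/trunk</url>
</scm>
```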
8.1. Introduction

The purpose of this chapter is to show a migration path from an existing build in Ant to Maven. This will allow you to evaluate Maven's technology while still running your existing, Ant-based build system, enabling you to continue with your required work. You will learn how to start building with Maven, how to use an existing directory structure (though you will not be following the standard, recommended Maven directory structure), how to run Ant tasks from within Maven, and how to split your sources into modules or components. Among other things, you will also be introduced to the concept of dependencies.

8.1.1. Introducing the Spring Framework

The Spring Framework is one of today's most popular Java frameworks. The Maven migration example is based on the Spring Framework build, which uses an Ant script. The Spring release is composed of several modules. For the purpose of this example, we will focus only on building version 2.0-m1 of Spring, which is the latest version at the time of writing. This example will take you through the step-by-step process of migrating Spring to a modularized, component-based, Maven build.
Figure 8-1: Dependency relationship between Spring modules

In figure 8-1, you can see graphically the dependencies between the modules. Optional dependencies are indicated by dotted lines. Each of these modules corresponds, more or less, with the Java package structure, and each produces a JAR. The Ant script compiles each of these different source directories and then creates a JAR for each module, using inclusions and exclusions that are based on the Java packages of each class.

For Spring, the source directories are:

• src and test: contain JDK 1.4 compatible source code and JUnit tests respectively
• tiger/src and tiger/test: contain additional JDK 1.5 compatible source code and JUnit tests

Each of the source directories also includes classpath resources (XML files, properties files, TLD files, etc.). The src and tiger/src directories are compiled to the same destination, as are the test and tiger/test directories, resulting in JARs that contain both 1.4 and 1.5 classes.
8.2. Where to Begin?

With Maven, the rule of thumb to use is to produce one artifact (JAR, WAR, etc.) per Maven project file. In the Spring example, that means you will need to have a Maven project (a POM) for each of the modules listed above. To start, you will create a subdirectory called 'm2' to keep all the necessary Maven changes clearly separated from the current build system. Inside the 'm2' directory, you will need to create a directory for each of Spring's modules.

Figure 8-2: A sample spring module directory
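To tie the per-module directories into a single build, the POM at the top of the m2 directory lists each module. The sketch below is abbreviated and uses a few of the Spring module names that appear later in this chapter; the full list would mirror figure 8-1.

```xml
<!-- Each entry is the relative path to a directory containing a
     pom.xml; Maven builds the modules in dependency order.
     The list here is abbreviated. -->
<modules>
  <module>spring-core</module>
  <module>spring-beans</module>
  <module>spring-context</module>
</modules>
```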
In the m2 directory, you will need to create a parent POM. You will use the parent POM to store the common configuration settings that apply to all of the modules. As explained previously, each module will inherit the following values (settings) from the parent POM:

<groupId>com.exist.m2book.migrating</groupId>
<artifactId>spring-parent</artifactId>
<version>2.0-m1-SNAPSHOT</version>
<name>Spring parent</name>
<packaging>pom</packaging>
<description>Spring Framework</description>
<inceptionYear>2002</inceptionYear>
<url>http://www.springframework.org</url>
<organization>
  <name>The Spring Framework Project</name>
</organization>

• groupId: this setting indicates your area of influence (company, department, project, etc.). It should mimic standard package naming conventions to avoid duplicate values. For example, in Spring, the Spring team would use org.springframework; however, for this example you will use com.exist.m2book.migrating, as it is our 'unofficial' example version of Spring.
• artifactId: this setting specifies the name of this module (for example, spring-parent).
• version: this setting should always represent the next release version number appended with -SNAPSHOT, that is, the version you are developing in order to release. Recall from previous chapters that during the release process, Maven will convert to the definitive, non-snapshot version for a short period of time, in order to tag the release in your SCM.
• packaging: the jar, war, and ear values should be obvious to you (a pom value means that this project is used for metadata only).

The other values are not strictly required, and are primarily used for documentation purposes.

In this parent POM we can also add dependencies such as JUnit, which will be used for testing in every module, thereby eliminating the requirement to specify the dependency repeatedly across multiple modules:

<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>

For Spring, the main source and test directories are src and test, respectively. Let's begin with these directories.
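Each module's POM then points back to this parent to pick up the shared settings. A sketch for spring-core, assuming the parent coordinates used in this chapter:

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <!-- Inherit the groupId, version, dependencies and plugin
       configuration declared in the parent POM. -->
  <parent>
    <groupId>com.exist.m2book.migrating</groupId>
    <artifactId>spring-parent</artifactId>
    <version>2.0-m1-SNAPSHOT</version>
  </parent>
  <artifactId>spring-core</artifactId>
  <name>Spring core</name>
</project>
```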
Using the following code snippet from Spring's Ant build script, in the buildmain target, you can retrieve some of the configuration parameters for the compiler:

<javac destdir="${target.classes.dir}" source="1.3" target="1.3"
    debug="${debug}" deprecation="false" optimize="false"
    failonerror="true">
  <src path="${src.dir}"/>
  <!-- Include Commons Attributes generated Java sources -->
  <src path="${commons.attributes.tempdir.src}"/>
  <classpath refid="all-libs"/>
</javac>

As you can see, these include the source and target compatibility (1.3), deprecation and optimize (false), and failonerror (true) values. These last three properties use Maven's default values, so there is no need for you to add the configuration parameters. Spring's Ant script also uses a debug parameter; to specify the required debug function in Maven, you will need to append -Dmaven.compiler.debug=false to the mvn command (by default this is set to true). Recall from Chapter 2 that Maven automatically manages the classpath from its list of dependencies. For now, you don't have to worry about the commons-attributes generated sources mentioned in the snippet, as you will learn about that later in this chapter.

At this point, your build section will look like this:

<build>
  <sourceDirectory>../../src</sourceDirectory>
  <testSourceDirectory>../../test</testSourceDirectory>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.3</source>
        <target>1.3</target>
      </configuration>
    </plugin>
  </plugins>
</build>
The other configuration that will be shared is related to the JUnit tests. From the tests target in the Ant script:

<junit forkmode="perBatch" printsummary="yes" haltonfailure="yes"
    haltonerror="yes">
  <jvmarg line="-Djava.awt.headless=true -XX:MaxPermSize=128m -Xmx128m"/>
  <!-- Must go first to ensure any jndi.properties files etc take precedence -->
  <classpath location="${target.classes.dir}"/>
  <classpath location="${target.mockclasses.dir}"/>
  <classpath location="${target.testclasses.dir}"/>
  <!-- Need files loaded as resources -->
  <classpath location="${test.dir}"/>
  <classpath refid="all-libs"/>
  <formatter type="plain" usefile="false"/>
  <formatter type="xml"/>
  <batchtest fork="yes" todir="${reports.dir}">
    <fileset dir="${target.testclasses.dir}"
        includes="${test.includes}" excludes="${test.excludes}"/>
  </batchtest>
</junit>

You can extract some configuration information from the previous code:

• forkMode="perBatch" matches with Maven's forkMode parameter with a value of once, since the concept of a batch for testing does not exist.
• The nested element jvmarg is mapped to the configuration parameter argLine.
• As previously noted, classpath is automatically managed by Maven from the list of dependencies, so you will not need to locate the test classes directory (dir).
• formatter elements are not required, as Maven generates both plain text and xml reports. Maven sets the reports destination directory (todir) to target/surefire-reports by default, and this doesn't need to be changed.
• You will not need any printsummary, haltonfailure and haltonerror settings, as Maven, by default, prints the test summary and stops for any test error or failure.
• You will need to specify the value of the properties test.includes and test.excludes from the nested fileset. This value is read from the project.properties file loaded from the Ant script (refer to the code snippet below for details).

# Wildcards to be matched by JUnit tests.
# Convention is that our JUnit test classes have XXXTests-style names.
test.includes=**/*Tests.class

# Wildcards to exclude among JUnit tests.
# Second exclude needs to be used for JDK 1.4, due to Hibernate 3.1
# being compiled with target JDK 1.5.
test.excludes=**/Abstract*
#test.excludes=**/Abstract* org/springframework/orm/hibernate3/**

The includes and excludes referenced above translate directly into the include/exclude elements of the POM's plugin configuration. Since Maven requires JDK 1.4 to run, you do not need to exclude hibernate3 tests. Note that it is possible to use another lower JVM to run tests if you wish; refer to the Surefire plugin reference documentation for more information.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkMode>once</forkMode>
    <childDelegation>false</childDelegation>
    <argLine>
      -Djava.awt.headless=true -XX:MaxPermSize=128m -Xmx128m
    </argLine>
    <includes>
      <include>**/*Tests.class</include>
    </includes>
    <excludes>
      <exclude>**/Abstract*</exclude>
    </excludes>
  </configuration>
</plugin>

The childDelegation option is required to prevent conflicts when running under Java 5 between the XML parser provided by the JDK and the one included in the dependencies in some modules. It makes tests run using the standard classloader delegation instead of the default Maven isolated classloader. When building only on Java 5, you could remove that option and the XML parser (Xerces) and APIs (xml-apis) dependencies, which are mandatory when building in JDK 1.4.

Spring's Ant build script also makes use of the commons-attributes compiler in its compileattr and compiletestattr targets. The commons-attributes compiler processes javadoc style annotations (it was created before Java supported annotations in the core language on JDK 1.5) and generates sources from them that have to be compiled with the normal Java compiler, which are processed prior to the compilation.

From compileattr:

<!-- Compile to a temp directory: Commons Attributes will place Java Source here. -->
<attribute-compiler destdir="${commons.attributes.tempdir.src}">
  <!-- Only the PathMap attribute in the
       org.springframework.web.servlet.handler.metadata package
       currently needs to be shipped with an attribute. -->
  <fileset dir="${src.dir}" includes="**/metadata/*.java"/>
</attribute-compiler>

From compiletestattr:

<!-- Compile to a temp directory: Commons Attributes will place Java Source here. -->
<attribute-compiler destdir="${commons.attributes.tempdir.test}">
  <fileset dir="${test.dir}" includes="org/springframework/aop/**/*.java"/>
  <fileset dir="${test.dir}" includes="org/springframework/jmx/**/*.java"/>
</attribute-compiler>

In Maven, this same function can be accomplished by adding the commons-attributes plugin to the build section in the POM. Maven handles the source and destination directories automatically, so you will only need to add the inclusions for the main source and test source compilation:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>commons-attributes-maven-plugin</artifactId>
  <executions>
    <execution>
      <configuration>
        <includes>
          <include>**/metadata/*.java</include>
        </includes>
        <testIncludes>
          <include>org/springframework/aop/**/*.java</include>
          <include>org/springframework/jmx/**/*.java</include>
        </testIncludes>
      </configuration>
    </execution>
  </executions>
</plugin>
Running Tests

Running the tests in Maven simply requires running mvn test. However, when you run this command, you will get the following error report:

Results :
[surefire] Tests run: 113, Failures: 1, Errors: 1

[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] There are test failures.
[INFO] ------------------------------------------------------------------------

Upon closer examination of the report output, you will find the following:

[surefire] Running org.springframework.core.io.support.PathMatchingResourcePatternResolverTests
[surefire] Tests run: 5, Failures: 1, Errors: 1, Time elapsed: 0.015 sec <<<<<<<< FAILURE !!

This output means that this test has logged a JUnit failure and error. To debug the problem, you will need to check the test logs under target/surefire-reports, in this case org.springframework.core.io.support.PathMatchingResourcePatternResolverTests.txt, for the test class that is failing. Within this file, there is a section for each failed test called stacktrace. The first section starts with java.io.FileNotFoundException: class path resource [org/aopalliance/] cannot be resolved to URL because it does not exist. This indicates that there is something missing in the classpath that is required to run the tests. The org.aopalliance package is inside the aopalliance JAR, so to resolve the problem add the following to your POM:

<dependency>
  <groupId>aopalliance</groupId>
  <artifactId>aopalliance</artifactId>
  <version>1.0</version>
  <scope>test</scope>
</dependency>
Now run mvn test again. You will get the following wonderful report:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------

The last step in migrating this module (spring-core) from Ant to Maven is to run mvn install to make the resulting JAR available to other projects in your local Maven repository. This command can be used instead most of the time, as it will process all of the previous phases of the build life cycle (generate sources, compile, compile tests, run tests, etc.).
8.6. Other Modules

Now that you have one module working, it is time to move on to the other modules. If you follow the order of the modules described at the beginning of the chapter you will be fine; otherwise, you will find that the main classes from some of the modules reference classes from modules that have not yet been built. See figure 8-1 to get the overall picture of the interdependencies between the Spring modules.

8.6.1. Avoiding Duplication

As soon as you begin migrating the second module, you will find that you are repeating yourself. For instance, you will be adding the Surefire plugin configuration settings repeatedly for each module that you convert. To avoid duplication, move these configuration settings to the parent POM instead. That way, each of the modules will be able to inherit the required Surefire configuration.

In the same way, instead of repeatedly adding the same dependency version information to each module, use the parent POM's dependencyManagement section to specify this information once, and remove the versions from the individual modules (see Chapter 3 for more information). Using the parent POM to centralize this information makes it possible to upgrade a dependency version across all sub-projects from a single location.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.0.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>

The following are some variables that may also be helpful to reduce duplication:

• ${project.version}: version of the current POM being built
• ${project.groupId}: groupId of the current POM being built

For example, you can refer to spring-core from spring-beans with the following, since they have the same groupId and version:

<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>spring-core</artifactId>
  <version>${project.version}</version>
</dependency>
8.6.2. Referring to Test Classes from Other Modules

If you have tests from one component that refer to tests from other modules, there is a procedure you can use. First, make sure that when you run mvn install, a JAR that contains the test classes is also installed in the repository:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Once that JAR is installed, you can use it as a dependency for other components; be sure to put that JAR in the test scope, as follows:

<dependency>
  <groupId>${project.groupId}</groupId>
  <artifactId>spring-beans</artifactId>
  <version>${project.version}</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>

A final note on referring to test classes from other modules: if you have all of Spring's mock classes inside the same module, you can split them into spring-context-mock, with only those classes related to the spring-context module, and spring-web-mock, with only those classes related to spring-web. Generally with Maven, it's easier to deal with small modules, particularly in light of transitive dependencies.

8.6.3. Building Java 5 Classes

Some of Spring's modules include Java 5 classes from the tiger folder. As the compiler plugin was earlier configured to compile with Java 1.3 compatibility, how can the Java 1.5 sources be added? To do this with Maven, you will need to create a new spring-beans-tiger module. Consider that if you include some classes compiled for Java 1.3 and some compiled for Java 5 in the same JAR, any users attempting to use one of the Java 5 classes under Java 1.3 or 1.4 would experience runtime errors. By splitting them into different modules, users will know that if they depend on the module composed of Java 5 classes, they will need to run them under Java 5.
Although it is typically not recommended to depend on another module's tests, in this case it is necessary, to avoid refactoring the test source code; referring to the installed test JAR by specifying the test-jar type makes this possible. Keeping all of the mock classes in a single module can cause the previously-described cyclic dependency problem, and splitting them as described above eliminates it. Similarly, for the Java 5 sources, you need to create a new module with only Java 5 classes, instead of adding them to the same module and mixing classes with different requirements.
As with the other modules that have been covered, the Java 5 modules will share a common configuration for the compiler. The best way to split them is to create a tiger folder with the Java 5 parent POM, and then a directory for each one of the individual tiger modules, as follows:

Figure 8-3: A tiger module directory

The final directory structure should appear as follows:

Figure 8-4: The final directory structure
In the tiger POM, with all modules in place, you will need to add a module entry for each of the directories, point the source directories at the tiger folder, and configure the compiler for Java 5:

<build>
  <sourceDirectory>../../tiger/src</sourceDirectory>
  <testSourceDirectory>../../tiger/test</testSourceDirectory>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>
  </plugins>
</build>
In the parent POM, you just need a new module entry for the tiger folder; but to still be able to build the other modules when using Java 1.4, you will add that module in a profile that will be triggered only when using a 1.5 JDK:

<profiles>
  <profile>
    <id>jdk1.5</id>
    <activation>
      <jdk>1.5</jdk>
    </activation>
    <modules>
      <module>tiger</module>
    </modules>
  </profile>
</profiles>

8.6.4. Using Ant Tasks From Inside Maven

In certain migration cases, you may find that Maven does not have a plugin for a particular task, or that an Ant target is so small that it may not be worth creating a new plugin. Maven can call Ant tasks directly from a POM using the maven-antrun-plugin. For example, with the Spring migration, you need to use the Ant rmic task in the spring-remoting module to run the RMI compiler. From Ant, this is:

<rmic base="${target.classes.dir}"
      classname="org.springframework.remoting.rmi.RmiInvocationWrapper"/>
<rmic base="${target.classes.dir}"
      classname="org.springframework.remoting.rmi.RmiInvocationWrapper" iiop="true">
  <classpath refid="all-libs"/>
</rmic>
To include this in the Maven build, you will need to determine when Maven should run the Ant task. In this case, the rmic task will take the compiled classes and generate the RMI skeleton, stub, and tie classes from them, so the most appropriate phase in which to run this Ant task is the process-classes phase. To complete the configuration, add:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>process-classes</phase>
      <configuration>
        <tasks>
          <echo>Running rmic</echo>
          <rmic base="${project.build.directory}/classes"
                classname="org.springframework.remoting.rmi.RmiInvocationWrapper"/>
          <rmic base="${project.build.directory}/classes"
                classname="org.springframework.remoting.rmi.RmiInvocationWrapper"
                iiop="true">
            <classpath refid="maven.compile.classpath"/>
          </rmic>
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
  <dependencies>
    <dependency>
      <groupId>com.sun</groupId>
      <artifactId>tools</artifactId>
      <scope>system</scope>
      <version>1.4</version>
      <systemPath>${java.home}/../lib/tools.jar</systemPath>
    </dependency>
  </dependencies>
</plugin>

As shown in the code snippet above, there are some references already available to the Ant tasks, such as maven.compile.classpath, which is a classpath reference constructed from all of the dependencies in the compile scope or lower, and ${project.build.directory}. There are also references for anything that was added to the plugin's dependencies section, which applies to that plugin only, such as the reference to the tools.jar above, which is bundled with the JDK and required by the rmic task.
8.6.5. Non-redistributable Jars

Sun's Activation Framework and JavaMail are not redistributable from the repository due to constraints in their licenses. You may need to download them yourself from the Sun site, or get them from the lib directory in the example code for this chapter. You can then install them in your local repository with the following command:

mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=<packaging>

For instance, to install JavaMail:

mvn install:install-file -Dfile=[path_to_file]/mail.jar -DgroupId=javax.mail -DartifactId=mail -Dversion=1.3.2 -Dpackaging=jar

You will only need to do this process once for all of your projects, or you may use a corporate repository to share them across your organization. For more information on dealing with this issue, see http://maven.apache.org/guides/mini/guide-coping-with-sun-jars.html.

8.6.6. Some Special Cases

In addition to the procedures outlined previously for migrating Spring to Maven, there are two additional, special cases that must be handled. These issues were shared with the Spring developer community and are listed below:

• Moving the test classes that used relative paths (Log4JConfigurerTests and NamespaceHandlerUtilsTests), as these test cases will not work in both Maven and Ant. Using classpath resources is recommended over using file system resources.
• There is some additional configuration required for some modules, such as spring-aspects, which uses AspectJ for weaving the classes. These can be viewed in the example code.
8.7. Restructuring the Code

If you do decide to use Maven for your project, it is highly recommended that you go through the restructuring process to take advantage of the many time-saving and simplifying conventions within Maven. By adopting Maven's standard directory structure, you would eliminate the need to include and exclude sources and resources “by hand” in the POM files as shown in this chapter, and you can simplify the POM significantly.

In the case of the Spring example, for the spring-core module, you would move all Java files under org/springframework/core and org/springframework/util from the original src folder to the module's folder src/main/java. All of the other files under those two packages would go to src/main/resources. The same applies for the tests: these would move from the original test folder to src/test/java and src/test/resources, respectively, for Java sources and other files; just remember not to move the excluded tests (ComparatorTests, ClassUtilsTests, ObjectUtilsTests, ReflectionUtilsTests, SerializationTestUtils and ResourceTests).

8.8. Summary

By following and completing this chapter, you will be able to take an existing Ant-based build, split it into modular components (if needed), compile and test the code, create JARs, and install those JARs in your local repository using Maven. At the same time, you will be able to keep your current build working. Now that you have seen how to do this for Spring, you can apply similar concepts to your own Ant-based build.

Once you decide to switch completely to Maven, you will be able to take advantage of the benefits of adopting Maven's standard directory structure, in addition to the improvements to your build life cycle. Once you have spent this initial setup time, you can realize Maven's other benefits: advantages such as built-in project documentation generation, reports, and quality metrics. Since Maven downloads everything it needs and shares it across all your Maven projects automatically, you can delete that 80 MB lib folder. Finally, Maven can eliminate the requirement of storing JARs in a source code management system, reducing its size by two-thirds!
Appendix A: Resources for Plugin Developers

In this appendix you will find:
• Maven's Life Cycles
• Mojo Parameter Expressions
• Plugin Metadata

Scotty: She's all yours, sir. All systems automated and ready. A chimpanzee and two trainees could run her!
Kirk: Thank you, Mr. Scott. I'll try not to take that personally.
- Star Trek
A.1. Maven's Life Cycles

Below is a discussion of Maven's three life cycles and their default mappings. Maven provides three life cycles, corresponding to the three major activities performed by Maven: building a project from source, cleaning a project of the files generated by a build, and generating a project web site. This section begins by listing the phases in each life cycle, along with a short description for the mojos which should be bound to each. It continues by describing the mojos bound to the default life cycle for both the jar and maven-plugin packagings, along with a summary of bindings for those packagings. Finally, this section will describe the mojos bound by default to the clean and site life cycles.

A.1.1. The default Life Cycle

The default life cycle is executed in order to perform a traditional build. In other words, it takes care of compiling the project's code, performing any associated tests, archiving it into a jar, and distributing it into the Maven repository system. For the default life cycle, mojo-binding defaults are specified in a packaging-specific manner. This is necessary to accommodate the inevitable variability of requirements for building different types of projects.

Life-cycle phases

The default life cycle contains the following phases:

1. validate – verify that the configuration of Maven, and the content of the current set of POMs to be built, is valid.
2. initialize – perform any initialization steps required before the main part of the build can start.
3. generate-sources – generate compilable code from other source formats.
4. process-sources – perform any source modification processes necessary to prepare the code for compilation. For example, a mojo may apply source code patches here.
5. generate-resources – generate non-code resources (such as configuration files, etc.) from other source formats.
6. process-resources – perform any modification of non-code resources necessary. This may include copying these resources into the target classpath directory in a Java build.
7. compile – compile source code into binary form, in the target output location.
8. process-classes – perform any post-processing of the binaries produced in the preceding step, such as instrumentation or offline code-weaving, as when using Aspect-Oriented Programming techniques.
9. generate-test-sources – generate compilable unit test code from other source formats.
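To make the phase list concrete, this is how a POM attaches a goal to one of these phases via an execution block; the plugin and goal shown are just illustrative, any goal can be bound this way:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <!-- binds to the process-classes phase, right after compilation -->
      <phase>process-classes</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <echo>post-processing compiled classes</echo>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>
```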
10. process-test-sources – perform any source modification processes necessary to prepare the unit test code for compilation. For example, a mojo may apply source code patches here.
11. generate-test-resources – generate non-code testing resources (such as configuration files, etc.) from other source formats.
12. process-test-resources – perform any modification of non-code testing resources necessary. This may include copying these resources into the testing target classpath location in a Java build.
13. test-compile – compile unit test source code into binary form, in the testing target output location.
14. test – execute unit tests on the application compiled and assembled up to step 8 above.
15. package – assemble the tested application code and resources into a distributable archive.
16. pre-integration-test – setup the integration testing environment for this project. This may involve installing the archive from the preceding step into some sort of application server.
17. integration-test – execute any integration tests defined for this project, using the environment configured in the preceding step.
18. post-integration-test – return the environment to its baseline form after executing the integration tests in the preceding step. This could involve removing the archive produced in step 15 from the application server used to test it.
19. verify – verify the contents of the distributable archive, before it is available for installation or deployment.
20. install – install the distributable archive into the local Maven repository.
21. deploy – deploy the distributable archive into the remote Maven repository configured in the distributionManagement section of the POM.
Bindings for the jar packaging

Below are the default life-cycle bindings for the jar packaging. Alongside each, you will find a short description of what that mojo does.

Table A-1: The default life-cycle bindings for the jar packaging

Phase                   Mojo           Plugin                  Description
process-resources       resources      maven-resources-plugin  Copy non-source-code resources to the staging directory for jar creation. Filter variables if necessary.
compile                 compile        maven-compiler-plugin   Compile project source code to the staging directory for jar creation.
process-test-resources  testResources  maven-resources-plugin  Copy non-source-code test resources to the test output directory for unit-test compilation.
test-compile            testCompile    maven-compiler-plugin   Compile unit-test source code to the test output directory.
test                    test           maven-surefire-plugin   Execute project unit tests.
package                 jar            maven-jar-plugin        Create a jar archive from the staging directory.
install                 install        maven-install-plugin    Install the jar archive into the local Maven repository.
deploy                  deploy         maven-deploy-plugin     Deploy the jar archive to a remote Maven repository, specified in the POM distributionManagement section.
Bindings for the maven-plugin packaging

The maven-plugin project packaging behaves in almost the same way as the more common jar packaging. Indeed, maven-plugin artifacts are in fact jar files. As such, they undergo the same basic processes of marshaling non-source-code resources, compiling source code, testing, packaging, and the rest. However, the maven-plugin packaging also introduces a few new mojo bindings:

Table A-2: The additional life-cycle bindings for the maven-plugin packaging

Phase               Mojo                       Plugin               Description
generate-resources  descriptor                 maven-plugin-plugin  Extract and format the metadata for the mojos within, and generate a plugin descriptor.
package             addPluginArtifactMetadata  maven-plugin-plugin  Integrate current plugin information with plugin search metadata, and metadata references to the latest plugin version.
install             updateRegistry             maven-plugin-plugin  Update the plugin registry, if one exists, to reflect the new plugin installed in the local repository.
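A project opts into these extra bindings simply by declaring the packaging; a minimal sketch (the coordinates here are illustrative):

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.plugins</groupId>
  <artifactId>example-maven-plugin</artifactId>
  <version>1.0-SNAPSHOT</version>
  <!-- this single element pulls in the descriptor, addPluginArtifactMetadata
       and updateRegistry bindings described above -->
  <packaging>maven-plugin</packaging>
</project>
```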
A.1.2. The clean Life Cycle

This life cycle is executed in order to restore a project back to some baseline state – usually, the state of the project before it was built. Below is a listing of the phases in the clean life cycle, along with a summary of the default bindings. Maven provides a set of default mojo bindings for this life cycle, which perform the most common tasks involved in cleaning a project.

Life-cycle phases

The clean life cycle contains the following phases:

1. pre-clean – execute any setup or initialization procedures to prepare the project for cleaning
2. clean – remove all files that were generated during another build process
3. post-clean – finalize the cleaning process.

Default life-cycle bindings

Below are the clean life-cycle bindings, effective for all POM packagings. Alongside each, you will find a short description of what that mojo does.

Table A-3: The clean life-cycle bindings for the jar packaging

Phase  Mojo   Plugin              Description
clean  clean  maven-clean-plugin  Remove the project build directory, along with any additional directories configured in the POM.
A.1.3. The site Life Cycle

This life cycle is executed in order to generate a web site for your project. It will run any reports that are associated with your project, render your documentation source files into HTML, and even deploy the resulting web site to your server. Below is a listing of the phases in the site life cycle, along with a summary of the default bindings. Maven provides a set of default mojo bindings for this life cycle, which perform the most common tasks involved in generating the web site for a project.

Life-cycle phases

The site life cycle contains the following phases:

1. pre-site – execute any setup or initialization steps to prepare the project for site generation
2. site – run all associated project reports, and render documentation source files into HTML
3. post-site – execute any actions required to finalize the site generation process, and prepare the generated web site for potential deployment
4. site-deploy – use the distributionManagement configuration in the project's POM to deploy the generated web site files to the web server.

Default life-cycle bindings

Below are the site life-cycle bindings, effective for all POM packagings. Alongside each, you will find a short description of what that mojo does.

Table A-4: The site life-cycle bindings for the jar packaging

Phase        Mojo    Plugin             Description
site         site    maven-site-plugin  Generate all configured project reports, and render documentation into HTML.
site-deploy  deploy  maven-site-plugin  Deploy the generated web site to the web server path specified in the POM distributionManagement section.
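For site-deploy to have somewhere to publish, the POM needs a site entry in distributionManagement; a sketch in which the id, server, and path are illustrative:

```xml
<distributionManagement>
  <site>
    <id>example-website</id>
    <url>scp://server.example.com/var/www/docs/project</url>
  </site>
</distributionManagement>
```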
A.2. Mojo Parameter Expressions

Mojo parameter values are resolved by way of parameter expressions when a mojo is initialized. This section discusses the expression language used by Maven to inject build state and plugin configuration into mojos. It will summarize the root objects of the build state which are available for mojo expressions. Finally, it will describe the algorithm used to resolve complex parameter expressions. Using the discussion below, along with the published Maven API documentation, mojo developers should have everything they need to extract the build state they require.

A.2.1. Simple Expressions

Maven's plugin parameter injector supports several primitive expressions, which act as a shorthand for referencing commonly used build state objects. These expressions allow a mojo to traverse complex build state, and extract only the information it requires. This reduces the complexity of the code contained in the mojo, and often eliminates dependencies on Maven itself beyond the plugin API. They are summarized below:

Table A-5: Primitive expressions supported by Maven's plugin parameter injector

${localRepository}  org.apache.maven.artifact.repository.ArtifactRepository
    This is a reference to the local repository, used to cache artifacts during a Maven build.
${session}          org.apache.maven.execution.MavenSession
    The current build session. This contains methods for accessing information about how Maven was called, in addition to providing a mechanism for looking up Maven components on-demand.
${reactorProjects}  java.util.List<org.apache.maven.project.MavenProject>
    List of project instances which will be processed as part of the current build.
${reports}          java.util.List<org.apache.maven.reporting.MavenReport>
    List of reports to be generated when the site life cycle executes.
${executedProject}  org.apache.maven.project.MavenProject
    This is a cloned instance of the project instance currently being built. It is used for bridging results from forked life cycles back to the main line of execution.
A.2.2. Complex Expression Roots

In addition to the simple expressions above, Maven supports more complex expressions that traverse the object graph starting at some root object that contains build state. The valid root objects for plugin parameter expressions are summarized below:

Table A-6: A summary of the valid root objects for plugin parameter expressions

${basedir}   java.io.File
    The current project's root directory. No advanced navigation can take place using this expression.
${project}   org.apache.maven.project.MavenProject
    Project instance which is currently being built.
${settings}  org.apache.maven.settings.Settings
    The Maven settings, merged from conf/settings.xml in the Maven application directory and from .m2/settings.xml in the user's home directory.
${plugin}    org.apache.maven.plugin.descriptor.PluginDescriptor
    The descriptor instance for the current plugin, including its dependency artifacts.

A.2.3. The Expression Resolution Algorithm

Plugin parameter expressions are resolved using a straightforward algorithm. First, if the expression matches one of the primitive expressions (mentioned above) exactly, then the value mapped to that expression is returned. Otherwise, the expression is split at each '.' character, rendering an array of navigational directions. The first expression part is the root object, and must correspond to one of the roots mentioned above; this root object is retrieved from the running application using a hard-wired mapping, unless specified otherwise. From there, successive expression parts will extract values from deeper and deeper inside the build state.

During this process, the next expression part is used as a basis for reflectively traversing that object's state, following standard JavaBeans naming conventions. For example, an expression part named 'child' translates into a call to the getChild() method on that object. The resulting value then becomes the new 'root' object for the next round of traversal. If at some point the referenced object doesn't contain a property that matches the next expression part, this reflective lookup process is aborted. Repeating this, when there are no more expression parts, the value that was resolved last will be returned as the expression's value.
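The reflective walk described above can be sketched in a few lines of plain Java. This is not Maven's actual implementation; the Project and Build classes below are hypothetical stand-ins for Maven's build-state objects, and the method only illustrates the split-then-getter traversal:

```java
public class ExpressionSketch {

    // Hypothetical stand-ins for Maven's build-state objects.
    public static class Build {
        public String getDirectory() { return "target"; }
    }

    public static class Project {
        public Build getBuild() { return new Build(); }
    }

    // Resolve an expression such as "build.directory" against a root object
    // by splitting at '.' and calling the JavaBeans getter for each part.
    public static Object resolve(Object root, String expression) {
        Object current = root;
        for (String part : expression.split("\\.")) {
            if (current == null) {
                return null;
            }
            // 'directory' -> getDirectory(), per JavaBeans naming conventions
            String getter = "get"
                    + Character.toUpperCase(part.charAt(0))
                    + part.substring(1);
            try {
                current = current.getClass().getMethod(getter).invoke(current);
            } catch (ReflectiveOperationException e) {
                // No matching property: the reflective lookup is aborted.
                return null;
            }
        }
        return current;
    }

    public static void main(String[] args) {
        // Mimics ${project.build.directory} with the stand-in classes above.
        System.out.println(resolve(new Project(), "build.directory")); // prints "target"
    }
}
```

Each resolved value becomes the root for the next part, which is exactly the "new 'root' object for the next round of traversal" behavior described above.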
If at this point Maven still has not been able to resolve a value for the parameter expression, it will attempt to find a value in one of two remaining places, resolved in this order:

1. The system properties. Maven will consult the current system properties, including properties specified on the command line using the -D command-line option.
2. The POM properties. If a user has specified a property mapping this expression to a specific value in the current POM, an ancestor POM, or an active profile, it will be resolved as the parameter value at this point.

If the parameter is still empty after these two lookups, then the string literal of the expression itself is used as the resolved value. Currently, Maven plugin parameter expressions do not support collection lookups, array index references, or method invocations that don't conform to standard JavaBean naming conventions.

Plugin Metadata

Below is a review of the mechanisms used to specify metadata for plugins. It includes summaries of the essential plugin descriptor, as well as the metadata formats which are translated into plugin descriptors from Java- and Ant-specific mojo source files.

Plugin descriptor syntax

The following is a sample plugin descriptor. Its syntax has been annotated to provide descriptions of the elements.

<plugin>
  <!-- These are the identity elements (groupId/artifactId/version)
     | from the plugin POM.
   -->
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-myplugin-plugin</artifactId>
  <version>2.0-SNAPSHOT</version>

  <!-- The description element of the plugin's POM. -->
  <description>Sample Maven Plugin</description>

  <!-- This element provides the shorthand reference for this plugin. For
     | instance, this plugin could be referred to from the command line using
     | the 'myplugin:' prefix.
   -->
  <goalPrefix>myplugin</goalPrefix>

  <!-- Whether the configuration for this plugin should be inherited from
     | parent to child POMs by default.
   -->
  <inheritedByDefault>true</inheritedByDefault>

  <!-- This is a list of the mojos contained within this plugin. -->
  <mojos>
    <mojo>
      <!-- The name of the mojo. Combined with the 'goalPrefix' element
         | above, this name allows the user to invoke this mojo from the
         | command line using 'myplugin:do-something'.
       -->
      <goal>do-something</goal>
      <!-- Description of what this mojo does. -->
      <description>Do something cool.</description>

      <!-- Which phase of the life cycle this mojo will bind to by default.
         | It is a good idea to provide this, to give users a hint at where
         | this task should run.
       -->
      <phase>compile</phase>

      <!-- This tells Maven to create a clone of the current project and
         | life cycle, then execute that life cycle up to the specified
         | phase. This is useful when the user will be invoking this mojo
         | directly from the command line, but the mojo itself has certain
         | life-cycle prerequisites.
       -->
      <executePhase>process-resources</executePhase>

      <!-- This is optionally used in conjunction with the executePhase
         | element, and specifies a custom life-cycle overlay that should be
         | added to the cloned life cycle before the specified phase is
         | executed. This is useful to inject specialized behavior in cases
         | where the main life cycle should remain unchanged.
       -->
      <executeLifecycle>myLifecycle</executeLifecycle>

      <!-- Ensure that this other mojo within the same plugin executes
         | before this one. It's restricted to this plugin to avoid creating
         | inter-plugin dependencies.
       -->
      <executeGoal>do-something-first</executeGoal>

      <!-- Determines how Maven will execute this mojo in the context of a
         | multimodule build. If a mojo is marked as an aggregator, it will
         | only execute once, regardless of the number of project instances
         | in the current build. If the mojo is not marked as an aggregator,
         | it will be executed once for each project instance in the current
         | build. Mojos that are marked as aggregators should use the
         | ${reactorProjects} expression to retrieve a list of the project
         | instances in the current build.
       -->
      <aggregator>false</aggregator>

      <!-- Tells Maven that this mojo can ONLY be invoked directly, via the
         | command line.
       -->
      <requiresDirectInvocation>false</requiresDirectInvocation>

      <!-- Tells Maven that a valid project instance must be present for
         | this mojo to execute.
       -->
      <requiresProject>true</requiresProject>

      <!-- Tells Maven that a valid list of reports for the current project
         | is required before this plugin can execute.
       -->
      <requiresReports>false</requiresReports>

      <!-- Some mojos cannot execute if they don't have access to a network
         | connection. This flag controls whether the mojo requires Maven to
         | be online. If Maven is operating in offline mode, such mojos will
         | cause the build to fail.
       -->
      <requiresOnline>false</requiresOnline>

      <!-- Tells Maven that this mojo's configuration should be inherited
         | from a parent POM by default, unless the user specifies
         | <inherit>false</inherit>.
       -->
      <inheritedByDefault>true</inheritedByDefault>

      <!-- The class or script path (within the plugin's jar) for this
         | mojo's implementation.
       -->
      <implementation>org.apache.maven.plugins.site.SiteDeployMojo</implementation>

      <!-- The implementation language for this mojo. -->
      <language>java</language>

      <!-- This is a list of the parameters used by this mojo. -->
      <parameters>
        <parameter>
          <!-- The parameter's name. In Java mojos, this will often reflect
             | the parameter field name in the mojo class.
           -->
          <name>inputDirectory</name>

          <!-- This is an optional alternate parameter name for this
             | parameter. It will be used as a backup for retrieving the
             | parameter value.
           -->
          <alias>outputDirectory</alias>

          <!-- The Java type for this parameter. -->
          <type>java.io.File</type>

          <!-- Whether this parameter is required to have a value. If true,
             | the mojo (and the build) will fail when this parameter
             | doesn't have a value.
           -->
          <required>true</required>

          <!-- Whether this parameter's value can be directly specified by
             | the user, either via command-line or POM configuration. If
             | set to false, this parameter must be configured via some
             | other section of the POM, as in the case of the list of
             | project dependencies.
           -->
          <editable>true</editable>

          <!-- Description for this parameter, specified in the javadoc
             | comment for the parameter field in Java mojo implementations.
           -->
          <description>This parameter does something important.</description>
        </parameter>
      </parameters>

      <!-- This is the operational specification of this mojo's parameters,
         | as compared to the descriptive specification above. Each
         | parameter must have an entry here that describes the parameter
         | name, parameter type, and the primary expression used to extract
         | the parameter's value.
         |
         | The general form is:
         |   <param-name implementation="param-type">param-expr</param-name>
         |
         | For example, this parameter is named "inputDirectory", it expects
         | a type of java.io.File, and the expression used to extract the
         | parameter value is ${project.reporting.outputDirectory}.
       -->
      <configuration>
        <inputDirectory implementation="java.io.File">${project.reporting.outputDirectory}</inputDirectory>
      </configuration>

      <!-- This is the list of non-parameter component references used by
         | this mojo. Components are specified by their interface class name
         | (role), along with an optional classifier for the specific
         | component instance to be used (role-hint). Finally, the
         | requirement specification tells Maven which mojo-field should
         | receive the component instance.
       -->
      <requirements>
        <requirement>
          <!-- Use a component of type:
             | org.apache.maven.artifact.manager.WagonManager
           -->
          <role>org.apache.maven.artifact.manager.WagonManager</role>

          <!-- Inject the component instance into the "wagonManager" field
             | of this mojo.
           -->
          <field-name>wagonManager</field-name>
        </requirement>
      </requirements>
    </mojo>
  </mojos>
</plugin>
A.2.4. Java Mojo Metadata: Supported Javadoc Annotations

The Javadoc annotations used to supply metadata about a particular mojo come in two types. Class-level annotations correspond to mojo-level metadata elements, and field-level annotations correspond to parameter-level metadata elements.

Class-level annotations

The table below summarizes the class-level javadoc annotations which translate into specific elements of the mojo section in the plugin descriptor.

Table A-7: A summary of class-level javadoc annotations

Descriptor Element        Javadoc Annotation                Values                            Required?
aggregator                @aggregator                       true or false (default is false)  No
description               (class-level javadoc comment)     Anything                          No
executePhase              @execute phase="<phaseName>"      Any valid phase name              No
executeLifecycle          @execute lifecycle="<lifecycle>"  life cycle name                   No
goal                      @goal <goalName>                  Alphanumeric, with dash ('-')     Yes
inheritedByDefault        @inheritByDefault                 true or false (default is true)   No
phase                     @phase <phaseName>                Any valid phase name              No
requiresDirectInvocation  @requiresDirectInvocation         true or false (default is false)  No
Field-level annotations

The table below summarizes the field-level annotations which supply metadata about mojo parameters. These metadata translate into elements within the parameter, configuration, and requirements sections of a mojo's specification in the plugin descriptor.

Table A-8: Field-level annotations. Its columns are Descriptor Element, Javadoc Annotation, Values, and Required?. The descriptor elements covered include alias, the parameter/configuration section, the requirements section, required, editable, description, and deprecated. The corresponding annotations are @parameter expression="${expr}" alias="alias" default-value="val", @component roleHint="someHint" (roleHint is optional), @required, @readonly, the field comment itself, and @deprecated, whose value names an alternative parameter and which is usually left blank otherwise. None of these is required, though a description via the field comment is recommended.

Ant Metadata Syntax

The following is a sample Ant-based mojo metadata file. Its syntax has been annotated to provide descriptions of the elements.

<pluginMetadata>
  <!-- Contains the list of mojos described by this metadata file. NOTE:
     | multiple mojos are allowed here, corresponding to the ability to map
     | multiple mojos into a single build script.
     |-->
  <mojos>
    <mojo>
      <!-- The name for this mojo -->
      <goal>myGoal</goal>
      <!-- The default life-cycle phase binding for this mojo -->
      <phase>compile</phase>
      <!-- The dependency scope required for this mojo. Maven will resolve
         | the dependencies in this scope before this mojo executes.
         |-->
      <requiresDependencyResolution>compile</requiresDependencyResolution>
      <!-- Whether this mojo requires a current project instance -->
      <requiresProject>true</requiresProject>
      <!-- Whether this mojo requires access to project reports -->
      <requiresReports>true</requiresReports>
      <!-- Whether this mojo requires Maven to execute in online mode -->
      <requiresOnline>true</requiresOnline>
      <!-- Whether this mojo must be invoked directly from the command
         | line.
         |-->
      <requiresDirectInvocation>true</requiresDirectInvocation>
      <!-- Whether this mojo operates as an aggregator -->
      <aggregator>true</aggregator>
      <!-- Whether the configuration for this mojo should be inherited
         | from parent to child POMs by default.
         |-->
      <inheritByDefault>true</inheritByDefault>
      <!-- This describes the mechanism for forking a new life cycle to be
         | executed prior to this mojo executing.
         |-->
      <execute>
        <!-- The phase of the forked life cycle to execute -->
        <phase>initialize</phase>
        <!-- A named overlay to augment the cloned life cycle for this fork
           | only
           |-->
        <lifecycle>mine</lifecycle>
        <!-- Another mojo within this plugin to execute before this mojo
           | executes.
           |-->
        <goal>goal</goal>
      </execute>
      <!-- List of non-parameter application components used in this mojo -->
      <components>
        <component>
          <!-- This is the type for the component to be injected. -->
          <role>org.apache.maven.artifact.resolver.ArtifactResolver</role>
          <!-- This is an optional classifier for which instance of a particular
             | component type should be used.
             |-->
          <hint>custom</hint>
        </component>
      </components>
      <!-- The list of parameters this mojo uses -->
      <parameters>
        <parameter>
          <!-- The parameter name. -->
          <name>nom</name>
          <!-- The property name used by Ant tasks to reference this parameter
             | value.
             |-->
          <property>prop</property>
          <!-- Whether this parameter is required for mojo execution -->
          <required>true</required>
          <!-- The expression used to extract this parameter's value -->
          <expression>${my.property}</expression>
          <!-- The default value provided when the expression won't resolve -->
          <defaultValue>${project.artifactId}</defaultValue>
          <!-- The Java type of this mojo parameter -->
          <type>org.apache.maven.project.MavenProject</type>
          <!-- An alternative configuration name for this parameter -->
          <alias>otherProp</alias>
          <!-- Whether the user can edit this parameter directly in the POM
             | configuration or the command line
             |-->
          <readonly>true</readonly>
          <!-- The description of this parameter -->
          <description>Test parameter</description>
          <!-- When this is specified, this element will provide advice for an
             | alternative parameter to use instead.
             |-->
          <deprecated>Use something else</deprecated>
        </parameter>
      </parameters>
      <!-- The description of what the mojo is meant to accomplish -->
      <description>
        This is a test.
      </description>
      <!-- If this is specified, it provides advice on which alternative mojo
         | to use.
         |-->
      <deprecated>Use another mojo</deprecated>
    </mojo>
  </mojos>
</pluginMetadata>
Standard Directory Structure

Table B-1: Standard directory layout for Maven project content

pom.xml: Maven's POM, which is always at the top-level of a project.
LICENSE.txt: A license file is encouraged for easy identification by users and is optional.
README.txt: A simple note which might help first time users and is optional.
target/: Directory for all generated output. This would include compiled classes, the generated site or anything else that might be generated as part of your build.
src/main/java/: Standard location for application sources.
src/main/resources/: Standard location for application resources.
src/main/filters/: Standard location for resource filters.
src/main/assembly/: Standard location for assembly filters.
src/main/config/: Standard location for application configuration filters.
src/test/java/: Standard location for test sources.
src/test/resources/: Standard location for test resources.
src/test/filters/: Standard location for test resource filters.
target/generated-sources/<plugin-id>: Standard location for generated sources that may be compiled. For example, you may generate some sources from a JavaCC grammar.
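Because this layout is the default, a project that follows it needs nothing beyond its coordinates in the POM (a sketch; com.example and my-app are placeholder values):

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>      <!-- placeholder -->
  <artifactId>my-app</artifactId>     <!-- placeholder -->
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
</project>
```

With nothing else configured, Maven will look for application sources in src/main/java and test sources in src/test/java, per the table above.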
Maven's Super POM

<project>
  <modelVersion>4.0.0</modelVersion>
  <name>Maven Default Project</name>

  <!-- Repository Conventions -->
  <repositories>
    <repository>
      <id>central</id>
      <name>Maven Repository Switchboard</name>
      <layout>default</layout>
      <url>http://repo1.maven.org/maven2</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
  </repositories>

  <!-- Plugin Repository Conventions -->
  <pluginRepositories>
    <pluginRepository>
      <id>central</id>
      <name>Maven Plugin Repository</name>
      <url>http://repo1.maven.org/maven2</url>
      <layout>default</layout>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
      <releases>
        <updatePolicy>never</updatePolicy>
      </releases>
    </pluginRepository>
  </pluginRepositories>

  <!-- Reporting Conventions -->
  <reporting>
    <outputDirectory>target/site</outputDirectory>
  </reporting>
  ...
</project>
Maven's Default Build Life Cycle

validate: Validate the project is correct and all necessary information is available.
process-sources: Process the source code, for example to filter any values.
generate-resources: Generate resources for inclusion in the package.
process-resources: Copy and process the resources into the destination directory, ready for packaging.
compile: Compile the source code of the project.
process-classes: Post-process the generated files from compilation, for example to do byte code enhancement on Java classes.
generate-test-sources: Generate any test source code for inclusion in compilation.
process-test-sources: Process the test source code, for example to filter any values.
generate-test-resources: Create resources for testing.
process-test-resources: Copy and process the resources into the test destination directory.
test-compile: Compile the test source code into the test destination directory.
test: Run tests using a suitable unit testing framework. These tests should not require the code be packaged or deployed.
package: Take the compiled code and package it in its distributable format, such as a JAR.
pre-integration-test: Perform actions required before integration tests are executed. This may involve things such as setting up the required environment.
integration-test: Process and deploy the package if necessary into an environment where integration tests can be run.
post-integration-test: Perform actions required after integration tests have been executed. This may include cleaning up the environment.
verify: Run any checks to verify the package is valid and meets quality criteria.
install: Install the package into the local repository, for use as a dependency in other projects locally.
deploy: Done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.
| https://pt.scribd.com/doc/66655570/47487031-BetterBuildsWithMaven | CC-MAIN-2015-48 | en | refinedweb
Hi, I am a C newbie. My first project is to model my guitar (strings + frets = notes) in C and then write algorithms to create music. This post attempts a simple for loop to assign a char array containing musical letters (i.e., E, F, F#...n) to another char array of 19 possible notes on my bottom (i.e., low) E string.
Code:
#include <stdio.h>
main() {
char e_string[19];
char notes[12] = {E, F, F#Gb, G, G#Ab, A, A#Bb, B, C, C#Db, D, D#Eb};
int i;
for (i=0; i<19; i++) {
e_string[i] = notes[i];
print f("%c\n", e_string[i]);
}
return void;
}
1) Have I correctly declared and initialized my variables?
2) Have I correctly looped through i and assigned the notes to the 19 possible positions on the E string?
3) Is there a simpler way to do this operation?
Thanks for your interest and comments! IRNO | http://cboard.cprogramming.com/c-programming/116576-c-guitar-printable-thread.html | CC-MAIN-2015-48 | en | refinedweb |
Library: General utilities
Does not inherit
Base class for creating binary function objects
#include <functional>

namespace std {
  template <class Arg1, class Arg2, class Result>
  struct binary_function {
      typedef Arg1 first_argument_type;
      typedef Arg2 second_argument_type;
      typedef Result result_type;
  };
}

binary_function is a base class for creating binary function objects: classes whose operator() takes two arguments. You can create your own C++ Standard Library function objects by inheriting from binary_function.
Function Objects, unary_function, and Section 3.2, "Function Objects," in the User's Guide
ISO/IEC 14882:1998 -- International Standard for Information Systems -- Programming Language C++, Section 20.3.1 | http://stdcxx.apache.org/doc/stdlibref/binary-function.html | CC-MAIN-2015-48 | en | refinedweb |
Next: Display Tables, Up: Character Display [Contents][Index]
Here are the conventions for displaying each character code (in the absence of a display table, which can override these conventions; see Display Tables).
tab-width controls the number of spaces per tab stop (see below).
ctl-arrow. If this variable is non-nil (the default), these characters are displayed as sequences of two glyphs, where the first glyph is ‘^’ (a display table can specify a glyph to use instead of ‘^’); e.g., the DEL character is displayed as ‘^?’.
If ctl-arrow is nil, these characters are displayed as octal escapes (see below).
This rule also applies to carriage return (character code 13), if that character appears in the buffer. But carriage returns usually do not appear in buffer text; they are eliminated as part of end-of-line conversion (see Coding System Basics).
The above display conventions apply even when there is a display table, for any character whose entry in the active display table is nil. Thus, when you set up a display table, you need only specify the characters for which you want special behavior.
The following variables affect how certain characters are displayed on the screen. Since they change the number of columns the characters occupy, they also affect the indentation functions.

Control characters may be displayed as octal escapes: a backslash followed by three octal digits, as in ‘\001’.
tab-to-tab-stop. See Indent Tabs.
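These variables can be set buffer-locally; for instance (a sketch, using the variables described above):

```elisp
;; Display control characters as octal escapes (\001-style) rather than
;; ^X sequences, and use 4-column tab stops, in the current buffer only.
(setq-local ctl-arrow nil)
(setq-local tab-width 4)
```

Both variables automatically become buffer-local when set, so this does not affect other buffers.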
| http://www.gnu.org/software/emacs/manual/html_node/elisp/Usual-Display.html | CC-MAIN-2015-48 | en | refinedweb
Details
Description
I have a contrib:tableRows component that binds the EvenOdd class like this:
<component id="tableRows" type="contrib:TableRows">
<binding name="class" expression="beans.evenOdd.next"/>
<binding name="row" expression="currentRow"/>
</component>
However, the evenOdd class was never getting instantiated or processed on each row in the table.
I looked into my .html page and noticed that I had something like this:
<tr jwcid="tableRows" class="odd">
but when I took the class attribute out of my HTML, the evenOdd bean got instantiated and processed. So it looks like there is an issue with binding a bean to an attribute if it already exists in a .html page. This page is part of an app I am migrating from 3.0, in which the evenOdd bean processed properly with the class attribute in the .html page.
Is this an issue with contrib:tableRows or bindings in general if you attempt to bind to an html attribute that is already set?
scott
Activity
I think this behavior is actually correct. That component probably has inherit-informal-parameters set to true, which is what you've done. Not allowing the informal parameter to be set would be an even bigger issue. I see no other way to do this.
It is a bug with bindings in general. The problem should be in ComponentTemplateLoaderLogic. If a binding is specified in both the template and the specification, the validate() method should return false, but at the moment it is only checking conflicts between informal parameters in the template and formal/reserved parameters, so it is returning true.
public class ComponentTemplateLoaderLogic
{
    private boolean validate(IComponent component, IComponentSpecification spec, String name,
        IBinding binding)
    {
        // TODO: This is ugly! Need a better/smarter way, even if we have to extend BindingSource
        // to tell us.
        boolean literal = binding instanceof LiteralBinding;
        boolean isFormal = (spec.getParameter(name) != null);

        if (isFormal)
        {
            // Literal bindings in the template that conflict with bound parameters
            // from the spec are silently ignored.
            if (literal)
                return false;

            throw new ApplicationRuntimeException(ImplMessages.dupeTemplateBinding(
                name, component, _loadComponent), component, binding.getLocation(), null);
        }

        {
            if (component.getBinding(name) != null)
                return true;
        }

        if (!spec.getAllowInformalParameters())
        {
            // Again; if informal parameters are disallowed, ignore literal bindings, as they
            // are there as placeholders or for WYSIWYG.
            if (literal)
                return false;

            throw new ApplicationRuntimeException(ImplMessages.templateBindingForInformalParameter(
                _loadComponent, name, component), component, binding.getLocation(), null);
        }

        // If the name is reserved (matches a formal parameter
        // or reserved name, caselessly), then skip it.
        if (spec.isReservedParameterName(name))
        {
            // Final case for literals: if they conflict with a reserved name, they are ignored.
            // Again, there for WYSIWYG.
            if (literal)
                return false;

            throw new ApplicationRuntimeException(ImplMessages.templateBindingForReservedParameter(
                _loadComponent, name, component), component, binding.getLocation(), null);
        }

        return true;
    }
}
| https://issues.apache.org/jira/browse/TAPESTRY-403 | CC-MAIN-2015-48 | en | refinedweb
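The rule being debated can be sketched in isolation (all names here are hypothetical stand-ins for Tapestry's IComponent/IComponentSpecification types; the line marked as the proposed fix follows the commenter's suggestion, returning false where the code above returns true):

```java
import java.util.Map;
import java.util.Set;

// Standalone sketch of the template-binding conflict rules discussed above.
class BindingConflictSketch {

    // Returns true when the template binding named `name` should be kept,
    // false when it should be silently ignored.
    static boolean validate(Set<String> formalParams,
                            Set<String> reservedNames,
                            Map<String, String> specBindings,
                            String name,
                            boolean literal) {
        if (formalParams.contains(name)) {
            if (literal) return false;   // WYSIWYG placeholder, ignore
            throw new IllegalStateException("dupe template binding: " + name);
        }
        // Proposed fix: a binding already declared in the specification wins,
        // so the template binding is skipped (the current code returns true here).
        if (specBindings.containsKey(name)) {
            return false;
        }
        if (reservedNames.contains(name)) {
            if (literal) return false;
            throw new IllegalStateException("reserved parameter name: " + name);
        }
        return true;
    }

    public static void main(String[] args) {
        // The bug report's scenario: "class" is bound in the specification
        // (beans.evenOdd.next) and also appears literally in the template.
        boolean keep = validate(Set.of("row"),                      // formal parameters
                                Set.of("jwcid"),                    // reserved names
                                Map.of("class", "beans.evenOdd.next"),
                                "class",
                                true);                              // literal template binding
        System.out.println(keep);   // prints false: the spec binding wins
    }
}
```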
Re: The Sundowners: "Always You"
- Below is what I found in 5 minutes of "Googling". Apart
from the "Always You" track appearing on several
compilations and the Rev-Ola label CD listed below,
there is also a 12-minute and 28-second performance by
the Sundowners on the Monkees tour in which they
perform a medley of songs -- file attached here.
1.
"The Sundowners were a music group from Lake George,
New York. The original line-up included Bobby Dick (bass
& vocals), Dominick DeMieri (guitar & vocals), Eddie
Placidi (guitar & vocals), Eddie Brick (lead vocals),
and Kerim "Kim" Capli (drums). They opened for the
Monkees famous tour in 67 with Jimi Hendrix. The
lackluster sales doomed this single but it makes a good
vehicle for Jenni. Enjoy!"
2.
Always You (Single Version) performed by The Sundowners
(1968)
Composed by Nichols/Asher
Arranged by Dominick De Mieri
From: USA
Recommended by eftimihn [profile] on Sunday 20th
June 2004
"To me this is certainly a pinnacle of pure late 60s
sunshine pop. Beach Boys-esque vocal harmonies, great bassline & trumpet and
catchy as hell with its uplifting chord progressions
throughout. While the album version (recently included
on the highly recommended "The Get Easy! Sunshine Pop
Collection") is good already, the single version is
just crisper, lusher, just perfect."
3. and also:
Formed: 1959, Lake George, NY.
~ Mark Deming, All Music Guide
4.
Friday, September 01, 2006
The Sundowners - "Dear Undecided"
A happy holiday weekend to everyone! Today, I thought
that I'd write about a great pop single. You know, the
kind with an irresistible hook, that you might spin
several times in a row. And since The Sundowners have
been a recent topic of discussion over at the Psychotic
Reactions forum at garagepunk.com (see link on this
page), there's not been a better time to blog "Dear
Undecided". I found this record way over a year ago and
though I've filed it several times, it always finds its
way right back on my turntable. That's about the best
testimonial that I can give a record.
I don't know a lot about The Sundowners, except that
they released at least three singles and an album for
Decca in 1968. They also probably hailed from California,
since they got to appear on an episode of The Flying Nun
and also the Robert Wagner spy drama, It Takes A Thief.
It was on that program that they lip synched to today's
record.
"Dear Undecided" was written by band member, Domenic
DeMurri. It could loosely be called garage pop and even
has a bit of a mod sound. It's tuneful, with guitars,
lots of good drumming and background vocals galore. And
the hook is just incredible. Lyrically, the song is
about a guy who's willing to wait for a girl's current
relationship to washout before he gets his chance.
I've seen "Dear Undecided" described as Beatles
influenced and I suppose that's true. I also think it
sounds like The Sundowners had listened to a lot of Who
singles. Anyhow, the British Invasion plays a major role
but The Sundowners certainly put their best spin on it
here and whipped up an unforgettable tune of their own.
It seems ripe for a revival. I'm surprised that some
contemporary outfit hasn't remade this with a bigger
guitar sound and a bit faster tempo.
The flipside of this gem, "Always You" isn't bad either.
It falls more into the sunshine pop category but is also
tuneful, with a few strings added to the mix. I'm not
sure how common the record is but I don't think it's
very pricy. My copy cost a whopping 50 cents. I'm now on
the lookout for The Sundowners' other 45's and their
album.
The Sundowners entertain on It Takes A Thief, circa 1968
[click on the URL above to visit this blog for the photo
and more]
Their Website is
5. current eBay listing:
6.
"Captain Nemo" CD by The Sundowners on Rev-Ola CRREV201
Tracks:
Sunny Day People
Edge Of Love
Let It Be Me
Dear Undecided
Ring Out, Wild Bells
Plaster Casters
Captain Nemo
Always You
Easy Does It
Blue-Green Eyes
So Sad
"This mysterious band has become something of a legend
in its own lunchtime.
The Sundowners released a string of singles between 1966
and 1968 and the amazing 1968 album Captain Nemo which
blends Beatlesque melodies, West Coast cool, frantic
showmanship, psychedelic production flourishes and a
mile-wide cheeky grin.
They also appeared in the Tony Curtis movie Don't Make
Waves (the one with the Byrds theme tune) and the
popular TV shows It Takes A Thief and The Flying Nun.
As if that wasn't enough, they toured with The Monkees
and Jimi Hendrix!
Quite how a band this good with these credentials
slipped so far off the radar in the intervening years
is a mystery. However, us splendid fellows at Rev-Ola
have leapt to the rescue by issuing this neglected gem
on CD for the first time ever and have wrapped it all
up in the kind of dazzling audio and visual package
you've grown to expect from us. How could we not?"
- Jim Shannon:
> I came across a record a few nights ago in the garageBob Haldeman:
> band category. "Always You" was released in 1967 by
> an upstate New York band called The Sundowners. Does
> anyone know if this was part of a film soundtrack? I
> don't recall the single charting that well.
I don't much about it, but I found a copy of the song on vol. 4
of "Melody Goes On."
-----
Paul Carr:
The Sundowners did appear in a 1967 movie with Tony Curtis,
"Don't Make Waves", but I don't know if "Always You" is in it. It
was just on TMC about a month ago. Their album, "Captain Nemo",
is available on CD:
-----
Michael Coxe:
Not on any soundtrack except the ones playing constantly in
the heads of pop-psych lovers worldwide. The song was written
by Roger Nichols and Peter Asher, with a different version
subsequently included in the Sundowners' album "Captain Nemo".
I'm happily surprised that "Captain Nemo" now has its own
Wikipedia article, which contains additional Sundowners info:
According to this article the "Always You" single was produced
by Bones Howe, and arranged by noted jazz chartist Hill Holman.
-----
Rich Grunke:
Neat sounding record. This side has an Association feel. Flip it
over and you get the Beatles with "Dear Undecided".
-----
- Bob Haldeman:
> I don't much about it, but I found a copy of the songIs this a series of Sunshine Pop compilations (as that is a favorite genre
> on vol. 4 of "Melody Goes On."
of mine)? On a related note, I have searched off and on for the Get Easy
Sunshine Pop collection. Any suggestions other than eBay (e.g. special
import sites) where I might be able to find a copy would be greatly
appreciated.
Best,
Justin McDevitt
| https://groups.yahoo.com/neo/groups/spectropop/conversations/topics/45596 | CC-MAIN-2015-48 | en | refinedweb
Initializes a Slapi_Mod structure that is a wrapper for an existing LDAPMod.
#include "slapi-plugin.h"

void slapi_mod_init_byref(Slapi_Mod *smod, LDAPMod *mod);
This function takes the following parameters:
smod
    Pointer to an uninitialized Slapi_Mod.
mod
    Pointer to the existing LDAPMod that the Slapi_Mod will reference.
This function initializes a Slapi_Mod containing a reference to an LDAPMod. Use this function when you have an LDAPMod and would like the convenience of the Slapi_Mod functions to access it. | http://docs.oracle.com/cd/E19693-01/819-0996/aaijc/index.html | CC-MAIN-2015-48 | en | refinedweb |
Namespaces are obsolete
To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question.
However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it’s a superior concept to namespaces, let me recapitulate what namespaces are and why they’ve been so good to us over the years…
Namespaces are used for a few different things:
- Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package.
- Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases.
- Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings on the other hand, tend to think linearly, which is why the Windows explorer for example has tried in a few different ways to flatten the file system hierarchy for the user.
1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it’s not desirable in any way today.
2 may not be always reasonably worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note however that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant.
3 is, more than anything, an aesthetic choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think of it, we never think we need to import "Microsoft.SqlServer.Management.Smo.Agent", and that name would be very hard to remember anyway. What we really want to do is bring nHibernate into our app.
And this is precisely what you'll do with modern package managers and module loaders. I want to take the specific example of the CommonJS require function, as implemented by Node (RequireJS offers a similar module pattern for the browser).

Here is how you import a module:
var http = require("http");
This is of course importing a HTTP stack module into the code. There is no noise here. Let’s break this down.
Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module’s code. Whatever scoping mechanism is provided by the language would be fine here.
Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure with namespaces, it is here on the frontline. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names and any possible collision will get solved very easily by picking a different name.
Wait a minute, I hear some of you say. This is only taking care of collisions on the client-side, on the left of that assignment. What if I have two libraries with the name "http"? Well, you can better qualify the path to the module, which is what the require parameter really is.
As for hierarchical organization, you don’t really want that, do you?
This module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices.
First, it promotes usage of self-contained, single responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point…
Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that’s bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools.
I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this for example. What do you think? This is the half-baked result of some morning shower reflections, and I’d love to read your thoughts about it. What am I missing? | http://weblogs.asp.net/bleroy/namespaces-are-obsolete | CC-MAIN-2015-48 | en | refinedweb |
Processing Image Pixels, Color Intensity, Color Filtering, and Color Inversion
Java Programming, Notes # 406
- Preface
- Background Information
- Preview
- Discussion and Sample Code
- Communication between the Programs
- Run the Programs
- Summary
- What's Next
- Complete Program Listings
Preface
Fourth in a series
The first lesson in the series was entitled Processing Image Pixels using Java, Getting Started. The previous lesson was entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness.
A framework or driver program
The lesson entitled Processing Image Pixels using Java, Getting Started provided and explained a program named ImgMod02 that makes it easy to:
- Manipulate and modify the pixels that belong to an image.
- Display the processed image along with the original image.
The lesson entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness provided an upgraded version of that program named ImgMod02a. ImgMod02a serves as a driver that controls the execution of a second program that actually processes the pixels. The program that I will explain in this lesson runs under the control of ImgMod02a. In order to compile and run the program that I will provide in this lesson, you will need to go to the lessons entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness and Processing Image Pixels using Java, Getting Started to get copies of the program named ImgMod02a and the interface named ImgIntfc02.
Purpose of this lesson
The purpose of this lesson is to teach you how to write a Java program that can be used to:
- Control color intensity
- Apply color filtering
- Apply color inversion.
Sample program output
I will begin this lesson by showing you three examples of the types of things that you can do with this program. I will discuss the examples very briefly here and will discuss them in more detail later in the lesson.
Color intensity control
Figure 1 shows an example of color intensity control. The bottom image in Figure 1 is the result of reducing the intensity of every color pixel to fifty-percent of its original value. As you can see, this basically caused the intensity of the entire image to be reduced resulting in a darker image where the colors were somewhat washed out.
The user interface GUI
Figure 2 shows the state of the user interface GUI that produced Figure 1. Each of the three sliders in Figure 2 controls the intensity of one of the colors red, green, and blue. The intensity of each color can be adjusted within the range from 0% to 100% of its original value.
Each of the sliders in Figure 2 was adjusted to a value of 50, causing the intensity of every color in every pixel to be reduced to 50% of its original value.
(Note that the check box at the top was not checked. I will explain the purpose of this checkbox later.)
Color filtering
Figure 3 shows an extreme example of color filtering.
(I elected to provide an extreme example so that the results would be obvious.)
In Figure 1, there was no modification of any color relative to any other color. (The value of every color was adjusted to 50% of its original value.) However, in Figure 3, the relative intensities of the three colors were modified relative to each other.
There was no change to the color values for any of the red pixels in Figure 3. The color values for all of the green pixels were reduced to 50% of their original values. The color values for all blue pixels were reduced to zero. Thus, the color blue was completely eliminated from the output.
As you can see, modifying the pixel color values in this way caused the overall color of the processed image to be more orange than the original.
(Some would say that the processed image in Figure 3 is warmer than the original image in Figure 3 because it emphasizes warm colors rather than cool colors.)
The user interface GUI for Figure 3
Figure 4 shows the state of the user interface GUI that produced Figure 3.
The red slider in Figure 4 is positioned at 100, causing the red color values of all the pixels to remain unchanged. The green slider is positioned at 50, causing the green color values of all the pixels to be reduced to 50% of their original values. The blue slider is positioned at 0 causing the blue color values of all pixels to be reduced to 0.
Once again the checkbox at the top of Figure 4 is not checked. I will explain the purpose of this checkbox in the next section.
Color inversion
Figure 5 shows an example of color inversion with no color filtering.
(Note that it is also possible to apply a combination of color filtering and color inversion.)
What is color inversion?
I will have a great deal to say about color inversion later in this lesson. For now, suffice it to say that color inversion causes a change to all the colors in an image. That change is computationally economical, reversible, and usually obvious to the viewer. As you can readily see, the colors in the processed image in Figure 5 are obviously different from the colors in the original image.
The user interface GUI for Figure 5
Figure 6 shows the state of the user interface GUI that produced Figure 5.
The check box at the top of Figure 6 is checked, sending a message to the image-processing program to implement color inversion.
Each of the sliders in Figure 6 is positioned at 100. As a result, no color filtering was applied. As mentioned earlier, however, it is possible to combine color filtering with color inversion. In fact, by using comment indicators to enable and disable different blocks of code and recompiling, the program that I will discuss later makes it possible to combine color filtering and color inversion in two different ways:
- Filter first and then invert.
- Invert first and then filter.
The two different approaches can result in significantly different results.
Display format
The images shown in Figures 1, 3, and 5 were produced by the driver program named ImgMod02a. The user interface GUIs in Figures 2, 4, and 6 were produced by the program named ImgMod15.
As in all of the graphic output produced by the driver program named ImgMod02a, the original image is shown at the top and the processed image is shown at the bottom.
An interactive image-processing program
The image-processing program named ImgMod15 illustrated by the above figures allows the user to interactively
- Control the color intensity
- Apply color filtering
- Apply color inversion
Color intensity and color filtering are controlled by adjusting the three sliders where each slider corresponds to one of the colors red, green, and blue.
Color inversion is controlled by checking or not checking the check box near the top of the GUI.
After making adjustments to the GUI, the user presses the Replot button shown at the bottom of Figures 1, 3, and 5 to cause the image to be reprocessed and replotted.
File formats
The earlier lesson introduced and explained the concept of a pixel. In addition, the lesson provided a brief discussion of image files, and indicated that the program named ImgMod02a is compatible with gif files, jpg files, and possibly some other file formats as well.
Display of processed image results
When the image-processing program completes its work, the driver program named ImgMod02a:
- Receives a reference to a three-dimensional array object containing processed pixel data from the image-processing program.
- Displays the original image and the processed image in a stacked display as shown in Figure 1.
Reprocessing with different parameters
In addition, the way in which the two programs work together makes it possible for the user to:
- Provide new input data to the image-processing program.
- Invoke the image-processing program again.
- Create a new display showing the newly-processed image along with the original image.
The manner in which all of this communication between the programs is accomplished was explained in the earlier lesson entitled Processing Image Pixels using Java, Getting Started.
Will concentrate on the three-dimensional array of type int
This lesson will show you how to write an image-processing program that receives raw pixel data in the form of a three-dimensional array of type int, and returns processed pixel data in the form of a three-dimensional array of type int. The program is designed to achieve the image-processing objectives described.
Preview
Three programs and one interface
The program that I will discuss in this lesson requires the program named ImgMod02a and the interface named ImgIntfc02 for compilation and execution. I provided and explained that material in the earlier lessons entitled Processing Image Pixels using Java, Getting Started and Processing Image Pixels Using Java: Controlling Contrast and Brightness.
I will present and explain a new Java program named ImgMod15 in this lesson. This program, when run under control of the program named ImgMod02a, will produce outputs similar to those shown in Figures 1, 3, and 5.
(The results will be different if you use a different image file or provide different user input values.)
I will also provide, (but will not explain) a simple program named ImgMod27. This program can be used to display (in 128 different panels) all of the 16,777,216 different colors that can be produced using three primary colors, each of which can take on any one of 256 values. The different colors are displayed in groups of 131,072 colors in each panel.
The processImg method
The program named ImgMod15 defines a class that implements the interface named ImgIntfc02, which declares a single method named processImg. The processImg method receives a three-dimensional array of pixel data and must return a reference to a three-dimensional array of processed pixel data, which the driver program then displays along with the original image (see Figure 1 for an example of the display format).
Usage information for ImgMod02a and ImgMod15
To use the program named ImgMod02a to drive the program named ImgMod15, enter the following at the command line:
java ImgMod02a ImgMod15

(You can right-click on the images in Figures 16, 17, and 18 to download and save the images used in this lesson. Then you should be able to replicate the results shown in the various figures in this lesson.)
Image display format
When the program is started, the original image and the processed image are displayed in a frame with the original image above the processed image. The two images are identical when the program first starts running.

The program named ImgMod15 provides a GUI for user input, as shown in Figure 2. The sliders on the GUI make it possible for the user to provide different filter values for red, green, and blue each time the image-processing method is rerun. The check box near the top of the GUI makes it possible for the user to request that the colors in the image be inverted.
To rerun the image-processing method with different parameters, adjust the sliders, optionally check the check box in the GUI, and then press the Replot button at the bottom of the main display.
Discussion and Sample Code
The program named ImgMod15
This program illustrates how to control color intensity, apply color filters, and apply color inversion to an image.
The program is designed to be driven by the program named ImgMod02a.
The before and after images
The program places two GUIs on the screen. One GUI displays the "before" and "after" versions of an image that is subjected to color intensity control, color filtering, and color inversion.
The image at the top of this GUI is the "before" image. The image at the bottom is the "after" image. An example is shown in Figure 1.
The user interface GUI
The other GUI provides instructions and components by which the user can control the processing of the image. An example of the user interface GUI is shown in Figure 2.
A check box appears near the top of this GUI. If the user checks the check box, color inversion is performed. If the check box is not checked, no color inversion is performed.
This GUI also provides three sliders that make it possible for the user to control color intensity and color filtering. Each slider controls the intensity of a single color. The intensity control ranges from 0% to 100% of the original intensity value for each color for every pixel.
Controlling color intensity
If all three sliders are adjusted to the same value and the replot button is pressed, the overall intensity of the image is modified with no change in the relative contribution of each color. This makes it possible to control the overall intensity of the image from very dark (black) to the maximum intensity supported by the original image. This is illustrated in Figure 1.
Color filtering
If the three sliders are adjusted to different values and the replot button is pressed, color filtering occurs. In this case, the intensity of each color is changed relative to the intensity of the other colors. This makes it possible, for example to adjust the "warmth" of the image by emphasizing red over blue, or to make the image "cooler" by emphasizing blue over red. This is illustrated in Figure 3.
A greenscale image
It is also possible to totally isolate and view the individual contributions of red, green, and blue to the overall image as illustrated in Figure 7.
The values for red and blue were set to zero for all of the pixels in the processed image in Figure 7. This leaves only the differing green values for the individual pixels, producing what might be thought of as a greenscale image (in deference to the use of the term grayscale for a common class of black, gray, and white images).
The user interface GUI for Figure 7
Figure 8 shows the state of the user interface GUI that produced the processed image in Figure 7. As you can see, the sliders for red and blue were set to zero causing all red and blue color values to be set to zero. The slider for green was set to 100 causing the green value for every pixel to remain the same as in the original image.
The checkbox was not checked. Therefore, color inversion was not performed.
Which comes first, the filter or the inversion?
As written, the program applies color filtering before it applies color inversion. As you will see later, sample code is also provided that can be used to modify the program to cause it to provide color inversion before it applies color filtering. There is a significant difference in the results produced by these two approaches, and you may want to experiment with them.
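To see how much the order matters, consider a single red value of 200 with its slider at 50%. The sketch below is my own illustration of the two orderings, not code taken from ImgMod15:

```java
// Demonstrates that color filtering and color inversion do not commute.
// Illustrative sketch only; the class and method names are hypothetical.
public class OrderDemo {
    static int filterThenInvert(int color, double scale) {
        // Scale first, then subtract from 255.
        return 255 - (int)(color * scale);
    }
    static int invertThenFilter(int color, double scale) {
        // Subtract from 255 first, then scale.
        return (int)((255 - color) * scale);
    }
    public static void main(String[] args) {
        int color = 200;      // original red value
        double scale = 0.5;   // 50% slider setting
        System.out.println(filterThenInvert(color, scale)); // 255 - 100 = 155
        System.out.println(invertThenFilter(color, scale)); // 55 * 0.5 = 27
    }
}
```

The same pixel ends up at 155 in one case and 27 in the other, which is why the two variants of the program look so different.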
A practical example of color inversion
As a side note, Microsoft Word and Microsoft FrontPage appear to use color inversion to change the colors in images that have been selected for editing. I will have more to say about this later.
Beware of transparent images
This program illustrates the modification of red, green, and blue values belonging to all the pixels in an image. It works best with an image that contains no transparent areas. The pixel modifications performed in this program have no impact on transparent pixels. Therefore, if you don't see what you expect when you process an image, it may be because your image contains transparent pixels.
Will discuss in fragments
I will break the program down into fragments for discussion. A complete listing of the program is provided in Listing 8 near the end of the lesson.
The ImgMod15 class
The ImgMod15 class begins in Listing 1. In order to be suitable for being driven by the program named ImgMod02a, this class must implement the interface named ImgIntfc02.
The class extends Frame, because an object of this class is the user interface GUI shown in Figures 2, 4, 6, and 8. The code in Listing 1 declares four instance variables that will refer to the check box and the three sliders in Figure 8.
The constructor for ImgMod15
The constructor is shown in its entirety in Listing 2. Because of the way that an object of the class is instantiated by ImgMod02a, the constructor is not allowed to take any parameters.
Although the code in Listing 2 is rather long, all of the code in Listing 2 is straightforward if you are familiar with the construction of GUIs in Java. If you are not familiar with such constructions, you should study some of my other lessons on this topic. As mentioned earlier, you will find an index to all of my lessons at.
The processImg method
To be compatible with ImgMod02a, the image-processing class must define a method named processImg that receives the incoming three-dimensional array of pixel data and returns a reference to a three-dimensional array of processed pixel data.
The beginning of the processImg method is shown in Listing 3.
It's best to make and modify a copy
Normally the processImg method should make a copy of the incoming array and process the copy rather than modifying the original. Then the method should return a reference to the processed copy of the three-dimensional pixel array. The code in Listing 3 makes such a copy.
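In Java, cloning a multi-dimensional array copies only the outermost layer of references, so a true copy has to walk the rows and columns. Here is a minimal sketch of the idea (the data[row][col][band] layout is an assumption carried over from the earlier lessons in this series):

```java
// Deep-copies a 3D pixel array so the original image data is never modified.
public class PixelCopy {
    public static int[][][] deepCopy(int[][][] data) {
        int[][][] copy = new int[data.length][][];
        for (int row = 0; row < data.length; row++) {
            copy[row] = new int[data[row].length][];
            for (int col = 0; col < data[row].length; col++) {
                // clone() on a one-dimensional int array copies the values
                copy[row][col] = data[row][col].clone();
            }
        }
        return copy;
    }
}
```

After this, modifications to the copy leave the original pixel array untouched, which is what lets the driver keep showing the "before" image.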
Get the slider values
The code in Listing 4 gets the current values of each of the three sliders. This information will be used to scale the red, green, and blue pixel values to new values in order to implement color intensity control and color filtering. The new color values can range from 0% to 100% of the original values.
Process each color value
The code in Listing 5 is the beginning of a for loop that is used to process each color value for every pixel. The boldface code in Listing 5 is executed for the case where the check box near the top of Figure 2 has not been checked.
In this case, each color value for every pixel is multiplied by a scale factor that is determined by the position of the slider corresponding to that color. In effect, the product of the color value and the scale factor causes the processed color value to range from 0% to 100% of the original color value.
Note that the code in Listing 5 is the first half of an if-else statement.
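The scaling step can be sketched as follows (this is an outline of the idea, not the actual Listing 5; the 0 to 100 slider value is mapped to a 0.0 to 1.0 factor):

```java
// Scales a color value (0-255) by a slider setting (0-100 percent).
// Because the factor is at most 1.0, the result can never overflow 255.
public class ColorScale {
    public static int scale(int colorValue, int sliderPercent) {
        return (int)(colorValue * sliderPercent / 100.0);
    }
    public static void main(String[] args) {
        System.out.println(scale(255, 50));  // 127
        System.out.println(scale(200, 100)); // 200 (unchanged)
        System.out.println(scale(80, 0));    // 0 (color removed)
    }
}
```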
Apply color inversion
In the event that the color-inversion check box is checked, the boldface code in Listing 6 is executed instead of the boldface code in Listing 5. The code in Listing 6 first applies color filtering using the slider values and then applies color inversion.
The formula for color inversion
Recall that an individual color value can fall anywhere in the range from 0 to 255. The code in Listing 6 performs color inversion by subtracting the scaled color value from 255. Therefore, a scaled color value of 200 would be inverted into a value of 55. Likewise, a scaled color value of 55 would be inverted into a value of 200. Thus, the inversion process can be reversed simply by applying it twice in succession.
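In code, the inversion is a single subtraction, and the short sketch below also confirms the reversibility claim by inverting twice:

```java
// Color inversion: subtract each color value (0-255) from 255.
// Inverting twice restores the original value, so the process is reversible.
public class Invert {
    public static int invert(int colorValue) {
        return 255 - colorValue;
    }
    public static void main(String[] args) {
        System.out.println(invert(200));         // 55
        System.out.println(invert(invert(200))); // 200, back to the original
    }
}
```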
Since it may not be obvious what the results of such an operation will be, I will discuss the ramifications of color inversion in some detail.
An experiment
Let's begin with an experiment. You will need access to either Microsoft Word or Microsoft FrontPage to perform this experiment.
Get and save the image
Figure 5 shows the result of performing color inversion on an image of a starfish. The original image is shown at the top of Figure 5 and the color-inverted image is shown at the bottom of Figure 5. Begin the experiment by right-clicking the mouse on the image in Figure 5 and saving the image locally on your disk.
Insert the image into a Word or FrontPage document
Now create a new document in either Microsoft Word or Microsoft FrontPage and type a couple of paragraphs of text into the new document.
Insert the image that you saved between the paragraphs in your document. It should be the image with the tan starfish at the top and the blue starfish at the bottom.
Select the image
Now use your mouse and select some of the text from both paragraphs. Include the image between the paragraphs in the selection. If your system behaves like mine, the starfish at the top should turn blue and the starfish at the bottom should turn tan. In other words, the two images should be exactly the same except that their positions should be reversed.
What does this mean?
Whenever an image is selected in an editor program like Microsoft Word or Microsoft FrontPage, some visual change must be made to the image so that the user will know that the image has been selected. It appears that Microsoft inverts the colors in selected images in Word and FrontPage for this purpose.
(Note, however, that the Netscape browser, the Netscape Composer, and the Internet Explorer browser all use a different method for indicating that an image has been selected, so this is not a universal approach.)
Why use inverted colors?
Color inversion is a very good way to change the colors in a selected image. The approach has several very good qualities.
Computationally economical
To begin with, inverting the colors is computationally economical. All that is required computationally to invert the colors is to subtract each color value from 255. This is much less demanding of computer resources than would be the case if the computation required multiplication or division, for example.
Overflow is not possible
Whenever you modify the color values in a pixel, you must be very careful to make sure that the new color value is within the range from 0 to 255. Otherwise, serious overflow problems can result. The inversion process guarantees that the new color value will fall within this range, so overflow is not possible.
A reversible process
The process is guaranteed to be reversible with no requirement to maintain any information outside the image regarding the original color values in the image. All that is required to restore the inverted color value back to the original color value is to subtract the inverted color value from 255. The original color value is restored after two successive inversions. Thus, it is easy and economical to switch back and forth between original color values and inverted color values.
Given all of the above, I'm surprised that the color-inversion process isn't used by programs other than Word and FrontPage.
Another example of color inversion
The color values in a digitized color film negative are similar to (but not identical to) the inverse of the colors in the corresponding color film positive. Therefore, some photo processing programs begin the process of converting a digitized color film negative to a positive by inverting the colors. Additional color adjustments must usually be made after inversion to get the colors just right.
You will find an interesting discussion of this process in an article entitled Converting negative film to digital pictures by Phil Williams.
What will the inverted color be?
Another interesting aspect of color inversion has to do with knowing what color will be produced by applying color inversion to a pixel with a given color. For this, let's look at another example shown in Figures 9 and 10.
Figure 9 shows the result of applying color inversion to the pure primary colors red, green, and blue.
The color bar at the top in Figure 9 shows the three primary colors. The color bar at the bottom shows the corresponding inverted colors.
No color filtering was applied
Figure 10 shows that no color filtering was involved. The colors shown in the bottom image of Figure 9 are solely the result of performing color inversion on the top image in Figure 9.
Experimental results
From Figure 9, we can conclude experimentally that applying color inversion to a pure red pixel will cause the new pixel color to be aqua. Similarly, applying color inversion to a pure green pixel will cause the new pixel color to be fuchsia. Finally, applying color conversion to a pure blue pixel will cause the new pixel color to be yellow. To summarize:
- Red inverts to aqua
- Green inverts to fuchsia
- Blue inverts to yellow
An explanation of the results
Consider why the experimental results turn out the way that they do. Consider the case of the pure blue pixel. The red, green, and blue color values for that pixel are as shown below:
- R = 0
- G = 0
- B = 255
Let the inverted color values be given by R', G', and B'. Looking back at the code in Listing 6 (with no color filtering applied), the color values for the pixel following the inversion will be:
- R' = 255 - 0 = 255
- G' = 255 - 0 = 255
- B' = 255 - 255 = 0
The inverted color is yellow
Thus we end up with a pixel having full color intensity for red and green and no intensity for blue. What do we get when we mix red and green in equal amounts? The answer is yellow. Adding equal amounts of red and green produces yellow. Hence, the inverted color for a pure blue pixel is yellow, as shown in Figure 9 and explained on the basis of the arithmetic.
We could go through a similar argument to determine the colors resulting from inverting pure red and pure green. The answers, of course, would be aqua for red and fuchsia for green.
A more difficult question
What colors are produced by inverting pixels that are not pure red, green, or blue, but rather consist of weighted mixtures of red, green, and blue?
The answer to this question requires a bit of an extrapolation on our part. First, let's establish the colors that result from mixing equal amounts of the three primary colors in pairs.
- red + green = yellow (bottom right in Figure 9)
- red + blue = fuchsia (bottom center in Figure 9)
- green + blue = aqua (bottom left in Figure 9)
A simple color wheel
Now let's construct a simple color wheel. Draw a circle and mark three points on the circle at 0 degrees, 120 degrees, and 240 degrees. Label the first point red, the second point green, and the third point blue.
Now mark three points on the circle half way between the three points described above. Label each of these points with the color that results from mixing equal quantities of the colors identified with that point's neighbors. For example, the point half way between red and green would be labeled yellow. The point half way between green and blue would be labeled aqua, and the point half way between blue and red would be labeled fuchsia.
Look across to the opposite side
Now note the color that is on the opposite side of the circle from each of the primary colors. Aqua is opposite of red. Fuchsia is opposite of green, and yellow is opposite of blue. Comparing this with the colors shown in Figure 9, we see that the color that results from inverting one of the primary colors on the circle is the color that appears on the opposite side of the color wheel.
A reversible process
Earlier I told you that the inversion process is reversible. For example, if we have a full-intensity yellow pixel, the color values for that pixel will be:
- R = 255
- G = 255
- B = 0
If we invert the colors for that pixel, the result will be:
- R' = 255 - 255 = 0
- G' = 255 - 255 = 0
- B' = 255 - 0 = 255
Thus, the color of the inverted yellow pixel is blue, which is the color that is opposite yellow on the circle.
General conclusion
In general, we can conclude that if we invert a pixel whose color corresponds to a color at a point on the color wheel, (such as the color wheel shown in Figure 11), the color of the inverted pixel will match the color at the corresponding point on the opposite side of the color wheel.
Experimental confirmation
We can demonstrate this experimentally by inverting the image of the color wheel without performing any color filtering. The result of such an inversion is shown in the bottom half of Figure 12. Once again, the original image of the color wheel is shown at the top, and the inverted image of the color wheel is shown at the bottom.
As you can see in Figure 12, each of the colors in the original image moved to the opposite side of the wheel when the color wheel was inverted.
Also, you can see from Figure 12 that white pixels turn into black pixels and black pixels turn into white pixels when they are inverted. You should be able to explain that by considering the color values for black and white pixels along with the inversion formula.
Another exercise
Another exercise might be useful. You can use the color wheel in Figure 11 to explain what happened to the colors when the starfish image was inverted in Figure 5. Pick a point on the starfish in the original image in Figure 5 and note the color of that point. Then find a point on the color wheel of Figure 11 whose color matches that point, and find the corresponding point on the opposite side of the color wheel. The color of that point should match the color of the corresponding point on the inverted starfish image at the bottom of Figure 5.
May not have found the matching point
A potential problem here is that you may not be able to find a point on the color wheel that matches the color of a point on the starfish. That is because any individual pixel on the starfish can take on any one of 16,777,216 different colors. The colors shown on the color wheel are a small subset of that total and may not include the color of a specific point on the starfish.
Difficulty of displaying 3-dimensional data
The problem that we have here is the classic problem of trying to represent a three-dimensional entity in a two-dimensional display medium. Pixel color is a three-dimensional entity, with the dimensions being red, green, and blue. Each of the three color values belonging to a pixel can take on any one of 256 different values. It is very difficult to represent that on a flat two-dimensional screen, and a color wheel is just one of many schemes that have been devised in an attempt to do so.
Could display as a cube
One way to represent these 16,777,216 colors is as a large cube having eight corners and six faces. Consider the large cube to be made up of 16,777,216 small cubes, each being a different color. Arrange the small cubes so as to form the large cube with 256 cubes (colors) along each edge. Thus, each face is a square with 256 small cubes along each side.
Arrange the small cubes so that the colors of the cubes at the corners on one face are black, blue, green, and aqua as shown in the top half of Figure 13. Arrange the remaining cubes on that face to contain the same colors in the same order as that shown in the top half of Figure 13.
(The colors in the bottom half of Figure 13 are the inverse of the colors shown in the top half.)
The opposite face
Arrange the small cubes such that the diagonal corners on the opposite face are set to white, yellow, red, and fuchsia as shown in the top half of Figure 14. Recall that these colors are the inverse of black, blue, green, and aqua. Arrange additional small cubes such that the colors on that face progress in an orderly manner between the colors at the corners as shown in the top half of Figure 14.
Inverse colors
Each of the colors in the top half of Figure 14 is the inverse of the color at the diagonally opposite location on the face shown in Figure 13. For example, the yellow hues near the bottom left corner of Figure 14 are the inverse of the blue hues near the upper right corner in Figure 13.
(Also, the colors in the bottom half of Figure 14 are the inverse of the colors at the corresponding locations in the upper half of Figure 14.)
Can't show all 16,777,216 colors
In order for me to show you all 16,777,216 colors, I would have to display 128 panels like those shown in Figures 13 and 14. Each panel would represent two slices cut through the cube parallel to the two faces shown in Figures 13 and 14.
(The top half of the panel would represent one slice and the bottom half would represent the other slice.)
Each slice would represent the colors produced by combining a different value for red with all possible combinations of the values for green and blue. Obviously, it would be impractical for me to attempt to display 128 such panels in this lesson.
(Because each panel shows the raw colors at the top and the inverse colors at the bottom, only 128 such panels would be required. If only the raw colors were shown in each panel, 256 panels would be required to show all 16,777,216 colors.)
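The counts quoted above follow directly from the arithmetic. The following sketch (my own, not one of the lesson's programs) verifies them:

```java
// Panel arithmetic for the color cube: 256 values per channel,
// 256 x 256 colors per slice, two slices (raw and inverse) per panel.
public class CubeCounts {
    static final int PER_CHANNEL = 256;

    static long totalColors()    { return (long) PER_CHANNEL * PER_CHANNEL * PER_CHANNEL; }
    static int  colorsPerSlice() { return PER_CHANNEL * PER_CHANNEL; }
    static int  panelsNeeded()   { return PER_CHANNEL / 2; }

    public static void main(String args[]) {
        System.out.println("total colors     = " + totalColors());     // 16777216
        System.out.println("colors per slice = " + colorsPerSlice());  // 65536
        System.out.println("panels needed    = " + panelsNeeded());    // 128
    }
}
```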
Two slices from inside the cube
The top half of Figure 15 shows a slice through the cube for a red value of 50 combined with all possible values for green and blue. The bottom half shows a slice for the inverse red value given by (255 - 50) or 205.
(Once again, the colors in the bottom half of Figure 15 are the inverse of the colors in the top half of Figure 15.)
You can generate the colors yourself
Since it is impractical for me to show you all 16,777,216 colors and their inverse, I am going to do the next best thing. Listing 9 contains the program named ImgMod27 that I used to produce the output shown in Figures 13, 14, and 15. You can compile this program and run it yourself for any value of red from 0 to 255. Just enter the red value as a command-line parameter.
The top half of the output produced by the program displays the 65,536 colors represented by a single slice through the cube parallel to the faces shown in Figures 13 and 14. The bottom half of the output in each case represents the inverse of the colors shown in the top half.
Most colors don't have names
Most of the different colors don't have names, and even if they all did have names, most of us wouldn't have them all memorized. Therefore, it is impossible for me to describe in a general sense the color that will be produced by inverting a pixel having one of the 16,777,216 possible colors.
Contribution of red, green, and blue
By doing a little arithmetic, I can describe the inverse color numerically by indicating the contribution of red, green, and blue, but most of us would probably have difficulty seeing the color in our mind's eye even if we knew the contribution of red, green, and blue.
The colors that result from some combinations of red, green, and blue are intuitive, and others are not. For example, I have no difficulty picturing that red plus blue produces fuchsia, and I have no difficulty picturing that green plus blue produces aqua. However, I am unable to picture that red plus green produces yellow. That seems completely counter-intuitive to me. I don't see anything in yellow that seems to derive from either red or green.
Of course, things get even more difficult when we start thinking about mixtures of different contributions of all three of the primary colors.
Back to experimentation
So, that brings us back to experimentation. The program in Listing 9 can be used to produce any of the 16,777,216 colors in groups of 65,536 colors, along with the inverse of each color in the group. Perhaps you can experiment with this program to produce the color that matches a point on the starfish at the top of Figure 5. If so, the inverse color shown in your output will match the color shown in the corresponding point on the starfish at the bottom of Figure 5.
And that is probably more than you ever wanted to hear about color inversion.
The remaining code
Now back to the main program named ImgMod15. The remaining code in the program is shown in Listing 7.
(Note that the boldface code in Listing 7 is inside a comment block.)
As I mentioned earlier, the boldface code in Listing 6 filters (scales) the pixel first and then inverts the pixel. In some cases, it might be useful to reverse this process by replacing the boldface code in Listing 6 with the boldface code in Listing 7. This code inverts the color of the pixel first and then applies the filter. If you filter and you also invert, the order in which you perform these two operations can be significant with respect to the outcome.
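To see why the order matters, consider a single green value passed through a hypothetical 50 percent filter (the method names below are my own, not taken from ImgMod15):

```java
// A hypothetical 50 percent filter applied to one color value,
// combined with inversion in both possible orders.
public class FilterOrder {
    static int filterThenInvert(int v) { return 255 - (int) (v * 0.5); }
    static int invertThenFilter(int v) { return (int) ((255 - v) * 0.5); }

    public static void main(String args[]) {
        int green = 200;
        System.out.println("filter then invert: " + filterThenInvert(green)); // 155
        System.out.println("invert then filter: " + invertThenFilter(green)); // 27
    }
}
```

The two orders produce very different values, which is why swapping the boldface code changes the appearance of the output image.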
The remaining code in Listing 7 signals the end of the processImg method and the end of the ImgMod15 class.
Communication between the Programs
In case you are interested in the details, this section describes how the program named ImgMod02a communicates with the image-processing program. If you aren't interested in this much detail, just skip to the section entitled Run the Programs.
Instantiate an image-processing object
During execution, the program named ImgMod02a:
- Has the pixel data in the correct format
- Has an image-processing object that will process those pixels and will return an array containing processed pixel values
All that the ImgMod02a program needs to do at this point is to invoke the processImg method on the image-processing object passing the pixel data along with the number of rows and columns of pixels as parameters.
Posting a counterfeit ActionEvent
The ImgMod02a program posts a counterfeit ActionEvent to cause the processed pixel data to be displayed as an image below the original image, as shown in Figure 1.
Run the Programs
I encourage you to copy, compile, and run the programs named ImgMod15 and ImgMod27 provided in this lesson. Experiment with them, making changes and observing the results of your changes.
Process a variety of images
Download a variety of images from the web and process those images with the program named ImgMod15.
(Be careful of transparent pixels when processing images that you have downloaded from the web. Because of the quality of the data involved, you will probably get better results from jpg images than from gif images. Remember, you will also need to copy the program named ImgMod02a and the interface named ImgIntfc02 from the earlier lessons entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness and Processing Image Pixels using Java, Getting Started.)
View a large number of different colors
Compute and observe the colors and their inverse for various slices through the color cube as provided by the program named ImgMod27.
Change the order of filtering and inversion
Run some experiments to determine the difference in results for various images based on filtering before inverting and on inverting before filtering.
(Of course, if you don't filter, it won't matter which approach you use.)
Write an advanced filter program
Write an advanced version of the program that applies color filtering by allowing you to control both the location and the width of the distribution for each of the three colors separately. You can get some ideas on how to do this from the program entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness.
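As a starting point for that exercise, here is one possible shape for such a filter (entirely hypothetical; the method and parameter names are my own): a bell-shaped weight whose center (location) and width can be set independently for each color:

```java
// A bell-shaped weight in [0, 1] for a color value v: full weight
// at the chosen center, falling off as controlled by width.
public class BellFilter {
    static double weight(int v, double center, double width) {
        double d = (v - center) / width;
        return Math.exp(-d * d);
    }

    public static void main(String args[]) {
        System.out.println(weight(128, 128.0, 40.0)); // 1.0 at the center
        System.out.println(weight(208, 128.0, 40.0)); // small, far from the center
    }
}
```

Multiplying each color value by such a weight would pass values near the center largely unchanged while suppressing values far from it.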
Replicate the results
To replicate the results shown in this lesson, right-click and download the jpg image files in Figures 17, 18, and 19 below.
Have fun and learn
Above all, have fun and use this program to learn as much as you can about manipulating images by modifying image pixels using Java.
Test images
Figures 17, 18, and 19 contain the jpg images that were used to produce the results shown in this lesson. You should be able to right-click on the images to download and save them locally. Then you should be able to replicate the results shown in this lesson.
Figure 17
Figure 18
Figure 19
Summary
In this lesson, I showed you how to write a Java program that can be used to:
- Control color intensity
- Apply color filtering
- Apply color inversion
I provided several examples of these capabilities. In addition, I explained some of the theory behind color inversion and showed you how to relate the colors on original and inverted pixels to points on a color wheel as well as pixels in a color cube.
What's Next?
Future lessons will show you how to write image-processing programs that implement many common special effects as well as a few that aren't so common. This will include programs to do the following:
- Blur all or part of an image.
- Deal with the effects of noise in an image.
- Sharpen all or part of an image.
- Perform edge detection on an image.
- Morph one image into another image.
- Rotate an image.
- Change the size of an image.
- Produce other special effects that I may dream up or discover while doing the background research for the lessons in this series.
Complete Program Listings
Complete listings of the programs discussed in this lesson are provided in Listings 8 and 9.
May 21, 1998
This issue presents tips, techniques, and sample code for the following topics:
Temporary Files
In programming applications you often need to use temporary files --
files that are created during program execution to hold transient information.
A typical case is a language compiler that uses several passes (such as
preprocessing or assembly) with temporary files used to hold the output
of the previous pass. In some cases, you could use memory instead of disk
files, but you can't always assume that the required amount of memory will
be available.
One feature in JDK 1.2 is the ability to
create temporary files. These files are created in a specified directory
or in the default system temporary directory (such as C:\TEMP on Windows
systems). The temporary name is something like the following:
t:\tmp\tmp-21885.tmp
The same name is not returned twice during the lifetime of the Java[1] virtual
machine. The returned temporary file is in a File object and can be used
like any other file. Note: With Unix, you may find that your input file
has to also reside in the same file system where the temporary files are
stored. The renameTo method cannot rename files across file systems.
Here is an example of using temporary files to convert an input file to
upper case:
import java.io.*;

public class upper {
    public static void main(String args[])
    {
        // check command-line argument
        if (args.length != 1) {
            System.err.println("usage: upper file");
            System.exit(1);
        }
        String in_file = args[0];
        try {
            // create temporary file and mark it "delete on exit";
            // createTempFile takes a prefix and a suffix
            // (a null suffix selects the default ".tmp")
            File tmpf = File.createTempFile("tmp", null);
            tmpf.deleteOnExit();
            System.err.println("temp file = " + tmpf);

            // copy to temporary file, converting to upper case
            File inf = new File(in_file);
            FileReader fr = new FileReader(in_file);
            BufferedReader br = new BufferedReader(fr);
            FileWriter fw = new FileWriter(tmpf.getPath());
            BufferedWriter bw = new BufferedWriter(fw);
            String s = null;
            while ((s = br.readLine()) != null) {
                s = s.toUpperCase();
                bw.write(s, 0, s.length());
                bw.newLine();
            }
            br.close();
            bw.close();

            // rename temporary file back to original file
            if (!inf.delete() || !tmpf.renameTo(inf))
                System.err.println("rename failed");
        }
        catch (IOException e) {
            System.err.println(e);
        }
    }
}
The input file is copied to the temporary file, and the file contents are
converted to upper case. The temporary file is then renamed back to the
input file.
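As noted earlier, renameTo cannot rename files across file systems. One way to cope (a sketch of my own, not part of the original tip) is to fall back to copying the bytes and deleting the source when the rename fails:

```java
import java.io.*;

public class SafeRename {
    // Try a fast rename first; if that fails (for example because
    // src and dst live on different file systems), copy the bytes
    // and delete the source.
    public static boolean moveFile(File src, File dst) {
        if (src.renameTo(dst))
            return true;
        try {
            FileInputStream in = new FileInputStream(src);
            FileOutputStream out = new FileOutputStream(dst);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0)
                out.write(buf, 0, n);
            in.close();
            out.close();
            return src.delete();
        } catch (IOException e) {
            return false;
        }
    }

    // small self-check: move a scratch temp file and report the result
    static boolean demo() {
        try {
            File src = File.createTempFile("tmp", null);
            File dst = new File(src.getPath() + ".moved");
            FileWriter fw = new FileWriter(src);
            fw.write("hello");
            fw.close();
            boolean ok = moveFile(src, dst) && dst.exists() && !src.exists();
            dst.delete();
            return ok;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String args[]) {
        System.out.println("moved: " + demo());
    }
}
```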
JDK 1.2 also provides a mechanism whereby files can be marked for "delete
on exit." That is, when the Java virtual machine exits, the file is
deleted. An aspect worth noting in the above program is that this feature
handles the case where the temporary file is created, and then an error
occurs (for example, the input file does not exist). The delete-on-exit
feature guarantees that the temporary file is deleted in the case of
abnormal program termination.
Resource Bundles
One of the features supporting internationalization in the JDK is the "resource bundle." A resource bundle contains locale-specific objects, for example strings representing messages to be displayed in your application. The idea is to load a specific bundle of resources, based on a particular locale.
strings representing messages to be displayed in your application. The
idea is to load a specific bundle of resources, based on a particular
locale.
To show how this mechanism works, here's a short example that retrieves and
displays the phrase for "good morning" in two different languages:
# German greeting file (greet_de.properties)
morn=Guten Morgen
# English greeting file (greet_en.properties)
morn=Good morning
The above lines make up two text files, greet_de.properties and
greet_en.properties. These are simple resource bundles.
The following program accesses the resource bundles:
import java.util.*;

public class bundle {
    public static String getGreet(String f, String key, Locale lc)
    {
        String s = null;
        try {
            ResourceBundle rb = ResourceBundle.getBundle(f, lc);
            s = rb.getString(key);
        }
        catch (MissingResourceException e) {
            s = null;
        }
        return s;
    }

    public static void main(String args[])
    {
        String fn = "greet";
        String mornkey = "morn";
        Locale ger = Locale.GERMAN;
        Locale eng = Locale.ENGLISH;
        System.out.println("German locale = " + ger);
        System.out.println("English locale = " + eng);
        System.out.println(getGreet(fn, mornkey, ger));
        System.out.println(getGreet(fn, mornkey, eng));
    }
}
The idea is that ResourceBundle.getBundle looks up a particular bundle,
based on the locale name ("de" or "en"). The bundles
in this example are property files (see java.util.Properties), with
"key=value" pairs in them, and the files are located in the
current directory. A particular bundle is retrieved based on the locale,
and then a specific key is looked up, and the corresponding value returned.
Note that there are a number of additional aspects to resource bundle naming
and lookup that you should acquaint yourself with if you're concerned with
internationalization issues. Resource bundles are commonly used to represent
a collection of message strings, but other types of entities, such as icons,
can also be stored in bundles.
The output of the program is:
German locale = de
English locale = en
Guten Morgen
Good morning
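As noted above, bundles can hold objects other than strings. Here is a sketch (the key names and values are my own, invented for illustration) of a bundle defined in code with ListResourceBundle rather than in a property file:

```java
import java.util.*;

// A bundle defined in code rather than a property file; the icon
// name and retry count here are invented for illustration.
public class Greet_de extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {
            { "morn",    "Guten Morgen" },
            { "icon",    "flagge.gif" },
            { "retries", Integer.valueOf(3) },
        };
    }

    public static void main(String args[]) {
        ResourceBundle rb = new Greet_de();
        System.out.println(rb.getString("morn"));
        System.out.println(rb.getObject("retries"));
    }
}
```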
Finally, if you program your application's message display features in
terms of locales and resource bundles, as this example illustrates, then
you have taken an important step toward internationalizing your program.
_______
1 As used on this web site, the terms
"Java virtual machine" or "JVM" mean a virtual machine for the Java platform. | http://java.sun.com/developer/TechTips/1998/tt0521.html | crawl-002 | en | refinedweb |
Applets fuelled Java technology programming.
Where an applet is embedded directly in a web page, a Java Web Start application is described by a .JNLP file. For the SimpleExample application, the launch file SimpleExample.jnlp begins:

<?xml version="1.0" encoding="utf-8"?>
<!-- JNLP File for SimpleExample Application -->

The codebase attribute of the jnlp element tells Java Web Start where the application's resources live, and the <information> element supplies the title, vendor, and description shown to the user.

Having created the .JNLP file, create the SimpleExample.html file used to launch the application. The HTML file simply links to SimpleExample.jnlp; the application code itself is packaged in SimpleExample.jar. Finally, configure the web server to serve .jnlp files with the MIME type application/x-java-jnlp-file. (With the Jigsaw server, for example, this is done in jigadmin under http-server/Indexers/default, by mapping the jnlp extension to that MIME type.)
This simple example application illustrates some of JFC's features. It's worth noting that the networked application was launched by a single click and was decoupled from the browser (which would have restricted the user interaction).
Depending on the options set up for Java Web Start, launching the same networked application will create a desktop shortcut at the discretion of the user. Figure 3 shows the dialog box where the user can make that choice. Creation of the desktop shortcut happens only on Windows platforms.
This simple Hello World application was intended to drive home the concepts of Java Web Start. The Java Web Start installation provides demonstrations of applications that have a more advanced user interface; such applications are appropriately decoupled from the browser.
This application, however, does not perform any operations restricted by the sandbox. To illustrate more of Java Web Start's advantages, I'll next demonstrate an application that performs local disk access.
Java Web Start security
We just looked at a simple example; now, we will look at another simple example that performs some operations not permitted in the default sandbox environment. Trusted signed code can, however, leave the sandbox, as we'll see. As a basis, we will use some of the code I used in my earlier Java security articles. I've modified the Java program to exit when the window closes if invoked either as an application or using Java Web Start.
Here's the code, with modifications:
/**
 * This program writes to the file /tmp/foo.
 */
import java.awt.*;
import java.awt.event.*;
...
public static void main(String args[]) {
    Frame f = new Frame("writeFile");
    writeFile writefile = new writeFile();
    f.addWindowListener(new WindowAdapter() {
        public void windowClosing(WindowEvent e)
            { System.exit(0); } });
    f.add("Center", writefile);
    f.setSize(300, 100);
    f.show();
}
Copy the writeFile.class and writeFile$1.class (the class generated as a result of the anonymous inner class corresponding to the WindowAdapter) into a subdirectory and generate the signed jar file:
The .JNLP file used with this application illustrates how it's possible for signed code to execute outside the default sandbox environment provided by Java Web Start. The appropriate code is illustrated in boldface:
<?xml version="1.0" encoding="utf-8"?>
<!-- JNLP File for WriteFile Application -->
<jnlp spec="1.0+"
      codebase=""
      href="writeFile.jnlp">
  <information>
    <title>writeFile Demo Application</title>
    <vendor>Rags</vendor>
    <homepage href="docs/help.html"/>
    <description>writeFile Demo Application</description>
    <description kind="short">A demo of writeFile app.</description>
    <icon href="images/swingset2.jpg"/>
    <offline-allowed/>
  </information>
  <security>
    <all-permissions/>
  </security>
  <resources>
    <j2se version="1.3"/>
    <jar href="writeFile.jar"/>
  </resources>
  <application-desc main-class="writeFile"/>
</jnlp>
Provide an HTML file to launch the application as well. Now the application is ready to run. If the code has been signed and provisioned correctly on the server, the window shown in Figure 4 should pop up. The window will give the user the choice of continuing to run the application or to terminate it.
Clicking on the Start button will cache the signer locally, and the dialog popup will not appear for subsequent invocations of the same application. If you don't want to trust the signer subsequently, the local cache can be cleared using the Java Web Start Application Manager, as you will see later.
So far, you have seen how Java Web Start can be used without any programming. The JNLP and Java Web Start APIs provide a set of classes for user-friendly operation of client-side applications. For example, even though code is unsigned, it may still be used for local disk access with the user's consent. This is somewhat akin to the Save As option and cookie handling available in most browsers.
The Java Web Start APIs deal with this limited set of out-of-the-sandbox operations in a controlled manner, by limiting interaction to the system explicitly via dialog boxes.
Use the JNLP API
We just saw how an application can access a local filesystem by signing the code. This can be accomplished by using the API as well.
I will discuss only some API details and use some of those to modify the writeFile example we discussed earlier. These APIs come with the developers' pack, which needs to be installed as a Java extension to successfully compile the code and subsequently generate the .jar files. Rather than installing the extensions, you could also include jnlp.jar in the classpath. Refer to the Resources section below for a more detailed explanation of the API. Examples of the API are:
ClipBoardService
FileContents
FileOpenService
FileSaveService
JNLPRandomAccessFile
PersistenceService
PrintService
Many of these APIs apply even when run in an untrusted environment. That is, the code does not have to be signed and trusted by the user for the operation's successful completion. In some cases, Java Web Start will warn the user of the potential security risk of letting an untrusted application access potentially confidential information.
Let's dissect some of these APIs further. The FileOpenService, for instance, supports methods like openFileDialog() and openMultiFileDialog(), which select a single file or multiple files, respectively. They return FileContents and FileContents[], respectively. Likewise, FileSaveService supports methods like saveAsFileDialog() and saveFileDialog(). As you might expect, these APIs interact with the user via dialog boxes explicitly.
In the example below, we will look at the earlier application that was converted to write to a file picked by the user, rather than to /tmp/foo. The application uses some of the APIs mentioned above. This application need not be signed:
/**
 * This program uses the JNLP API to write to a file
 * that is selected by the user.
 *
 * @author Rags Srinivas
 */
import java.awt.*;
import java.awt.event.*;
import java.io.*;
import java.lang.*;
import java.applet.*;
import javax.jnlp.*;

public class writeFile extends Applet {
    static FileOpenService fos;
    static JNLPRandomAccessFile raf;
    static FileContents fc;
    static int count = 0;

    public void init() {
        // Look up a FileOpenService
        try {
            fos = (FileOpenService)ServiceManager.lookup(
                "javax.jnlp.FileOpenService");
        } catch (UnavailableServiceException e) {
            fos = null;
        }
        if (fos != null) {
            try {
                // get a file with FileOpenService
                fc = fos.openFileDialog(null, null);
                // If valid file contents
                if (fc != null) {
                    long grantedLength = fc.getLength();
                    if (grantedLength + 1024 > fc.getMaxLength()) {
                        // attempt to increase the maximum file
                        // size defined by the client
                        grantedLength =
                            fc.setMaxLength(grantedLength + 1024);
                    }
                    // Open the file for read/write
                    raf = fc.getRandomAccessFile("rw");
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public void paint(Graphics g) {
        String msg = "JWS can stun you! ";
        if (fos != null && fc != null) {
            try {
                // seek to the beginning and write
                raf.seek(0);
                raf.writeBytes(msg + count++);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public void destroy() {
        if (fos != null && fc != null) {
            try {
                // close the file
                raf.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String args[]) {
        Frame f = new Frame("writeFile");
        final writeFile writefile = new writeFile();
        writefile.init();
        writefile.start();
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e)
                { writefile.destroy(); System.exit(0); } });
        f.add("Center", writefile);
        f.setSize(300, 100);
        f.show();
    }
}
In the init() method, the file the user picks is opened as a random file for both read and write. The paint() method writes a fixed message appended with an incrementing count. Finally, the destroy() method closes the file. Invoking this application will bring up a dialog box through which the user can interact to select a local file to write to.
Note that the user explicitly picks the output file; no other file can be accidentally or maliciously overwritten. This means that the application securely modifies the local filesystem.
Although the code is written to work both as an applet and an application, this functionality will not be available in the applet environment since the JNLP APIs will not be available there.
The final piece in Java Web Start I will demonstrate is the application manager.
Java Web Start Application Manager
You can employ the Java Web Start Application Manager for a variety of administrative and related tasks. You can set the http proxy, view and start applications, create desktop shortcuts, and set other advanced preferences. The main window is illustrated in Figure 5.
You can manipulate other preferences as well, as illustrated in Figure 6. Examples include clearing the local cache, setting options on desktop shortcuts, and so on.
Debug Java Web Start applications
Using the application manager, it is possible to log the output or display the Java console by appropriate options in Preferences/Advanced. The Java console window is shown in Figure 7.
Combine servlets/JSPs, applets, and Java Web Start
Having shown Java Web Start's benefits I'll now turn to applets running in a plug-in environment and clients employing servlet/JSP-supported markup languages.
Consider an application that can be browsed by a variety of clients but is typically set up and administered on a desktop. In such a situation, it's possible to use servlets/JSP to support browsing, while the setup or administration functions are done using Java Web Start to provide a rich user interface.
You can employ applets in situations where sophisticated users need fine-grained security or situations where the browser is the de facto client.
None of these technologies are mutually exclusive. It is fairly easy to mix and match in the best interest of the application requirements.
Conclusion
We started where the Java revolution began -- Java applets. Middle-tier technologies, like servlets and JSPs, enabled client-side presentation and solved some issues with applets. However, with servlets and JSPs, the user interaction was confined to the hosted environment. Java Web Start technology overcomes this limitation while still retaining most of the advantages of the other technologies used hitherto to design Web clients.
We looked at several examples illustrating Java Web Start's different features. Java Web Start and JNLP offer a powerful paradigm to design Web clients by providing a rich user interaction without confining functionality to a browser. Some combined approaches were mentioned as well, which might be applicable to certain categories of applications.
About the author
Raghavan Srinivas is a Java technology evangelist at Sun Microsystems who specializes in Java and distributed systems. He is a proponent of Java technology and teaches graduate and undergraduate classes in the evening. Srinivas holds a master's degree in computer science from the Center of Advanced Computer Studies at the University of Southwestern Louisiana. He enjoys hiking, running, and traveling, but most of all loves to eat, especially spicy food.
Resources
Java is supported in applications and applets. Applets -- small programs that could execute in a browser environment -- spurred Java's early growth. Applet code downloads at runtime and executes in the context of a JVM hosted by the browser. As an applet's code can be downloaded from anywhere in the network, Java's early designers thought such code should not have unlimited access to the target system. That thinking led to the sandbox model -- the security model introduced with JDK 1.0.
The sandbox model deems as untrusted all code downloaded from the network and confines that code to a limited area of the browser -- the sandbox. For instance, code downloaded from the network could not update the local filesystem. It's probably more accurate to call this model a fenced-in model, since a sandbox does not imply severe confinement.
While this may well seem like a secure approach, there are some inherent problems. Firstly, it dictates a rigid policy closely tied to the implementation. Secondly, it's seldom a good idea to put all eggs in one basket. That is, it's unwise to rely entirely on one approach to provide overall system security.
Security needs to be layered in a depth of defense and should be flexible enough to accommodate different policies. The sandbox security model meets neither of these criteria.
Java Plug-in primer
A plug-in program adds functionality to a Web browser (or any other program). Java Plug-in software enables Java applets to run using the Java 2 Runtime Environment, Standard Edition (JRE), rather than the browser's default runtime. The Java Plug-in delivers several capabilities on different versions of Netscape Navigator and Internet Explorer, using the respective browsers' plug-in architecture and extension mechanisms.
In summary, the Java Plug-in provides a common framework for running Java applets across multiple browsers and versions.
Reprinted with permission from the July 2001 edition of JavaWorld magazine. Copyright Web Publishing Inc., an IDG Communications company. Register for editorial e-mail alerts.
JSR 41 has added a simple assertion facility to Java. In the proposed final draft, assert statements have one of two forms:
assert Expr_1;
assert Expr_1:Expr_2;
Both of these forms, with and without parentheses around Expr_1 and Expr_2, are recognized by javac when the "-source 1.4" flag is used. However, javadoc (Standard Doclet version 1.4 beta (04 Aug 2001)) only recognizes
assert(Expr)
and
assert (Expr)
independent of whether or not the "-source 1.4" flag is used. In particular, the program
/**
 * JavaDoc for AssertTest2
 */
public class AssertTest2 {
    public static void main(String argv[]) {
        assert true;
        assert true:true;
        assert(true):true;
        assert (true):true;
        assert true:(true);
        assert(true):(true);
        assert (true):(true);
    }
}
compiles with javac -source 1.4 but javadoc generates an error for each line of input. By commenting out all preceding assert lines, I verified each line is rejected by itself; i.e. the 2nd and subsequent assert statements are *not* rejected because of poor "error" recovery in javadoc for the first "mistaken" line, "assert true".
Moreover, javadoc accepts the previous two-arg assert construction
assert(Expr_1, Expr_2); // no longer valid
Concretely,
/**
* JavaDoc for AssertTest3
*/
public class AssertTest3 {
public static void main(String argv[]) {
assert(true, true);
assert (true, true);
}
}
is parsed by javadoc but rejected by javac. This should be fixed.
Use only the simple one-argument form of assert. This is suboptimal since the two-argument form allows more information to be provided.
oops.
xxxxx@xxxxx 2001-08-15
I read that this bug is fixed and closed, yet I fail running javadoc on sources with asserts. I'm using J2SDK-1.4.0-b92 (latest version).
i have the same problem i'm also using the latest version! :((
I'll third that: same problem here with 1.4.0-b92, including
assert(true, "test") working in javadoc but not in javac.
And yes, the workaround works.
The same experience even with JDK1.4.1beta. javadoc fails with the following forms of assertions:
assert expr;
assert (expr1) : expr2;
JDK1.4.1RC still cannot handle assert statements. Please unclose this log - it is NOT fixed.
Oops! My error - I didn't realize "-source 1.4" was needed as a javadoc parameter for source with assert statements. Sorry....
#include <hallo.h>

* Wichert Akkerman [Mon, Nov 11 2002, 12:12:24PM]:

> > Boot-floppies drop the name of the selected network card in there to
> > have the module loaded, so I'd call /etc/modules pretty vital for the
> > current mode of operation in Debian.
>
> In that case boot-floppies are buggy, they should do something like
> create a /etc/modutils/hardware file with an alias for eth0.

To do this, we would need to "know" that the user loaded a driver
providing eth0, since there is no hardware detection that could
classify the drivers. So nothing really feasible.

Gruss/Regards,
Eduard.
--
B.Gates: quality software :: R.McDonald: gourmet cuisine
> 2DCAD_duojiemian.rar > Math.h
// The following ifdef block is the standard way of creating macros which make exporting
// from a DLL simpler. All files within this DLL are compiled with the MATH_EXPORTS
// symbol defined on the command line. This symbol should not be defined on any project
// that uses this DLL. This way any other project whose source files include this file see
// MATH_API functions as being imported from a DLL, whereas this DLL sees symbols
// defined with this macro as being exported.
#ifdef MATH_EXPORTS
#define MATH_API __declspec(dllexport)
#else
#define MATH_API __declspec(dllimport)
#endif

// This class is exported from the Math.dll
class MATH_API CMath {
public:
    CMath(void);
    // TODO: add your methods here.
};

extern MATH_API int nMath;

MATH_API int fnMath(void);
MATH_API int Add(int a, int b);
> ucos+net.zip > InetAddr.h
/*****************************************************************************
 * InetAddr.h
 *
 * portions Copyright (c) 2001 by Cognizant Pty Ltd.
 *
 * The authors hereby grant permission to use, copy, modify, distribute,
 * and license this software and its documentation for any purpose, provided
 * that existing copyright notices are retained in all copies and that this
 * notice and the following disclaimer are included verbatim in any
 * distributions. No written agreement, license, or royalty fee is required
 * for any of the authorized uses.
 *
 ******************************************************************************
 * REVISION HISTORY (please don't use tabs!)
 * (yyyy-mm-dd)
 * 2001-05-12 Robert Dickenson, Cognizant Pty Ltd.
 *            Minor modifications to original file from NetBSD.
 ******************************************************************************
 */

#ifndef _INETADDR_H_
#define _INETADDR_H_

u_long inet_addr(register const char *cp);
int inet_aton(register const char *cp, struct in_addr *addr);

#endif /* _INETADDR_H_ */
> firev0.01.rar > interpolimg.hpp

#ifndef _INTERPOLBASEIMG_H
#define _INTERPOLBASEIMG_H

#include "image.hpp"
#ifdef __HAVE_INTERPOLATION_STUFF__
#include "interpol.h"
#include "coeff.h"
#endif
#include "diag.hpp"

using namespace std;
using namespace diag;

namespace img {

/// class to give interpolated values for between pixel points.
/**
 * A class to calculate interpolated values of images, e.g. the
 * value at the point (0.5, 0.75).
 *
 * This class is based on some methods from Philippe Thevenaz.
 */
class InterpolBaseImage {
public:
  /**
   * The one and only constructor. Give an image to it and let it
   * calculate the values needed to be later able to retrieve
   * interpolated values. This might take some time due to some
   * mathemagical calculations.
   *
   * @param inputimage give here the image which shall be processed
   * @param splineDegree is a value to change the interpolation
   */
  InterpolBaseImage(const BaseImage &inputimage, int splineDegree = 3) {
    width = inputimage.xDim();
    height = inputimage.yDim();
    degree = splineDegree;
    splineImage = (double *)malloc(height * width * sizeof(double));
    memtest(splineImage);
    for(unsigned int i=0;i
> ffmpeg_win32.rar > mpegaudiodecheader.h
/*
 * MPEG Audio header decoder
 */

/**
 * @file mpegaudiodecheader.c
 * MPEG Audio header decoder.
 */

#ifndef FFMPEG_MPEGAUDIODECHEADER_H
#define FFMPEG_MPEGAUDIODECHEADER_H

#include "common.h"
#include "mpegaudio.h"

/* header decoding. MUST check the header before because no consistency
   check is done there. Return 1 if free format found and that the frame
   size must be computed externally */
int ff_mpegaudio_decode_header(MPADecodeContext *s, uint32_t header);

#endif /* FFMPEG_MPEGAUDIODECHEADER_H */
> Cimage.zip > GIFDECOD.H

#define LOCAL static
#define FAST register

typedef short SHORT;            // 16 bits integer
typedef unsigned short USHORT;  // 16 bits unsigned integer
typedef unsigned char byte;     // 8 bits unsigned integer
typedef unsigned long ULONG;    // 32 bits unsigned integer
typedef int INT;                // 16 bits integer
#ifndef LONG
typedef long LONG;              // 32 bits integer
#endif

/* Various error codes used by decoder */
#define OUT_OF_MEMORY -10
#define BAD_CODE_SIZE -20
#define READ_ERROR -1
#define WRITE_ERROR -2
#define OPEN_ERROR -3
#define CREATE_ERROR -4

#define NULL 0L
#define MAX_CODES 4095

class GIFDecoder {
protected:
    SHORT init_exp(SHORT size);
    SHORT get_next_code();

public:
    SHORT decoder(SHORT linewidth, INT& bad_code_count);
    /* bad_code_count is incremented each time an out of range code is read.
     * When this value is non-zero after a decode, your GIF file is probably
     * corrupt in some way... */

    friend int get_byte(void);
    // - This external (machine specific) function is expected to return
    // either the next byte from the GIF file, or a negative error number.

    friend int out_line(unsigned char *pixels, int linelen);
    /* - This function takes a full line of pixels (one byte per pixel) and
     * displays them (or does whatever your program wants with them...). It
     * should return zero, or negative if an error or some other event occurs
     * which would require aborting the decode process... Note that the length
     * passed will almost always be equal to the line length passed to the
     * decoder function, with the sole exception occurring when an ending code
     * occurs in an odd place in the GIF file... In any case, linelen will be
     * equal to the number of pixels passed... */

protected:
    /* Static variables */
    SHORT curr_size;        /* The current code size */
    SHORT clear;            /* Value for a clear code */
    SHORT ending;           /* Value for a ending code */
    SHORT newcodes;         /* First available code */
    SHORT top_slot;         /* Highest code for current size */
    SHORT slot;             /* Last read code */

    /* The following static variables are used
     * for separating out codes */
    SHORT navail_bytes;     /* # bytes left in block */
    SHORT nbits_left;       /* # bits left in current byte */
    byte b1;                /* Current byte */
    byte byte_buff[257];    /* Current block */
    byte *pbytes;           /* Pointer to next byte in block */
};
I'd argue that $54 is the ridiculous price, not $16.49
well when Scott Hanselman posted on Twitter last night that the book was going for so cheap I had to try and order it and to my surprise, Amazon are shipping to South Africa again so I'm stoked! Can't wait for it to arrive!
Cant wait to grab the offer :-)
Thanks to the Euro/Dollar exchange rate, it's even cheaper for us Europeans...
Thanks for this tip, Scott!
Damn.. I got this book a few weeks ago. Great book! but it would have been better had the price been lower ;)
Hi
ASP.NET 3.5 Book is very good book
But i think ASP.Net 3.5 Unleashed is the King :-)
Thank you
Hey Scott,
Are there any VB books that you would recommend (with regards to ASP.NET)?
This looks like it has everything but the kitchen sink. Nice. I just ordered a copy. Thanks for the info!
It's at #41 as I write this. Too good a price for me to pass up. Thanks for the heads-up and the shout out Scott.
its now at #14 !!!!!!!!!!
bought this this morning for £8, thanks scott you are king!
ORDERED
Scott's Book Club is almost outselling Oprah's Book Club. Developers Unite!
Thanks for the heads up. Not quite the same sale at Amazon.ca ($37.79 CAD) but good enough! :)
Thanks Scott..I have ordered one!
This book is now up to #8 WOW. Thanks for letting us know Scott. Ordered mine today, the reviews are great and should be a keeper for all .NET developers!
Now on #8. Incredible!
Thanks for the info, Scott!
Read "Pro LINQ: Language Integrated Query in C# 2008". Great book. One of the best explanations of Lamba.
Book ordered. Can't wait to read it.
I agree with Max: I have both books, and the "Unleashed" Walther book is perhaps three times as valuable, because it has perhaps that much more content that isn't just a restatement of the documentation. Still, $17 is a can't-miss price.
The price just went up from $16 to $27 while I was looking at it! Maybe they'll discount it again. Good book.
Looks like the price just rose. I got it before it went to $27.49 and even that is still a great deal.
I had already added it to my cart, and by the time I got to checkout (shopped for something else), they'd changed the price. Luckily for me, a co-worker was already checking out and was able to add a second one without his price changing.
Don't forget "ASP.NET 3.5 For Dummies" a really well written book on the same subject that is very "entertaining". I had a chance to meet the author at the MVP Summit and he is a great guy.
Promotion over already? Dang, it's back up to $27.49.
It seems that they've stopped selling the book for the $16.00... It now is $27... Dang it!
27. I should get it before they rise the price again.
It looks like the price just went up to $27 (still 50% off though).
The book is currently ranked #5 for *all* items on Amazon. I suspect they raised the price since it is burning up the charts and selling a bit better than they origionally planned. :-)
Here are a few other .NET books still on the list:
C# 2008:
VB 2008:
Price is up, but it's at #5 now.
Some1 please tell Amazon to make it USD 16.49 again, pls :)
I was about to order, and prices shot up, like Gas!!
Actually the price has gone up by $7.49. Details:--
Original Price:- $16 + $4 (shipping) = $20
Current Price:- $27.49 (Free Shipping) = $27.49.
It is still a pretty good deal, and knowledge contained in it.
Thanks for the info!
I can't wait for Scott's book on Silverlight :-)
..Ben
Now they are selling "Beginning ASP.NET 3.5: In C# and VB.NET' for only $17.99.
That's a good deal too.
I bought the book just because it was $16 and full of information. However, in general I do not buy Wrox series books as I don't find them useful at all. I couldn't beat this price though.
Scott,
How do you compare this book "Professional ASP.NET 3.5: In C# and VB" with
"ASP.NET 3.5 Unleashed by Stephen Walther"
which one is the best.
Any comments??
Thanks
Can't i use LINQ with Oracle have been playing around with linq thinking of adopting it for a new project with oracle database.
I agree with Billkamm. Wrox series books are not that great. Most of Wrox books contain unnecessary information, like how to navigate visual studio. Who has time to read 1600 pages. I think they are getting good but not best yet. However, you can not beat this price. Basically, it’s not programmer to programmer. It is cr*p to programmer.
It looks like the price went up to $27.50 which is still better than $54.99!
Does it have a chapter on how to use MasterPages without name/ID mangling in the generated HTML? It still grinds my gears that I cannot use MasterPages... :(
What are the other books on Asp.net 3.5 that you would recommend (other then worx series)
its a whopping £85 on Amazon UK.
if that's not a ridiculous price i don't know what is...
The price is something That made me order this book.
Hopefully it's decent. First time I've ever bought a book based soley on price. Although these epic 1600-page programming books have always seemed silly to me. I mean who really sits down reads on of these all the way through?
"Wrox Sucks" - this was how wrox was before they got acquired by John Wiley. Present day Wrox books are a lot better. Also, I would recommend to buy Wrox book which has less # of authors like 2-3 max....not some book which is authored by a Football team ;).
Price is 27.49 now.
The Amazon.UK price in the UK was just £14.0. So really tempted, but at 1600 pages I find this rather daunting to me. As a RAILS fan, I have only just become interested in ASP.MVC which does not seem to be covered :(
Waiting for a great Silverlight 2.0 book (on ASP, MVC and SharePoint) AJAX is RIP ASAIK
Hi,
There is no way to contact you or to add comments in old posts, so I am adding one here.
This is for weblogs.asp.net/.../tip-trick-url-rewriting-with-asp-net.aspx
Please move it there.
I have run into issues because of the wrong C# code that was posted in the comments of the page and I wanted to contribute with a corrected version so others will not go through this problem again. But it's difficult to contribute in this Blog or contact you.
Comment for other blog post:
Do not use the C# code someone posted above or you will run into problems, use this fixed version:
using System.Web;
using System.Web.UI;

public class RewriteFormHtmlTextWriter : HtmlTextWriter
{
    public RewriteFormHtmlTextWriter(System.IO.TextWriter writer) : base(writer)
    {
        base.InnerWriter = writer;
    }

    public override void WriteAttribute(string name, string value, bool fEncode)
    {
        // If the attribute we are writing is the "action" attribute, and we are not on a sub-control,
        // then replace the value to write with the raw URL of the request - which ensures that we'll
        // preserve the PathInfo value on postback scenarios
        if (name == "action")
        {
            HttpContext Context = HttpContext.Current;
            if (Context.Items["ActionAlreadyWritten"] == null)
            {
                // Because we are using the UrlRewriting.net HttpModule, we will use the
                // Request.RawUrl property within ASP.NET to retrieve the original URL
                // before it was re-written. You'll want to change the line of code below
                // if you use a different URL rewriting implementation.
                value = Context.Request.RawUrl;

                // Indicate that we've already rewritten the <form>'s action attribute to prevent
                // us from rewriting a sub-control under the <form> control
                Context.Items["ActionAlreadyWritten"] = true;
            }
        }
        base.WriteAttribute(name, value, fEncode);
    }
}
There's actually a way to do URL rewriting without having to modify the action attribute or anything - it involves multiple RewritePath calls. I'll try to post it on nathanaeljones.com as soon as I can.
The book price is back at $27, btw.
Just wanted to point out that some images are not showing up (the ones with URLs pointing to). The ones in the sidebar are one example, but what's worse is the ones in the tutorials are missing, making it much harder to understand them (for example, the articles about MVC).
Is it possible to make those images available?
Other than that, great blog, and thanks for all the expertise shared here.
PS: Posted this here and not on the articles I mentioned because comments were disabled on those articles.
I'm in the middle of reading this book and must say - unfortunately a lot of annoyning mistakes/typos. At least too many for a book of such a caliber.
Another thing. I have to back "asrfarinha" (post above) - images from excellent past tutorials (e.g. Linq to SQL series) have dissapeared effectively making these tutorials useless. Can this please be restored.
Evgeny
$16 USD? $27 USD?? Where? Its $35 USD now! At that price I go to Bookpool.com because I have never, ever, ever had a good experience with Amazon. More like Amadont for me... $16 would have been worth the trouble.
Is there anyway I could get the hardcopy for free? I think I overspent on books (amazon should be happy!) lately. Any good samaritan out there?
Mine just arrived today. Save your money. 1500 pages of bad. I think this is just a case of Scott helping Scott with some shameless promoting.
Well I know why it's so cheap now.
Great book so far, BUT the paper is dirt cheap. Feels like those copies you see for sale everyonce in a while of cheap text books where they tell you it's made with cheap paper. Seems like they would have told ya.
Rick,
Your comment is in a very poor taste. If you don't like the book, that's fine but don't you bad mouth Scott Guthrie. Do you even know what it takes to be that man?
Keep your stupid comments to yourself.
In this tip, I demonstrate how you can create LINQ to SQL entities that do not contain any special attributes. I show you how you can use an external XML file to map LINQ to SQL entities to database objects.
I’ve talked to several people recently who are deeply bothered by the fact that the LINQ to SQL classes generated by the Visual Studio Object Relational Designer contain attributes. They want to take advantage of the Object Relational Designer to generate their entity classes. However, they don’t like the fact that the generated entities are decorated with a bunch of attributes.
For example, if you use the Object Relational Designer to generate a LINQ to SQL class that corresponds to the Movies database table, then you get the class in Listing 1. This class is generated in the Movies.Designer.cs file.
Listing 1 – Movie Class (abbreviated)
[Table(Name="dbo.Movies")]
public partial class Movie : INotifyPropertyChanging, INotifyPropertyChanged
{
    private static PropertyChangingEventArgs emptyChangingEventArgs = new PropertyChangingEventArgs(String.Empty);

    private int _Id;

    private string _Title;

    // ... (Id property and other generated members omitted)

    [Column(Storage="_Title", DbType="NVarChar(100) NOT NULL", CanBeNull=false)]
    public string Title
    {
        get
        {
            return this._Title;
        }
        set
        {
            if ((this._Title != value))
            {
                this.OnTitleChanging(value);
                this._Title = value;
                this.SendPropertyChanged("Title");
                this.OnTitleChanged();
            }
        }
    }
}
Notice that the class in Listing 1 includes both [Table] and [Column] attributes. LINQ to SQL uses these attributes to map classes and properties to database tables and database table columns.
Some people are disturbed by these attributes. They don’t want to mix their persistence logic with their domain entities. They want to use POCO objects (Plain Old CLR Objects) for their entities.
Fortunately, LINQ to SQL supports two methods of mapping classes to database objects. Instead of using the default AttributeMappingSource, you can use the XmlMappingSource. When you use the XmlMappingSource, you use an external XML file to map classes to database objects.
You can create the XML file by hand or you can use the SqlMetal.exe command line tool. You run SqlMetal.exe from the Visual Studio Command Prompt (Start, All Programs, Microsoft Visual Studio 2008, Visual Studio Tools, Visual Studio 2008 Command Prompt).
Here’s how you use SqlMetal.exe to create an XML mapping file from a RANU SQL Express database named MoviesDB.mdf:
1. Navigate to the folder containing the MoviesDB.mdf database
2. Execute the following command:
SqlMetal /dbml:movies.dbml MoviesDB.mdf
3. Execute the following command:
SqlMetal /code:movies.cs /map:movies.map movies.dbml
After you execute these commands, you will end up with three files:
· movies.dbml – The movies database markup file
· movies.cs – The movie classes that correspond to the database objects
· movies.map – The XML map file that maps the classes to the database objects
After you generate these files, you can add the movies.cs and movies.map file to your ASP.NET MVC application’s Models folder.
The C# Movie class from the movies.cs file is contained in Listing 2. The file in Listing 2 is almost exactly the same as the file in Listing 1 except for the fact that the file does not contain any special attributes. The Movie class in Listing 2 is a POCO object.
Listing 2 – Movie Class (abbreviated)
public partial class Movie : INotifyPropertyChanging, INotifyPropertyChanged
{
    // ... (same members as in Listing 1, but without the [Table] and
    // [Column] mapping attributes)
}
The file in Listing 3 contains the XML mapping file generated by the SqlMetal.exe tool. You could create this file by hand.
Listing 3 – movies.map
<?xml version="1.0" encoding="utf-8"?>
<Database Name="MoviesDB" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
<Table Name="dbo.Movies" Member="Movies">
<Type Name="Movie">
<Column Name="Id" Member="Id" Storage="_Id" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
<Column Name="Title" Member="Title" Storage="_Title" DbType="NVarChar(100) NOT NULL" CanBeNull="false" />
</Type>
</Table>
</Database>
After you create an external XML mapping file, you must pass the mapping file to a DataContext object when you initialize the DataContext. For example, the controller in Listing 4 uses the movies.map file within its Index() method.
Listing 4 – HomeController.cs
using System.Data.Linq.Mapping;
using System.Linq;
using System.Web.Configuration;
using System.Web.Mvc;

namespace Tip23.Controllers
{
    [HandleError]
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // Create data context
            var connectionString = WebConfigurationManager.ConnectionStrings["movies"].ConnectionString;
            var map = XmlMappingSource.FromUrl(Server.MapPath("~/Models/Movies.map"));
            var dataContext = new MoviesDB(connectionString, map);

            // Get movies
            var movies = from m in dataContext.Movies select m;

            // Return movies in view
            return View(movies.ToList());
        }
    }
}
The Index() starts by retrieving a connection string from the Web configuration file. Next, the XML mapping file is loaded from the Models folder. The connection string and mapping source are passed to the DataContext constructor when the DataContext is created. Finally, a list of movies is retrieved from the database and sent to the view.
The point of this tip was to demonstrate that you can use an external XML file instead of attributes within your LINQ to SQL classes to map your classes to your database objects. Some people don’t want to dirty their classes with database persistence logic. LINQ to SQL is flexible enough to make these people happy.
I don't understand. What is so bad about using attributes?
And of course all those "this." are extra to make the code look simpler...
If both the cs and xml files are embedded into the assembly than I really don't see any benefit of using xml instead of attributes.
In general I prefer using attributes.
If you could show a way of breaking the SQL file into multiple files that would definitely make for a good tip ;)
Anyway thanks .. and keep them coming.
Great tip, didn't know that! That moves Linq-to-Sql a little more towards Linq-to-Entities. And thereby makes Linq-to-Sql an even more attractive choice!
Hi Stephen,
In order to have a complete POCO Model, you still have to get rid of the EntityRef and EntitySet collections for associations, which is a bad thing because you lose all the benefits of lazy loading.
Also, I would not implement the data context initialization in the controller itself, it makes the controller method very hard to test. Specially for all the references to configuration that you have in there.
Anyway, great work. Thanks
Pablo.
Here's an interesting explanation of why/how the INotifyPropertyChanging, INotifyPropertyChanged interfaces are used.
davidhayden.com/.../2949.aspx
If your table changed you still have to re-generate the code of entities and compile your project? Is there work around not need to do that just replace the mapping file?
@Michael - You can change the mapping file directly without a recompile. You also can change the entity class files by hand (changing the class files would, of course, require a recompile). There is no reason that you can't completely ignore SqlMetal.exe and do everything by hand.
Linq still doesn't appear to be loosely coupled even with the map file. With sqlmetal, the map and partial entity class is created but the class still has references to Data.Linq which means it's tightly coupled to Linq. Is there a way to use the map configuration without needing to reference Linq in the model?
Usage
Signature:
final class AsyncDateTimeRangeValidator<V> implements AsyncValidator<V>
Typescript Import Format
//This class is exported directly as module. To import it
import AsyncDateTimeRangeValidator= require("ojs/ojasyncvalidator-datetimerange");
For additional information visit:
Final classes in JET
Classes in JET are generally final and do not support subclassing. At the moment, final is not enforced. However, this will likely change in an upcoming JET release.
Serializing DateTime into JSON
In my current project I ran into the requirement of serializing an object with a DateTime property into Json, specifically through the Json() method of the Controller class. I can't say it serialized pretty well though.
From a DateTime value that looks like this in SQL server:
2013-06-05 20:37:53.157
It became this on the page:
/Date(1370435873157)/
So why the difference?
The AJAX JSON serializer in ASP.NET encodes a DateTime instance as a JSON string in the following format: \/Date(ticks)\/ where ticks represents the number of milliseconds since January 1, 1970 in Universal Coordinated Time (UTC). The two forward slashes are escaped by the two backslashes, resulting in the long and cryptic number which popped up on my display. Unfortunately that number is not very readable, so I had to change it.
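As an aside (not from the original post), you can decode the embedded ticks value by hand to see which instant it represents. Here is a minimal Python sketch; the function name is purely illustrative:

```python
from datetime import datetime, timezone

def parse_msjson_date(value):
    """Decode an ASP.NET JSON date string like '/Date(1370435873157)/'."""
    # Extract the number between the parentheses.
    millis = int(value[value.index("(") + 1:value.index(")")])
    # The number is milliseconds since January 1, 1970 (UTC).
    return datetime.fromtimestamp(millis / 1000, tz=timezone.utc)

parsed = parse_msjson_date("/Date(1370435873157)/")
print(parsed.isoformat())
```

Note that real ASP.NET output can also embed a timezone offset (e.g. /Date(1370435873157+0100)/), which this sketch does not handle.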
Enter Json.NET. Json.NET is a "popular high-performance JSON framework for .NET". You can see a tabular comparison between Json.NET, DataContractJsonSerializer, and JavaScriptSerializer on their home page. It's available through NuGet, so I was able to install it easily.
Having installed Json.NET, I followed RickyWan's post on how to create a custom JsonResult (aptly named JsonNetResult in the code sample) and how to set it up so that calling the Json() method on a controller would return a JsonNetResult instead of the default JsonResult. It involves overriding the Json() method in a base controller and then letting all controllers inherit from that instead, which is a common and useful technique in other situations as well.
Update It seems that RickyWan's site is currently down, so I will post the relevant code here, with some slight modifications, inspired by Matt Honeycutt's Pluralsight course Build Your Own Application Framework with ASP.NET MVC 5.
The BetterJsonResult Class
The first thing we will create is a class that derives from JsonResult. We are doing this for easier integration with the MVC framework.
public class BetterJsonResult : JsonResult
{
    public override void ExecuteResult(ControllerContext context)
    {
        CopyOfBaseClassImplementation(context);
        if (Data == null) return;

        var settings = new JsonSerializerSettings
        {
            // Sample settings; you can insert your own settings here
            ContractResolver = new CamelCasePropertyNamesContractResolver(),
            Converters = new JsonConverter[] { new StringEnumConverter() },
            NullValueHandling = NullValueHandling.Ignore
        };
        context.HttpContext.Response.Write(JsonConvert.SerializeObject(Data, settings));
    }

    // Just copying the MVC framework's implementation
    private void CopyOfBaseClassImplementation(ControllerContext context)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));

        var cannotGet = JsonRequestBehavior == JsonRequestBehavior.DenyGet &&
            String.Equals(context.HttpContext.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase);
        if (cannotGet)
            throw new InvalidOperationException("GET access is not allowed. Change the JsonRequestBehavior if you need GET access.");

        var response = context.HttpContext.Response;
        response.ContentType = String.IsNullOrEmpty(ContentType) ? "application/json" : ContentType;
        if (ContentEncoding != null)
        {
            response.ContentEncoding = ContentEncoding;
        }
    }
}
The heart of this class lies inside the ExecuteResult method. There, we are using the JsonConvert class for serialization. The JsonConvert and JsonSerializerSettings classes are part of the JSON.NET library.
We will also make a generic version of the BetterJsonResult class:
public class BetterJsonResult<T> : BetterJsonResult
{
    public new T Data
    {
        get { return (T)base.Data; }
        set { base.Data = value; }
    }
}
Finally, we will make our own controller which all our controllers will inherit from. In this base controller, we will add a method that returns an instance of the BetterJsonResult class.
public class FrameworkController : Controller
{
    protected BetterJsonResult<T> BetterJson<T>(T data)
    {
        return new BetterJsonResult<T>
        {
            Data = data,
            JsonRequestBehavior = JsonRequestBehavior.AllowGet
        };
    }
}
Inside the controller we can call the BetterJson method instead of the regular Json one.
public ActionResult MyAction()
{
    var myData = // get data from somewhere
    return BetterJson(myData);
}
JSON.NET serializes DateTime values into readable strings:
Jun 6, 2013 4:37:53 AM
Which is way more readable than showing the number of ticks between now and some 40 years ago.
The articulation body the collider is attached to.
Returns null if the collider is attached to no articulation body.
Colliders are automatically connected to the articulation body attached to the same game object or attached to any parent game object.
using UnityEngine;
public class Example : MonoBehaviour
{
    void Start()
    {
        // Lift the articulation body attached to the collider.
        GetComponent<Collider>().attachedArticulationBody.AddForce(Vector3.up);
    }
}
This tutorial explains what is Python Lambda Function, how and when to use it with examples. Also compares Regular and Lambda functions:
The anonymous function, a term commonly used in computer programming and also known as the lambda function, has been a feature of many programming languages since 1958, originating in Alonzo Church's invention of the lambda calculus.
Today, a good number of programming languages support Anonymous functions or have libraries that have been made to provide support.
What You Will Learn:
- What Is A Lambda Function
- Lambda Function And Regular Functions
- Differences Between Lambda Functions and Regular Functions
- Similarities Between Lambda Functions And Regular Functions
- When To Use Lambda Functions
- When Not To Use Lambda Functions
- Frequently Asked Questions
- Conclusion
What Is A Lambda Function
Generally, an anonymous function is a function without a name. In Python, an anonymous function is created with the lambda keyword, hence sometimes referred to as lambda functions, lambda expressions, etc. In other programming languages, they are called differently.
Suggested reading =>> Lambdas in C++
Programming languages and how their anonymous functions are named.
In Python, a lambda function has a simple syntax, but unlike other supported languages like Haskell, is limited to only a single pure expression in its body. Meaning, it can’t make assignments or use statements like while, for, raise, return, assert, etc.
This doesn’t come as a surprise because originally, Python doesn’t allow statements in its expressions, and lambda is an expression.
Although many Python enthusiasts proposed that Python lambda functions be enhanced to support statements, and others think it’s useless since regular functions are already there for such complexity.
Syntax
A lambda function has the following syntax:
lambda arg1[, arg2,...,argN] : expression
As seen in the syntax above, a lambda function can have many parameters (separated by a comma and no parentheses) but only a single valid Python expression which is evaluated and returned without the explicit use of the return statement, which by the way is not supported.
NB: The square bracket indicates that the other arguments are optional.
Let’s look at some examples using the Python interactive shell.
Example 1: Define a lambda function that adds up two values.
>>> (lambda a,b: a+b)(2,4) # apply the function immediately
6
>>> add = lambda a,b: a+b # assign the function to a variable
>>> add(2,4) # execute the function
6
NB: We should note the following:
- Applying a function immediately is known as IIFE which stands for Immediately Invoked Function Execution.
- Assigning a lambda function to a variable is a bad practice (more on this later).
The above code has the following parts.
As seen in the above figure, the lambda keyword creates the lambda expression. Two parameters are defined and receive any two arguments and pass them to the single python expression defined after the colon(:).
Arguments In Lambda Functions
When it comes to arguments, there is no difference between lambda functions and a regular function. Lambda functions support all the ways of passing arguments supported by regular functions. Check out Python Functions to know more about Function’s arguments.
Example 2: Define lambda functions with the different ways of passing arguments.
>>> (lambda a,b,c=2: a+b+c)(2,3,5) # positional and default parameters
10
>>> (lambda a,*args: a + sum(args))(3,4,2,7,8) # positional and arbitrary positional parameter
24
>>> (lambda **kwargs: sum(kwargs.values()))(a=3,b=9,c=8) # arbitrary keyword parameter
20
>>> (lambda a, *, b: a+b)(3,b=9) # positional and keyword-only parameter
12
Lambda Function And Regular Functions
To recap, lambda functions are functions with no names and are sometimes called anonymous functions, lambda expressions.
In this section, we will look at some similarities and differences between lambda functions and regular functions (functions defined with the def keyword).
Differences Between Lambda Functions and Regular Functions
Differences between Lambda Function and Regular Function.
The table above summarizes the differences between a lambda and a regular function. However, we shall elaborate more on those points in this section.
#1) Differences In Keywords
This is one of the basic and important differences. A lambda function is defined using the lambda keyword which is different from a regular function that uses the def keyword.
The lambda keyword creates and returns an object while the def keyword creates and binds the object to the function’s name.
Example 3: Keyword differences between Lambda and Regular functions.
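The original example is shown as an image; a minimal sketch reconstructs the idea (the names add and add_lambda are hypothetical):

```python
# A regular function is created and bound to a name with the `def` keyword.
def add(a, b):
    return a + b

# The `lambda` keyword creates a function object with no name of its own;
# assigning it to a variable only binds the object, not a function name.
add_lambda = lambda a, b: a + b

# Both are plain function objects, but only `def` records a real name.
print(add.__name__)         # add
print(add_lambda.__name__)  # <lambda>
```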
#2) Differences In Tracebacks
One of the reasons why lambda functions are not highly recommended is because of the way traceback identifies them. We shall see that, when an exception gets raised in a lambda function, the traceback will identify it as <lambda>. This can be hard to debug as it’ll be difficult to locate the specific lambda function in question.
Traceback identifies regular functions differently and more appropriately by name. This makes them highly favored during debugging.
Example 4: Trigger the ZeroDivisionError exception and see how traceback differs in identifying them.
Traceback identification difference between Lambda and Regular functions.
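The figure for this example is an image in the original; here is a minimal reconstruction of the idea, capturing each traceback as text instead of letting it propagate (the helper names are hypothetical):

```python
import traceback

def divide(a, b):                     # regular function
    return a / b

divide_lambda = lambda a, b: a / b    # lambda function (assigned only for the demo)

def capture(func):
    """Run func(1, 0) and return the formatted traceback text."""
    try:
        func(1, 0)
    except ZeroDivisionError:
        return traceback.format_exc()

regular_tb = capture(divide)
lambda_tb = capture(divide_lambda)

# The regular function is identified by name; the lambda only as <lambda>.
print("in divide" in regular_tb)   # True
print("<lambda>" in lambda_tb)     # True
```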
#3) Difference In Line-Code
Lambda functions are commonly written as single-line code. However, they can also be expressed in multi-line code using the backslash (\) or parentheses (()).
As for regular functions, they are commonly expressed in a multi-line code but in cases where a single expression or statement is required, they can also be expressed in a single-line code as seen in example 3 above.
Example 5: Define lambda function in a multi-line code.
# express lambda function in a multi-line code using the multiline string backslash(\) multi_line = lambda : \ print('multiple line code') # execute the lambda function multi_line()
Output
However, for clarity, it is recommended that a lambda function be expressed in a single-line code while regular functions are expressed in a multi-line code.
#4) Differences In Invocation
One of the advantages of lambda functions over regular functions is that they can be invoked immediately, also known as Immediately Invoked Function Execution (IIFE). This makes their definitions suitable as arguments to higher-order functions like map(), filter(), etc. Check example 2 above for a few examples.
If we try to apply this to a regular function, a SyntaxError will be raised.
Example 6: Apply IIFE to a regular function.
>>> (def a(): return 4)()
  File "<stdin>", line 1
    (def a(): return 4)()
       ^
SyntaxError: invalid syntax
#5) Difference In Binding
Since a lambda function is anonymous, it has no name, hence is not bound to a variable. Unlike a regular function that must be given a name during definition. This is one of the reasons why lambda functions are mostly used when reuse is not required.
#6) Differences In Annotation
Function annotations, introduced in PEP 3107, enable us to attach metadata to our function's parameters and return values. Unlike regular functions, lambda functions do not support annotations.
Annotations use colon(:) to attach metadata before any argument and an arrow(->) to set metadata for the return value. This doesn’t make it suitable for lambda functions as they don’t use parentheses to enclose their arguments and only support a single expression.
Check the article on Documenting and Introspecting Python Functions to know more about function annotations.
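For contrast, a regular function accepts annotations without complaint. A small sketch (the function name double is hypothetical):

```python
# Regular functions support annotations; a lambda raises SyntaxError for them.
def double(x: int) -> int:
    return x * 2

# Annotations are only metadata, stored on the function object.
print(double.__annotations__)  # {'x': <class 'int'>, 'return': <class 'int'>}
print(double(21))              # 42
```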
#7) Differences In Expressions
Unlike regular functions, a lambda function only supports a single expression. Though the lambda function can be expressed in multi-line code as seen in example 5, it still remains a single expression.
#8) Differences In Statements
In Python, statements are actions or instructions that can be executed and are mostly made up of reserved keywords like return, raise, if, for. In Python, we can’t have statements inside expressions. This is why a lambda function can’t support statements as it is an expression in itself.
A lambda function doesn’t support a return statement, but automatically returns the result of its expression. A regular function on the other hand requires a return statement in order to return a value. A regular function without a return statement automatically returns None.
Let’s check this claim in the below example.
Example 7: Check the return object of a regular function that has no return statement.
>>> def a():
...     pass # pass statement. Does nothing.
...
>>> b = a() # execute our function and assign the return value.
>>> b == None # compare to 'None'
True
Similarities Between Lambda Functions And Regular Functions
Lambda function is just a function after all. Though they have some differences that we saw above, they also have some similarities. In this section, we shall take a look at some of their similarities.
#1) Execution By The Compiler
In a nutshell, a lambda function and a regular function with a single return statement have the same bytecode generated by the compiler during execution.
As we mentioned above, a lambda function is just a function. We can verify this claim with the built-in type() function as seen below.
Example 8:
>>> add = lambda a, b: a + b
>>> type(add)
<class 'function'>
To verify the similarities in bytecode, we can use the dis module as seen below.
Example 9:
>>> add = lambda a, b: a + b # define our lambda function
>>> def add2(a,b): return a + b # define our regular function
...
>>> import dis # import the dis module
>>> dis.dis(add) # introspect the lambda function bytecode
  1           0 LOAD_FAST                0 (a)
              2 LOAD_FAST                1 (b)
              4 BINARY_ADD
              6 RETURN_VALUE
>>> dis.dis(add2) # introspect the regular function bytecode
  1           0 LOAD_FAST                0 (a)
              2 LOAD_FAST                1 (b)
              4 BINARY_ADD
              6 RETURN_VALUE
>>>
#2) Arguments
As we saw above, a lambda function is just a function. Hence, it supports the same ways of argument passing as regular functions. Check example 2 that demonstrates arguments in lambda functions.
#3) Decorators
A decorator is a feature in Python that allows us to add new functionality to an object without tampering with its original structure. This feature is very common in practice, so there is no doubt that a lambda function will support such a feature.
Though it supports decorators, it doesn’t use the @ syntax for decoration, but simply calls the lambda function.
Example 10:
# define our decorator that simply prints the function's name and arguments
def print_args(func):
    def wrap(a, b):
        print("Arguments for :", func.__name__)
        print("Args 1: ", a)
        print("Args 2: ", b)
        return func(a,b)
    return wrap

# Add decorator to regular function using the @ prefix
@print_args
def add(a,b):
    return a + b

# Add decorator to lambda function through function call.
add2 = print_args(lambda a, b: a + b)

if __name__ == "__main__":
    add(4,5) # execute decorated regular function
    add2(6,3) # execute decorated lambda function
Output
NB: Though the lambda function is assigned to a variable (bad practice), it is not bound to that variable. Hence, it is identified by <lambda>.
#4) Nesting
A lambda function can be nested because it is just a function. However, it is rare due to its limitations (single expression, no statements, etc).
Whenever a simple single expression is needed, a lambda function can be used rather than binding a variable with the def statement. But still, we recommend using the def statement for readability and debugging purposes.
Example 11:
def enclosing_func(a):
    # nesting a lambda function
    return lambda b: a + b

if __name__ == "__main__":
    # execute enclosing function which returns the nested lambda function.
    add3 = enclosing_func(3)
    print(add3(32))
    print(add3(2))
Output
When To Use Lambda Functions
As stated above, a lambda function is a function without a name. That being said, the most important reason for using lambda functions is when we only need that function once and it only requires a single expression.
Mostly these lambda functions are used in Higher-order built-in functions like map(), filter(), reduce(), sorted(), min(), etc as arguments or key attribute’s value.
Example 12: Sort a list of integers using the sorted() function.
>>> l = [4,5,3,7,9,1] # define our list of integers
>>> sorted(l) # sort in ascending order
[1, 3, 4, 5, 7, 9]
>>> sorted(l, reverse=True) # sort in descending using 'reverse'
[9, 7, 5, 4, 3, 1]
>>> sorted(l, key=lambda x: -x) # sort in descending using 'key' attribute and lambda
[9, 7, 5, 4, 3, 1]
Example 13: Filter out only the odd numbers from a list.
>>> l = [1, 5, 4, 6, 0, 8, 11, 13, 12]
>>> list(filter(lambda x: x%2 != 0, l))
[1, 5, 11, 13]
NB: List comprehensions and generator expressions are arguably preferred in many cases. They are more readable and arguably faster.
Example 14:
>>> l = [1, 5, 4, 6, 0, 8, 11, 13, 12]
>>> [x for x in l if x%2 != 0]
[1, 5, 11, 13]
Note that this doesn’t mean that we can’t use lambda functions in higher-order functions, what we are trying to point out is that listcomps and generator expressions are preferred in many cases.
When Not To Use Lambda Functions
Lambda functions have their limitations and best practices. In this section, we will elaborate on a few.
#1) When Binding Is Required
In the PEP8 Guide, it is recommended to use a def statement instead of assigning a lambda function to a variable. Lambda functions are meant to be used directly and not saved for later usage.
Example 15:
>>> def test(x): return x*2 # Correct
...
>>> test(3)
6
>>> t = lambda x: x*2 # Wrong
>>> t(3)
6
In order to execute a lambda function immediately, we can use the IIFE as seen in example 1 above.
#2) When Type Annotations Are Required
We may be tempted to apply annotations like in normal functions. However, lambda functions do not support type annotations. If it is required, then we are better off defining our functions with the def statement.
Example 16: Apply type annotations in a lambda function.
>>> lambda x: int: x+2
  File "<stdin>", line 1
SyntaxError: illegal target for annotation
As we can see above, a SyntaxError is raised if we try to use type annotations in a lambda function.
#3) When Statements Are Required
We already saw above that lambda functions don’t support statements. It is very common too, for example, to raise exceptions in a function whenever we identify a problem. Unfortunately, a lambda function can’t incorporate a raise statement.
If your function needs to work with statements, then define it with the def statement instead of a lambda statement.
Example 17: Raise an exception in a lambda function.
>>> lambda: raise Exception("Wrong move")
  File "<stdin>", line 1
    lambda: raise Exception("Wrong move")
            ^
SyntaxError: invalid syntax
Frequently Asked Questions
Q #1) Can Lambda functions support type annotations?
Answer: Unfortunately, lambda functions also known as anonymous functions don’t support type annotations. If type annotation is required, then it is recommended to use the def statement to define a regular function.
Q #2) What type does Lambda return in Python?
Answer: Just like regular functions, the return type of a lambda function depends on what it computes in its body. It could be int, float, data structures(list, tuple, dictionary), and even functions.
Q #3) Do Lambda functions support ‘print’?
Answer: In Python 2, lambda functions don’t support ‘print’ just because it is a statement. However, in Python 3, we no longer have a print statement but a print function. So, the lambda function very much supports ‘print’ in Python 3.
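A quick check of that claim in Python 3, using an immediately invoked lambda:

```python
# In Python 3, print is a function, so calling it is a valid single
# expression for a lambda body. (In Python 2, `print` was a statement
# and this raised a SyntaxError.)
result = (lambda: print("printed from a lambda"))()

# print() itself returns None, so the lambda returns None.
```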
Conclusion
In this tutorial, we looked at lambda functions. We learned that they are also called anonymous functions, lambda expressions, etc.
We differentiated a lambda function from a Python regular function and also saw the similarities between them. Lastly, we examined cases when and when not to use a lambda function.
Location Is Everything: Getting Into ArcGIS in 5 Minutes
all data has a location if you look hard enough to all the thoughts in your brain (yet).
They don’t know “all that’s on your mind”. But analyzing your GPS data, search history (they own Instagram, remember) and yes, listening through your phone’s microphone is enough to understand “what’s on your mind enough that you bother talking and writing about”.
That’s why you see strangely specific ads for something you talked about with your friend but never searched for. This is context at work.
And as any real estate agent would tell you, there’s one type of context that usually trumps all others: location.
Geospatial Data
Everything (that we know of, at least) physical exists somewhere in space. If the object or topic you’re studying is able to be geotagged & recorded on a map, it’s often surprisingly helpful to do so.
Google Maps gives you a handy estimate of traffic on your route: It’s able to do this by tracking (hopefully anonymized) GPS data from phones it’s installed on.
By comparing the distance between consecutive location update “pings” and the time they were issued, it can discern in real-time when cars are backed up on a highway they normally travel 60mph on.
However, working with GPS data seems like an awfully big field to get into, technically speaking. Fortunately, other people have already done most of the heavy lifting regarding coordinate-friendly data structures and packaging:
ArcGIS is one of the more robust geospatial analysis software libraries out there. It has a host of features from data storage, server integration, in-built deep learning and smooth visualizations.
It is, without a doubt, the next clear step on our journey to understand why hexagons are so prevalent in nature, and why they’re so useful for geographic data modeling.
If that seems like a stretch, here’s a module diagram straight from their tutorial page. Tessellation is just another word for “efficient Euclidean data compression”.
They have various pricing plans for their full desktop & business suites, but the easiest way forward is to create a free Developer account to use with the API and Python modules.
They have a number of setup options, but I elected for the clean Anaconda command line installation:
conda install -c esri arcgis

If you're like me and enjoy living dangerously, make sure to install all new packages directly in your main environment.
As is tradition for messing around with a new data science toolkit, open up a Jupyter notebook and install a good-looking theme, as only the most terrifying of beasts code against a bright white background.
Data in Frame
Their tutorial is quite comprehensive. We’ll start by importing the base library and instantiating a GIS object using our account we made earlier.
from arcgis.gis import GIS
gis = GIS("", "username", "password")
This gis object will be the gateway to access most of the module's content. We can load a default satellite overview by calling gis.map('City, State'):
You can click to drag and zoom in the cell output, which is neat. We could draw all sorts of coordinates and lines on this map. But it doesn’t really feel like data science without a Dataframe (or at least some matrix serving the same purpose).
They've actually developed a Spatially Enabled Dataframe that extends Pandas. Tremendous.
# get your imports in order:
import pandas as pd
from arcgis.features import GeoAccessor, GeoSeriesAccessor
Already we find ourselves in familiar territory: assigning a variable to gis.content.get('item_string') lets us grab data from publicly-hosted map-layer items and store it in-memory much as regular Pandas Dataframes do. They directly extend Pandas, so go ahead and try out the regular operations: slicing and subselection formats are quite the same.
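Since the Spatially Enabled DataFrame extends Pandas, ordinary DataFrame operations carry over unchanged. A plain-Pandas sketch (the columns here are hypothetical; a real spatially enabled frame also carries a SHAPE column):

```python
import pandas as pd

# Hypothetical layer attributes, standing in for a spatially enabled frame.
df = pd.DataFrame({
    "name": ["Park A", "Park B", "Park C"],
    "acres": [12.0, 48.5, 3.2],
})

# Regular Pandas operations work unchanged: boolean masks, slicing, selection.
big_parks = df[df["acres"] > 10]
print(big_parks["name"].tolist())  # ['Park A', 'Park B']
```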
Cellular Mapping
Rasters are essentially a cell grid where data can be stored in each cell.
A honeycomb is a raster, and fishnet stockings are 3-dimensional raster manifolds. Math is consistent, even when it’s not.
Let’s draw an image from the NASA-USGS Landsat-8 satellite and unpack the first layer:
Calling l8_lyr.properties['description'] tells us that it's an image analysis service covering most of the world's landmass at 30-meter resolution, and can be used for purposes such as vegetation, agriculture and boundary studies.
ArcGIS covers the intermediate functions to compute raster methods directly on these map layers, saving a ton of memory.
Let's apply some to the nyc map we made earlier. The below loop operates on the cell output since we call nyc.add_layer().
We essentially just loop through a list of raster functions (agriculture, bathymetric, infrared etc) from the Landsat’s first image layer and cycle through adding and removing them from the map.
The Earth Opens Up Before You
You’re now working with ArcGIS data. There’s a lot of other things to do with these tools; you could integrate live ocean buoy data feeds into a live wave-height map or conduct deep learning on Yellowstone wolf movements.
To search for more data, you may want to check out the API’s search functions:
my_content = gis.content.search(query='california',
item_type="Feature Layer",
max_items=20)
These objects can generally be explored in the same way as above. Next time we'll look into hexagonal rasters.
CURLOPT_POSTFIELDSIZE explained
NAME
CURLOPT_POSTFIELDSIZE - size of POST data pointed to
SYNOPSIS
#include <curl/curl.h>
CURLcode curl_easy_setopt(CURL *handle, CURLOPT_POSTFIELDSIZE, long size);
DESCRIPTION
If you want to post data to the server without having libcurl do a strlen() to measure the data size, this option must be used.
If you post more than 2GB, use CURLOPT_POSTFIELDSIZE_LARGE.
DEFAULT

-1
PROTOCOLS

HTTP
EXAMPLE
CURL *curl = curl_easy_init();
if(curl) {
  const char *data = "data to send";

  curl_easy_setopt(curl, CURLOPT_URL, "");

  /* size of the POST data */
  curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long) strlen(data));

  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, data);

  curl_easy_perform(curl);
}
AVAILABILITY
RETURN VALUE
Returns CURLE_OK if HTTP is supported, and CURLE_UNKNOWN_OPTION if not.
SEE ALSO
CURLOPT_POSTFIELDS(3), CURLOPT_POSTFIELDSIZE_LARGE(3),
This HTML page was made with roffit.
PROBLEM LINK:
Practice
Div-2 Contest
Div-1 Contest
Author & Editorialist: Vasyl Protsiv
Tester: Istvan Nagy
DIFFICULTY:
Simple
PREREQUISITES:
Number theory
PROBLEM:
For an array a of size N let’s construct array B of size N as follows:
B_i = max \space j such that A_i divides A_j. (1 \le j \le N)
Given array B that was constructed from some array A with 1 \le A_i \le 4 \cdot 10^6. Find any suitable array A.
QUICK EXPLANATION:
A_i = N - B_i + 1 for each i from 1 to N will work.
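As a sanity check (this script is not part of the original editorial), a brute-force Python sketch can build B from a candidate A and confirm that A_i = N - B_i + 1 reproduces the same B on small random inputs:

```python
import random

def build_b(a):
    """B_i = max j such that a[i] divides a[j] (1-based indices)."""
    n = len(a)
    b = []
    for i in range(n):
        best = 0
        for j in range(n):
            if a[j] % a[i] == 0:
                best = j + 1
        b.append(best)
    return b

random.seed(1)
for _ in range(100):
    n = random.randint(1, 8)
    a = [random.randint(1, 20) for _ in range(n)]
    b = build_b(a)                      # a valid B, by construction
    restored = [n - bi + 1 for bi in b] # the editorial's formula
    assert build_b(restored) == b, (a, b, restored)
print("formula verified on random tests")
```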
EXPLANATION:
The first observation is that B_i \ge i for each i, because A_i divides A_i.
First subtask:
From the first observation we have that the only possible array B is [1, 2, ..., N] so all we have to do is to ensure that A_j is not divisible by A_i for j > i. One of the possible solutions is to make any decreasing array A. Also we can make an array A of pairwise distinct prime numbers. There are many other solutions.
Full solution:
Consider the following directed graph: for each i from 1 to N there is an edge from i to B_i. In order to simplify the explanations we will use indices in the array and corresponding vertices in the graph interchangeably.
The key observation is that there are no simple paths of length 2.
Proof: assume there is a path i -> j -> k (j = B_i, k = B_j), such that i \neq j \neq k. By definition of array B we have that A_i divides A_j and A_j divides A_k, therefore A_i divides A_k. From the other side, for each m > j we have that A_i does not divide A_m. But this contradicts the fact that k > j (from the first observation and j \neq k) and A_i divides A_k.
Thus vertices can be divided into two types:
- B_i = i
- B_i > i and B_{B_i} = B_i
We will call two vertices u and v equivalent if B_u = B_v. Note that all vertices can be partitioned into equivalence classes.
We claim that it’s always possible to create array A such that for each equivalence class all elements have the same value A_i within its class.
Proof:
Consider any valid answer A. Let’s define array A' as follows: A'_i = A_{B_i}, so now each element has same value A' as value A in rightmost element from its equivalence class. Clearly B_i can’t become smaller because A'_i still divides A'_{B_i}. And B_i can’t become larger because we only changed A'_i for vertices of second type, and if there were some indices i and j such that j > B_i and A'_i divides A'_j (notice that A_i divides A'_i) then there would be index k = B_j that k > B_i and A_i divides A_k. This contradicts the fact that A was some valid answer, so we can conclude that A' is also valid answer.
Hence now we can ignore vertices of the second type, and only assign values to entire equivalence classes.
Now clearly if for each i of the first type A_i doesn’t divide any A_j such that j > i then for each i A_i doesn’t divide any A_j such that j > B_i, because otherwise A_{B_i} (which is vertex of first type) would divide A_j.
Thus we have reduced our full problem to the problem from the first subtask. Now let’s assign any decreasing sequence of values to our classes (assuming numbering classes by the index of rightmost element). Also, we can assign pairwise distinct prime numbers, this way correctness is even more obvious. There are also many other solutions.
Time complexity is O(N) to assign decreasing sequence or O(N + maxA \cdot log(log(maxA))) to assign prime numbers.
SOLUTIONS:
Setter's Solution
#include <bits/stdc++.h>
using namespace std;
using LL = long long;
using ULL = unsigned long long;
using VI = vector<int>;
using VL = vector<LL>;
using PII = pair<int, int>;
using PLL = pair<LL, LL>;
#define SZ(a) (int)a.size()
#define ALL(a) a.begin(), a.end()
#define MP make_pair
#define PB push_back
#define EB emplace_back
#define F first
#define S second
#define FOR(i, a, b) for (int i = (a); i<(b); ++i)
#define RFOR(i, b, a) for (int i = (b)-1; i>=(a); --i)
#define FILL(a, b) memset(a, b, sizeof(a))

void dout() { cerr << endl; }

template <typename Head, typename... Tail>
void dout(Head H, Tail... T) { cerr << H << ' '; dout(T...); }

void solve() {
    int n;
    cin >> n;
    VI b(n);
    FOR(i, 0, n) {
        cin >> b[i];
        b[i]--;
    }
    VI a(n);
    FOR(i, 0, n) {
        a[i] = n - b[i];
    }
    for (auto x : a) {
        cout << x << " ";
    }
    cout << '\n';
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) solve();
    return 0;
}
Tester's Solution
#include <bits/stdc++.h>
#define forn(i, n) for (int i = 0; i < (int)(n); ++i)
#define fore(i, a, b) for (int i = (int)(a); i <= (int)(b); ++i)
using namespace std;

//RESTORE
int main(int argc, char** argv) {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cout.precision(10);
    cout << fixed;

    const int MAX = 1'500'002;
    vector<bool> p(MAX, true);
    vector<int> pr;
    fore(i, 2, MAX-1) {
        if (p[i]) {
            int j = 2 * i;
            while (j < MAX) {
                p[j] = false;
                j += i;
            }
            pr.push_back(i);
        }
    }

    int T;
    cin >> T;
    forn(tc, T) {
        int N, b;
        cin >> N;
        vector<int> B(N), A(N);
        forn(i, N) {
            cin >> b;
            cout << pr[b] << ' ';
        }
        cout << endl;
    }
    return 0;
}
Before Python 2.6 there was no explicit way to declare an abstract class. It changed with the abc (Abstract Base Class) module from the standard library.
abc module
The abc module allows us to enforce that a derived class implements a particular method, using a special @abstractmethod decorator on that method.
from abc import ABCMeta, abstractmethod

class Animal:
    __metaclass__ = ABCMeta

    @abstractmethod
    def say_something(self):
        pass

class Cat(Animal):
    def say_something(self):
        return "Miauuu!"
>>> a = Animal()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class Animal with abstract methods say_something
An abstract method can also have an implementation, but it can only be invoked with
super from a derived class.
class Animal:
    __metaclass__ = ABCMeta

    @abstractmethod
    def say_something(self):
        return "I'm an animal!"

class Cat(Animal):
    def say_something(self):
        s = super(Cat, self).say_something()
        return "%s - %s" % (s, "Miauuu")
>>> c = Cat()
>>> c.say_something()
"I'm an animal! - Miauuu"
There are more features provided by the abc module, but they are less common in use than those described in this post. For details check the documentation.
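The snippets above use the Python 2 __metaclass__ spelling. For reference, the same class can be written in Python 3 with the standard-library abc.ABC base class (a small sketch):

```python
from abc import ABC, abstractmethod

class Animal(ABC):
    @abstractmethod
    def say_something(self):
        return "I'm an animal!"

class Cat(Animal):
    def say_something(self):
        # The abstract method's body is still reachable via super().
        return "%s - %s" % (super().say_something(), "Miauuu")

instantiation_failed = False
try:
    Animal()  # abstract classes still refuse to instantiate
except TypeError:
    instantiation_failed = True

print(instantiation_failed)           # True
print(Cat().say_something())          # I'm an animal! - Miauuu
```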
More Pythonic Approach?
Such explicit declaration provided by the abc module may be considered not very pythonic. Because of Python's dynamic nature there are few things being checked during compilation, and there is no advanced type checking at that stage. For that reason, we could declare an abstract method by just raising a NotImplementedError.
class Animal:
    def say_something(self):
        raise NotImplementedError()
Additionally, a class could follow some naming conventions, e.g. prefixing a class name with Base or Abstract.
Summary
Despite additional complexity, I find the abc module quite useful: it provides a slightly efficient way to communicate the purpose of the code due to its explicitness, along with better flexibility due to the possible implementation inside the abstract method.
As you recall from the previous blog post, I’d installed Unity and JetBrains on my Fedora 32 computer via Flatpaks. I was going to use them for the Unity Multiplayer course I was taking on Udemy. Unfortunately it was an immediate fail and in lesson one after they have me install a new inputs library and restart Unity, it would always crash upon loading the file. I’m currently installing Unity 2020.1 on my Windows computer where I don’t expect to have that issue. Assuming I don’t, then it’s a big fat nope on using Unity on Fedora via Flatpak (at least for this class). Which, to be fair, is not on their supported OS list – which is only Ubuntu and CentOS 7. (And the latter for movie-making)
Unity and JetBrains Rider on Fedora via Flathub
As I mentioned last year in my 2019 in Programming post, I created a bunch of 2D games in Unity by following along with the Gamedev.tv classes. I would watch the videos on Linux and jump over to my Windows computer for the programming, learning how to use SourceTree and Microsoft Video Studio in the process. But for some reason, going back and forth with the KVM when running Unity would sometimes freeze up the Windows computer. So when I saw someone on Fedora Planet running Unity Hub, I thought I’d see if there was a Flatpak – and there IS! Also, I’ve fallen in love with JetBrain’s Pycharm, so I thought I’d go ahead and use their game dev IDE, Rider. (There’s a Flatpak for that, too!) So, let’s see how well this works!
Apparently if you go this route, you have to handle licensing first. Just clicked on manual activation. That led me to login to Unity.com with my already-extant Unity creds. After answering some questions about how much money I make with Unity, they gave me a license file that I attached to Unity hub. Then I went to the General section on the left there to tell it where to install Unity versions. Once that was done, the hub more or less looked like I remembered it on Windows.
I already knew, from my previous forays, that I would need to add a Unity install, so it was off to the Installs section. This is what the selection looked like:
I don’t know if this is how it is on Windows now, too, since it’s been a long time since I worked in Unity. But, having found myself with a new version of Unity every time I signed in – I’m glad they have LTS versions, now. The CentOS of Unity, if you will. I went and checked the next course I want to do, GameDev.tv’s Unity Multiplayer class (here on Udemy and here on Gamedev.tv), and they want 2020.1. So I’ll install that version. For some reason, it wouldn’t let me download that version – it complained it would take up too much space (even though I have 900ish GB free and it said it would take up 10GB). But it decided it could install the LTS version. So, in the interest of seeing if it could open and run the games I’d previously developed, I just went with the LTS version for now. It quit out and complained about a corrupted download. But I don’t know where it was downloading to, because nothing was in the folder I told it to download to. If it is downloading somewhere else first, like /tmp – maybe that would explain the issues.
Eventually I tell it not to do any runtimes and I keep trying to install a bunch of times. Like a reverse Murphy’s law – as soon as I start posting about the problem on the /r/unity3D subreddit it starts working.
Despite my inability to install Unity 2020 over two days (edit: after literally 10 tries, restarting Unity Hub and restarting my computer, I finally got Unity 2020 to install), at least it ran my code from last year’s class ok, including upgrading it to the LTS version (which came out after I last worked on it). When I hit play it also ran reasonably well – didn’t seem to be at some incredibly low FPS. (Of course, this is a 2D game without lots of resources, but that’s still encouraging). It wanted me to have VS Code running. I think I also saw that on Flathub, but I decided to see if I could somehow get it to work with Rider in Flatpak form.
I launched Rider for the first time and it started off by asking for my preferences. First off was the UI theme:
And what kind of hacker would I be if I didn’t go with dark? Then it was time for the color schemes:
If I’d used a bunch more Visual Studio, I’d go with the middle selection. I’m a big fan of the Dracula themes I’ve been using in various editors, but I don’t think that’s the same as their Darcula theme since that mentions Intellij. So I just went with Rider Dark.
I didn’t have any particular preference here, so I just went with Visual Studio since I figured that would probably match shortcuts that the GameDev.tv guys would use. I decided not to do a Desktop Entry since, a you can see at the top of the next screenshot, it already seemed to have an icon:
I don’t have the environment installed for C#.
A quick bit of Googling seems to imply that Unity is using Mono for their C#, so I will try and get that installed first. After a bit of searching, I installed the mono-complete package on Fedora. Then it was time to choose Plugins.
I didn’t do any featured plugins. Afterwards I chose the free 30 day evaluation (maybe there isn’t a community version like Pycharm?) and decided to open my Block Destroyer project.
It didn’t automatically install the Rider Unity plugin, but I blame that partially on flatpak and partially on Fedora. (Everyone assumes Linux Unity dev is on Ubuntu or Centos 7) In the end I couldn’t quite figure out how to connect the two, but I think it’ll be easy enough to just load the files after I create my project. I did test that editing it in Rider will eventually recompile it in Unity once it realizes that the file has changed on disk. So I’m going to give this a shot for that new GameDev.tv class. I’ll report back on whether it’s worth it or if you should just stick to Windows (or Ubuntu) if you’re doing Unity game development.
PyGame 2.0 is out!
I.
Fedora 33 is out!
It came out this Tuesday and last night I updated my laptop. The only thing I had to do for the upgrade was remove a python3-test package. Since I’m using virtual environments, for the most part I don’t care which Python packages the system has. So that was a nice, easy upgrade! Good job Fedora packagers and testers! Speaking of Python, it’ll be nice to start upgrading my projects to Python 3.9. (Fedora 33 includes the latest programming language versions as part of its “First” values)
Probably the next upgrade will be Scarlett’s laptop since she has a school-provided Chromebook for school.
What I’ve been up to in Programming: Python
Selenium
Spent a bunch of today trying to get SSL working correctly
And failed and left my site offline most of the day. So I'll have to try some stuff on the side and give it another shot.
Last Few Days in Programming: Lots of Python
Been quite busy with Python, keeping me away from other pursuits, like video games. (Although the kids have been requesting Spelunky 2 whenever it’s time to hang out with them)
Extra Life Donation Tracker (eldonationtracker)
For my Extra Life Donation Tracker I pushed out a new release, v5.2.2. A user of my program (man, I never get tired of how awesome that is!!) had wholly anonymous donors which was causing an issue I thought I’d handled. But it turns out that the folks that run the Donor Drive API are a little inconsistent in how they handle that in the donor endpoint vs the donations endpoint. So I pushed that fix out and now things should be dandy for game day (about 2 weeks away!!)
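This is not the actual eldonationtracker fix, just a sketch of the defensive pattern for inconsistent API responses; the displayName field and record shape are assumptions for illustration:

```python
def donor_display_name(record: dict) -> str:
    # Fully anonymous donors may omit the name field entirely,
    # or carry None, depending on which endpoint returned them,
    # so treat both cases the same way.
    name = record.get("displayName")
    return name if name else "Anonymous"
```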
Automating some Boring Stuff
In these COVID-19 times I have a problem – the YMCA where I’m a member has instituted signups for swimming. But you have to sign up EXACTLY 48 hours before you want to swim. Since I’m swimming every other day, that means that sign up time is when I’m swimming. For a while I would just wait until after my swim to sign up. But it’s a VERY popular time. So I started taking my phone to the pool to sign up. There are many negatives to this:
- It takes ~5 minutes or so with my phone and LTE connection (out of a 45 minute session which is already shorter than I’d normally spend in there)
- It uses data
- I risk dropping my phone into the pool or into a pool of water around the pool
- It means my phone is right there in my gym bag where someone could steal it (although that would give me a great excuse to buy a new one…)
So, while I was swimming today (best source of thoughts other than the shower), I realized I could probably use Selenium to automate this. I’ve never used it before, but I’d heard a lot about it. I knew that Al Sweigart talked about it in his book, Automate the Boring Stuff with Python. I bought a copy of the first edition, but I wanted to make sure I was up to date on the latest stuff so I went to that link I just shared where he has it available to read for free. He’s using the model Corey Doctorow used to use where it’s there for free, but you can also buy it and help him and the publisher. Also, he has a class on Udemy that covers the same topics. Anyway, I spent all morning (literally) digging around in my browser’s inspector mode to get all the data I needed to use it to automate the sign up. I believe I’ve got it working (I’d already signed up for my next swim session, so I had to pretend to sign up for another time, but you can only sign up for one time per day). I set up a cron job and what I’m going to do is let it sign me up and I’ll double-check (safety valve in case it doesn’t work). I’m not ready to share this code at this point – mostly because I’d prefer if it could keep working. However, it was a great experience in debugging and in how web scraping is just as annoying now as when I first learned about it somewhere around 15 years ago with O’Reilly books with titles like “Google Hacking” and “Flickr Hacking”.
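The general shape of that automation looks something like this; the URL, element IDs, and helper names are all hypothetical, since I'm not sharing the real script:

```python
from datetime import datetime, timedelta

def signup_opens_at(swim_time: datetime) -> datetime:
    # Slots open exactly 48 hours before the session starts.
    return swim_time - timedelta(hours=48)

def book_slot(url: str, slot_id: str) -> None:
    # Hypothetical Selenium flow; the real element IDs come from
    # digging around in the browser's inspector, as described.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        driver.get(url)
        driver.find_element(By.ID, slot_id).click()
        driver.find_element(By.ID, "confirm").click()
    finally:
        driver.quit()
```

A cron entry then fires book_slot at the time signup_opens_at computes for the next session.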
raspigaragealert
As I mentioned in Switching up the hardware for the Garage IOT, I recently moved my Raspberry Pi-powered garage alert software from a Raspberry Pi 1B to a Raspberry Pi Zero W. The Raspberry Pi 1B is now in the office providing temperature and humidity data – quantifying just HOW HOT it is in here. This led me to have a renewed interest in the program. So I went ahead and created another config file in order to make it more generally usable to folks who aren’t me. Then I also created documentation. The documentation still needs a bit more work, but it could help others. Also, since it’s Hacktoberfest, someone made a PR for my code!! If this isn’t the first PR someone’s made against my code in a project in which they were co-authors, it’s at least one of the first. So that’s exciting!
Python Morsels
Finally, for this time period there was the most recent Python Morsels exercise. I fell a little behind with some other projects (and Spelunky 2), so my most recent assignment was to "create a ProxyDict class which offers an immutable dictionary-like interface that wraps around a given dictionary." The first bonus was to add support for len, items, values, and get. The second bonus was to implement iteration and a nice repr string. The final bonus was to support equality.
At first I was a bit lost. I tried a naive solution where I just passed the keys of the dictionary I received in the __init__ method, but I got stuck on figuring out __getitem__. So then I thought I needed to use abstract base classes. I'd seen them in some book I read in the past few months, but I couldn't remember what they were called. So I clicked on Trey's first hint, which showed that I was right and reminded me of the term "abstract base class". This was not a "gimme", for there is no Dictionary in collections.abc. So after looking at the table for a while, I thought Collection would give me a lot of what I wanted. But it was still missing a bit, so I looked at Mapping, which was probably the best thing to use because it was immutable and inherited from Collection. Unlike other problems in Python Morsels, this is a very, very esoteric part of Python, but it was interesting to learn how to implement an ABC; particularly the fact that it will let you know which dunder methods you're missing when you try to create a class. Turns out that by doing this, I got bonuses 1 and 3 (and part of 2) for free!
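A minimal sketch of that approach (not necessarily the exact code I submitted): inheriting from collections.abc.Mapping means you only write three dunder methods, and the ABC derives the rest.

```python
from collections.abc import Mapping

class ProxyDict(Mapping):
    def __init__(self, mapping):
        self.proxy_dictionary = mapping

    # Mapping only requires these three; it fills in get, keys,
    # items, values, __contains__, __eq__, and __ne__ from them.
    def __getitem__(self, key):
        return self.proxy_dictionary[key]

    def __iter__(self):
        return iter(self.proxy_dictionary)

    def __len__(self):
        return len(self.proxy_dictionary)
```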
As for the __repr__ method – I'm a pro at those at this point. I kept thinking there must be some way to cheat and use the one from the dictionary I was proxying, but I didn't know how. So here is what I ended up with:

def __repr__(self):
    center_list = []
    for key, value in self.proxy_dictionary.items():
        if isinstance(key, int):
            center_list.append(f"{key}: '{value}'")
        else:
            center_list.append(f"'{key}': '{value}'")
    center = (', '.join(center_list))
    return "ProxyDict({" + center + "})"
I don’t think I have the prettiest syntax for my repr method. I was trying to be elegant and use a list comprehension. That looked like:
center_list = [f"{key}: '{value}'" for key, value in self.proxy_dictionary.items()]
center = (', '.join(center_list))
But without using a lambda or something, I couldn't figure out how to implement the if/else logic in the list comprehension.
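For what it's worth, a conditional expression (no lambda needed) can carry the if/else inside the comprehension; just a sketch of the idea with a stand-in dictionary:

```python
proxy_dictionary = {1: 'one', 'b': 'bee'}

# A conditional expression picks the format per item inline
center_list = [
    f"{key}: '{value}'" if isinstance(key, int) else f"'{key}': '{value}'"
    for key, value in proxy_dictionary.items()
]
center = ', '.join(center_list)
```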
What I learned from Trey’s Solution
First of all, when I said this was an esoteric thing, I wasn’t kidding. There’s actually already a way to do this without any work:
from types import MappingProxyType as ProxyDict
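To see what the standard library version buys you, here's a quick demonstration: the proxy rejects writes but remains a live view of the underlying dict.

```python
from types import MappingProxyType

source = {'a': 1}
proxy = MappingProxyType(source)

assert proxy['a'] == 1
try:
    proxy['b'] = 2          # the proxy itself is read-only
except TypeError:
    pass
source['b'] = 2             # ...but it stays a live view of the source
assert proxy['b'] == 2
```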
Thanks mostly to Trey’s problem sets I knew that I wanted to use yield or that I probably wanted to do a generator. So I thought my solution was pretty good. But it turns out there are two simpler ways I could have done it. Since I’m proxying a dict, which already has an iter method, I could have done:
def __iter__(self):
    yield from self.proxy_dictionary
or I could have done:
def __iter__(self):
    return iter(self.proxy_dictionary)
I actually think the first one is more readable. For the repr I kept thinking there must be some easier way to do this. Because the dictionary already has a repr. But I thought that would result in something like ProxyDict(dict(stuff)); apparently not. Because this is Trey’s solution:
def __repr__(self):
    return f"ProxyDict({repr(self.proxy_dictionary)})"
Although, that locks in the class name and causes issues if someone wants to do the same thing with our class. So the better way is:
def __repr__(self):
    return f"{type(self).__name__}({self.proxy_dictionary!r})"
The !r conversion is the same as calling repr(self.proxy_dictionary).
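A quick check of that difference; FrozenProxy here is a hypothetical subclass I made up to show that the dynamic version picks up the subclass's name automatically:

```python
class ProxyDict:
    def __init__(self, mapping):
        self.proxy_dictionary = mapping

    def __repr__(self):
        # type(self).__name__ resolves to the *runtime* class name
        return f"{type(self).__name__}({self.proxy_dictionary!r})"

class FrozenProxy(ProxyDict):
    pass  # inherits __repr__ but reports its own name
```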
Well, time to go check out that Pull Request on raspigaragealert!
First 24 Hours with Podcast Republic:
Evaluating moving from Doggcatcher to Podcast Republic
I’ve been using Doggcatcher for YEARS – ever since I first got a smartphone something like 8 or so years ago. I started using Doggcatcher on Dan’s recommendation. One of the best features it’s had is the ability to speed up podcasts without chipmunking the voice. (I think that came a year or so after I started using it). Recently I’ve been a bit annoyed at Doggcatcher, particularly with podcasts from the EarWolf network (although there may be other networks with the same behavior). Every time Doggcatcher checks for updates, all the episodes from EarWolf will disappear and redownload. Until it is done, I can’t listen to the episode.
Neil deGrasse Tyson’s podcast is also annoying in that if a new episode comes out before I’ve finished the previous one, it’ll overwrite it so that I now have two copies of the same file. This makes it more stressful than it needs to be when I’m trying to choose the next podcast to listen to. So I started asking folks for recommendations. Dan recommended Podcast Republic to me. I don’t know if it’ll fix things for me because Dan was using it because Doggcatcher wasn’t working well for him for authenticated feeds, but I’m hopeful.
It does have some features that I didn't know I wanted: syncing across devices (which would have helped when I changed phones) as well as being able to listen on the web and sync (not something I'd use a LOT, but might use a bit). So I'm going to try it out and let you know what I think.
Brave on Windows Part 1
This post continues a series on exploring new browsers:
I’ve been using Vivaldi on Windows for about four months now. As I keep saying, my browser needs on Windows aren’t too huge. Mostly I access youtube, the Stardew Valley Farm uploaded, and Google Docs. But I want to keep checking out new browsers on Windows first precisely since they are so important on my Linux computer. I don’t want to mess up a good thing there.
So let’s start off with Brave’s new user tour:
Interestingly it doesn’t see Vivaldi as a Browser to import from:
Now onto the important part of what makes Brave, Brave:
Intelligently they tell you how to turn it off if it’s breaking sites, rather than let users think the browser’s rendering is broken. We’ll see how well it works for the sites I visit – probably just fine on Windows.
Now, this I REALLY like. I guess since everyone else either owns search (Google and Microsoft) or gets paid by the search engine (Mozilla getting Google payments) I never see this. But I think this is the type of transparency that browsers should be providing! Not surprising since one of the Brave founders came from Mozilla.
Rewards is a weird name for this, since I’m not getting paid or any items. But I do like the idea – you earn tokens that equate to money that gets paid out to the websites you want to support. Here’s a little more about it:
I’m not going to sign up now because I don’t really visit enough sites on this computer and I just want to get on with it. Here’s the page I get after that:
Now, it may look suspicious to you that it claims to have already blocked some trackers when I’ve only gone through their welcome page. I, too, was suspicious at first. But then I remembered when I imported settings from Chrome, it took me to some Adobe page. So new tabs always look like this. I opened a new tab without doing anything else:
Looks like they make money from Cryptocurrencies? However, true to what you’d expect, unlike Vivaldi it doesn’t pre-populate your new tab with a bunch of sponsored sites. In fact, my speed dial still looks exactly the same on Vivaldi. Here’s how the blog looks on Brave:
I like the fonts it chooses to render with. It claims to have stopped trackers on my site. I don’t know of any, so I’m going to guess that the “Share This” has some of that embedded as do embedded YouTube videos. Let’s take a quick look at two sites I use that have ads. First Ars Technica:
But I guess these things aren’t ads:
And a quick look at reddit:
There’s definitely an ad missing in that square. Supposedly also 13 ads and trackers. Again.
Brave doesn’t seem to have nearly as many widgets as Vivaldi, but that’s not surprising; Vivaldi, like Opera before it is known for being a power user’s browser. I don’t know if this ends up being pro or con for Brave in the long run. It’s a nice clear browser that more or less seems to look and feel like a regular browser – just with supposedly less tracking and ads. To get the same experience as Vivaldi would probably involve lots of potentially dangerous extensions. We’ll see how it handles my day-to-day on Windows.
Switching up the hardware for the Garage IOT
Back in May, I set up my Raspberry Pi B as my garage door monitor. Unfortunately it stopped working. I haven't investigated yet, but I wouldn't be surprised if it got hit with the infamous SD card corruption that was a big problem with the early Raspberry Pi boards. (I think I read it's much less of a problem with the Raspberry Pi 4) So I decided to go ahead and replace it with a Raspberry Pi Zero W, especially since you can get it with headers from Adafruit for only $14. As a bonus, it's got a better processor (same as the Raspberry Pi 3, I think) and built-in WiFi. It's also got a smaller footprint, but that doesn't matter to me for where it's mounted. So now I'm back to having a Raspberry Pi B without a job to do (assuming the hardware is fine and it just ended up in an unbootable state). I've also now got a USB WiFi module for it, so maybe that'll help me think of something for it to do. I think the Raspberry Pi rover project I got in a Humble Bundle uses a 1st gen Raspberry Pi, but I'd been thinking of using a 4th gen Pi in order to maybe do some more fun stuff with it, like some OpenCV-based computer vision and/or machine learning.
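For context, the monitoring logic itself is tiny; this is a sketch (the pin number and normally-closed reed-switch wiring are assumptions, not necessarily how my garage is actually wired):

```python
def door_state(pin_value: int) -> str:
    # With a normally-closed reed switch and a pull-up, the input
    # reads 0 while the magnet (door) is closed.
    return "closed" if pin_value == 0 else "open"

def read_door(pin: int = 17) -> str:
    # Hypothetical Raspberry Pi read; only runs on the Pi itself.
    import RPi.GPIO as GPIO
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    return door_state(GPIO.input(pin))
```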
All Journey and No Destination: Friday and Fast Times at Ridgemont High
By complete coincidence I ended up watching Fast Times at Ridgemont High and Friday (each for the first time) back to back this week. I watched Fast Times because Paul Scheer and Amy Nicholson were covering it on Unspooled, their film podcast. As for Friday, well, that's a slightly more convoluted story. Five Iron Frenzy, one of my most consistent favorite bands, was doing a Kickstarter for their new album. As part of promotion for the campaign, Reese Roper appeared on Mike Herrera's podcast, The Mike Herrera Podcast. Herrera is the lead singer and songwriter for MxPx, a band I've been listening to off and on since 1996ish. The Roper episode led me to look up MxPx's latest release, MxPx. There's a song on there called Friday Tonight that had some lyrics that didn't make sense to me:
So I went to genius.com’s page for the song and found out it was this scene from the movie Friday:
So I decided to check out the movie. It was an interesting couple of movies to watch back-to-back for the first time.
In the first season of the Unspooled podcast they covered the movies on the AFI Top 100 list. For this season they are looking at movies that perhaps should have a place on the list (although the stated fate of the season 2 list is to be sent into space) and are exploring the movies by category. The first category is high school movies. I’d never seen Fast Times at Ridgemont High because it came out when I was too young and, for some reason, I never happened to catch it on Comedy Central, TNT, or any of the other cable channels that used to just show TV edits of movies before they started having shows in their own right.
I’m not entirely sure what I was expecting, but from the trailer and various bits of the movie that had become part of the culture/memes/etc, I was expecting a zany film. Or at least a film that operated on the level of reality of Ferris Bueller, which came out four years later. Or maybe something like Grease, but without the music. Instead we got a movie where, when we reached the scene with Spucoli taking a joy ride in the football player’s car, I turned to my wife and asked, “What’s the point of this movie? I’m not getting a plot.” Instead it’s almost a series of vignettes that takes us through an entire school year at Ridgemont High. I learned afterwards (while listening to the podcast) that this is because it was based on a non-fiction book written by a Rolling Stone writer who studied the senior class at a high school. (Incidentally, Mean Girls was also based on a book, but that one ended up having a much more conventional plot) Plot-wise this movie seems to be at least one of the seeds that leads to most of the movies from Kevin Smith’s View-Askew-niverse – particularly Clerks and Mallrats. It also wasn’t nearly as comedic as I thought it would be. There are funny moments, but it’s more of a drama with funny moments – like real life.
Mostly we follow Stacy Hamilton (Jennifer Jason Leigh), who puts in an amazing performance as a 15-year-old who falls for the trope of having an unexplained need to lose her virginity; a trope that persisted until the 1990s when we finally started taking AIDS and other STDs seriously. What I mean by unexplained need is that Stacy seems not to want sex simply because of her teenage hormones, but more because it seems to be expected in her peer group if she doesn't want people to consider her a baby. I even remember a Fresh Prince of Bel-Air episode where Carlton is very embarrassed to be a virgin. By contrast, by the time I was in high school in the late 90s there wasn't really any pressure to graduate without one's virginity. It was more of a personal choice that people made – at least among my non-church peers. They'd been scaring us about STDs and the almost 100% chance of teenage pregnancy for so long that I was shocked when, as a married man, we didn't get a baby on the first try. Anyway, her arc ends up being the most realistic movie depiction I've ever seen of the disappointment of teenage sex from the girl's point of view. (The podcast clarified this was one of the director's messages) First attempt is the famous dugout scene. Second attempt, she gets thwarted in a humiliating way. Third attempt, the dude is a one-minute man. By contrast, movie sex is usually from a male perspective. I also loved the way she handled talking to Damone once she got pregnant, not taking his attempt to shift blame onto her.
“No, take that back.” Man, that was really great writing of a strong character. A different writer might have made her cave there, but Stacy isn’t playing victim, she’s just trying to get Damone to be fair by paying for his part in it.
That’s the clearest arc in the movie. Jeff Spicoli (Sean Penn) is merely comic relief. Linda Barrett (Phoebe Cates) is simply there to give bad advice to Stacy. Mark Ratner (Brian Backer) seems like he’s going to be the main character, but he’s mostly just a foil to Damone (Robert Romanus) and a second attempt at sex for Stacy. And Brad Hamilton (Judge Reinhold) is almost certainly the basis of Kevin Smith’s long-suffering Dante Hicks (Clerks and Clerks 2). Despite a good work ethic, capitalism just beats him down over and over throughout the film. None of of the usual plots are in evidence – no one is in danger of not graduating (maybe Spicoli is, but he’s merely a comedic element, not a real character), no one is trying to get into the big party, no male or female is in a “she cleans up nicely” trope, even Ratner isn’t trying very hard to get with Stacy. Yet, somehow this movie really hits for me. Perhaps it’s the more documentary-ish story telling due to it being based on a book. In the hands of our director (the same director of Clueless), the characters and situations aren’t heightened. As someone who worked in high school (selling shoes, as a lifeguard, in a movie theatre, and a bank teller), that aspect of the story really worked for me compared to the newer movies where the kids just have cash without needing to do anything for it.
A few odds and ends before moving on to Friday:
I have to give kudos to the set designer for selecting oversized chairs in the restaurant during Ratner and Stacy's date. They look ridiculously oversized, emphasizing that these are kids playing at being adults.
My wife and I are fond of remarking on something we’ve noticed in movies from the late 70s and through the 80s (and I’m pretty sure I’ve mentioned it on the blog at some point). Movies from that time period will inevitably have precocious kids using profanity (the “worse” the word, the “funnier” it seems) and you will see lots of gratuitous breasts. Fast Times at Ridgemont High is no exception. (Judge Reinhold’s fantasy is completely without consequence to the plot). During the Unspooled episode about the movie, the director mentioned that during the 80s, the amount of breast shots required in a movie was a requirement for securing financing for the movie. So it’s not just something we’ve noticed, it’s an actual thing that was going on. (Frankly, on seeing how things were handled in Fast Times at Ridgemont High with bare breasts, I’m surprised we don’t end up seeing Cameron naked in Ferris Bueller)
Speaking of nudity, the director mentioned that in the scene where Damone and Stacy have sex, she originally wanted to show full frontal nudity of Damone, because there was already a bunch of full frontal female nudity in R-rated movies at the time. She was told no because the male anatomy is automatically considered an aggressive organ while the female is passive, so it would have been rated X. Of course, here's the sad part (Hollywood is so silly that we have the term Hollywood-ugly to describe someone the characters consider ugly but who is beautiful by normal standards): during a preview screening someone yelled out "fat chick" at Stacy's naked body. I'm going to link to the image (rather than posting it in this post) in order to keep this post safe for work. (SO THIS LINK IS NOT SAFE FOR WORK) Yeah, I noticed that I was surprised Hollywood let a woman look like that in a movie, but she is definitely NOT fat.
One last thing – does Stacy’s boyfriend in Chicago exist? I thought he didn’t until she started crying at the end of the movie because he wasn’t coming to graduation. My wife thought he was real. Paul Scheer was sure he was fake and Amy Nicholson thought he was real, but was maybe convinced by the end of the podcast that he wasn’t.
While Fast Times at Ridgemont High takes place over a school year, Friday takes place over the course of one day. My wife had seen it enough times to be able to quote lines as they were happening. I never saw it because it was rated R and my parents were very strict about seeing movies rated higher than our ages. And later I was into very different movies, so I never thought about it until MxPx brought it back to my attention.
Interestingly, even though both of these movies ostensibly are without traditional plot structures, this movie just didn't quite do it for me as well as Fast Times at Ridgemont High did. Perhaps this is because Friday only takes place over the course of one day, so there isn't even a character progression. Yet Ferris Bueller also takes place over a single day. I think the big difference is that Bueller and friends are out on an illicit adventure (and, near the end, the need to avoid getting caught) while Ice Cube and the rest of the cast are simply sitting outside. Perhaps a more successful movie for me would have involved Ice Cube and Chris Tucker sitting outside for a normal day only to end up dragged on some sort of quest or to have things go insanely wrong. Instead, there are only two desires our main characters have. Chris Tucker wants to get Ice Cube high for the first time. This is accomplished midway through the movie and doesn't have any consequences. He doesn't do anything or cause anything to happen from being high – it doesn't even mess up Ice Cube's chances with the girl across the street. And she is the second desire, Ice Cube's, but it's not as though he is a nerd who's never had a girl – he CURRENTLY has a girlfriend. (Although, for all her protesting at Ice Cube interacting with other women, my wife noticed that she has a guy in her bed when she calls Ice Cube on the phone) Instead we get an SNL skit-like day where the same folks keep stopping by over and over.
Why isn’t anyone working or in school? I was at a loss to figure out what age anyone was supposed to be, partially because Hollywood tends to cast way older (something they’ve started to fix), afterall, except for Jennifer Jason Leigh, no one in Fast Times at Ridgemont High looks like they belong in High School. Well, one potential plotline could have been the fact that Ice Cube lost his job because there is video footage of him stealing. Ice Cube says the guy in the video isn’t him. A few characters say different things about the robbery, but by the time the movie is done, I have no idea whether or not it was him. A different movie could have had him proving that it wasn’t him or trying to get another job and either succeeding or failing in comedic ways. But this paragraph is where I state something I’ve been thinking of as I’ve worked on this essay a little at a time over the past week. Maybe all of this makes sense if you grew up in a neighborhood like the one in the movie? Maybe there are some people for whom the plot – with some folks just sitting on the porch and others stopping by over and over makes sense. But for me it just fell flat when combined with the lack of a traditional plot motivation for any of the characters.
It also seemed to take a wild swing at the end when it went from a mostly goofy movie to DEADLY serious when Deebo gives Felicia a black eye and then hits the girl Ice Cube would like to get with. It's suddenly about whether shooting a gun is worth it. And while we did literally have Chekhov's Gun, it was some real tonal whiplash. Then again, I remember some Fresh Prince of Bel-Air episodes doing that, too. So maybe it's just an expected trope.
A couple stray thoughts:
I finally got to see the origin of the meme “Bye Felicia”. However, the character of Felicia didn’t make sense to me. Throughout the first ¾ of the movie she appears at Ice Cube’s house asking to borrow things that don’t make sense to borrow – like a microwave. She looks and acts like she’s probably a homeless addict. Yet, near the end you find out that she’s the sister of the girl Ice Cube has been after. So, does this mean she’s just mentally ill? And if she is, does that make all the jokes at her expense worse? (Although my question does imply it’s OK to laugh at an addict. But we do have a male character who’s a homeless addict who is 100% just played for laughs)
Why is Bernie Mac a shady preacher both in this movie and a shady judge in Booty Call? Was it part of his standup at the time or is he just really good at that role?
In the end, I think it’s interesting that I watched both of these cultural touchstone movies back-to-back without any foreknowledge of the plot and they both happened to be movies without traditional plot structures. Fast Times at Ridgemont High turned out to be really enjoyable while Friday turned out to be a dud for me. The next episode of Unspooled is going to be Dazed and Confused, but I don’t know if it’ll merit a blog post on its own. Time will tell.
Last few weeks in Programming: Python, Ruby
Review: InvestiGators
My rating: 4 of 5 stars
I read this to my four-year-olds and found it to be a blast. Most of the word-play went over their heads. In fact, after finishing it with my four-year-olds, I recommended it to my 8-year-old. We’ll see what she thinks. This is definitely one of those books you can read with the kids and, if you like Dad Jokes and Puns, you’ll be enjoying it rather than wishing you were doing something else.
View all my reviews | http://www.ericsbinaryworld.com/page/4/ | CC-MAIN-2021-10 | en | refinedweb |
python-http-client fails to build with Python 3.10.0a4.
=================================== FAILURES ===================================
________________________ DateRangeTest.test__daterange _________________________
self = <tests.test_daterange.DateRangeTest testMethod=test__daterange>
def test__daterange(self):
> self.assertIn(self.pattern, self.license_file)
E AssertionError: 'Copyright (C) 2021, Twilio SendGrid, Inc.' not found in 'MIT License\n\nCopyright (C) 2020, Twilio SendGrid, Inc. <help@twilio.com>\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the "Software"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies\nof the Software, and to permit persons to whom the Software is furnished to do\n'
tests/test_daterange.py:19: AssertionError
=========================== short test summary info ============================
FAILED tests/test_daterange.py::DateRangeTest::test__daterange - AssertionErr...
========================= 1 failed, 11 passed in 0.13s =========================
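The assertion that fails here is essentially a "copyright year is current" check against the LICENSE text. A minimal stdlib-only sketch of that logic (the function name and pattern here are my assumptions for illustration, not the actual test code from python-http-client):

```python
from datetime import date

def license_year_is_current(license_text, today=None):
    """Return True if the LICENSE text names the current year.

    `today` can be injected for testing; defaults to the real date.
    """
    year = (today or date.today()).year
    # Hypothetical pattern mirroring the failing assertion above.
    pattern = "Copyright (C) %d, Twilio SendGrid, Inc." % year
    return pattern in license_text

# A LICENSE still carrying the previous year fails the check once the
# calendar rolls over, which is exactly what the traceback shows.
license_text = "Copyright (C) 2020, Twilio SendGrid, Inc. <help@twilio.com>"
print(license_year_is_current(license_text, date(2020, 6, 1)))  # True
print(license_year_is_current(license_text, date(2021, 1, 1)))  # False
```

This kind of date-dependent test passes all year and then breaks on January 1st, independent of any Python version change.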
For the build logs, see:
For all our attempts to build python-http-client with Python 3.10, see the build logs above. This looks like a 2021 problem: the test expects the copyright year in the LICENSE to be current. I bet it also happens on rawhide.
(In reply to Miro Hrončok from comment #1)
> I bet it also happens on rawhide.
Indeed:
Fixed in python-http-client-3.3.1-2.fc34 | https://bugzilla.redhat.com/show_bug.cgi?id=1914225 | CC-MAIN-2021-10 | en | refinedweb |
Qt SQL C++ Classes
Provides a driver layer, SQL API layer, and a user interface layer for SQL databases. More...
Namespaces
Classes
Detailed Description
To include the definitions of the module's classes, use the following directive:
#include <QtSql>
To link against the module, add this line to your qmake
.pro file:
QT += sql
See the SQL Programming guide for information about using this module. | https://doc.qt.io/archives/qt-5.9/qtsql-module.html | CC-MAIN-2021-10 | en | refinedweb |
public class Singleton {
    private static final Singleton INSTANCE = new Singleton();

    private Singleton() {}

    public static Singleton getInstance() {
        return INSTANCE;
    }
}
It can be argued that this example is effectively lazy initialization. Section 12.4.1 of the Java Language Specification states:
- T is a top level class, and an assert statement lexically nested within T is executed.
Therefore, as long as there are no other static fields or static methods in the class, the Singleton instance will not be initialized until the method getInstance() is invoked the first time. | https://riptutorial.com/java/example/5070/singleton-without-use-of-enum--eager-initialization- | CC-MAIN-2021-10 | en | refinedweb |
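For comparison, here is a hedged sketch of the same eager-initialization idea in Python (this analog is mine, not part of the original example): a module-level instance is created once, when the enclosing module is first imported, because Python caches imported modules.

```python
class Singleton(object):
    """Eagerly created singleton; mirrors the Java INSTANCE field."""
    def __init__(self):
        self.created = True

# Created exactly once, at import time of the enclosing module
# (subsequent imports reuse the cached module, so no new instance).
_INSTANCE = Singleton()

def get_instance():
    """Accessor mirroring Java's getInstance()."""
    return _INSTANCE

print(get_instance() is get_instance())  # True: always the same object
```

Unlike the Java version, Python offers no lazy class-initialization guarantee here; the instance exists as soon as the module is imported.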
Froala Editor alternatives and similar libraries
Based on the "Editors" category.
Alternatively, view Froala Editor alternatives based on common mentions on social networks and blogs.
- vuetify (9.6, 9.8): 🐉 Material Component Framework for Vue
- quill (9.5, 5.0, L5): A cross browser rich text editor with an API.
- CodeMirror (9.4, 8.8, L2): In-browser code editor.
- ace (9.4, 6.5, L3): Ace (Ajax.org Cloud9 Editor).
- Monaco Editor (9.3, 7.2): A browser based code editor
- slate (9.1, 7.3): A completely customizable framework for building rich text editors.
- Draft.js (9.1, 8.1, L3): A React framework for building text editors.
- Quasar Framework: Developer-oriented, front-end framework with VueJS components for best-in-class high-performance, responsive websites, PWA, SSR, Mobile and Desktop apps, all from the same codebase.
- medium-editor (8.7, 4.0, L4): Medium.com WYSIWYG editor clone.
- trix (8.6, 5.9): A rich text editor for everyday writing. By Basecamp.
- Editor.js (8.6, 4.8): A block-styled editor with clean JSON output
- TOAST UI Editor (8.3, 8.4): GFM Markdown Wysiwyg Editor - Productive and Extensible
- Summernote (8.3, 7.1): Super simple WYSIWYG editor.
- TinyMCE (8.1, 9.7, L4): The JavaScript Rich Text editor.
- jsoneditor (7.9, 8.0, L2): A web-based tool to view, edit and format JSON.
- SimpleMDE (7.7, 0.0): A simple, beautiful, and embeddable JavaScript Markdown editor. Delightful editing for beginners and experts alike. Features built-in autosaving and spell checking.
- buefy (7.6, 9.4): Lightweight UI components for Vue.js based on Bulma
- bootstrap-wysiwyg: Tiny bootstrap-compatible WYSIWYG rich text editor.
- wysihtml5 (7.2, 0.0, L4)
- bootstrap-wysihtml5 (0.0, L5): Simple, beautiful wysiwyg editor
- pen (6.2, 0.0, L3): enjoy live editing (+markdown).
- ProseMirror (6.1, 2.0, L5): The ProseMirror WYSIWYM editor
- vim.js (5.9, 0.0, L1): JavaScript port of Vim with a persistent ~/.vimrc
- EpicEditor (5.9, 0.1, L2): An embeddable JavaScript Markdown editor with split fullscreen editing, live previewing, automatic draft saving, offline support, and more.
- Squire (5.9, 4.6, L2): HTML5 rich text editor.
- sweet.js (5.9, 0.0): Hygienic Macros for JavaScript!
- Trumbowyg (5.7, 6.5, L3): A lightweight and amazing WYSIWYG JavaScript editor.
- ContentTools (5.7, 0.0, L4): A JS library for building WYSIWYG editors for HTML content.
- editor (5.0, 0.0, L2): A markdown editor. still on development.
- jquery-notebook (4.2, 0.0): A simple, clean and elegant text editor. Inspired by the awesomeness of Medium.
- Mobiledoc Kit (3.8, 5.0): A toolkit for building WYSIWYG editors with Mobiledoc
- popline (3.4, 0.2, L4): Popline is an HTML5 Rich-Text-Editor Toolbar
- ckeditor-releases (3.4, 2.5, L2): The best web text editor for everyone.
- Monod (3.0, 0.0, L4): Monod is a React-based Markdown editor. You can use it anytime (offline mode), share documents with anyone (encrypted), and render your content with a set of templates.
- raptor-editor (2.8, 0.0, L5): Raptor, an HTML5 WYSIWYG content editor!
- React PDF viewer: A React component to view a PDF document
- php-parser (2.3, 5.2): 🌿 NodeJS PHP Parser - extract AST or tokens (PHP5 and PHP7)
- esprima (2.2, 0.5): ECMAScript parsing infrastructure for multipurpose analysis.
- ppo (1.3, 0.5): ppo is a super small and useful utils library for JavaScript
- Zepcode (1.1, 0.6): Zeplin extension that generates Swift snippets from colors, fonts and layers
- react-component-widget: Component for resizing and repositioning charts, parsing transferred data when working with Recharts library.
- convert-plain-text-into-links: An npm module which replaces any plain text link within string with anchor tag
- jquery-connect (0.2, 4.3): Easily connect your jQuery code to stores like Redux
- pixotree (0.2, 2.3): Super fast TreeView in VanillaJS, packed with features.

Do you think we are missing an alternative of Froala Editor or a related project?
README
Froala Editor V3
- Awesome new features
Demos
- Basic demo:
- Inline demo:
- Full list:
Download and Install Froala Editor
Install from npm
npm install froala-editor
Install from bower
bower install froala-wysiwyg-editor
Load from CDN
Using Froala Editor from CDN is the easiest way to install it and we recommend using the jsDeliver CDN as it mirrors the NPM package.
<!-- Include Editor style. -->
<link href="[email protected]/css/froala_editor.pkgd.min.css" rel="stylesheet" type="text/css" />

<!-- Create a tag that we will use as the editable area. -->
<!-- You can use a div tag as well. -->
<textarea></textarea>

<!-- Include Editor JS files. -->
<script type="text/javascript" src="[email protected]/js/froala_editor.pkgd.min.js"></script>

<!-- Initialize the editor. -->
<script>
  new FroalaEditor('textarea');
</script>
Load from CDN as an AMD module
Froala Editor is compatible with AMD module loaders such as RequireJS. The following example shows how to load it along with the Algin plugin from CDN using RequireJS.
<html>
<head>
  <!-- Load CSS files. -->
  <link rel="stylesheet" type="text/css" href="[email protected]/css/froala_editor.css">
  <script src="require.js"></script>
  <script>
    require.config({
      packages: [{
        name: 'froala-editor',
        main: 'js/froala_editor.min'
      }],
      paths: {
        // Change this to your server if you do not wish to use our CDN.
        'froala-editor': '[email protected]'
      }
    });
  </script>
  <style>
    body { text-align: center; }
    div#editor { width: 81%; margin: auto; text-align: left; }
    .ss { background-color: red; }
  </style>
</head>
<body>
  <div id="editor">
    <div id='edit' style='margin-top:30px;'></div>
  </div>
  <script>
    require([
      'froala-editor',
      'froala-editor/js/plugins/align.min'
    ], function(FroalaEditor) {
      new FroalaEditor('#edit')
    });
  </script>
</body>
</html>
Load Froala Editor as a CommonJS Module
Froala Editor is using an UMD module pattern, as a result it has support for CommonJS. The following examples presumes you are using npm to install froala-editor, see Download and install FroalaEditor for more details.
var FroalaEditor = require('froala-editor');

// Load a plugin.
require('froala-editor/js/plugins/align.min');

// Initialize editor.
new FroalaEditor('#edit');
Load Froala Editor as a transpiled ES6/UMD module
Since Froala Editor supports ES6 (ESM - ECMAScript modules) and UMD (AMD, CommonJS), it can be also loaded as a module with the use of transpilers. E.g. Babel, Typescript. The following examples presumes you are using npm to install froala-editor, see Download and install FroalaEditor for more details.
import FroalaEditor from 'froala-editor'

// Load a plugin.
import 'froala-editor/js/plugins/align.min.js'

// Initialize editor.
new FroalaEditor('#edit')
For more details on customizing the editor, please check the editor documentation.
Use with your existing framework
- Angular JS:
- Angular 2:
- Aurelia:
- CakePHP:
- Craft 2 CMS:
- Craft 3 CMS:
- Django:
- Ember:
- Knockout:
- Meteor:
- Ruby on Rails:
- React JS:
- Reactive:
- Symfony:
- Vue JS:
- Yii2:
- Wordpress:
Browser Support
At present, we officially aim to support the last two versions of the following browsers:
- Chrome
- Edge
- Firefox
- Safari
- Opera
- Internet Explorer 11
- Safari iOS
- Chrome, Firefox and Default Browser Android
Resources
- Demo:
- Download Page:
- Documentation: froala.com/wysiwyg-editor/docs
- License Agreement:
- Support: wysiwyg-editor.froala.help
- Roadmap & Feature Requests:
- Issues Repo guidelines
When reporting an issue, please provide a reduced test case that reproduces the problem; you can start from this basic one.
- Some issues may be browser specific, so specifying in what browser you encountered the issue might help.
Technical Support or Questions
If you have questions or need help integrating the editor please contact us instead of opening an issue.
Licensing
In order to use the Froala Editor you have to purchase one of the following licenses according to your needs. You can find more about that on our website on the pricing plan page.
*Note that all licence references and agreements mentioned in the Froala Editor README section above are relevant to that project's source code only. | https://js.libhunt.com/wysiwyg-editor-alternatives | CC-MAIN-2021-10 | en | refinedweb |
Hi, I dug around HPSF and found the following bug. Word 8.0/97 docs' DocumentSummaryInformation has two sections, but getCategory() (Category is located within the section with index 0) implicitly calls getSingleSection(), which throws an exception if sectionCount != 1. Word 6.0/95 docs have a single section, so this works fine for them. Here's my solution to the problem until you find a better way... Then my class can simply be removed and everything will work OK... After putting the code below through this form it may need some beautifying (indentation)...

Regards,

Mickey
<code>
/**
 * This class is a manual work-around for an HPSF
 * DocumentSummaryInformation.getCategory() bug. That method calls
 * getProperty(), which further calls getSingleSection().getProperty().
 * Now, getSingleSection() throws a NoSingleSectionException for
 * Word 8.0/97-2000 documents because these have two sections and only
 * one is expected. Here's the stack trace:
 * <PRE>
 * org.apache.poi.hpsf.NoSingleSectionException: Property set contains 2 sections.
 *   at org.apache.poi.hpsf.PropertySet.getSingleSection(PropertySet.java)
 *   at org.apache.poi.hpsf.SpecialPropertySet.getSingleSection(SpecialPropertySet.java)
 *   at org.apache.poi.hpsf.PropertySet.getProperty(PropertySet.java)
 *   at org.apache.poi.hpsf.DocumentSummaryInformation.getCategory(DocumentSummaryInformation.java)
 * </PRE>
 *
 * @author Miroslav Obradovic (micky@eunet.yu)
 */
public class MyDocumentSummaryInformation extends DocumentSummaryInformation {

    /**
     * Creates a DocumentSummaryInformation from a given PropertySet.
     */
    public MyDocumentSummaryInformation(final PropertySet ps)
            throws org.apache.poi.hpsf.UnexpectedPropertySetTypeException {
        super(ps);
    }

    /**
     * Returns the stream's category (or <code>null</code>).
     */
    public String getCategory() {
        // PID_CATEGORY equals 2.
        int pid = org.apache.poi.hpsf.wellknown.PropertyIDMap.PID_CATEGORY;
        String category = null;
        List sections = getSections();
        int sectionCount = (int) getSectionCount();
        org.apache.poi.hpsf.Section section = null;
        org.apache.poi.hpsf.Property[] properties = null;

        // Iterate through the sections, get their properties and look for
        // Category. Category should be found in the section with index 0.
        for (int i = 0; i < sectionCount; i++) {
            try {
                // Get the current section.
                section = (org.apache.poi.hpsf.Section) sections.get(i);

                // Get the section properties and look for Category.
                properties = section.getProperties();
                for (int j = 0; j < properties.length; j++) {
                    if (properties[j].getID() == pid) {
                        category = (String) properties[j].getValue();
                        break;
                    }
                }

                // If Category was found, break the loop.
                if (category != null) {
                    break;
                }
            } catch (Exception e) {
                category = null;
            }
        }
        return category;
    }
}
</code>
Miroslav, can you please attach your Word file to this bug in Bugzilla? Or
better, can you create a minimal Word file which behaves as you described? I
need a test case to verify the bug. Thanks!
The author of this bug did not provide a test file nor did he respond to any e-mail.
hi there,
i'm sorry for the delay. i'm not used to using these forums and stuff... i've
just found a work-around for the problem i once had and thought it would be
useful if i post it, in case someone else needs it.
it was long ago, but i'll try to find the sample word file.
best regards,
miroslav
Created attachment 6990 [details]
here's the java code (word document plain text content extractor) i have developed when i noticed the bug...
Created attachment 6991 [details]
this is the POI library i have used for my project when noticed the bug...
well, here we are. i have added two attachments and here are a few words about
these:
the second attachment (POI library) is poi-1.5.1.jar file i used when i
noticed the bug (or what i think it was a bug). i don't remember the date
well, but i think it was the latest stable version at the moment i have
written the code.
the first attachment is a part of the project i have worked on when i noticed
this bug. it's a content (plain text) extractor for word file format. i don't
know if you have something similar added to POI, but if you find this code
useful (there are a lot of comments in there!), you can freely use this code
(though, it would be nice of you if you'd mention me as a developer
somewhere, :-) )
the problem is that there are some new Summary Info "pages" added with new
versions of ms word and i think you have assumed there (in poi) that there is
only a single one. i guess you could use a solution similar to the one i have
attached (in MyDocumentSummaryInfo.java), since Micro$oft can add more and
more of these new "pages" with new releases of office.
i hope this was useful :-)
best regards,
miroslav
sorry, i forgot to mention.
the sample word file you requested (sample.doc) is included in the first of
the two attachments.
mickey
oh, i'm the most boring man today...
i tried to download attachments, but i guess you must know which type of
binary files is in it to properly download and save the file.
the first attachment should be saved as .zip (created win WinZip 8.1)
the second attachment should be saved as .jar
i hope this is the last one :-)
bye,
m
The current CVS HEAD can process your sample application without any flaws. I
suggest an upgrade. | https://bz.apache.org/bugzilla/show_bug.cgi?id=14734 | CC-MAIN-2020-29 | en | refinedweb |
IDEA-64675 (Bug)
Add Framework - groovy - Can't add Framework - Ok Button isn't active
WI-4899 (Bug)
Smarty 3: Escapes not recognized in strings
IDEA-64179 (Bug)
User Interface hangs during editing with Background Indexing
IDEA-64521 (Bug)
Good CSS highlighted red
IDEA-64916 (Exception)
Cannot open JSPx files: java.lang.ClassCastException: com.intellij.lang.jsp.JspxFileViewProviderImpl cannot be cast to com.intellij.lang.jsp.JspFileViewProviderImpl
IDEA-63562 (Bug)
Incorrect indentation of JS code after formatting
WI-5006 (Task)
Add new constant JSON_ERROR_UTF8
WI-4263 (Bug)
join() returns wrong type
WI-4994 (Bug)
PHP: namespace imported without aliasing is not resolved
WI-4995 (Bug)
PHP: namespace imported without leading backslash is not resolved
WI-4575 (Bug)
Namespaces + parent::foo() causes "undefined method foo"
WI-4822 (Bug)
PHP: Namespaces bugs
WI-4986 (Bug)
Doc Comment before namespace declaration highlights as error by inspection
WI-4832 (Bug)
Code Reformat splitting strings should use concatenation
WI-5032 (Exception)
PHP: StringIndexOutOfBoundsException at FieldReferenceImpl.getName() on Code Analysis if there are dynamic filed references
WI-4997 (Bug)
[CSS3] border-radius error reporting
WI-4978 (Bug)
unable to indicate parameters in local JS Debug Configuration
IDEA-57859 (Cosmetics)
flex: unnecessary empty space between icons and label in completion popups
IDEA-65010 (Bug)
Good code marked red static constants and fields
IDEA-64956 (Bug)
Intellij is taking 30 minutes to start
IDEA-64851 (Bug)
Osmorc facet settings opening causes out of memory sometimes
IDEA-64952 (Cosmetics)
UML: correct error message on attempt to create class in some library package
IDEA-64939 (Bug)
UML: if package contains subpackage(s), they are not shown on diagram after package expanding
IDEA-64943 (Bug)
UML: package nodes are not shown on diagram if their names clash
IDEA-64940 (Bug)
UML: if diagram contains package node, it is possible to add this package's subpackages to same diagram, but they cannot be expanded
IDEA-64944 (Exception)
CME at com.intellij.diagram.DiagramDataModel.removeAll
IDEA-64783 (Bug)
Can't use private repos in github
IDEA-64906 (Bug)
Subversion + ssh: if in the SSh AuthenticationRequired dialog i select to save credentials, and in the VerifyServerKeyFingerprint dialog uncheck the 'add key to svn cache' option, then i get no server dialog on restart
IDEA-64905 (Bug)
Subversion + SSH: with private key authentication is used, ssh login information is not saved
IDEA-65049 (Bug)
Incorrect default paths when module file located not under content root
WI-4317 (Usability Problem)
XDebug zero configuration: "Resolve mapping problem" interface is confusing
WI-4787 (Usability Problem)
User setting to control debug mouse-hover tooltip pop-up, please.
WI-4842 (Usability Problem)
Show error stream when tool fails to execute
IDEA-64868 (Bug)
Linux: IDEA in a path with space: Tomcat run configuration fails to start with code coverage | https://confluence.jetbrains.com/display/IDEADEV/IDEA+X+103.59+Release+Notes | CC-MAIN-2020-29 | en | refinedweb |
By Loc Q Nguyen, published on December 15 , 2016
Message Passing Interface (MPI) is a standardized message-passing library interface designed for distributed memory programming. MPI is widely used in the high-performance computing (HPC) domain because it is well-suited for distributed memory architectures.
Python* is a modern, powerful interpreter that supports modules and packages. Python supports extension modules written in C/C++. While HPC applications are usually written in C or Fortran for performance, Python can be used to quickly prototype a proof of concept and for rapid application development because of its simplicity and modularity support.
The MPI for Python* (mpi4py*) package provides Python bindings for the MPI standard. The mpi4py package translates MPI syntax and semantics and uses Python objects to communicate. Thus, programmers can implement MPI applications in Python quickly. Note that mpi4py is object-oriented, and not all functions in the MPI standard are available in mpi4py; however, almost all the commonly used functions are. More information on mpi4py can be found here. In mpi4py,
COMM_WORLD is an instance of the base class of communicators.
mpi4py supports two types of communications:
- All-lowercase methods (send(), recv(), bcast(), scatter(), gather(), and so on). In this type of communication, the sent object is passed as a parameter to the communication call.
- Methods starting with an uppercase letter (Send(), Recv(), Bcast(), Scatter(), Gather(), and so on). Buffer arguments to these calls are specified using tuples. This type of communication is much faster than the Python objects communication type.
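The performance gap between the two flavors comes from serialization: the lowercase methods pickle the Python object before sending and unpickle it on receipt, while the uppercase methods hand MPI a raw buffer. A small stdlib-only illustration of the cost the lowercase path pays (no MPI involved; this only shows the pickling step itself):

```python
import array
import pickle

# Four doubles: 32 bytes of raw payload.
data = array.array('d', [1.0, 2.0, 3.0, 4.0])

# Lowercase-style path: the object is serialized to bytes with pickle
# (and must be deserialized again on the receiving side).
pickled = pickle.dumps(data)
roundtrip = pickle.loads(pickled)

# Uppercase-style path: MPI reads straight from the object's buffer;
# no serialization happens.
buf = memoryview(data)

print(len(pickled) > buf.nbytes)  # True: pickle adds framing overhead
print(roundtrip == data)          # True: the round trip is lossless
```

Beyond the extra bytes, the pickle/unpickle CPU time and the temporary copies are what make the buffer-based calls the right choice for large NumPy arrays.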
Intel® Distribution for Python* is a binary distribution of the Python interpreter; it accelerates core Python packages including NumPy*, SciPy*, Jupyter*, matplotlib*, mpi4py, and so on. The package integrates Intel® Math Kernel Library (Intel® MKL), Intel® Data Analytics Acceleration Library (Intel® DAAL), Intel® MPI Library, and Intel® Threading Building Blocks (Intel® TBB).
The Intel Distribution for Python 2018 is available free for Python 2.7.x and 3.5.x on macOS*, Windows* 7 and later, and Linux* operating systems. The package can be installed as a standalone or with the Intel® Parallel Studio XE 2018.
In the Intel Distribution for Python, mpi4py is a Python wraparound for the native Intel MPI implementation (Intel® MPI Library). This document shows how to write an MPI program in Python, and how to take advantage of Intel® multi-core technology using OpenMP* threads and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions.
Intel Distribution for Python supports both Python 2 and Python 3. There are two separate packages available in the Intel Distribution for Python: Python 2.7 and Python 3.5. In this example, the Intel Distribution for Python 2.7 on Linux (
l_python2_pu_2018.1.023.tgz) is installed on an Intel® Xeon Phi™ processor 7250 @ 1.4 GHz and 68 cores with 4 hardware threads per core (a total of 272 hardware threads). To install, extract the package content, run the install script, and follow the installer prompts:
$ tar -xvzf l_python2_pu_2018.1.023.tgz
$ cd l_python2_pu_2018.1.023
$ ./install.sh
After the installation completes, activate the root Intel® Distribution for Python* with the conda* package:
$ source /opt/intel/intelpython2/bin/activate root
While multithreaded Python workloads can use Intel TBB optimized thread scheduling, another approach is to use OpenMP* to take advantage of Intel® multi-core technology. This section shows how to implement multithread applications using OpenMP and the C math library in Cython*.
Cython is a Python-like language whose source is compiled into native code. Cython is similar to Python, but it supports C function calls and C-style declaration of variables and class attributes. Cython is used for wrapping external C libraries that speed up the execution of a Python program. Cython generates C extension modules, which are used by the main Python program using the
import statement.
For example, to generate an extension module, one can write a Cython code (
.pyx) file. The .pyx file is then compiled by Cython to generate a
.c file, which contains the code of a Python extension module. The .c file is in turn compiled by a C compiler to generate a shared object library (
.so file).
One way to build Cython code is to write a distutils setup.py file (distutils is used to distribute Python modules). In the following
multithreads.pyx file, the function
vector_log_multiplication computes
log(a)*log(b) for each entry in the A and B arrays and stores the result in the C array. Note that a parallel loop (
prange) is used to allow multiple threads to be executed in parallel. The log function is imported from the C math library. The function
getnumthreads() returns the number of threads:
$ cat multithreads.pyx
cimport cython
import numpy as np
cimport openmp
from libc.math cimport log
from cython.parallel cimport prange
from cython.parallel cimport parallel

@cython.boundscheck(False)
def vector_log_multiplication(double[:] A, double[:] B, double[:] C):
    cdef int N = A.shape[0]
    cdef int i
    with nogil, cython.boundscheck(False), cython.wraparound(False):
        for i in prange(N, schedule='static'):
            C[i] = log(A[i]) * log(B[i])

def getnumthreads():
    cdef int num_threads
    with nogil, parallel():
        num_threads = openmp.omp_get_num_threads()
        with gil:
            return num_threads
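The Cython kernel above computes C[i] = log(A[i]) * log(B[i]). As a quick sanity check on small inputs, here is a plain-Python reference of the same computation (this sketch is mine, single-threaded, and is only for verifying results, not for performance):

```python
from math import log

def vector_log_multiplication_ref(A, B):
    """Reference implementation: returns [log(a) * log(b) for each pair]."""
    return [log(a) * log(b) for a, b in zip(A, B)]

# Inputs in [1, 2), matching how the MPI sample initializes its arrays.
A = [1.5, 2.0]
B = [2.0, 1.5]
C = vector_log_multiplication_ref(A, B)
print(C[0] == log(1.5) * log(2.0))  # True
```

Comparing a few entries of the Cython output against this reference is a cheap way to confirm the parallel kernel is correct before timing it.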
The
setup.py file invokes the
setuptools build process that generates the extension modules. By default, this
setup.py uses GNU Compiler Collection* to compile the C code of the Python extension. In order to take advantage of Intel AVX-512 and OpenMP multithreading in the Intel Xeon Phi processor, one can specify the options
-xMIC-avx512 and
-qopenmp in the compile and link flags, and use the Intel® C++ Compiler. For more information on how to create the
setup.py file, refer to the Writing the Setup Script section of the Python documentation.
$"], libraries=["m"], extra_compile_args = ["-O3", "-xMIC-avx512", "-qopenmp" ], extra_link_args=['-qopenmp', '-xMIC-avx512'] ) ] )
In this example, the Intel Parallel Studio XE 2018 update 1 is installed. First, set the proper environment variables for the Intel® C compiler:
$ source /opt/intel/parallel_studio_xe_2018.1.038/psxevars.sh intel64
Intel(R) Parallel Studio XE 2018 Update 1 for Linux*
Copyright (C) 2009-2017 Intel Corporation. All rights reserved.
To explicitly use the Intel compiler
icc to compile this application, execute the
setup.py file with the following command:
$ LDSHARED="icc -shared" CC=icc python setup.py build_ext --inplace
running build_ext
cythoning multithreads.pyx to multithreads.c
building 'multithreads' extension
creating build
creating build/temp.linux-x86_64-2.7
icc -fno-strict-aliasing -Wformat -Wformat-security -D_FORTIFY_SOURCE=2 -fstack-protector -O3 -fpic -fPIC -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/intel/intelpython2/include/python2.7 -c multithreads.c -o build/temp.linux-x86_64-2.7/multithreads.o -O3 -xMIC-avx512 -qopenmp
icc -shared build/temp.linux-x86_64-2.7/multithreads.o -L/opt/intel/intelpython2/lib -lm -lpython2.7 -o ./multithreads.so -qopenmp -xMIC-avx512
As mentioned above, this process first generates the extension code
multithreads.c. The Intel compiler compiles this extension code to generate the dynamic shared object library
multithreads.so.
In this section, we write an MPI application in Python. This program imports the
mpi4py and
multithreads modules. The MPI application uses a communicator object,
MPI.COMM_WORLD, to identify a set of processes that can communicate within the set. The MPI functions
MPI.COMM_WORLD.Get_size(), MPI.COMM_WORLD.Get_rank(), MPI.COMM_WORLD.send(), and MPI.COMM_WORLD.recv() are methods of this communicator object. Note that in mpi4py there is no need to call
MPI_Init() and
MPI_Finalize() as in the MPI standard because these functions are called when the module is imported and when the Python process ends, respectively.
The sample Python application first initializes two large input arrays consisting of random numbers between 1 and 2. Each MPI rank uses OpenMP threads to do the computation in parallel; each OpenMP thread in turn computes the product of two natural logarithms
c = log(a)*log(b) where a and b are random numbers between 1 and 2 (1 ≤ a,b ≤ 2). To do that, each MPI rank calls the
vector_log_multiplication function defined in the
multithreads.pyx file. Execution time of this function is short, about 1.5 seconds. For illustration purposes, we use the
timeit utility to invoke the function 10 times, just to have enough time to demonstrate the number of OpenMP threads involved.
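Note that timeit.timeit with number=10 returns the total wall time for all 10 calls, not a per-call average; the per-call time is the returned value divided by number. A small stdlib example of that convention (the timed function here is just a stand-in, not the article's kernel):

```python
import timeit

def work():
    # Stand-in for the real kernel; just burns a little CPU.
    return sum(i * i for i in range(1000))

number = 10
# timeit accepts a callable directly; returns total seconds for all calls.
total = timeit.timeit(work, number=number)
per_call = total / number
print(total >= per_call)  # True whenever number >= 1
```

This is why the sample below reports the value of t1 directly as "seconds" for the whole 10-iteration loop.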
Below is the application source code
mpi_sample.py. Note that if the running time of the program is too short, you may increase the value of
FACTOR in the source code file to make the execution time longer. In this example, the value of
FACTOR is changed from 512 to 1024:
$ cat mpi_sample.py
from mpi4py import MPI
from multithreads import *
import numpy as np
import timeit

def time_vector_log_multiplication():
    vector_log_multiplication(A, B, C)

size = MPI.COMM_WORLD.Get_size()
rank = MPI.COMM_WORLD.Get_rank()
name = MPI.Get_processor_name()

THOUSAND = 1024
FACTOR = 1024
NUM_TOTAL_ELEMENTS = FACTOR * THOUSAND * THOUSAND
NUM_ELEMENTS_RANK = NUM_TOTAL_ELEMENTS / size
repeat = 10
numthread = getnumthreads()

if rank == 0:
    print "Initialize arrays for %d million of elements" % FACTOR

A = 1 + np.random.rand(NUM_ELEMENTS_RANK)
B = 1 + np.random.rand(NUM_ELEMENTS_RANK)
C = np.zeros(A.shape)

if rank == 0:
    print "Start timing ..."
    print "Call vector_log_multiplication with iter = %d" % repeat
    t1 = timeit.timeit("time_vector_log_multiplication()",
                       setup="from __main__ import time_vector_log_multiplication",
                       number=repeat)
    print "Rank %d of %d running on %s with %d threads in %d seconds" % (rank, size, name, numthread, t1)
    for i in xrange(1, size):
        rank, size, name, numthread, t1 = MPI.COMM_WORLD.recv(source=i, tag=1)
        print "Rank %d of %d running on %s with %d threads in %d seconds" % (rank, size, name, numthread, t1)
    print "End timing ..."
else:
    t1 = timeit.timeit("time_vector_log_multiplication()",
                       setup="from __main__ import time_vector_log_multiplication",
                       number=repeat)
    MPI.COMM_WORLD.send((rank, size, name, numthread, t1), dest=0, tag=1)
Run the following command line to launch the above Python application with two MPI ranks:
$ mpirun -host localhost -n 2 python mpi_sample.py
Initialize arrays for 1024 million of elements
Start timing ...
Call vector_log_multiplication with iter = 10
Rank 0 of 2 running on knl-sb2 with 136 threads in 6 seconds
Rank 1 of 2 running on knl-sb2 with 136 threads in 6 seconds
End timing ...
While the Python program is running, the top command in a new terminal displays two MPI ranks (shown as two Python processes). When the main module enters the loop (shown with the message
"Start timing …"), the top command reports almost 136 threads running (about 13,600 percent CPU). This is because, by default, all 272 hardware threads on this system are utilized by two MPI ranks, thus each MPI rank has 272/2 = 136 threads.
Figure 1. On an Intel® Xeon Phi™ processor, the "top" command shows two MPI ranks running. Each MPI rank spawns 136 threads.
To get detailed information about MPI at run time, we can set the I_MPI_DEBUG environment variable to a value ranging from 0 to 1000. The following command runs four MPI ranks and sets I_MPI_DEBUG to the value 4. Each MPI rank has 272/4 = 68 OpenMP threads, as indicated by the top command:
$ mpirun -n 4 -genv I_MPI_DEBUG 4 python mpi_sample.py
[0] MPI startup(): Multi-threaded optimized library
[0] MPI startup(): shm data transfer mode
[1] MPI startup(): shm data transfer mode
[2] MPI startup(): shm data transfer mode
[3] MPI startup(): shm data transfer mode
[0] MPI startup(): Rank    Pid      Node name  Pin cpu
[0] MPI startup(): 0       136454   knl-sb2    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220}
[0] MPI startup(): 1       136455   knl-sb2    {17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237}
[0] MPI startup(): 2       136456   knl-sb2    {34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253,254}
[0] MPI startup(): 3       136457   knl-sb2    {51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271}
Initialize arrays for 1024 million of elements
Start timing ...
Call vector_log_multiplication with iter = 10
Rank 0 of 4 running on knl-sb2 with 68 threads in 6 seconds
Rank 1 of 4 running on knl-sb2 with 68 threads in 6 seconds
Rank 2 of 4 running on knl-sb2 with 68 threads in 6 seconds
Rank 3 of 4 running on knl-sb2 with 68 threads in 6 seconds
Figure 2. Run with four MPI ranks, each MPI rank spawns 68 threads.
We can specify the number of OpenMP threads used by each rank in the parallel region by setting the OMP_NUM_THREADS environment variable. The following command starts four MPI ranks with 34 threads for each MPI rank (or 2 threads/core):
$ mpirun -host localhost -n 4 -genv OMP_NUM_THREADS 34 python mpi_sample.py
Initialize arrays for 1024 million of elements
Start timing ...
Call vector_log_multiplication with iter = 10
Rank 0 of 4 running on knl-sb2 with 34 threads in 6 seconds
Rank 1 of 4 running on knl-sb2 with 34 threads in 6 seconds
Rank 2 of 4 running on knl-sb2 with 34 threads in 6 seconds
Rank 3 of 4 running on knl-sb2 with 34 threads in 6 seconds
End timing ...
Figure 3. Run with four MPI ranks, each MPI rank spawns 34 threads.
Note that if we run four MPI ranks with 17 threads for each MPI rank (or 1 thread/core), the program takes more time to run, as expected:
$ mpirun -host localhost -n 4 -genv OMP_NUM_THREADS 17 python mpi_sample.py
Initialize arrays for 1024 million of elements
Start timing ...
Call vector_log_multiplication with iter = 10
Rank 0 of 4 running on knl-sb2 with 17 threads in 8 seconds
Rank 1 of 4 running on knl-sb2 with 17 threads in 8 seconds
Rank 2 of 4 running on knl-sb2 with 17 threads in 8 seconds
Rank 3 of 4 running on knl-sb2 with 17 threads in 8 seconds
End timing ...
Finally, we can force the program to allocate memory in MCDRAM (the high-bandwidth memory on the Intel Xeon Phi processor). For example, before the execution of the program, the "numactl --hardware" command shows the system has two NUMA nodes: node 0 consists of the CPUs and 96 GB of DDR4 memory, node 1 is the on-board 16 GB MCDRAM:
$ numactl --hardware
...87 MB
node 1 cpus:
node 1 size: 16384 MB
node 1 free: 15921 MB
node distances:
node   0   1
  0:  10  31
  1:  31  10
Run the following command, which requests that memory be allocated in MCDRAM when possible:
$ mpirun -n 4 numactl --preferred 1 python mpi_sample.py
While the program is running, we can observe that it allocates memory in MCDRAM (NUMA node 1):
$ numactl --hardware
...89 MB
node 1 cpus:
node 1 size: 16384 MB
node 1 free: 112 MB
node distances:
node   0   1
  0:  10  31
  1:  31  10
Readers can also try the above code on an Intel® Xeon® processor system with the appropriate settings; for example, on an Intel® Xeon® Scalable processor, use -xCORE-AVX512 instead of -xMIC-AVX512, and set the appropriate number of available threads. Also note that the Intel Xeon Scalable processor doesn't have high-bandwidth memory.
This article introduced the MPI for Python package and demonstrated how to use it via the Intel Distribution for Python. It also showed how to combine OpenMP and Intel AVX-512 instructions to take full advantage of the Intel Xeon Phi processor architecture: a simple example walked through writing a parallel Cython function with OpenMP, compiling it with the Intel compiler with the Intel AVX-512 option enabled, and integrating it with an MPI Python program.
Loc Q Nguyen received an MBA from University of Dallas, a master’s degree in Electrical Engineering from McGill University, and a bachelor's degree in Electrical Engineering from École Polytechnique de Montréal. He is currently a software engineer at Intel Software and Services Group. His areas of interest include computer networking, parallel computing, and computer graphics.
Intel® Distribution for Python*
Intel® Parallel Studio XE
Intel® AVX-512 Instructions
Cython C-Extensions for Python
Source: https://software.intel.com/content/www/us/en/develop/articles/code-sample-exploring-mpi-for-python-on-intel-xeon-phi-processor.html
I looked and there are off-the-shelf solutions, but they are rather expensive and not as fun to build as an RPi setup.
So I have the newest RPi and bought the PiCam.
I found this project: and had a go with the code.
Except for a small problem with the print statement, I got it working.
The RPi setup will be left unattended for 1-2 weeks in a box.
The change I want to make is so it only takes pictures if it's within a certain timeframe (to save space on the memory card).
The problem I can't seem to figure out (I am new at this) is how to set it up so the loop in the code will only run if it's between 8:00 AM and 4:00 PM.
This is the code now:
What would you suggest?
Code:
import os
import time
import RPi.GPIO as GPIO
import logging
from datetime import datetime   # needed for datetime.now() below

# Grab the current date/time, used to build a unique folder name
d = datetime.now()
initYear = "%04d" % d.year
initMonth = "%02d" % d.month
initDate = "%02d" % d.day
initHour = "%02d" % d.hour
initMins = "%02d" % d.minute

# Define the location where you wish to save files. Set to HOME as default.
# If you run a local web server on Apache you could set this to /var/www/ to make them
# accessible via web browser.
folderToSave = "/home/timelapse/timelapse_" + str(initYear) + str(initMonth) + str(initDate) + str(initHour) + str(initMins)
os.mkdir(folderToSave)

# Set up a log file to store activities for any checks.
logging.basicConfig(filename=str(folderToSave) + ".log", level=logging.DEBUG)
logging.debug(" R A S P I L A P S E C A M -- Started Log for " + str(folderToSave))
logging.debug(" Support at")

# Set the initial serial for saved images to 1
fileSerial = 1

# Run a WHILE Loop infinitely
while True:
    d = datetime.now()
    if d.hour < 99:
        # Image dimensions
        imgWidth = 800   # Max = 2592
        imgHeight = 600
        hour = "%02d" % d.hour
        mins = "%02d" % d.minute
        fileSerialNumber = "%05d" % fileSerial
        # Capture a still with the Pi camera
        os.system("raspistill -w " + str(imgWidth) + " -h " + str(imgHeight)
                  + " -o " + str(folderToSave) + "/" + str(fileSerialNumber)
                  + "_" + str(hour) + str(mins) + ".jpg -sh 40 -awb auto -mm average -v")
        # Write out to log file
        logging.debug(' Image saved: ' + str(folderToSave) + "/" + str(fileSerialNumber) + "_" + str(hour) + str(mins) + ".jpg")
        # Increment the fileSerial
        fileSerial += 1
        # Wait 60 seconds (1 minute) before next capture
        time.sleep(60)
    else:
        # Just trapping out the WHILE Statement
        print " ====================================== Doing nothing at this time"
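A minimal sketch of the kind of change being asked about, assuming the Pi's system clock is set to local time (the helper name is made up): replace the always-true `if d.hour < 99:` test with a check against a capture window.

```python
from datetime import datetime

START_HOUR = 8   # 8:00 AM
END_HOUR = 16    # 4:00 PM (captures stop once the clock reaches 16:00)

def in_capture_window(d):
    # True from 08:00:00 up to (but not including) 16:00:00 local time.
    return START_HOUR <= d.hour < END_HOUR

# In the loop, instead of `if d.hour < 99:`:
#     d = datetime.now()
#     if in_capture_window(d):
#         ...take and log the picture, then time.sleep(60)...
#     else:
#         time.sleep(60)  # sleep and check again without taking a picture

print(in_capture_window(datetime(2017, 1, 1, 12, 0)))  # midday
print(in_capture_window(datetime(2017, 1, 1, 20, 0)))  # evening
```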
3 Ways improve Redux Reducers
- 2021
Improving Redux Reducers in 3 Ways. Below is a simple `switch` statement that you probably have seen in 99% of the Redux/reducers examples out there.
In this article, I am going to assume you know what Redux is and what the reducers do.
I will go over how to improve your Redux reducers by making them faster and how to avoid the cyclomatic complexity warning/error you might get with something like SonarQube when the number of actions increases.
Below is a simple
switch statement that you probably have seen in 99% of the Redux/reducers examples out there.
switch (action.type) { case ShowsAction.REQUEST_SHOW_FINISHED: return { ...state, show: action.payload, }; case ShowsAction.REQUEST_EPISODES_FINISHED: return { ...state, episodes: action.payload, }; default: return state; }
reducer-switch-statement.js
The way we are going to improve it is by using a
dictionary.
Dictionary (Key-Value Pair)
A
dictionary is just a simple JavaScript object where you can add a string value as key and assign a value to it.
Below is a simplified version of what we are going to create. Notice how the key is the action
type name and the assigned value is a function that takes the current state and payload as arguments to create a new state.
const dictionary = {}; // Add keys to the dictionary dictionary['REQUEST_SHOW_FINISHED'] = (state, payload) => { return { ...state, show: payload, } }; dictionary['REQUEST_EPISODES_FINISHED'] = (state, payload) => { return { ...state, episodes: payload, } }; dictionary['REQUEST_CAST_FINISHED'] = (state, payload) => { return { ...state, actors: payload, } }; // Usage const newState = dictionary[action.type](state, action.payload); // Warning: This will break if the action.type is not found
dictionary.js
JavaScript Functional Approach
Let’s improve the dictionary example by creating a
baseReducer function that takes the
initialState as the first argument and a dictionary as the second argument.
Hopefully, the code below is easy to read/understand but, basically, we use the action type constant as the function name.
export const initialState = { currentShowId: '74', show: null, episodes: [], actors: [], }; export const showsReducer = baseReducer(initialState, { [ShowsAction.REQUEST_SHOW_FINISHED](state, action) { return { ...state, show: action.payload, }; }, [ShowsAction.REQUEST_EPISODES_FINISHED](state, action) { return { ...state, episodes: action.payload, }; }, [ShowsAction.REQUEST_CAST_FINISHED](state, action) { return { ...state, actors: action.payload, }; }, });
_showsReducer.js
export default function baseReducer(initialState, reducerDictionary) { // returns a redux reducing function return (state = initialState, action) => { // if the action type is used for a reducer name then this be a reference to it. const reducer = reducerDictionary[action.type]; // if the action type "reducer" const is undefined or the action is an error // return the state. if (!reducer || action.error) { return state; } // if there is a valid reducer call it with the state and action objects. return reducer(state, action); }; }
baseReducer.js
In the above code, the
reducerDictionary parameter is the dictionary that was passed in. Notice how
action.type is used here,
reducer[action.type], to get access to the correct reducer function.
JavaScript Class Approach
Let’s improve the dictionary example by creating a
BaseReducer class for our class reducers to extend.
Below, notice how
ShowsReducer
extends
BaseReducer. This is inheritance and it abstracts some of the logic to another class so the reducers only have the necessary stuff.
export default class ShowsReducer extends BaseReducer { initialState = { currentShowId: '74', show: null, episodes: [], actors: [], }; [ShowsAction.REQUEST_SHOW_FINISHED](state, action) { return { ...state, show: action.payload, } } [ShowsAction.REQUEST_EPISODES_FINISHED](state, action) { return { ...state, episodes: action.payload, } } [ShowsAction.REQUEST_CAST_FINISHED](state, action) { return { ...state, actors: action.payload, } } }
_ShowsReducer.js
export default class BaseReducer { initialState = {}; reducer = (state = this.initialState, action) => { const method = this[action.type]; if (!method || action.error) { return state; } return method.call(this, state, action); }; }
BaseReducer.js
If you look at the above
BaseReducer, you will see:
Line 2: Is the
initialStatethat will be overridden when a reducer class extends this
BaseReducer.
Line 4: Is the reducer method that will be used by Redux.
Line 5: Gets access to the class method that matches the
action.type.
Line 7: If the method is not found (
!method) or if the action is an error (
action.error), then it returns the current
state.
Line 11: Calls the found method with the
stateand
actionarguments which will return the modified
statethat Redux will use.
Code Examples
If you want to see these code examples in action, check out my other article for the sample application and source code for both the functional and class-base approaches. I have TypeScript versions too! | https://geekwall.in/p/T7WR9-8N/3-ways-improve-redux-reducers | CC-MAIN-2020-29 | en | refinedweb |
Having a problem understanding the difference between ruby blocks, procs and lamdas. What are blocks? What is the difference between procs and lambdas? Lets break this down.
BLOCKS
A block is a collection of code enclosed in a do / end statement or between braces { }. They are chunks of code that you can pick up and drop into another method as input or chunk of code that you associate with a method call. If you have used
each before to loop through an Enumerable then you have used blocks.
Defining a block
def block_method
puts "we are in the method"
endblock_method { puts "The block is called"}
Here we have defined the method block_method. Below is a the method call after which we pass a block.
Can you guess the output?
Yeah, you guessed right.
> we are in the method
The output is only of the defined method. This is because we have not invoked the block in any way. Lets see how we invoke the block in the next section.
Yielding a block
def my_method
puts "we are in the method"
yield
puts "we are back in the method"
endmy_method { puts "The block is called"}
What really happens in the above code?
First we have created a method called my_method. Then on the next line we print out the string
we are in the method . In next line, notice we have used the
yield keyword which will find and invoke the block the method was called with.
What happens here, is that yield will go to the method call and execute the block after which control returns to the method, to resume running method body. In our case we have called the method my_method then passed a block using {}. Yield will execute the block code which was passed after calling my_method then method body execution continues.
Passing parameters to a block
What if you want to pass parameters to yield. Think of how you pass arguments to methods like each whenever you give it a block.
[1,2,3].each {|x| puts x*2 }
In this case the each method takes a block that has an argument. How about we do this with our defined block and see how
yield takes arguments.
def my_block
yield 2
yield 3
endmy_block {|parameter| puts "parameter is: #{parameter}" }
Here yield will invoke the block passed with the method call. In our case give the block an argument since the block takes a parameter. First round it will invoke the block with parameter being 2. Control resumes to the method and then invokes the block once again this time with parameter being 3.
> parameter is: 2
> parameter is: 3
Some fun facts about blocks
- Did you know you can call each method on an enumerable and not pass it a block? It will give you back an enumerable object
- Inside blocks you cannot explicitly return, blocks return the last line of execution
- When calling each method, the last line is usually ignored and the each method returns the original enumerable
- When calling map, the last line is added into an array which will be returned
PROCS
So what if you want to pass two blocks to your function. How can you save your block into a variable?
Ruby introduces procs so that we are able to pass blocks around. Proc objects are blocks of code that have been bound to a set of local variables.Once bound, the code may be called in different contexts and still access those variables.
Defining procs
You can call new on the Proc class to create a proc . You can use the kernel object
proc. Proc method is simply an alias for Proc.new. This can be assigned into a variable.
factor = Proc.new {|n| print n*2 }or factor = proc {|n| print n*2}//using the proc value
[3,2,1].each(&factor)
>642
We precede the argument with
& so that ruby knows that this is a proc and not a variable.
Defining a method that takes in a proc/block
def my_each(&block )
self.length.times do |i|
# and now we can call our new Proc like normal
block.call( self[i] )
end
end[1,2,3].my_each { |i| puts i*2 }
On our definition,
&converts the block into a proc so we treat the block as a proc inside our method.
We no longer use yield since it is possible to pass more than one proc to a method, we have to say which one we are calling.
There are different ways of calling procs in our methods. Using
call,
() or using
[].
LAMBDAS
Can be defined using the method lambda or can be defined as stuby lambda
lamb = lambda {|n| puts 'I am a lambda' }lamb = -> (n) { puts 'I am a stuby lambda' }
Difference between Procs and Lambdas
- Procs don’t care about the correct number of arguments, while lambdas will raise an exception.
- Return and break behaves differently in procs and lambdas
- Next behaves same way in both procs and lambdas
References
Pluralsight course: Ruby Fundamentals
The vikingscodeschool website:
rubyguides.com : | https://medium.com/podiihq/ruby-blocks-procs-and-lambdas-bb6233f68843 | CC-MAIN-2020-29 | en | refinedweb |
1. What is the list view?
Answer:.
2. What is Inline editing?
Answer: On the detail page without clicking on the edit button we can edit a particular field if it is not read-only.
3. Explain the term “Data Skew” in Salesforce?
Answer: .
4. Explain the skinny table. What are the considerations for Skinny Table?.
5. Mention what are the actions available in workflow?
Answer: Actions available in the workflow are:
- Task
- Field Update
- Outbound Message
6. What is the Enhanced list view?
Answer: In list views, we can modify multiple records at a time using Enhanced list views.
7. Which fields are automatically Indexed in Salesforce?
Answer: Only the following fields are automatically indexed in Salesforce:
- Primary keys (Id, Name and Owner fields).
- Foreign keys (lookup or master-detail relationship fields).
- Audit dates (such as System Mod Stamp).
- Custom fields marked as an External ID or a unique field.
8. What is the search layout?
Answer: Whenever we click on a tab or we click on a lookup icon or search for a record we see only one standard field by default, to enable the.
9. For which criteria in workflow “time dependent workflow action” cannot be created?
Answer: Time dependent workflow action cannot be created for: “created, and every time it’s edited”.
10. What is the mini page layout and how to enable it?
Answer: For lookup fields on record detail page we see a link, whenever we put the cursor on that link we see a popup window that displays few fields. To control the visibility of the field, on that lookup field parent object page layout we see a mini page layout that we can control.
11. For which data type we can enable external id?
Answer: Text, Number, Auto number, Email.
12. Mention what are the different types of custom settings in Sales force?
Answer: Different types of custom settings in Salesforce includes:
- Hierarchy type: Hierarchy custom settings are a type of custom setting that uses built-in hierarchical logic for “personalizing” settings for specific profiles or users.
- List type: List custom settings are another type of custom setting that provides a reusable set of static data that can be accessed across your organization irrespective of user/ profile.
13. What is the lead process?
Answer: To control the picklist values of the status field on the lead object we should create the lead process.
- Without selecting the lead process we can’t create the record type for the lead object.
14. What is the advantage of using a custom setting?
Answer: The advantage of using custom settings is that it allows developers to create a custom set of access rules for various users and profiles.
15. What is the sales process?
Answer: To control the picklist values of the stage field on the opportunity object we should create a sales process.
- Without selecting the sales process we can’t create the record type for opportunity object.
16. How many active assignment rules can you have in a lead/ case?
Answer: Only one rule can be active at a time.
17. What is the Support process?
Answer: To control the picklist values of the status field on the case object we should create a support process.
- Without selecting the support process we can’t create the record type for case object.
18. Mention what is the difference between Is Null and Is Blank?
Answer:
- Is Null: It supports for a number field
- Is Blank: It supports for Text field.
19. What is web-to-lead?
Answer: On the lead object, we can generate the HTML code by selecting lead fields and by mentioning the return URL from a web-to-lead option. The generated HTML code can be hosted on any of the websites. Upon entering the information in those fields and clicking on the submit button that information will be saved into the lead object of the Salesforce.
20. What are custom labels in the Salesforce? What is the character limit of custom labels?
Answer: Custom labels are custom text values that can be accessed from Apex classes or Visual force pages. The values here can be translated into any language supported by Salesforce.
Their benefit is that they enable developers to create a multilingual application that automatically presents information in a user’s native language.
You can create up to 5,000 custom labels for your organization, and they can be up to 1,000 characters in length.
21. What is the Queue?
Answer: In the queue, we can add a group of users and we can assign the objects to the Queue. After creating the queue one of the lists view automatically created on the objects which are selected for the queue. We can assign this queue as the owner of the records. Later users who are part of that queue can claim the ownership by navigating to the list view corresponding to the queue. In that list view, users who are part of the queue can select the record and click on the accept button so that record ownership will be transferred from queue to accepted person.
22. What is the difference between a Role and Profile in the Sales force?.
The role, however, is not mandatory for every user. The primary function of the Role/ Role hierarchy is that it allows higher-level users in the hierarchy to get access to records owned by lower-level users in the hierarchy. An example of that is Sales Managers getting access to records owned by Sales Reps while their peers do not get access to it.
23. What is a public group?
Answer: We can add a set of random users in the public group. We can’t assign the public group as an owner of the record. In manual sharing, sharing rules and in list views, we can use public groups.
24. How many callouts to external service can be made in a single Apex transaction?
Answer: Governor limits will restrict a single Apex transaction to make a maximum of 100 callouts to an HTTP request or an API call.
25. What are the Assignment rules?
Answer: On lead and case objects we can create the Assignment rules. Whenever any record is submitted for lead/case if specified condition in the Assignment rule satisfied based on that we can decide the owner of the case/lead. Note: While submitting case/lead we should check for ‘Assign using active assignment rule’ checkbox which will display under the Optional section.
26. What is the difference between a standard controller and a custom controller?
Answer: The standard controller in Apex inherits all the standard object properties and standard button functionality directly. It contains the same functionality and logic that are used for standard Salesforce pages.
A custom controller is an Apex class that implements all of the logic for a page without leveraging a standard controller. Custom Controllers are associated with Visual force pages through the controller attribute.
27. What are the Auto-Response Rules?
Answer: On lead and case objects we can create the Auto-Response Rules. Whenever any record is submitted for lead/case if specified condition in the Auto-Response Rules satisfied based on that we can decide the email format which should send as auto-response.
28. How many records can a select query return? How many records can a SOSL query return?
Answer: The Governor Limits enforces the following:-
A maximum number of records that can be retrieved by SOQL command: 50,000.
A maximum number of records that can be retrieved by SOSL command: 2,000. Salesforce Admin Training
29. What are the Escalation rules?
Answer: On the case object, we can create an Escalation rule. Based on the priority we can send escalation mails.
30. What are the three types of bindings used in Visual force? What does each refer to?
Answer: There are three types of bindings used in Salesforce:-
- Data bindings, which refer to the data set in the controller
- Action bindings, which refer to action methods in the controller
- Component bindings, which refer to other Visual force components.
Data bindings and Action bindings are the most common and they will be used in every Visual force page.
31. Is it possible to create the Master-Detail Relationship field for the child object which is having existing records?
Answer: No, we cannot create directly. To create first we should create a Lookup relationship then populate the field value for all the records and then convert the lookup a relationship to master-detail relationship.
32. What are the different types of collections in Apex? What are maps in Apex?
Answer: Collections are the type of variables that can be used to store multiple numbers of records.
33. Is it possible to convert Master-Detail Relationship to Look Up Relationship?
Answer: If the parent object doesn’t have Roll up Summary fields for the child object then we can convert.
34. What is the difference between the public and global classes in Apex?
Answer: Global class is accessible across the Salesforce instance irrespective of namespaces.
Public classes are accessible only in the corresponding namespaces.
35. Is it possible to delete junction – Object in case of Master – Detail Relationship?
Answer: If the parent objects don’t have Roll up Summary fields for the child object then we can delete.
To delete a child object it should not be referred to in Apex Classes and Apex Triggers.
Later if we undelete the object, Master-detail fields on the junction objects will be converted to look up Fields.
Note:
- If we delete only the Master-Detail Relationship field from the child object and undelete it from the Recycle Bin then it will be converted to look up a relationship.
- Parent Object we cannot delete because it will be referred to in the child object.
36..
Answering the second part of the question, each user can only be assigned 1 profile.
37. What will happen if we undelete the deleted Junction Object?
Answer: Master – Detail Relationship data types will be converted to look up relationship data types.
38. Can you edit an apex trigger/apex class in a production environment? Can you edit a Visualforce page in a production environment?
Answer: No, it is not possible to edit apex classes and triggers directly in sandboxes and in production.
Only if the page has to do something unique, it would have to be developed via Sandbox.
39. What is Junction Object?
Answer:
A child object which is having master-detail relationships with two different parent object is called a junction object.
Example:
Object1: Department
Object2: Project
Child Object: Employee
- Field1: Department (Master Detail with Department)
- Field2: Project(Master Detail with Project)
Note: From the above example we can say Employee Object as Junction Object
40. What are the different types of object relations in salesforce? How can you create them?
Answer: know is that being the controlling object, the master field cannot be empty. If a record/ field in a Master-Detail relationship, you can think of this as a form of parent-child relationship where there is only one parent, but many children i.e. 1:n relationship.
The difference here is that despite being a.
41. What happens to a detailed record when a master record is deleted? What happens to child records when a parent record is deleted?
Answer: In a Master-Detail relationship, when a master record is deleted, the detail record is deleted automatically (Cascade delete).
In a Lookup relationship, even if the parent record is deleted, the child record will not be deleted.
42. How to handle comma within a field while uploading using Data Loader?
Answer: In a Data Loader.CSV, if there is a comma in field content, you will have to enclose the contents within double quotation marks: ”
43. What are the examples of non-deterministic Force.com formula fields?
Answer:() Salesforce Training Online
44. What are the different ways of deployment in Salesforce?
Answer:
- You can deploy code in Salesforce using:
- Change Sets
- Eclipse with Force.com IDE
- Force.com Migration Tool – ANT/Java-based
- Salesforce Package.
45. What is an external ID in Salesforce? Which all field data types can be used as external IDs?
Answer: An external ID is a custom field that.
If we delete parent object record all the child object records relationship’s field value will be get deleted. (Entire record won’t be get deleted)
46. What is Trigger.new?
Answer: Triger.new is a command which returns the list of records that have been added recently to the subject. To be more precise, those records will be returned which are yet to be saved to the database. Note that this sObject list is only available in insert and update triggers, and the records can only be modified before triggers.
But just for your information, Trigger. old returns a list of the old versions of the sObject records. Note that this sObject list is only available in update and delete triggers.
47. What all data types can a set store?
Answer:
Sets can have any of the following data types: Primitive types
- Collections
- Objects
- User-defined types
- Built-in Apex types
48. What are getter methods and setter methods?
Answer:
Get (getter) method is used to pass values from the controller to the VF page.
Whereas, the set (setter) method is used to set the value back to the controller variable.
49. What is an Apex transaction?
Answer:.
50. What will happen to child records if we delete a parent record in case of Lookup Relationship?
Answer: If we delete parent object record all the child object records relationship’s field value will be get deleted. (Entire record won’t be get deleted)
Example:
Child Object: Employee (Employee object have a Department field which is related to Department Object)
Parent Object: Department
Suppose N number of employee records related to the IT department, if we delete the IT department all the child(Employee) records Department field value related to the IT department will be get deleted.
Note:
Salesforce store deleted records only for 15 days in Recycle bin later it will remove the records permanently.
If we undelete the IT department record from the Recycle bin then all the related child records department field value will be restored.
All Salesforce Interview Questions
Mentor Based Training40 Classes | 80+ Challenges
- Experienced Faculty
- Real-time Scenarios
- Free Bundle Access
- Course Future Updates
- Sample CV/Resume
- Interview Q&A
- Complimentary Materials | https://svrtechnologies.com/salesforce-interview-questions-and-answers/ | CC-MAIN-2020-29 | en | refinedweb |
Message for a Key eXchange for a tunnel, with authentication. More...
#include </home/handbook/gnunet/src/cadet/cadet_protocol.h>
Message for a Key eXchange for a tunnel, with authentication.
Used as a response to the initial KX as well as for rekeying.
Definition at line 291 of file cadet_protocol.h.
Message header with key material.
Definition at line 296 of file cadet_protocol.h.
Referenced by GCC_handle_kx_auth(), GCT_handle_kx_auth(), handle_tunnel_kx_auth(), and send_kx_auth().
KDF-proof that sender could compute the 3-DH, used in lieu of a signature or payload data.
Definition at line 311 of file cadet_protocol.h.
Referenced by GCT_handle_kx_auth(), and send_kx_auth(). | https://docs.gnunet.org/doxygen/d5/d6f/structGNUNET__CADET__TunnelKeyExchangeAuthMessage.html | CC-MAIN-2020-29 | en | refinedweb |
I have a mock service that listens on port 80 and receives the JSON callback from an API server. Right now I'm using the below code in the OnRequest script to get the response and assign it to a project property, like below.
mockRunner.mockService.project.setPropertyValue("ResponseBody",requestBody)
In the test suite, I poll for the "ResponseBody" property and, when it gets some value, I read the response body and process it in the test suite. After processing, I move to the next hit. The polling/waiting is consuming time; instead of waiting for each callback response, I want to complete all the hits and finally process the callback responses collected in the mock service. Is there any way to collect all responses in the mock service and refer to the list in the test suite?
I tried the below but I'm not sure whether it's correct or not. In the mock service's "Start Script" I put the below code:

def mockList = []
context.setProperty("MockResponseList", mockList)

In the mock service's "OnRequest Script" I put the below code:

def requestBody = mockRequest.getRequestContent()
def jsonResponseList = context.getProperty("MockResponseList")
jsonResponseList.add(requestBody)

1) I don't know how to refer to the above jsonResponseList in the project's test suites.
2) Will jsonResponseList stay the same for each OnRequest script call, or will it be unique?
3) Is there any way to collect all responses in the mock service and refer to the list in the test suite?
4) What is the best way to collect all responses and process them in one go?
Original article was published on Deep Learning on Medium
(W-ROV)Webserver Remotely Operated Vehicle Observation-Class I, Deep Learning Enabled Python, Github Code Included
Story
In this tutorial, we will combine what we have learned before, controlling our camera position through the internet, as shown in the example:
The above gif shows the camera controlled by buttons, pre-programmed with fixed Pan/Tilt angles. In this tutorial, we will also explore other alternatives to control the camera position through the internet.
Step 1: Used Instruments
Main parts:
Step 2: Installing the PiCam
1. With your RPi turned off, install the camera on its special port as shown below:
2. Turn on your Pi and go to Raspberry Pi Configuration Tool at the main menu and verify if Camera Interface is enabled:
If you needed to enable it, press [OK] and reboot your Pi. Make a simple test to verify that everything is OK:
raspistill -o ~/Desktop/image.png
You will realize that an image icon appears on your Rpi desktop. Click on it to open. If an image appears, your Pi is ready to stream video! If you want to know more about the camera, visit the link: Getting started with picamera.
Step 3: Installing Flask
There are several ways to stream video. The best (and “lighther”) way to do it that I found was with Flask, as developed by Miguel Grinberg. For a detailed explanation about how Flask does this, please see his great tutorial: flask-video-streaming-revisited.
On my tutorial: Python WebServer With Flask and Raspberry Pi, we learned in more details how Flask works and how to implement a web-server to capture data from sensors and show their status on a web page. Here, on the first part of this tutorial, we will do the same, only that the data to be sent to our front end, will be a video stream.
Creating a web-server environment:
The first thing to do is to install Flask on your Raspberry Pi. If you do not have it yet, go to the Pi Terminal and enter:
sudo apt-get install python3-flask
When you start a new project, it is best to create a folder to keep your files organized. For example:
From home, go to your working directory:
cd Documents
Create a new folder, for example:
mkdir camWebServer
The above command will create a folder named “camWebServer”, where we will save our python scripts:
/home/pi/Documents/camWebServer
Now, in this folder, let's create 2 sub-folders: static for CSS (and eventually JavaScript) files, and templates for HTML files. Go to your newly created folder:
cd camWebServer
And create the 2 new sub-folders:
mkdir static
and
mkdir templates
The final directory “tree”, will look like:
├── Documents
├── camWebServer
├── templates
└── static
OK! With our environment in place let’s create our Python WebServer Application to stream video.
Step 4: Creating the Video Streaming Server
First, download Miguel Grinberg’s picamera package: camera_pi.py and save it on created directory camWebServer. This is the heart of our project, Miguel did a fantastic job!
Now, using Flask, let’s change the original Miguel’s web Server application (app.py), creating a specific python script to render our video. We will call it appCam.py:
from flask import Flask, render_template, Response

# Raspberry Pi camera module (requires picamera package, developed by Miguel Grinberg)
from camera_pi import Camera

app = Flask(__name__)

@app.route('/')
def index():
    """Video streaming home page."""
    return render_template('index.html')

def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80, debug=True, threaded=True)
The above script streams your camera video on an index.html page as below:
<html>
<head>
<link rel="stylesheet" href='../static/style.css'/>
</head>
<body>
<h1>MJRoBot Lab Live Streaming</h1>
<h3><img src="{{ url_for('video_feed') }}" width="90%"></h3>
<hr>
</body>
</html>
The most important line of index.html is:
<img src="{{ url_for('video_feed') }}" width="90%">
That is where the video will be "fed" to our web page.
You must also include the style.css file on the static directory to get the above result in terms of style.
All the files can be downloaded from my GitHub: camWebServer
Only to be sure that everything is in the right location, let’s check our environment after all updates:
├── Documents
└── camWebServer
├── camera_pi.py
├── appCam.py
├── templates
| └── index.html
└── static
└── style.css
Now, run the python script on the Terminal:
sudo python3 appCam.py
Go to any browser in your network and enter your RPi's IP address (for example, in my case: 10.0.1.27)
NOTE: If you are not sure about your RPi IP address, run on your terminal:
ifconfig
at wlan0: section you will find it.
The results:
That’s it! From now it is only a matter to sophisticate a page, embedded your video on another page etc.
Step 5: The Pan Tilt Mechanism
Now that we have the camera working and our Flask WebServer streaming its video, let’s install our Pan/tilt mechanism to position the camera remotely.
A resistor should be connected in series between each Raspberry Pi GPIO pin and the servo's data input pin. This would protect your RPi in case of a servo problem.
Let’s also use the opportunity and test our servos inside our Virtual Python Environment.
Let’s use Python script to execute some tests with our drivers: an equivalent duty cycle.
Step 6: Using Incremental — Decremental Angle Buttons
Sometimes what we only need is a few buttons to move our servos in steps:
- Pan: Left / Right
- Tilt: Up / Down
We can also use +/- buttons (Incremental — decremental angle), your choice. Let’s create a new directory:
mkdir PanTiltControl2
On this directory we should have the following environment and files:
├── Documents
└── PanTiltControl2
├── camera_pi.py
├── angleServoCtrl.py
├── appCamPanTilt2.py
├── templates
| └── index.html
└── static
└── style.css
The files camera_pi.py and angleServoCtrl.py are the same used before. You can download both from my GitHub, clicking on correspondent links or use the ones that you have downloaded before.
Now we need the appCamPanTilt2.py, the index.html and style.css. You can download those files from my GitHub, clicking on corresponding links. Pay attention to its correct position on your directory.
Let’s see the NEW index.html:
<html>
<head>
<title>MJRoBot Lab Live Streaming</title>
<link rel="stylesheet" href='../static/style.css'/>
</head>
<body>
<h3><img src="" width="80%"></h3>
<hr>
<h4> PAN Angle: <a href="/pan/-" class="button">-</a> [ {{ panServoAngle }} ] <a href="/pan/+" class="button">+</a> </h4>
<h4> TILT Angle: <a href="/tilt/-" class="button">-</a> [ {{ tiltServoAngle }} ] <a href="/tilt/+" class="button">+</a> </h4>
<hr>
</body>
</html>
The index.html is very similar to the previous one. The bunch of lines used in the last index.html were replaced by only 2 lines, where we now have only 4 buttons: Pan [+], Pan [-], Tilt [+] and Tilt [-].
Let’s analyze one of the 4 buttons:
<a href="/pan/-" class="button">-</a>
This is also a simple HTML hyperlink TAG, that we have styled as a button (the button style is described in style.css). When we click on this link, we generate a “GET /<servo>/<Increment or decrement angle>”, where <servo> is “pan” and <-> is “decrease angle”. Those parameters will be passed to the Web Server App (appCamPanTilt2.py).
Let’s see this part of code on appCamPanTilt2.py:
@app.route("/<servo>/<angle>")
def move(servo, angle):
    global panServoAngle
    global tiltServoAngle
    if servo == 'pan':
        if angle == '+':
            panServoAngle = panServoAngle + 10
        else:
            panServoAngle = panServoAngle - 10
        os.system("python3 angleServoCtrl.py " + str(panPin) + " " + str(panServoAngle))
    if servo == 'tilt':
        if angle == '+':
            tiltServoAngle = tiltServoAngle + 10
        else:
            tiltServoAngle = tiltServoAngle - 10
        os.system("python3 angleServoCtrl.py " + str(tiltPin) + " " + str(tiltServoAngle))

    templateData = {
        'panServoAngle': panServoAngle,
        'tiltServoAngle': tiltServoAngle
    }
    return render_template('index.html', **templateData)
In this example, where "servo" is equal to "pan", the lines below will be executed:

if angle == '+':
    panServoAngle = panServoAngle + 10
else:
    panServoAngle = panServoAngle - 10
os.system("python3 angleServoCtrl.py " + str(panPin) + " " + str(panServoAngle))
Once the "angle" is equal to "-", we will decrease panServoAngle by 10 and pass this parameter to our command. Suppose that the actual panServoAngle is 90. The new parameter will be 80.
So, panPin will be translated to "27" and panServoAngle to "80". The app will then generate the command: python3 angleServoCtrl.py 27 80.
Step 9: Conclusion
As always, I hope this project can help others find their way into the exciting world of electronics!
For details and final code, please visit my GitHub repository: | https://mc.ai/w-rovwebserver-remotely-operated-vehicle-observation-class-i-deep-learning-enabled-python/ | CC-MAIN-2020-34 | en | refinedweb |
RL-ARM User's Guide (MDK v4)
#include <net_config.h>
void modem_init (void);
The modem_init function initializes the modem driver. The function initializes the modem variables and the control signals DTR and RTS.
The modem_init function for the null modem is in the RL-TCPnet library. The prototype is defined in net_config.h. If you want to use a standard modem connection, you must copy std_modem.c into your project directory.
Note: The modem_init function does not return any value.
modem_dial, modem_hangup, modem_listen
void modem_init (void) {
  /* Initializes the modem variables and control signals DTR & RTS. */
  mlen = 0;
  mem_set (mbuf, 0, sizeof(mbuf));
  wait_for = 0;
  wait_conn = 0;
  modem_st = MODEM_IDLE;
}
Strongly customizable React component helping you make animated background
View Demo · Report Bug · Request Feature
Getting StartedGetting Started
This component has been built to help you create customizable animated background. You can provide a list of colors, decide how long each color should be visible, set animation timing and its type. Thanks to simple and intuitive API you can create really amazing effects.
Installation
With npm:

npm install --save react-animated-bg

Or with yarn:

yarn add react-animated-bg
Usage
Basic usage
The very basic usage is to just wrap your content in AnimatedBg. This component requires only one parameter: an array of colors. Colors can be passed in hex, rgba or any other system which is accepted by the background CSS property.
import React from "react";
import AnimatedBg from "react-animated-bg";

// by default delay = 0 and duration = 0.2s
// example color values; any valid CSS backgrounds work here
const colorsList = ["#FF6138", "#FFBE53", "#2980B9", "#282741"];

const Wrapper = () => (
  <AnimatedBg colors={colorsList}>My element with animated BG</AnimatedBg>
);
Set animation duration time and delay before the next animation starts
You can decide how long the duration of the animation will last. Furthermore, if you want the background to stay for some time before the next transition starts, you can set the delay prop according to your needs. duration and delay take numeric values representing seconds.
import React from "react";
import AnimatedBg from "react-animated-bg";

const colorsList = ["#FF6138", "#FFBE53", "#2980B9", "#282741"]; // example values

const Wrapper = () => (
  <AnimatedBg
    colors={colorsList}
    duration={0.5}
    delay={4}
    // timingFunction="ease-out"
    className="section-styles"
  >
    <h2>Duration and Delay</h2>
    <p>Each color will be visible for 4 seconds and will change to another in 500ms</p>
  </AnimatedBg>
);
How the animation will behave?
Decide how the animation should behave. To make it happen, all you have to do is set the timingFunction property. By default it's linear, but you can pass any option from the list below:
- ease
- linear
- ease-in
- ease-out
- ease-in-out
- step-start
- step-end
import React from "react";
import AnimatedBg from "react-animated-bg";

const colorsList = ["#FF6138", "#FFBE53", "#2980B9", "#282741"]; // example values

const Wrapper = () => (
  <AnimatedBg
    colors={colorsList}
    duration={0.5}
    delay={4}
    timingFunction="ease-out"
    className="section-styles"
  >
    <h2>ClassName and other props</h2>
  </AnimatedBg>
);
Hoorah!! You can animate images too!
⚠️ Animating images doesn't work on Firefox since the browser doesn't support the transition for background-image. ⚠️
Because under the hood the CSS background property is updating, the colors prop array can contain everything which is supported by CSS, e.g. url('image.jpg'). However, e.g. linear-gradient can't be animated this way. An example can be found in the demo.
Important: Remember to wrap the images with the url( ) formula.
import React from "react";
import AnimatedBg from "react-animated-bg";

const Wrapper = () => {
  const imagesList = [
    'url("")',
    'url("")',
    'url("")',
    'url("")',
    'url("")'
  ];
  return (
    <AnimatedBg
      colors={imagesList}
      duration={2}
      delay={1}
      timingFunction="ease-out"
    >
      <div>
        <h1>Animated images</h1>
        <h3>- duration: 2s</h3>
        <h3>- delay: 1s</h3>
        <h3>- transition type: ease-out</h3>
      </div>
    </AnimatedBg>
  );
};
Choose the next background randomly
By default, the background is changed according to the order given in the colors array. If you want, you can change it to random ordering by adding the randomMode prop.
import React from "react";
import AnimatedBg from "react-animated-bg";

const colorsList = ["#FF6138", "#FFBE53", "#2980B9", "#282741"]; // example values

const Wrapper = () => (
  <AnimatedBg
    colors={colorsList}
    duration={0.5}
    delay={4}
    randomMode
    timingFunction="linear"
  >
    <h2>Random mode</h2>
    <p>Next background will be chosen randomly</p>
  </AnimatedBg>
);
This component takes the following props:
- colors (array, required): list of background colors or images to cycle through
- duration (number, default 0.2): transition time in seconds
- delay (number, default 0): how long each background stays visible, in seconds
- timingFunction (string, default "linear"): CSS transition timing function
- randomMode (boolean, default false): pick the next background randomly
- className (string): additional CSS class for the wrapper
License
Distributed under the MIT License. See LICENSE for more information.
read24 - Server: Administrating Students
July 24, 2020
I will begin by implementing the logic to administrate students first. To remind myself, here are the routes that will be needing some shiny new code:
GET /admin/classroom/:classroomId/students
POST /admin/classroom/:classroomId/students
PUT /admin/classroom/:classroomId/students
GET /admin/classroom/:classroomId/students/:studentId
DELETE /admin/classroom/:classroomId/students
Listing all Students by Classroom ID
Listing all the students by classroom is as simple as implementing a method in the Student resource class called listByClassroomId to find all the students whose classroomId property matches the classroomId passed in.
public static async listByClassroomId(classroomId: number) {
    const studentTypes = await DatabaseConnector.select('students',
        (s: StudentType) => s.classroomId === classroomId) as StudentType[];

    return await Promise.all(studentTypes.map(async s => await new Student().load(s.id)));
}
Then, the route can just call this and return the JSON list.
app.get('/admin/classroom/:classroomId/students', async (req, res) => {
    const classroomId = parseInt(req.params.classroomId);
    const students = (await Student.listByClassroomId(classroomId)).map(s => s.json());

    return res.status(200).json(students);
});
Adding a Student to a Classroom
I will handle any access control checking at a later time. For now, I just want to be able to add an additional student to the database, given the classroom ID.
The approach to adding a student is to 1) create the User, then 2) create the Student tied to the user.
The request payload is assumed to have all the information necessary to create both objects in the database.
Create the User by hashing the given password, including it in the User object with the username, and then calling the insert method.
const hashedObj = hashPassword(password);

const user = await (new User({
    username,
    salt: hashedObj.salt,
    password: hashedObj.hashed
})).insert();
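The hashPassword helper used above is never shown in this post. A minimal sketch using Node's built-in crypto module could look like the following; the salt length and the choice of scrypt are assumptions, not the post's actual implementation:

```typescript
import * as crypto from 'crypto';

interface HashedPassword {
    salt: string;
    hashed: string;
}

// Derive a salted hash so the same password never stores the same value twice.
// The 16-byte salt and 64-byte scrypt output are illustrative choices.
function hashPassword(password: string, salt?: string): HashedPassword {
    const useSalt = salt || crypto.randomBytes(16).toString('hex');
    const hashed = crypto.scryptSync(password, useSalt, 64).toString('hex');
    return { salt: useSalt, hashed };
}
```

The returned salt and hashed fields line up with the columns the User row stores above.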
Then user will have the new User which was created in the database. We can reuse the id found in this object to create the Student object.
But how do we do that? We currently don't know what ID is inserted to the database, but we can modify the database connector to return the latest ID after insert.
In sql.ts, modify the insert method to return the insertId from the results object.

public insert(tableName: string, data: DataRow): Promise<number> {
    const promise = new Promise<number>((resolve, reject) => {
        connection.query(`INSERT INTO ${TableMapping[tableName]} SET ?`, data, (err, results) => {
            if (err) return reject(err);

            console.log('Last Insert ID', results.insertId);
            return resolve(results.insertId);
        });
    });

    return promise;
}
And then in base_resource.ts, modify insert to set the ID returned by the connector's insert method.
public async insert() {
    if (this.id > 0)
        throw new Error(`Cannot insert this row because it already has an ID value: ${this.id}.`);

    this.dateCreated = Date.now();

    const toSave = this.serializeForDb();
    const id = await DatabaseConnector.insert(this._tableName, toSave);
    this.id = id;

    return this;
}
So now, all resource classes will have the id set after insert when the insert method is invoked.
After the User is created, we can then access the id of the user when creating the Student.
const student = await (new Student({
    classroomId,
    firstName,
    middleName,
    lastName,
    grade: parseInt(grade, 10),
    userId: user.id
})).insert();
Update a Student
The precondition is that the studentId is included in the request body. Once the studentId is received, a Student instance can be loaded from the studentId.
Then this instance can be updated from the rest of the data from the request body, and then saved with the update call.
app.put('/admin/classroom/:classroomId/students', async (req, res) => {
    const { studentId, firstName, lastName, middleName, grade } = req.body;

    if (!studentId)
        return res.status(404).json({message: 'Not found'});

    const student = await new Student().load(studentId);
    student.firstName = firstName;
    student.lastName = lastName;
    student.middleName = middleName;
    student.grade = parseInt(grade, 10);

    await student.update();

    return res.status(200).json(student.json());
});
Retrieve a Student
This is the easiest method to implement so far. Once a studentId is received, a simple Student object can be loaded from the studentId, and returned.
app.get('/admin/classroom/:classroomId/students/:studentId', async (req, res) => {
    const classroomId = parseInt(req.params.classroomId, 10);
    const studentId = parseInt(req.params.studentId, 10);

    const student = (await new Student().load(studentId)).json();

    if (student.classroomId !== classroomId)
        return res.status(404).send({message: 'Student not found'});

    return res.status(200).json(student);
})
Deleting a Student
I actually do not have any functionality implemented to delete objects from the database (in-memory, or MySQL).
Before implementing the DELETE route, I will need to add a delete method in the database connector.
In db_connector.ts, let's define a new function signature to be required for implementation:
public abstract delete(tableName: string, whereFunc: (o: Partial<DataRow>) => boolean): Promise<number>;
Since there is now a new interface method to be implemented, both the in-memory and MySQL database connectors will need to explicitly implement the delete method.
First, for the in-memory version, the IDs of the rows needing to be deleted, gathered by the whereFunc, will be the IDs to filter out of the array returned by db.data[tableName]. Then we take this array, which has the data filtered out, and reassign it back to db.data[tableName].
public delete(tableName: string, whereFunc: (o: DataRow) => boolean): Promise<number> {
    const filteredIds = db.data[tableName].filter(whereFunc).map((f: DataRow) => f.id);

    // keep only the rows whose id is not in the list of deleted ids
    db.data[tableName] = db.data[tableName].filter(
        (r: DataRow) => !filteredIds.includes(r.id));

    return Promise.resolve(filteredIds.length);
}
For the MySQL version of the delete method, the approach is to first select the rows needing to be deleted, then construct a list of IDs to build out the DELETE query.
public delete(tableName: string, whereFunc: (o: DataRow) => boolean): Promise<number> {
    const promise = new Promise<number>(async (resolve, reject) => {
        const selected = await this.select(tableName, whereFunc);

        if (selected.length === 0)
            return resolve(0);

        const ids = selected.map((r: DataRow) => r.id);
        const idGroup = `(${ids.join(', ')})`;

        const query = connection.query(`DELETE FROM ${TableMapping[tableName]} WHERE id IN ${idGroup}`, (err, results) => {
            if (err) return reject(err);

            console.log(query.sql);
            console.log(`DELETED IDS: ${idGroup} FROM ${tableName}`);

            return resolve(results.affectedRows);
        });
    });

    return promise;
}
Finally, in base_resource.ts, the delete method that the resource can call will call the delete method found in the current database connector, and return itself for method chaining.
public async delete() {
    if (this.id === 0)
        return this;

    await DatabaseConnector.delete(this._tableName, (o: DataRow) => o.id === this.id);

    return this;
}
After all this code, the implemented DELETE route is quite simple:
app.delete('/admin/classroom/:classroomId/students', async (req, res) => {
    const classroomId = req.params.classroomId;
    const studentId = parseInt(req.body.studentId);

    // delete() is async, so await both the load and the delete
    await (await new Student().load(studentId)).delete();

    return res.status(200).json({classroomId, studentId});
});
.env File
It's now time to do a little upgrade to the current development environment setup. I want to be able to switch between using the in-memory, and MySQL data sources easily.
The best way to do that for now is by the use of a .env file. I can specify a .env file containing the DATA_SOURCE specifying the type of database connector to use.
DATA_SOURCE=mysql
Then in code, this can be loaded by the dotenv npm package.
npm install --save dotenv
npm install --save-dev @types/dotenv
In index.ts, the .env parameters should be loaded immediately at the top of the file.
import * as dotenv from 'dotenv';

const config = dotenv.config();
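The post stops before showing how DATA_SOURCE is actually consumed. One plausible sketch is a small factory that picks a connector once at startup; the connector class names here are assumptions standing in for the project's real classes:

```typescript
// Hypothetical stand-ins for the project's real connector classes.
class InMemoryConnector {
    public readonly name = 'memory';
}

class MySqlConnector {
    public readonly name = 'mysql';
}

// Choose the database connector based on the DATA_SOURCE environment variable.
// Anything other than 'mysql' falls back to the in-memory connector, which is
// a safe default for local development and tests.
function createConnector(source: string | undefined): InMemoryConnector | MySqlConnector {
    if (source === 'mysql') {
        return new MySqlConnector();
    }
    return new InMemoryConnector();
}
```

index.ts could then call createConnector(process.env.DATA_SOURCE) right after dotenv.config().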
Generate a pseudo-random nonnegative long integer in a thread-safe manner
#include <stdlib.h>

long nrand48( unsigned short xsubi[3] );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The nrand48() function uses a linear congruential algorithm and 48-bit integer arithmetic to generate a nonnegative long integer uniformly distributed over the interval [0, 2^31).
The xsubi array should contain the desired initial value; this makes nrand48() thread-safe, and lets you start a sequence of random numbers at any known value.
A pseudo-random long integer.
POSIX 1003.1 XSI
drand48(), erand48(), jrand48(), lcong48(), lrand48(), mrand48(), seed48(), srand48() | http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/n/nrand48.html | CC-MAIN-2020-34 | en | refinedweb |
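The linear congruential algorithm described above can be sketched as a small educational re-implementation. This is not the QNX library source; it simply follows the POSIX-specified recurrence X(n+1) = (a * X(n) + c) mod 2^48, with a = 0x5DEECE66D and c = 0xB, returning the high-order 31 bits:

```c
#include <stdint.h>

/* Educational re-implementation of the drand48-family generator.
 * The 48-bit state lives in xsubi[3], least significant 16-bit word
 * first, matching the layout the real nrand48() uses.
 */
static long my_nrand48(unsigned short xsubi[3])
{
    uint64_t x = ((uint64_t)xsubi[2] << 32)
               | ((uint64_t)xsubi[1] << 16)
               |  (uint64_t)xsubi[0];

    /* POSIX constants: a = 0x5DEECE66D, c = 0xB, modulus 2^48. */
    x = (UINT64_C(0x5DEECE66D) * x + UINT64_C(0xB)) & UINT64_C(0xFFFFFFFFFFFF);

    /* Store the new state back so the caller can continue the sequence. */
    xsubi[0] = (unsigned short)(x & 0xFFFFu);
    xsubi[1] = (unsigned short)((x >> 16) & 0xFFFFu);
    xsubi[2] = (unsigned short)((x >> 32) & 0xFFFFu);

    /* Return the high-order 31 bits, a value in [0, 2^31). */
    return (long)(x >> 17);
}
```

Because the state is carried entirely in the caller-supplied xsubi array, two threads using separate arrays never interfere, which is exactly why nrand48() is thread-safe.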
Since the advent of Node.js in 2009, everything we knew about JavaScript changed. The seemingly dying language made a phoenix-like comeback, growing to become the most popular language in the world.
JavaScript was earlier seen as a web-browser’s language, but Node.js came and made it server-side. In essence, Node.js allows developers to develop web-servers with JavaScript. Thus, JavaScript was not only used in browsers, but also in the development of web servers.
In January 2010, NPM was introduced to the Node.js environment. It makes Node.js easier for developers to publish and share the source code of JavaScript libraries. The developers can then use the code by installing the library and importing it into their code.
NPM has since been the de-facto software registry for JavaScript and Node.js libraries. Many frameworks have emerged using NPM to distribute their library. React, Vue, Angular, and many other apps are developed using NPM. You either install their boilerplates or install their official CLI tool. All this happens through NPM, and of course, Node.js must be installed.
Right now, there are millions of libraries in NPM. Angular, React and its cousins are all imported from NPM, and modules dependent on these frameworks are also hosted in NPM. Normally, it is quite easy to write and host a JS library in NPM because it is not dependent on any other framework. The challenge here is how we write and publish a module dependent on a JS framework to be used as an NPM library.
That’s what we are going to solve here and we will be developing a library for the React.js framework.
In this tutorial, we are going to see how to create a React component library and publish it on NPM.
As a demo, we are going to build a countdown timer.
A countdown timer is used to display the countdown to an event. For example, at a wedding anniversary, a countdown timer can be used for cutting the cake. You know the popular: "10! 9! 8! … 0!"
So, we are going to develop our own countdown timer for the React framework, so that it can be used by other devs in their React apps. They just need to pull in our library, instead of re-inventing the wheel.
The source code we are going to build in this article can be found here.
Here is a list of things we are going to achieve in this article:
Configure Babel to transform JSX to JS.
Configure Rollup to produce efficient, minified code that works in all browsers(both old and new browsers).
Deploy our React component to NPM.
I’ll assume you are familiar with these tools and frameworks:
Node.js, NPM, Babel, Rollup
React.js
Git
JavaScript, ES6, and CSS
Also, make sure you have Node.js, IDE (Visual Studio Code, Atom), and Git all installed. NPM comes with Node.js and it doesn’t need a separate installation.
Let’s set up our project directory. I’ll call mine countdown-timer. Inside that, we will create src directory for sources and test directory for unit tests:
mkdir countdown-timer
cd countdown-timer
mkdir src
After that, the directory countdown-timer will look like this:
+- countdown-timer
   +- src
Next, we are going to make our directory a Node.js project directory:

npm init -y

This command creates a package.json file with the basic information we supplied to NPM. The -y flag makes it possible to bypass the process of answering questions when using the npm init command.
package.json is the most important file in a Node.js project. It is used to let NPM know some basic things about our project and, crucially, the external NPM packages it depends on.
We install libraries that are important to our development process:
npm i react -D
We installed the react library as a devDependency since we don't want NPM to download it again when the user installs our library. This is because the user will have already installed the react library in his React app.
So after the above command, our package.json will look like this:
{
  "name": "countdown-timer",
  "version": "1.0.0",
  "description": "A React library used to countdown time",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+"
  },
  "keywords": [],
  "author": "Chidume Nnamdi <kurtwanger40@gmail.com>",
  "license": "ISC",
  "bugs": {
    "url": ""
  },
  "homepage": "",
  "devDependencies": {
    "react": "^16.3.2"
  }
}
Next, we create countdown.js in the src folder:
countdown.js will contain our code implementation. We won't go down to explain our code. You can just add anything, maybe a text, "Holla! My First Component". It doesn't matter, all you have to know is the essential configurations needed to deploy and use a React component as a library.
To build a React component for NPM, we must first import React and Component from the react library.
// src/countdown.js
import React, { Component } from 'react'
Next, we defined our component, CountDown:
// src/countdown.js
import React, { Component } from 'react'

class CountDown extends Component {
}
We defined CountDown which extends Component, i.e. it overrides and inherits all props and methods from the Component class. The reason we imported Component from react is that it can be used by our module bundler to make React global.
Paste this code in our class, CountDown:
// src/component.js
...
class CountDown extends Component {
    constructor(props) {
        super(props)
        this.count = this.count.bind(this)
        this.state = {
            days: 0,
            minutes: 0,
            hours: 0,
            seconds: 0,
            time_up: ""
        }
        this.x = null
        this.deadline = null
    }

    count() {
        var now = new Date().getTime();
        var t = this.deadline - now;
        var days = Math.floor(t / (1000 * 60 * 60 * 24));
        var hours = Math.floor((t % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60));
        var minutes = Math.floor((t % (1000 * 60 * 60)) / (1000 * 60));
        var seconds = Math.floor((t % (1000 * 60)) / 1000);
        this.setState({days, minutes, hours, seconds})
        if (t < 0) {
            clearInterval(this.x);
            this.setState({
                days: 0,
                minutes: 0,
                hours: 0,
                seconds: 0,
                time_up: "TIME IS UP"
            })
        }
    }

    componentDidMount() {
        this.deadline = new Date("apr 29, 2018 21:00:00").getTime();
        this.x = setInterval(this.count, 1000);
    }

    componentWillUnmount() {
        // stop the timer if the component is removed before the deadline
        clearInterval(this.x);
    }

    render() {
        const { days, seconds, hours, minutes, time_up } = this.state
        return (
            <div>
                <h1>Countdown Clock</h1>
                <div id="clockdiv">
                    <div>
                        <span className="days" id="day">{days}</span>
                        <div className="smalltext">Days</div>
                    </div>
                    <div>
                        <span className="hours" id="hour">{hours}</span>
                        <div className="smalltext">Hours</div>
                    </div>
                    <div>
                        <span className="minutes" id="minute">{minutes}</span>
                        <div className="smalltext">Minutes</div>
                    </div>
                    <div>
                        <span className="seconds" id="second">{seconds}</span>
                        <div className="smalltext">Seconds</div>
                    </div>
                </div>
                <p id="demo">{time_up}</p>
            </div>
        )
    }
}

export default CountDown
Starting at the constructor, we bound the count function to the class instance. We have declared our state object which contains days, minutes, hours, seconds, and time_up properties. They will store the current values when our timer ticks(.i.e. counts down). We defined the this.x variable which will hold a reference to a setInterval function. The this.deadline will store the time or the deadline that our timer will tick down to.
We used componentDidMount to start our timer. You know, the constructor first executes, followed by componentDidMount and finally, the render method comes last. That's the reason we delegated initialization to the constructor then started the timer at componentDidMount, render then displays the values: hours, days, minutes, seconds.
constructor ==> componentDidMount ==> render
Finally, we have successfully exported our CountDown class. So now our users can import the CountDown component in their React project when they install our library.
Now that we are done with our component, the next step is to bundle it using Rollup.
Rollup is a module bundler that takes all our JS files in our project and bundles them up into one JS file.
First, we install the rollup library:

npm i rollup -D
NB: You can use -D or --save-dev flag. -D is shortcut notation for --save-dev.
This downloads the rollup library from the npm registry into the node_modules folder and registers it in the devDependencies section of our package.json.
...
"devDependencies": {
  "react": "^16.3.2",
  "rollup": "^0.58.2"
}
...
To let rollup know how to bundle our JS files, we have to create a configuration file, rollup.config.js:
We could actually pass our options to rollup on the command line. But, to save ourselves from repeating them every time, we create the file rollup.config.js and put all our options in it. On execution, rollup reads the options in it and responds accordingly.
So, now we open up rollup.config.js and add the following code:
// rollup.config.js
const config = {
  input: 'src/countdown.js',
  external: ['react'],
  output: {
    format: 'umd',
    name: 'countdown',
    globals: {
      react: "React"
    }
  }
}
export default config
Let’s talk about what each of these does:
input: This is the bundle's entry point. Rollup reads this file, then, following its imports, draws up the list of files to bundle.
external: This is the array of modules that should remain external to our bundle.
output: This property defines what our output file will look like.
output.format: Defines the JS format to use (umd, cjs, es, amd, iife, system).
output.name: The name by which other scripts can access our module.
output.globals: Maps each external dependency to the global variable name our bundle uses to access it (here, the react module maps to the global React).
Rollup makes it possible for developers to add their own functionality to it. These additions are called plugins. Plugins let you customize Rollup's behavior, for example by minifying your code to reduce its size, or by transpiling it to support older browsers.
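Under the hood, a Rollup plugin is just a factory function returning an object with a name and hook methods. This hypothetical example (not one of the plugins we will install) only logs module ids as they are resolved:

```javascript
// Minimal sketch of a Rollup plugin: an object exposing named hooks.
// Returning null from resolveId defers to Rollup's default resolution.
function logResolves() {
  return {
    name: 'log-resolves',
    resolveId(source) {
      console.log('resolving', source);
      return null;
    },
  };
}
```

It would be registered like the real plugins later in this article, inside the plugins array of rollup.config.js.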
We will need some plugins to:
minify our code
add ES5 support
add JSX support
To minify our code we will use rollup-plugin-uglify. To add ES5 output and JSX support, Babel has us covered.
Babel is a project that transpiles ES6, ES7, and beyond into ES5, which can run in any browser.
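As a rough illustration (hand-written, not actual Babel output), an ES6 arrow function and its ES5 equivalent behave identically:

```javascript
// ES6 source:
const add = (a, b) => a + b;

// Approximately what a transpiler emits for ES5 targets:
var addES5 = function (a, b) { return a + b; };
```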
Let’s talk about the Babel JSX support
JSX is an XML-like syntax extension to JavaScript, popularized by React, used to describe what to render in the browser. Our component, CountDown, returns HTML-like syntax in its render method.
// src/countdown.js
...
render () {
  return (
    <div>
      <h1>Countdown Clock</h1>
      <div id="clockdiv">
        ...
      </div>
    </div>
  )
}
...
This syntax is called JSX, and JSX produces React elements. Before React components are bundled and executed in the browser, these JSX compositions are transformed into React.createElement() calls. React uses Babel to transform the JSX. Our code above compiles down to:
...
render () {
  return (
    React.createElement('div', null,
      React.createElement('h1', null, 'Countdown Clock'),
      React.createElement('div', { id: "clockdiv" },
        ...
      )
    )
  )
}
...
React.createElement returns a plain object, which ReactDOM uses to generate the virtual DOM and render it to the browser's DOM.
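To get a feel for that object, here is a heavily simplified sketch of createElement (the real React element has additional fields such as $$typeof and key):

```javascript
// Simplified sketch: an "element" is just a plain object describing
// the node type, its props, and its children.
function createElement(type, props, ...children) {
  return { type, props: { ...(props || {}), children } };
}

const el = createElement(
  'div', { id: 'clockdiv' },
  createElement('h1', null, 'Countdown Clock')
);
// el.type === 'div', el.props.id === 'clockdiv'
```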
So, before we bundle our component, its JSX has to be transpiled to plain JS. To do that we will need the Babel preset babel-preset-react. To run Babel as part of the Rollup pipeline and transpile down to ES5, we will need rollup-plugin-babel.
Install rollup/babel plugins
List of our proposed plugins:
rollup-plugin-uglify
rollup-plugin-babel
babel-preset-react
NB: A Babel preset is a set of plugins used to support a particular set of JS features.
All Babel plugins and presets need babel-core in order to work. So we go ahead and install the babel-core module:

npm i babel-core -D
Next, we install our plugins:
npm i rollup-plugin-uglify rollup-plugin-babel babel-preset-react -D
All are installed as dev dependencies, since they are not needed in production.
Create a .babelrc
To use Babel plugins, there are two ways to configure them. The first is in package.json:
// package.json
{
  "babel": {
    "presets": [
      "react"
    ]
  }
}
Second is in a file, .babelrc.
For this project, we are going to use the .babelrc approach. Configuring Babel this way tells it which presets to use when transpiling.
We create .babelrc in our project's root directory:
Inside, add the following:
{
  "presets": [
    "react"
  ]
}
Update rollup.config.js
To use plugins, they must be specified in the plugins key of the rollup.config.js file.
First, we import the plugins:
// rollup.config.js
import uglify from 'rollup-plugin-uglify'
import babel from 'rollup-plugin-babel'
...
Then, we add a plugins array to the config and call each imported plugin function there:
// rollup.config.js
import uglify from 'rollup-plugin-uglify'
import babel from 'rollup-plugin-babel'
...
plugins: [
  babel({
    exclude: "node_modules/**"
  }),
  uglify()
],
...
We added the exclude option to the babel() call to prevent it from transpiling scripts in the node_modules directory.
Update package.json
We will add a build key in our package.json scripts section. We will use it to run our rollup build process.
Open up package.json file and add the following:
...
"scripts": {
  "build": "rollup -c -o dist/countdown.min.js",
  "test": "echo \"Error: no test specified\" && exit 1"
},
...
The command "rollup -c -o dist/countdown.min.js" bundles our component into the dist folder under the name countdown.min.js. The -c flag tells Rollup to read rollup.config.js, and -o sets the output file; command-line options take precedence, so whatever Rollup doesn't get from the command line it takes from rollup.config.js, if present.
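That precedence rule amounts to a shallow merge where command-line values win, conceptually something like:

```javascript
// Conceptual sketch (not Rollup's actual implementation): options
// read from rollup.config.js are overridden by command-line flags.
function resolveOptions(fromConfigFile, fromCli) {
  return { ...fromConfigFile, ...fromCli };
}
```

So resolveOptions({ format: 'umd', file: 'bundle.js' }, { file: 'dist/countdown.min.js' }) keeps format from the config but takes the output file from the command line.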
Next, we will point our library entry point to dist/countdown.min.js. The entry point of any NPM library is defined in its package.json main key.
... "main": "dist/countdown.min.js", ...
Now, we are done setting up Rollup, Babel, and their configurations. Let's compile our component:

npm run build
This command runs "rollup -c -o dist/countdown.min.js". As specified, it creates a dist/ folder in our project's root directory and puts the bundled file countdown.min.js in it.
We are done bundling our library. It is now time to deploy it to NPM registry. But before we do that, we have to ignore some files from publishing alongside our library.
Our project directory by now will contain files and folders used to build the library:
dist/
src/
node_modules/
.babelrc
package.json
rollup.config.js
The dist folder is the only folder we want to publish, so we don't want the other folders and files included alongside it. To handle that we create a file, .npmignore. As the name implies, it tells NPM which folders and files to ignore when publishing our library.
So, we create the file:
Next, we add the folders/files we want to ignore to it:
src/
test/
.npmignore
.babelrc
rollup.config.js
Notice, there is no node_modules in it. NPM automatically ignores it.
Before we publish an NPM library, we should first host the project on Git.
Create a new repository on any version control website of your choice, then run these commands in your terminal:
git init && git add .
git commit -m 'First release' && git remote add origin YOUR_REPO_GIT_URL_HERE
git pull origin master && git push origin master
These commands initialize an empty repo, stage your files and folders, add a remote repo, and upload your local repo to the remote repo.
Now, we run npm publish to push our library to NPM:
npm publish
+ @chidumennamdi/countdown-timer@0.0.1
There it is!! We have successfully published a React library.
If the project name has already been taken on NPM, you can choose another name by changing the name property in package.json.
// package.json
...
"name": "countdown-timer"
...
To consume our library, you can create a new React project, then pull in our library:
create-react-app react-lib-test
cd react-lib-test
npm i countdown-timer
Then, we import the component and render it:
// src/App.js
import React, { Component } from 'react';
import CountDown from 'countdown-timer'

class App extends Component {
  render() {
    return (
      <CountDown />
    )
  }
}

export default App
I know this article is fairly complex, but that is what it takes to develop apps using modern JS development methods.
We saw a lot of tools and their uses:
Rollup: used to bundle and minify our library
Babel: used to transpile our library so it can run in any browser.
In the end, we saw how easy it was to extract a React component and publish it on NPM. All we did was write the library, bundle it using Rollup with help from Babel, tell Rollup to treat React as an external dependency, and then run the npm publish command. That's all!!
Please, feel free to ask if you have any questions or comments in the comment section.
Thanks !!!
# 4. Integrate file upload
Our API is starting to look great now that we can add new stories. But it would be even better if we could attach some cute pictures to our stories, right?
# Set up storage account access
You already created a storage account in step 2, so you now have to generate an access token to allow our application to upload files to it:
# Generate the SAS key
# It will be valid until the defined expiry date
az storage account generate-sas --account-name <your-funpets-storage> \
  --services btf \
  --resource-types sco \
  --permissions acdlrw --expiry 2021-12-31
Now edit the file local.settings.json, and add these properties to the Values list:
"AZURE_STORAGE_ACCOUNT": "<your storage account name>",
"AZURE_STORAGE_SAS_KEY": "<your SAS key>"
These values will be exposed to our app as environment variables by the Functions runtime, to allow access to your Azure storage.
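In application code those values then come from process.env like any other environment variable, for example in a small helper (a sketch; the names match the settings above):

```javascript
// Read the storage settings the Functions runtime exposes as
// environment variables (from local.settings.json when running locally).
function storageConfig() {
  return {
    accountName: process.env.AZURE_STORAGE_ACCOUNT,
    sasKey: process.env.AZURE_STORAGE_SAS_KEY,
  };
}
```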
# Configure Azure Storage module
We will use Azure Blob Storage to store pets images in the cloud. It can be used to store any kind of file, and is also capable of hosting static websites.
As you already created and set up your storage account access, you only need to integrate the @nestjs/azure-storage package with this command:
npm install @nestjs/azure-storage
Open the file src/app.module.ts and add the AzureStorageModule to the module imports:
@Module({
  imports: [
    AzureStorageModule.withConfig({
      sasKey: process.env.AZURE_STORAGE_SAS_KEY,
      accountName: process.env.AZURE_STORAGE_ACCOUNT,
      containerName: 'funpets-images',
    }),
    ...
  ]
})
Don't forget to add the missing imports at the top:
import { AzureStorageModule } from '@nestjs/azure-storage';
# Handle file upload
Now let's update the stories controller to handle the uploaded file. Open src/stories/stories.controller.ts and update the function you created to create new stories:
- Add @UseInterceptors(AzureStorageFileInterceptor('file')) just below the @Post() decorator.
- Add @UploadedFile() file: UploadedFileMetadata in the function parameters.
Don't forget to also add the missing imports at the top:
import { FileInterceptor } from '@nestjs/platform-express';
import { AzureStorageFileInterceptor, UploadedFileMetadata } from '@nestjs/azure-storage';
import { UseInterceptors, UploadedFile } from '@nestjs/common';
The AzureStorageFileInterceptor will directly upload the file to the Azure Storage container funpets-images specified in the module configuration, and will fill in the stored file URL in file.storageUrl.
Once you have the storage URL, you can set the imageUrl of the created Story entity.
Your final function should look like this:
@Post()
@UseInterceptors(AzureStorageFileInterceptor('file', fileUploadOptions))
async createStory(
  @Body() data: Partial<Story>,
  @UploadedFile() file: UploadedFileMetadata,
): Promise<Story> {
  const story = new Story(data);
  if (!story.createdAt) {
    story.createdAt = new Date();
  }
  if (file) {
    story.imageUrl = file.storageUrl || null;
  }
  return await this.storiesRepository.save(story);
}
# Test your endpoint
After you finished the modifications, start your server using the functions emulator:
npm run start:azure
After the server is started, you can test whether uploading files works using curl:
# Local Functions emulator URL (default port 7071)
curl http://localhost:7071/api/stories \
  -F "file=@<path_to_image_file>" \
  -F "animal=cat" \
  -F "description=Happy cat"
You can download and use the happy cat image to test the file upload if you don't have an image at hand.
Note
Using the -F curl option will automatically set the request content type to multipart/form-data, which is required for Nest.js file upload support. Note that in that case, the payload for the Story properties will also have to be form data and not JSON, as you can see in the curl command.
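For context, each -F flag becomes one boundary-delimited part in the request body. A rough sketch of how the text fields are laid out on the wire (file parts would additionally carry a filename and Content-Type):

```javascript
// Build a (simplified) multipart/form-data body for text fields only.
const boundary = '----sketchBoundary';
function multipartBody(fields) {
  const parts = Object.entries(fields).map(
    ([name, value]) =>
      `--${boundary}\r\nContent-Disposition: form-data; name="${name}"\r\n\r\n${value}\r\n`
  );
  return parts.join('') + `--${boundary}--\r\n`;
}
```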
# Limit accepted file type/size
Your API now supports file uploads, but surely you don't want any file to be uploaded and may want to set some reasonable limits on file size?
Just like the base NestJS FileInterceptor, the AzureStorageFileInterceptor() decorator supports a second options argument. The options object is of type MulterOptions and can be used to achieve what we want, using the limits and fileFilter properties. This is the same object used by the multer constructor.
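As a shape reference (plain JS, with an illustrative .txt filter so it doesn't give away the exercise below), such an options object looks like:

```javascript
// Multer-style options sketch: limits caps the upload size in bytes,
// and fileFilter(req, file, callback) accepts or rejects each file.
const uploadOptions = {
  limits: { fileSize: 1024 * 1024 }, // 1 MB, in bytes
  fileFilter: (req, file, callback) => {
    // callback(error, acceptFile)
    callback(null, file.originalname.toLowerCase().endsWith('.txt'));
  },
};
```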
Now it's your turn to work and find out how to restrict file uploads to support only:
- A maximum file size of 2MB
- png and jpeg image types
Some hints to get started:
- Look at the limits and fileFilter options to see how they work.
- You can get the uploaded file name using file.originalname.
- Explore the Node.js path module to extract the extension from a file name.
Don't forget to test your solution with various scenarios using curl, to make sure your API accepts/rejects files properly!
Tip
You can use mkfile <size[k|m]> <filename> to generate dummy files with a given size (for Windows users: fsutil file createnew <filename> <size_in_bytes>).
# Redeploy
Once everything works locally let's deploy your latest changes:
# Build your app
npm run build

# Create an archive from your local files and publish it
# Don't forget to change the name with the one you used previously
func azure functionapp publish <your-funpets-api> --nozip
Then run the previous curl command again against your deployed API URL to check that everything works fine:
curl https://<your-funpets-api>.azurewebsites.net/api/stories \
  -F "file=@<path_to_image_file>" \
  -F "animal=cat" \
  -F "description=Happy cat"
Solution: see the code for step 4 | https://black-cliff-0123f8e1e.azurestaticapps.net/step4/ | CC-MAIN-2020-34 | en | refinedweb |
C# 6.0 Features Series
- How to try C# 6.0 and Rosyln?
- Getter-only (Read Only) Auto Properties in C# 6.0
- Lambda and Getter Only Auto-Properties in C# 6.0
- Initializers for Read-Only Auto Properties in C# 6.0
- Initializers via Expression Auto Properties in C# 6.0
- C# 6.0 – A field initializer cannot reference the non-static field, method, or property
- Lambda Expression for Function Members in C# 6.0
- Dictionary Initializers (Index Initializers) in C# 6.0
- Expression Bodies on Methods returning void in C# 6.0
- using keyword for static class in C# 6.0
- Unused namespaces in Different Color in Visual Studio 2015
- Null-Conditional Operator in C# 6.0
- Null-Conditional Operator and Delegates
- nameof Operator in C# 6.0
- Contextual Keywords in C#
- String Interpolation in C# 6.0
- Exception Filters in C# 6.0
- Await in Catch and finally block in C# 6.0
In one of the previous blog posts, we explored a feature of C# called Lambda Expressions for Function Members in C# 6.0, where the example method had a return type of integer. In this blog post, let's see the usage of expression bodies on methods returning void.
Expression Bodies on Methods returning void in C# 6.0
Expression bodies can also be used on methods returning void or Task (async methods). In this case, the expression immediately after the lambda arrow (=>) must be a statement expression.
Below is a sample code snippet demonstrating the usage of the Expression Bodies on Methods returning void in C# 6.0
using System;
using System.Collections.Generic;
using System.Linq;

namespace MobileOSGeekApp
{
    class Program
    {
        public static void Display() => Console.WriteLine("Welcome to developerpublish.com .NET Tutorials Section");

        static void Main(string[] args)
        {
            Display();
            Console.ReadLine();
        }
    }
}
import "github.com/nanobox-io/golang-scribble"
Package scribble is a tiny JSON database
Version is the current version of the project
Driver is what is used to interact with the scribble database. It runs transactions, and provides log output
New creates a new scribble database at the desired directory location, and returns a *Driver to then use for interacting with the database
Delete locks the database and then attempts to remove the collection/resource specified by [path]
Read a record from the database
ReadAll records from a collection; this is returned as a slice of strings because there is no way of knowing what type the record is.
Write locks the database and attempts to write the record to the database under the [collection] specified with the [resource] name given
type Logger interface {
    Fatal(string, ...interface{})
    Error(string, ...interface{})
    Warn(string, ...interface{})
    Info(string, ...interface{})
    Debug(string, ...interface{})
    Trace(string, ...interface{})
}
Logger is a generic logger interface
Options is used to configure how golang-scribble works
Package scribble imports 7 packages and is imported by 37 packages. Updated 2020-02-09.