In this tutorial you’ll learn to build a card-matching game, in Flash, that uses Flickr as the source of its images. The first part of this two-part series will get you started, leaving you with a basic game structure that you can improve on later.
This game uses the Flickr API to get the images used for game play. In the SWF you will see the start page of the game. Pressing the PLAY button will allow you to begin a new game, the INSTRUCTIONS button will show the instructions for the game, and pressing the ABOUT button will give some information about the game. Try different keywords to retrieve different sets of photos: food is a good one!
Also available in this series:
- Create a Flickr-Based Pairs Game With Flash: The Basics
- Create a Flickr-Based Pairs Game With Flash: The Interface
Step 1: Getting a Flickr Account
To be able to use the Flickr API you must be a registered user on flickr.com. On the Flickr homepage, choose the Sign Up link.
Step 2: Setting Up the App and Getting an API Key
Once logged in, you will need to visit the App Garden to get started.
You will want to bookmark this page if you are planning on doing a lot of Flickr development, as it contains a lot of useful information for developers.
Click the "Create An App" link once you arrive at the App Garden.
Under "Get your API Key" click the "Request an API Key" link.
Here you will need to choose whether you intend to use the app for commercial or non-commercial purposes. For this app I chose non-commercial.
Next you will be taken to the app details page. Enter the name of your app, a description of what your app does, accept the agreements, and click the submit button.
Next you will be presented with your key and your secret. We will not be using the secret key here as our app does not require authentication. Make a note of your API key as we will need it for the rest of this tutorial.
Step 3: Diving Into the Flickr API
Flickr has a REST API for developers. We will be using two methods from the REST API, as listed below.
The flickr.photos.search method will allow us to search for photos, while the flickr.photos.getInfo method will allow us to get information about a single photo, such as the username (owner) of the photo, the title, and the URL to the photo on Flickr.
If you visit one of the links above, at the bottom of the page there is a link to the API Explorer where you can enter some data and get an example response.
The image above is for the flickr.photos.search method. Go ahead and click the link now.
There are a lot of options, and it may seem overwhelming, but all we are interested in for this tutorial is the "tags" option. I entered "dog" into the tags search box. Also make sure you choose JSON as the output method, as we will be using JSON in this tutorial. Finally, press the "Call Method" button, and an example of the response will be returned.
Below is a portion of the JSON that is returned from choosing "dog" as a tag:
{ "photos": { "page": 1, "pages": "44339", "perpage": 100, "total": "4433823", "photo": [ { "id": "5892368188", "owner": "99393049@N00", "secret": "05a32e3fea", "server": "5159", "farm": 6, "title": "IMG_6107", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5891801539", "owner": "99393049@N00", "secret": "a40fccb849", "server": "5074", "farm": 6, "title": "IMG_6106", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5892367712", "owner": "99393049@N00", "secret": "95cc08ae63", "server": "5069", "farm": 6, "title": "IMG_6104", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5891800955", "owner": "99393049@N00", "secret": "e4baabe873", "server": "5036", "farm": 6, "title": "IMG_6101", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5891805969", "owner": "99393049@N00", "secret": "2904cf36c8", "server": "5263", "farm": 6, "title": "IMG_6102", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5892368656", "owner": "99393049@N00", "secret": "d43d284c32", "server": "5022", "farm": 6, "title": "IMG_6112", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5891806581", "owner": "99393049@N00", "secret": "4f00c94d98", "server": "6002", "farm": 7, "title": "IMG_6110", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5891801901", "owner": "99393049@N00", "secret": "8ee8ef91d7", "server": "6099", "farm": 7, "title": "IMG_6109", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5891780613", "owner": "64562380@N04", "secret": "6e39772812", "server": "6033", "farm": 7, "title": "Mr. 
Bilbo Baggins", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5892344104", "owner": "50185661@N03", "secret": "7a4f97826b", "server": "5070", "farm": 6, "title": "Bring Hanna home", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5892353174", "owner": "74269181@N00", "secret": "5606968ce6", "server": "5320", "farm": 6, "title": "Maximillian", "ispublic": 1, "isfriend": 0, "isfamily": 0 }, { "id": "5891780247", "owner": "23228490@N04", "secret": "63f5df2d27", "server": "5022", "farm": 6, "title": "Aung San Suu Kyi at home in Rangoon, Burma", "ispublic": 1, "isfriend": 0, "isfamily": 0 },
We will be using the data from the response to construct our URLs to the images. The URLs take the form of: http://farm{farm-id}.static.flickr.com/{server-id}/{id}_{secret}_[mstzb].jpg
To make a usable URL we just replace what is inside the {} with the data we get from above. For example, using the first photo in the response would yield http://farm6.static.flickr.com/5159/5892368188_05a32e3fea.jpg (the optional [mstzb] suffix selects a size). If we now use that link, we go to the photo. More info about the URLs can be found in the Flickr documentation for photo source URLs.
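As a quick illustration of that substitution, here is the same string assembly sketched in JavaScript (the field names match the JSON response above; the function name buildPhotoURL is my own, and in the game we will do this in ActionScript 3):

```javascript
// Build a Flickr image URL from one photo object in the search response.
// The optional size argument is the [mstzb] suffix from the URL template;
// omit it to get the default size.
function buildPhotoURL(photo, size) {
  var url = "http://farm" + photo.farm + ".static.flickr.com/" +
            photo.server + "/" + photo.id + "_" + photo.secret;
  if (size) {
    url += "_" + size;
  }
  return url + ".jpg";
}

// Using the first photo from the sample response above:
var photo = { id: "5892368188", farm: 6, server: "5159", secret: "05a32e3fea" };
console.log(buildPhotoURL(photo));
// http://farm6.static.flickr.com/5159/5892368188_05a32e3fea.jpg
```

The same concatenation, written in ActionScript, is what we will use later when we turn each search result into an image URL.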
Michael James Williams wrote a tutorial on Understanding JSON, be sure to check it out to learn about the intricacies of JSON.
Step 4: Setting Up the Project
Go to File > New and choose "ActionScript 3.0 Document".
Give the Flash document the following properties.
- Stage Color: White
- Size: 600x600px
Save this document as "PhotoMatch.fla".
Go to File > New and choose "ActionScript File". Save this as "Main.as" in the same folder that you saved the "PhotoMatch.fla" in.
Next we need to set the Document Class. Set the Document Class to "Main".
Step 5: Setting Up "Main.as"
Inside "Main.as" add the following code:
package {
    import flash.display.MovieClip;
    import flash.events.*;
    import flash.net.*;
    import com.adobe.serialization.json.JSON;

    public class Main extends MovieClip {
        public function Main() {
        }
    }
}
Here we opened the package, imported the classes we will need for the first part of this tutorial, declared our Main class to extend MovieClip, set up our constructor function, and closed out the package.
Step 6: Using the Flickr API to Retrieve a List of Images
Now that we have our Flash project set up, we are ready to get a list of image URLs from Flickr.
Unfortunately, Flash at this moment does not have any built-in capability to parse JSON. It has been announced, however, that the newest version of the Flash Player will support JSON natively, as mentioned by Joseph Labrecque in his Industry News: Week 20 2011 post on Activetuts+.
To get around this limitation, we will be using Adobe's as3corelib, hosted by Mike Chambers, so grab a copy on GitHub. Once you have a copy of as3corelib, extract it to a convenient place on your hard drive. Inside the extracted folder, inside the "src" folder, is a "com" folder. Copy this "com" folder into the same directory you saved the PhotoMatch.fla.
Now we are ready to parse some JSON files.
Underneath the line public class Main extends MovieClip add the following:
public class Main extends MovieClip {
    //The API key you received from Flickr
    private var APIKEY:String = "YOUR API KEY";
    //How many results will be returned from the search
    private var perPage:int = 10;
    //An array to store the URLs to the images
    private var imageURLArray:Array;
Be sure to replace "YOUR API KEY" with the API key you received from Flickr in the steps above.
Underneath the Main() constructor in "Main.as" add the following code:

public function Main() {
}

private function doRequest(e:Event):void {
    var searchURL:String = "http://api.flickr.com/services/rest/?method=flickr.photos.search";
    searchURL += "&api_key=" + APIKEY;
    searchURL += "&tags=" + "dog";
    searchURL += "&per_page=" + perPage;
    searchURL += "&format=json";
    searchURL += "&license=5,7";
    searchURL += "&nojsoncallback=1";
    var request:URLRequest = new URLRequest(searchURL);
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, completeHandler);
    loader.load(request);
}
Here we set up our searchURL to point to the flickr.photos.search method of the Flickr API. This is how you send any request to Flickr: after the "?" you put whatever method you are interested in; here we are searching for photos.
Next we append the arguments onto the searchURL. Below are the options with an explanation of each:
- api_key: The API key Flickr issued to you for your app.
- tags: The tags you want to search for; you can specify multiple tags by separating them with commas.
- per_page: How many photos you want returned at once; we are getting 10 for now.
- format: The format you want the results returned in; this can be JSON or XML.
- license: The types of licenses you wish the photos to have. Because the photos have different licenses you need to make sure your app complies with the licenses of the photos; more info is below.
- nojsoncallback: The Flickr API will wrap the response in a callback function for JavaScript users. Since we are not using JavaScript we set it to 1 (which means no callback is provided).
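To see how those options combine into one request, here is the query string assembly sketched in JavaScript (the endpoint and parameter names come from the Flickr documentation; buildSearchURL is an illustrative name and "YOUR API KEY" is a placeholder):

```javascript
// Assemble a flickr.photos.search request URL from the options above.
function buildSearchURL(apiKey, tags, perPage) {
  var url = "http://api.flickr.com/services/rest/?method=flickr.photos.search";
  url += "&api_key=" + apiKey;
  url += "&tags=" + encodeURIComponent(tags); // comma-separated list of tags
  url += "&per_page=" + perPage;              // results per page
  url += "&format=json";                      // JSON rather than XML
  url += "&license=5,7";                      // only licenses we may use (see below)
  url += "&nojsoncallback=1";                 // raw JSON, no callback wrapper
  return url;
}

console.log(buildSearchURL("YOUR API KEY", "dog", 10));
```

The ActionScript version in doRequest builds the same string by repeated concatenation onto searchURL.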
Again, for reference, here is the documentation for the flickr.photos.search method so you can cross-reference the arguments we have used.
As mentioned, the photos have a certain license attached to them. Below are the licenses, along with a link for further explanation.
<license id="0" name="All Rights Reserved" url=""/>
<license id="4" name="Attribution License" url=""/>
<license id="6" name="Attribution-NoDerivs License" url=""/>
<license id="3" name="Attribution-NonCommercial-NoDerivs License" url=""/>
<license id="2" name="Attribution-NonCommercial License" url=""/>
<license id="1" name="Attribution-NonCommercial-ShareAlike License" url=""/>
<license id="5" name="Attribution-ShareAlike License" url=""/>
<license id="7" name="No known copyright restrictions" url=""/>
<license id="8" name="United States Government Work" url=""/>
For the purposes of this tutorial we are allowing images with license IDs of 5 and 7. It's important to use only images that we are allowed to.
Next we set up a URLRequest and a URLLoader, add an EventListener to the URLLoader, and finally load the request.
Step 7: Coding the completeHandler
Here we code the completeHandler() method that we added to the URLLoader in the step above. Add the following to "Main.as" just below the doRequest() function you created above:
private function completeHandler(e:Event) {
    var loader:URLLoader = e.target as URLLoader;
    //The loader.data holds the JSON returned from flickr
    var rawString:String = loader.data;
    var decodedJSON:Object = JSON.decode(rawString);
    imageURLArray = new Array();
    for each (var photo:Object in decodedJSON.photos.photo) {
        var imageURL:String = "http://farm" + photo.farm + ".static.flickr.com/" + photo.server + "/" + photo.id + "_" + photo.secret + ".jpg";
        imageURLArray.push({imageUrl:imageURL, id:photo.id});
        trace(imageURL);
    }
}
Here we grab the URLLoader from the event, put the data from the loader into the rawString variable, and use the "JSON" class we imported to decode the JSON into an Object.
Next we use a for each loop to loop through the object.
On each iteration of the loop we build an imageURL and push it into the imageURLArray, along with the photo.id from the current photo. We will need the photo.id later in the tutorial when we get information for a single photo.
Add the following within the Main() constructor function:
public function Main() {
    doRequest(null);
}
You can now test the Movie and you should see the 10 imageURLs traced to the output panel.
Be sure to check the download files for the code for this step.
Step 8: Creating the shuffleCards() Function
To make this game useful we will need to be able to shuffle our cards. For now we are just going to shuffle the URLs; when we get to loading the actual images/cards we will update this function to shuffle cards instead.
Add the following to "Main.as" underneath the completeHandler function:
private function shuffleCards(theArray:Array) {
    for (var i:int = 0; i < theArray.length; i++) {
        var r:int = Math.floor(Math.random() * theArray.length);
        var temp:Object = theArray[i];
        theArray[i] = theArray[r];
        theArray[r] = temp;
    }
}
Here we set up our shuffle function to take an Array as a parameter. On each pass through the loop we choose a random index r, set the temp variable equal to the Array's current element, swap the current element with the element at the random index, and place temp at the random index, giving a simple in-place shuffle.

Remove the trace(imageURL) line from within the completeHandler() function. Then, in the same function, underneath the for each loop, add the following:

trace("-------BEFORE SHUFFLING-------");
for (var i:int = 0; i < imageURLArray.length; i++) {
    trace(imageURLArray[i].imageUrl);
}
shuffleCards(imageURLArray);
trace("-------AFTER SHUFFLING-------");
for (var j:int = 0; j < imageURLArray.length; j++) {
    trace(imageURLArray[j].imageUrl);
}
If you test the movie now you should see the imageURLs before we shuffle them, and after they have been shuffled.
Be sure to check the download files for the complete code for this step.
Step 9: Putting the Images on the Stage
Now that we have a shuffle function, let's get the images onto the stage.
Add the following to the variables section, above public function Main():
//An Array to hold the URLs to the images
private var cardImages:Array;
//A Vector to hold our Cards
private var cardArray:Vector.<UILoader>;
//An int to hold how many cards have loaded
private var numCardsLoaded:int = 0;
//Used to position our Cards' .x property
private var xPos:int = 50;
//Used to position our Cards' .y property
private var yPos:int = 220;
We also need to add a new import because we are using a UILoader, so add the following to your imports:
import fl.containers.UILoader;
The reason we have a separate array (cardImages) from imageURLArray is because imageURLArray will hold all of the image URLs we load for the game (up to 300) at one time. Since our game only uses 10 images at a time we need a separate array for just those 10 images.
The cardArray is used to hold our cards. For now it just holds UILoaders, but once we make an actual Card class we will change this to hold "Cards".
The numCardsLoaded is used to keep track of how many Cards have been loaded.
The xPos and yPos variables are used to position our cards on the stage. These are used in our positionCards function, which is coming up shortly.
Add the following after the shuffleCards function:
private function loadCards():void {
    cardImages = new Array();
    cardArray = new Vector.<UILoader>();
    cardImages = imageURLArray.splice(0, 10);
    for (var i:int = 0; i < cardImages.length; i++) {
        var urlLoader:URLLoader = new URLLoader();
        urlLoader.load(new URLRequest(cardImages[i].imageUrl));
        urlLoader.addEventListener(Event.COMPLETE, positionCards);
        for (var j:int = 0; j < 2; j++) {
            var card:UILoader = new UILoader();
            card.source = cardImages[i].imageUrl;
            cardArray.push(card);
        }
    }
}
The loadCards() function first resets the cardImages Array and the cardArray Vector, then we splice the first 10 elements of imageURLArray into the cardImages Array. Remember the game will only use 10 images at any time, but imageURLArray will eventually hold up to 300 elements.
Next we loop through the cardImages Array: we set up a URLLoader, tell it to load the URL, and add an Event.COMPLETE listener to it.
We then run an inner for loop, which executes twice for each element in the cardImages Array. We create a card as a UILoader, set its source, and finally push it into the cardArray. Essentially this just makes two copies of each "Card".
The positionCards() function is where we position our cards on stage; this function is next.
Add the following below the loadCards function:
private function positionCards(e:Event):void {
    numCardsLoaded++;
    if (numCardsLoaded == 10) {
        shuffleCards(cardArray);
        for (var i:int = 0; i < cardArray.length; i++) {
            var tempCard:UILoader = cardArray[i];
            tempCard.width = 75;
            tempCard.height = 75;
            if (i == 5 || i == 10 || i == 15) {
                xPos = 50;
                yPos += 90;
            }
            xPos += 80;
            tempCard.x = xPos;
            tempCard.y = yPos;
            addChild(tempCard);
            tempCard.x -= 37.5;
            tempCard.y -= 37.5;
        }
    }
}
Here we increment numCardsLoaded, and if it equals 10 we shuffle the cards within the cardArray, then loop through the cardArray: for each "Card" we set its size, set its position, and add it to the Stage. We subtract 37.5 from the UILoader's .x and .y because we want the "registration" point in the center of the UILoader. When we build our "Card" class we will set up the UILoaders this way.
Before we can test this we need to update the shuffle function to take a Vector, since the cardArray is a Vector and not an Array.
Change shuffleCards() to the following:
private function shuffleCards(theVector:Vector.<UILoader>) {
    for (var i:int = 0; i < theVector.length; i++) {
        var r:int = Math.floor(Math.random() * theVector.length);
        var temp:UILoader = theVector[i];
        theVector[i] = theVector[r];
        theVector[r] = temp;
    }
}
Remove the following from within the completeHandler function, and call loadCards() in its place so the cards are built once the search results arrive:

trace("-------BEFORE SHUFFLING-------");
for (var i:int = 0; i < imageURLArray.length; i++) {
    trace(imageURLArray[i].imageUrl);
}
shuffleCards(imageURLArray);
trace("-------AFTER SHUFFLING-------");
for (var j:int = 0; j < imageURLArray.length; j++) {
    trace(imageURLArray[j].imageUrl);
}
Finally we can test our Movie and see the URLLoaders randomly arranged on the stage. This is starting to look like a game now:
Be sure to check the download files for the complete code up to this step.
Step 10: Checking for Card Matches
Now that we have our "Cards" shuffled and arranged on stage, we will check to see if they "Match" when clicked on. Add the following to the bottom of your variable declarations.
private var chosenCards:Vector.<UILoader> = new Vector.<UILoader>(2);
The chosenCards Vector will hold the cards the user clicks on. Since we only want two cards chosen at any one time we give the Vector a length of 2.
Add the following below the positionCards() function:
private function doFlip(e:Event):void {
    if (chosenCards.length == 2) {
        chosenCards = new Vector.<UILoader>();
    }
    if (chosenCards.indexOf(e.currentTarget as UILoader) == -1) {
        chosenCards.push(e.currentTarget as UILoader);
        e.currentTarget.removeEventListener(MouseEvent.CLICK, doFlip);
    }
    if (chosenCards.length == 2) {
        removeListeners();
        checkForMatch(null);
    }
}
This function will soon be used to flip our Cards, but for now we just use it to manage our chosenCards Vector.
The first thing we do is check whether chosenCards.length is equal to two; if it is, we reset chosenCards to a new Vector. Next we check whether chosenCards already contains the "Card"; if it does not, we push the Card into chosenCards and remove the card's EventListener. Last, we check whether the chosenCards length is equal to 2; if it is, we call removeListeners(), which removes the EventListeners from all the cards, and finally we call the checkForMatch() method, which is coming up shortly.
Update positionCards() as follows; the difference is the new addEventListener line at the top of the for loop.
private function positionCards(e:Event):void {
    numCardsLoaded++;
    if (numCardsLoaded == 10) {
        shuffleCards(cardArray);
        for (var i:int = 0; i < cardArray.length; i++) {
            cardArray[i].addEventListener(MouseEvent.CLICK, doFlip);
            var tempCard:UILoader = cardArray[i];
            tempCard.width = 75;
            tempCard.height = 75;
            if (i == 5 || i == 10 || i == 15) {
                xPos = 50;
                yPos += 90;
            }
            xPos += 80;
            tempCard.x = xPos;
            tempCard.y = yPos;
            tempCard.x -= 37.5;
            tempCard.y -= 37.5;
            addChild(tempCard);
        }
    }
}
What we do here is add an EventListener that calls the doFlip() function when the user clicks one of the cards; doFlip() in turn calls the checkForMatch() function once two cards have been chosen. That function is below.
Add the following to "Main.as" underneath the positionCards function:
private function checkForMatch(e:Event):void {
    if (chosenCards.length == 2) {
        if (chosenCards[0].source == chosenCards[1].source) {
            chosenCards[0].removeEventListener(MouseEvent.CLICK, doFlip);
            chosenCards[1].removeEventListener(MouseEvent.CLICK, doFlip);
            cardArray.splice(cardArray.indexOf(chosenCards[0]), 1);
            cardArray.splice(cardArray.indexOf(chosenCards[1]), 1);
            removeChild(chosenCards[0]);
            removeChild(chosenCards[1]);
            addListeners();
            if (cardArray.length == 0) {
                trace("GAME OVER!!");
            }
        } else {
            addListeners();
        }
    }
}
The first thing we do is check whether the chosenCards length is equal to 2. If it is, we check whether the first "Card" in chosenCards has the same source as the second; if so, we remove the EventListeners from them, splice them out of the cardArray, remove them from the stage, and add the EventListeners back to the remaining cards. We then check whether cardArray.length is equal to 0, and if it is we simply trace "GAME OVER!!". Finally, we add the EventListeners back to all the cards whether there was a match or not.
Below are the removeListeners and addListeners methods. In these two methods we simply add or remove the EventListeners on the "Cards" within the cardArray.
Add these two methods underneath the checkForMatch() method:
private function removeListeners():void {
    for (var i:int = 0; i < cardArray.length; i++) {
        cardArray[i].removeEventListener(MouseEvent.CLICK, doFlip);
    }
}
private function addListeners():void {
    for (var i:int = 0; i < cardArray.length; i++) {
        cardArray[i].addEventListener(MouseEvent.CLICK, doFlip);
    }
}
Be sure to check the download files for the complete source code up to this step.
Step 11: Doing a Custom Search
Now that we have our cards laid out on the stage, and we are able to check for matches, let's set the game up to do a custom search.
Select the text tool and drag out a TextField onto the stage. You will want to position it toward the top of the stage so it does not interfere with the images.
Give the TextField the instance name "searchText" and make sure the pulldowns are set to "Classic Text" and "Input Text" respectively. Also make sure you select "Show border around text" so you can actually see the TextField. This is a temporary TextField just so we can test a custom search. We will be building a "SearchPanel" class shortly that contains a TextField and a Button.
We need to be able to press the Enter key to do the search, so replace the doRequest(null) call inside the Main() constructor function with the following code:
public function Main() {
    addEventListener(Event.ADDED_TO_STAGE, addedToStage);
}
Here we added an ADDED_TO_STAGE EventListener. This gets called when the movie is fully loaded and the Stage is ready.
Next up we need to code the addedToStage method. Add the following code directly under the Main() constructor function:
public function Main() {
    addEventListener(Event.ADDED_TO_STAGE, addedToStage);
}

private function addedToStage(e:Event):void {
    stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);
}
Here we add a KeyboardEvent.KEY_DOWN listener. The keyDownHandler function is where we will handle the pressing of the Enter key; this function is up next.
Add the following code directly underneath the addedToStage method:
private function keyDownHandler(e:KeyboardEvent) {
    if (e.keyCode == Keyboard.ENTER) {
        doRequest(null);
    }
}
Here we test whether the player pressed Enter; if they did, we call the doRequest() function.
Since we are using the Keyboard class we will need to import that class.
Add the following to the bottom of your import statements:
import flash.ui.Keyboard;
Now we need to change the doRequest function to use the text from our searchText TextField, so change the following lines in the doRequest() function:

var searchURL:String = "http://api.flickr.com/services/rest/?method=flickr.photos.search";
searchURL += "&api_key=" + APIKEY;
searchURL += "&tags=" + searchText.text;
You can now test the Movie and enter your own search text, such as "Flower". Give it a try! You will have to wait a brief moment before the images pop up, so be patient.
Be sure to check the download files for the complete source code up to this step.
Step 12: Creating the Game Button
Now that we can do a custom search, we will create the "GameButton" class that is used throughout the game. Then we will create a search panel that holds the search TextField and the "Search" button.
Go to Insert > New Symbol, or alternatively press Ctrl+F8. Give it the name "Game Button", check "Export for ActionScript", and make sure Class is set to "GameButton".
Select the rectangle tool and under "Fill And Stroke" make sure the stroke is not set and give the fill the color "#F26AF2". Next under "Rectangle Options" set the corner radius to 15.
Now drag a rectangle out inside the new MovieClip. Once dragged out, click on it to select it and set the following properties under "Position And Size"
- X: 0.00
- Y: 0.00
- W: 150.00
- H: 32.00
Next select the text tool and drag out a TextField on top of the Rounded Rectangle you just created. Make sure to give the TextField the color White.
Make sure the TextField is selected and give it the instance name "button_txt" and make sure the pulldowns are set to "Classic Text" and "Dynamic Text" respectively.
Next set the following properties under "Position And Size".
- X: 14.00
- Y: 4.00
- W: 128.00
- H: 23.00
You can now exit this MovieClip.
Next we need to create the class for this MovieClip. Go to File > New and choose "ActionScript File".
Save this file as "GameButton.as". Now add the following code to this file.
package {
    import flash.display.MovieClip;

    public class GameButton extends MovieClip {
        public function GameButton() {
            this.buttonMode = true;
            this.mouseChildren = false;
        }

        public function setText(theText:String):void {
            this.button_txt.text = theText;
        }
    }
}
Here we set up our "GameButton" class to extend MovieClip. Inside our constructor function we set buttonMode to true, and set mouseChildren to false.
Setting buttonMode to true makes the hand pointer show when the MovieClip is moused over, and setting mouseChildren to false prevents any nested MovieClips within this class from receiving Mouse Events. While I was testing, the TextField was selectable (even though I had chosen not selectable in the "CHARACTER" panel); by setting mouseChildren to false I was able to make the TextField non-selectable.
Next we create a setText(theText:String) method. This function takes a String as its argument, then simply sets the button_txt TextField's text to the String we passed in.
Step 13: Creating the Search Panel
Now that we have our GameButton ready, we can create the SearchPanel and its class.
Go to Insert > New Symbol, or alternatively press Ctrl+F8. Give it the name "SearchPanel", check "Export for ActionScript", and make sure Class is set to "SearchPanel".
Select the Text Tool and drag out a TextField into this MovieClip. Give it the instance name "searchTxt". Under the "CHARACTER" panel give the TextField the color "#F26AF2".
Next set the following properties on the TextField
- X: -130.00
- Y: -12.00
- W: 261.00
- H: 23.00
Now go to the library and drag an instance of the GameButton into this MovieClip, give it the instance name "searchBtn" and set the following properties on it.
- X: 147.00
- Y: -16.00
- W: 150.00(Default Size)
- H: 32.00(Default Size)
You can now exit this MovieClip
Now we are ready to write the class for the SearchPanel. Go to File > New and choose "ActionScript File". Save this file as "SearchPanel.as" and enter the following code:
package {
    import flash.display.MovieClip;

    public class SearchPanel extends MovieClip {
        public function SearchPanel() {
            // constructor code
        }

        public function clearText():void {
            this.searchTxt.text = "";
        }

        public function setButtonText(theText:String):void {
            this.searchBtn.button_txt.text = theText;
        }

        public function getText():String {
            return this.searchTxt.text;
        }
    }
}
Here we set up our SearchPanel to extend MovieClip and define three methods within the class. The clearText() function simply clears the TextField by setting it to an empty String.
The setButtonText(theText:String) method takes a String as an argument and sets the search button's text to the String that was passed in.
The getText() method returns whatever was entered into the TextField.
Step 14: Testing the Search Panel
Now that we have a functional Search Panel we can add it to our game and give it a quick test.
If you have not done so, delete the previous TextField we were using from the Stage.
Add the following variable at the bottom of the current variables.
private var searchPanel:SearchPanel = new SearchPanel();
This is simply a variable for the searchPanel that the game will use. Now we need a way to add the searchPanel to the stage; this function is next.
Add the following to "Main.as" underneath the keyDownHandler() function:
private function addSearchPanel():void {
    searchPanel.x = 235;
    searchPanel.y = 36;
    addChild(searchPanel);
    searchPanel.setButtonText("SEARCH");
    searchPanel.searchBtn.addEventListener(MouseEvent.CLICK, doRequest);
}
Here we set the searchPanel's .x and .y properties and add it to the Stage. Then we set the button's text and add an EventListener that will call our doRequest method.
Now we need to clean up the code from the previous Step where we were using a single TextField to do the search.
We are first going to make sure the Search Panel's TextField is not empty. Then we change the code that grabbed the ".text" property of the TextField directly to use the getText() method of the SearchPanel class.
The updated code for the doRequest method is below:

private function doRequest(e:Event):void {
    if (searchPanel.searchTxt.text != "") {
        var searchURL:String = "http://api.flickr.com/services/rest/?method=flickr.photos.search";
        searchURL += "&api_key=" + APIKEY;
        searchURL += "&tags=" + searchPanel.getText();
        searchURL += "&per_page=" + perPage;
        searchURL += "&format=json";
        searchURL += "&license=5,7";
        searchURL += "&nojsoncallback=1";
        var request:URLRequest = new URLRequest(searchURL);
        var loader:URLLoader = new URLLoader();
        loader.addEventListener(Event.COMPLETE, completeHandler);
        loader.load(request);
    }
}
Before we can test the Movie we need to add the Search Panel to the stage. Add the following function above the keyDownHandler() you created in the steps above:

private function setupGameElements() {
    addSearchPanel();
}
This method will be used to set up our game elements. For now we are using it to add the Search Panel.
Finally, within the addedToStage() function, add a call to the setupGameElements() method:

private function addedToStage(e:Event):void {
    stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);
    setupGameElements();
}
You can now test the Movie and enter a search into the Search Panel.
Step 15: Creating the Card Class
Next up we will create the "Card" class. We will use a UILoader within this class to hold the images we get from Flickr. Our Card class is going to be utilizing TweenLite, so if you don't have a copy, go grab one from GreenSock.com. You will want to put the greensock folder into the same "com" folder that you created in Step 6.
Now we are ready to write the Card class. Go to File > New and choose "ActionScript File".
Save this file as "Card.as" and enter the following code.
package {
    import fl.containers.UILoader;
    import flash.display.MovieClip;
    import com.greensock.TweenLite;
    import flash.display.Loader;
    import flash.events.Event;

    public class Card extends MovieClip {
        private var uiLoader:UILoader;
        private var loader:Loader;
        //An image URL on Flickr
        private var imageURL:String;
        //Used to tell whether the image has been set yet
        private var imageSetInTween:Boolean = false;
        //Used to tell whether the image is on the front of the Card
        private var cardFace:String = "front";
        private var userName:String;
        private var photoURL:String;

        public function Card() {
        }
    }
}
Here we open our package, import the classes we will need for this class, set up some variables we will need, and code our constructor function.
Now add the following inside the constructor function.
public function Card() {
    uiLoader = new UILoader();
    uiLoader.x = -37.5;
    uiLoader.y = -37.5;
    uiLoader.width = 75;
    uiLoader.height = 75;
    addChild(uiLoader);
}
Here we create a new UILoader, set its .x, .y, width, and height properties, then add it as a child of the Card. Now add the following code below the Card() constructor function:
public function setURL(theURL:*) {
    this.uiLoader.source = theURL;
}
The setURL(theURL:*) function is set up to take any data type, as denoted by the *. We then set the source of the loader to whatever was passed in. The reason I used a * here is because we will set the source of the UILoader to a MovieClip when we show the Flickr logo image (the card front), and it will be set to a String when we show the actual images.
Add the following code below the setURL() method:
public function flip() {
    if (cardFace == "front") {
        TweenLite.to(this, .5, {rotationY:180, onUpdate:setImage, onComplete:setCardFaceBack});
    } else {
        TweenLite.to(this, .5, {rotationY:0, onUpdate:setImage, onComplete:setCardFaceFront});
    }
}
This is the code we use to flip our cards. If cardFace is equal to "front", we flip the image to the left by tweening the rotationY property from 0 to 180. We also pass an onUpdate function that runs continuously while the card is animating, and an onComplete function that runs when the animation is complete. We similarly use TweenLite to tween rotationY from 180 back to 0 if the card is not showing its front, again passing the onUpdate and onComplete functions.
Now add the following code below the flip() function.
private function setImage() {
	if (this.rotationY >= 90) {
		if (imageSetInTween == false) {
			setURL(imageURL);
			scaleX = -1;
		}
		imageSetInTween = true;
	} else if (this.rotationY <= 90) {
		imageSetInTween = false;
		scaleX = 1;
		setURL(cardFront);
	}
}
This function is called continuously while the card is animating. We first check whether the card's rotationY is >= 90. If it is, we check whether imageSetInTween is false; if so, we use the setURL() function to set the UILoader's source to imageURL (which will be a URL to an image on Flickr), set the card's scaleX to -1, and finally set imageSetInTween to true.
The reason we set scaleX to -1 is that when we flip the images they appear mirrored (text would read from right to left).
imageSetInTween needs some explanation. When I was creating the Card class I was getting a "blank" image in the UILoader, meaning the image would not show. After a while of debugging I realized that the source was being set repeatedly, which made it appear that no image was showing. I think this has to do with the UILoader repeatedly "unloading" and "reloading" the image, creating a blank effect. So I set up this variable so the image is only loaded once.
Next we check whether the rotationY of the card is <= 90. If it is, we reset imageSetInTween to false, set scaleX back to 1, and set the image back to the card front.
Add the following code beneath the setImage() method.
private function setCardFaceFront():void {
	cardFace = "front";
}

private function setCardFaceBack():void {
	cardFace = "back";
	dispatchEvent(new Event("finished"));
}
These two functions are called when the animation completes. We simply set cardFace to "front" or "back". We use dispatchEvent() in the setCardFaceBack() function to dispatch a new Event called "finished". We will use this event in our Main class to alert us when the card has finished animating.
Add the following code beneath the two functions above.
public function remove():void {
	TweenLite.to(this, 1, {height:0, alpha:0, onComplete:removeFromStage});
}

private function removeFromStage():void {
	parent.removeChild(this);
}
The remove() function tweens the card's height to zero while also fading out its alpha. We call this function in our Main class when we want to remove a card from the stage. When the tween is complete, we call removeFromStage(), which calls parent.removeChild(this); the parent here is the main movie/stage. Next, add the following code below the removeFromStage() function.
public function setImageURL(imageURL:String):void {
	this.imageURL = imageURL;
}

public function getImageURL():String {
	return this.imageURL;
}

public function setUserName(userName:String):void {
	this.userName = userName;
}

public function getUserName():String {
	return userName;
}

public function setPhotoURL(photoURL:String):void {
	this.photoURL = photoURL;
}

public function getPhotoURL():String {
	return this.photoURL;
}
These are some simple getter and setter functions; we need them because we made the variables private. The imageURL will be a URL to the image on Flickr. The userName will be the owner of the photo on Flickr. The photoURL will be the URL to the photo's page on Flickr (the user and photo page); this is different from the imageURL, which points to the image file itself.
This completes our "Card" class. Next up we will change our game to use cards instead of individual image loaders.
Step 16: Using the Card Class
Now that we've got our Card class ready, we need to update our code to use Cards instead of individual UILoaders. Back in "Main.as", change the cardArray Vector to hold Cards instead of UILoaders.
private var cardArray:Vector.<Card>; // Now holds Cards instead of UILoaders
Next we need to update our chosenCards Vector to hold Cards as well.
private var chosenCards:Vector.<Card> = new Vector.<Card>(2);
Next up is changing our shuffleCards() function to take a Vector of Cards instead of a Vector of UILoaders. This is a small change: all we need to do is change the parameter type from UILoader to Card, and type the temp variable as a Card instead of a UILoader. Go ahead and make those changes.
private function shuffleCards(theVector:Vector.<Card>) {
	for (var i:int = 0; i < theVector.length; i++) {
		var r:int = Math.floor(Math.random() * theVector.length);
		var temp:Card = theVector[i];
		theVector[i] = theVector[r];
		theVector[r] = temp;
	}
}
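If the swap logic is hard to follow in the ActionScript above, the same in-place shuffle can be sketched in a few lines of Python (purely illustrative; the deck contents here are made up):

```python
import random

def shuffle_cards(cards):
    # Mirrors the ActionScript shuffleCards(): for each index, pick a
    # random index and swap the two elements in place.
    for i in range(len(cards)):
        r = random.randrange(len(cards))
        cards[i], cards[r] = cards[r], cards[i]
    return cards

deck = list(range(8)) * 2   # 8 images, two cards each = 16 cards
shuffled = shuffle_cards(deck[:])
```

The shuffle changes the order but never adds or drops a card, which is exactly what the matching game needs.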
Next in the list of changes is loadCards(). Within loadCards(), change cardArray to be a new Vector of Cards instead of UILoaders.
cardArray = new Vector.<Card>();
Instead of creating a new card as a UILoader, we will create it as a Card and use the setURL() and setImageURL() methods of our Card class. Change the following within the loadCards() function.
var card:Card = new Card();
card.setURL(cardFront);
card.setImageURL(cardImages[i].imageUrl);
cardArray.push(card);
In the library I have provided a movie clip to be used as the front of the card. Since we want our cards to start out with the Flickr logo showing, we use setURL() to set the card's front. Then we use setImageURL() so we have a reference to the image for this particular card.
That's all we need to change in the loadCards() function for now. Next up is the positionCards() function. The only change we need there is where we create tempCard: instead of typing it as a UILoader, we need to type it as a Card. Make that change now.
var tempCard:Card = cardArray[i];
Next up in our changes is the doFlip() function. The first thing we need to do is tell the card to flip by using the card's flip() method. Next we need to change where we create a new chosenCards Vector: currently we are using UILoader for the type, and we need to change it to Card. We also need to change where we push our cards into the chosenCards Vector; change it from UILoader to Card as well. Last, where we check whether chosenCards' length is equal to 2, we remove checkForMatch(null) and replace it with the custom "finished" event listener we made in the Card class.
e.currentTarget.flip();

if (chosenCards.length == 2) {
	chosenCards = new Vector.<Card>();
}

if (chosenCards.indexOf(e.currentTarget) == -1) {
	chosenCards.push(e.currentTarget as Card);
	e.currentTarget.removeEventListener(MouseEvent.CLICK, doFlip);
}

if (chosenCards.length == 2) {
	removeListeners();
	chosenCards[1].addEventListener("finished", checkForMatch);
}
The "finished" event is dispatched when a card finishes its animation. Since we only want to call checkForMatch() after the second card has finished animating, we add the event listener to the second card.
Last up in our changes is the checkForMatch() function. The first thing we do is remove the "finished" event listener we just added to the card in the step above. Next, the code where we check whether chosenCards[0].source == chosenCards[1].source can now be changed to use our getImageURL() function. We also use our remove() method instead of calling removeChild() directly. Last, if the cards do not match, we want to flip them back over, so we tell them both to flip again.
chosenCards[1].removeEventListener("finished", checkForMatch);

if (chosenCards.length == 2) {
	if (chosenCards[0].getImageURL() == chosenCards[1].getImageURL()) {
		chosenCards[0].remove();
		chosenCards[1].remove();
		addListeners();
		if (cardArray.length == 0) {
			trace("GAME OVER!!");
		}
	} else {
		addListeners();
		chosenCards[0].flip();
		chosenCards[1].flip();
	}
}
You can now test the movie. You should have a playable game with the cards doing flipping animations.
That's it for this part of the tutorial! In the next part, we'll add multiple levels and vastly improve the interface. Hope you've enjoyed it so far!
In order to understand the format of suggestions, please read the Suggesters page first.
The completion suggester is a so-called prefix suggester. It does not do spell correction like the term or phrase suggesters do, but allows basic auto-complete functionality.
The first question that comes to mind when reading about a prefix suggester is why you should use it at all if you already have prefix queries. The answer is simple: prefix suggestions are fast.
The data structures are internally backed by Lucene's AnalyzingSuggester, which uses FSTs (finite state transducers) to execute suggestions. Usually these data structures are costly to create, are stored in-memory, and need to be rebuilt every now and then to reflect changes in your indexed documents. The completion suggester circumvents this by storing the FST (finite state transducer) as part of your index at index time. This allows for really fast loads and executions.
In order to use this feature, you have to specify a special mapping for this field, which enables the special storage of the field.
curl -X PUT localhost:9200/music

curl -X PUT localhost:9200/music/song/_mapping -d '{
  "song" : {
    "properties" : {
      "name" : { "type" : "string" },
      "suggest" : { "type" : "completion",
                    "index_analyzer" : "simple",
                    "search_analyzer" : "simple",
                    "payloads" : true
      }
    }
  }
}'
Mapping supports the following parameters:
index_analyzer - The index analyzer to use, defaults to simple.

search_analyzer - The search analyzer to use, defaults to simple. In case you are wondering why we did not opt for the standard analyzer: we try to have easy-to-understand behaviour here, and if you index the field content At the Drive-in, you will not get any suggestions for a, nor for d (the first non-stopword).

payloads - Enables the storing of payloads, defaults to false.

preserve_separators - Preserves the separators, defaults to true. If disabled, you could find a field starting with Foo Fighters, if you suggest for foof.

preserve_position_increments - Enables position increments, defaults to true. If disabled and using a stopwords analyzer, you could get a field starting with The Beatles, if you suggest for b. Note: you could also achieve this by indexing two inputs, Beatles and The Beatles; there is no need to change a simple analyzer if you are able to enrich your data.

max_input_length - Limits the length of a single input, defaults to 50 UTF-16 code points. This limit is only used at index time to reduce the total number of characters per input string, in order to prevent massive inputs from bloating the underlying data structure. Most use cases won't be influenced by the default value, since prefix completions seldom grow beyond prefixes longer than a handful of characters. (The old name "max_input_len" is deprecated.)
curl -X PUT 'localhost:9200/music/song/1?refresh=true' -d '{
  "name" : "Nevermind",
  "suggest" : {
    "input": [ "Nevermind", "Nirvana" ],
    "output": "Nirvana - Nevermind",
    "payload" : { "artistId" : 2321 },
    "weight" : 34
  }
}'
The following parameters are supported:
input - The input to store; this can be an array of strings or just a string. This field is mandatory.

output - The string to return if a suggestion matches. This is very useful to normalize outputs (i.e. have them always in the format artist - songname). This is optional. Note: the result is de-duplicated if several documents have the same output, i.e. only one is returned as part of the suggest result.

payload - An arbitrary JSON object, which is simply returned in the suggest option. You could store data like the id of a document, in order to load it from Elasticsearch without executing another search (which might not yield any results if input and output differ strongly).

weight - A positive integer, or a string containing a positive integer, which defines a weight and allows you to rank your suggestions. This field is optional.
Even though you will lose most of the features of the completion suggester, you can choose to use the following shorthand form. Keep in mind that you will not be able to use several inputs, an output, payloads, or weights. This form does still work inside multi-fields.
{ "suggest" : "Nirvana" }
The suggest data structure might not reflect deletes on documents immediately; you may need to run an optimize for that. You can call optimize with only_expunge_deletes=true to only target deletions for merging. By default, only_expunge_deletes=true will only select segments for merging where the percentage of deleted documents is greater than 10% of the number of documents in that segment. To adjust this, index.merge.policy.expunge_deletes_allowed can be updated to a value between [0..100]. Please remember that even with this option set, optimize is considered an extremely heavy operation and should be called rarely.
Suggesting works as usual, except that you have to specify the suggest type as completion.
curl -X POST 'localhost:9200/music/_suggest?pretty' -d '{
  "song-suggest" : {
    "text" : "n",
    "completion" : {
      "field" : "suggest"
    }
  }
}'

{
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "song-suggest" : [ {
    "text" : "n",
    "offset" : 0,
    "length" : 1,
    "options" : [ {
      "text" : "Nirvana - Nevermind",
      "score" : 34.0,
      "payload" : {"artistId":2321}
    } ]
  } ]
}
As you can see, the payload is included in the response if configured appropriately. If you configured a weight for a suggestion, this weight is used as the score. Also, the text field uses the output of your indexed suggestion if configured; otherwise it uses the matched part of the input field.
The basic completion suggester query supports the following two parameters:
field - The name of the field on which to run the query (required).

size - The number of suggestions to return (defaults to 5).
The completion suggester considers all documents in the index. See Context Suggester for an explanation of how to query a subset of documents instead.
The completion suggester also supports fuzzy queries; this means you can actually have a typo in your search and still get results back.
curl -X POST 'localhost:9200/music/_suggest?pretty' -d '{
  "song-suggest" : {
    "text" : "n",
    "completion" : {
      "field" : "suggest",
      "fuzzy" : {
        "fuzziness" : 2
      }
    }
  }
}'
The fuzzy query can take specific fuzzy parameters; for example, fuzziness (shown above) controls the maximum edit distance allowed for a match.
If you want to stick with the default values but still use fuzzy, you can use either fuzzy: {} or fuzzy: true.
Originally posted by Viju: Now, compile the same code after removing "public" before the class name; this time the code will compile and run without any error.
But my question is WHY? What is the logic behind this?
Originally posted by Vishal Pandya: 'I' think there is no logic behind it. It's simply a rule.
Imagine that you have two files, A.java and B.java containing classes A and B. Furthermore, imagine that class A mentions class B. Now, you type "javac A.java". At some point, the compiler is going to look for B.class, and since it doesn't exist yet, what should the compiler do? Of course, what it does is look for "B.java" and expect class B to be defined in it. If B.java instead contained class C, and C.java contained class B, then the compiler would fail to compile anything. So even without the rule about one public class per file, you can see another common sense rule: if a class is referenced outside of the source file in which it is defined, the source file should be named after the class. Many Java compilers will warn about violations of this even for non-public classes. Now, the only reason for a class to be public is for it to be used outside of its own source file, right? So it makes sense to make the rule stronger in this case. The common-sense rule, the one that people follow consciously or not, is that a source file should contain at most one class that is ever mentioned by name outside of the file, and the file should be named after that one class.
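The same file-name lookup idea shows up in other languages too. As a rough analogy only (this is Python, not Java, and the module name here is made up), an import succeeds precisely because the file is named after the module:

```python
import os
import sys
import tempfile
import importlib

# Write a module whose file name matches the name we will import.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "bmod.py"), "w") as f:
    f.write("class B:\n    greeting = 'hi'\n")

sys.path.insert(0, tmp)
importlib.invalidate_caches()

import bmod                 # found because bmod.py is named after the module
print(bmod.B.greeting)      # prints: hi
```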
Originally posted by Vijay Arora: Thanks Campbell for the hint; I searched through the forum and found the above information.
Originally posted by Vijay Arora: Sure, the following is the URL where I found the above quote
Payel Bera wrote: Hi Campbell, thank you for your clarification!
1) If there is no public class, then the name of the java file can be the same as any of the non-public classes, or any other name like abc.java. Please let me know if my understanding is correct.
2) If we have class sample in file sample1.java, then the code inside sample.java will execute when we run sample1.java. Please let me know if my understanding is correct. Thanks in advance.
The EF mapping system supports mapping stored procedures for all four CRUD (Create, Read, Update, and Delete) operations. For more details and a full example of reading entities from a stored procedure, see the Entity Framework documentation.
One way to map read-only entities is to use a Defining Query. See Mapping Read-only Entities and QueryView Element (EntitySetMapping) for more details.
When you generate the DDL script from the model, there are cases where you might want to customize the generated SQL. For example, when you have a property of binary type in the model and you want to map it to timestamp or rowversion (rowversion should be used instead of timestamp starting with SQL Server 2008), you currently cannot do it in the designer. You can customize the T4 template that generates the DDL script by doing the following:
1. Add a structured annotation to the CSDL, by declaring your own namespace and then adding an element using that namespace. (For more information, see the MSDN documentation.)
The CopyToSSDL annotation tells the database generation pipeline to copy the annotation to the SSDL.
For example, find the following line of code in the .tt file:
[<#=Id(prop.Name)#>] <#=prop.ToStoreType()#>
<#=WriteIdentity(prop, targetVersion)#>
<#=WriteNullable(prop.Nullable)#>
<#=(p < entitySet.ElementType.Properties.Count - 1) ? "," : ""#>
And replace it with code that reads the annotation (if one exists) and uses its value instead of the built-in value for the database type:
[<#=Id(prop.Name)#>]
<#
    if (prop.MetadataProperties.Contains(""))
    {
        MetadataProperty annotationProperty = prop.MetadataProperties[""];
        XElement e = XElement.Parse(annotationProperty.Value.ToString());
        string value = e.Value.Trim();
#> <#=value#> <#
    } else {
#> <#=prop.ToStoreType()#> <#
    }
#> <#=WriteIdentity(prop, targetVersion)#>
<#=WriteNullable(prop.Nullable)#>
<#=(p < entitySet.ElementType.Properties.Count - 1) ? "," : ""#>
Note: to make the preceding code work, you need to add some assembly and namespace directives:

<#@ assembly name="System.Xml.Linq" #>
<#@ assembly name="System.Xml" #>
<#@ import namespace="System.Xml.Linq" #>
<#@ import namespace="System.Xml" #>
You will get this error when you start with model first and you haven't generated the tables yet. The model will not validate until all objects in the model are mapped to the database. After you run Generate Database from Model, this error should go away.
You need to add a condition on the Active column of your entity type. You then need to remove the mapping for the Active column and also remove the Active property from the entity type (in this order). Consider the following example. Here we have an Entity2 type that has an Active property, which is of the bit type in SQL Server. We add the condition specifying to only bring back entities where Active = 1, clear the mapping, and remove the Active property from Entity2, as shown in the following screenshot.
Now when we execute a query to return results of the Entity2 type, only records where Active is set to 1 will be returned.

foreach (var e in context.Entity2)
    Console.WriteLine(e.ID);
On the client you will probably also want to add some logic that takes care of not really deleting objects, but instead updating the Active column. For example:
public override int SaveChanges(SaveOptions options)
{
    foreach (ObjectStateEntry entry in
        ObjectStateManager.GetObjectStateEntries(EntityState.Deleted))
    {
        // Change the state to Unchanged, since we don't want the Entity Framework
        // to issue a delete command.
        Entity2 e = entry.Entity as Entity2;
        entry.ChangeState(EntityState.Unchanged);

        // Update the Active column in the database.
        ExecuteStoreCommand("UPDATE Entity2 SET Active = 0 WHERE Entity2.ID = {0}", e.ID);
    }

    return base.SaveChanges(options);
}
Yes. The Entity Framework supports mapping views to complex types. Views really aren't any different from tables, except that the EF doesn't have all the same schema information about them; in particular, it doesn't know the key values. Note that you cannot have an entity key of a complex type: keys must be made up of one or more properties that are primitive types.
If you set the StoreGeneratedPattern to Identity (if the value is only computed on insert) or Computed (if it is computed on every update), then when you call SaveChanges the EF will fetch the value generated in the database and bring it back to update your object. The EF will not automatically set the value when you create a new object, because the computation actually happens in the database.

Also note that there is a StoreGeneratedPattern property on properties of your entities in the EDM designer, but it is only used for Model First. If you reverse-engineer your model from the database, then you need to manually update the SSDL section of your EDMX file to put the StoreGeneratedPattern attribute on the appropriate entity property. This is something that should be addressed in future releases of the EF designer (so that the property in the designer works for both Model First and Database First scenarios).
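For example, the manual SSDL edit might look something like the fragment below (the entity and property names are made up for illustration; only the StoreGeneratedPattern attribute is the point here):

```xml
<EntityType Name="Order">
  <Key>
    <PropertyRef Name="Id" />
  </Key>
  <Property Name="Id" Type="int" Nullable="false" StoreGeneratedPattern="Identity" />
  <!-- Computed: the database regenerates this value on every update. -->
  <Property Name="RowVersion" Type="rowversion" Nullable="false" StoreGeneratedPattern="Computed" />
</EntityType>
```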
No. | http://social.technet.microsoft.com/wiki/contents/articles/3793.entity-framework-faq-mapping-and-modeling.aspx | CC-MAIN-2014-41 | en | refinedweb |
A Feature activation dependency expresses a requirement in the relationship between two Features. You can express activation dependencies either for Features of the same scope or for Features across different scopes. Please check the SDK documentation for more information about activation dependencies. Here we are going to talk about feature activation dependencies and hidden features.

A hidden feature cannot have any activation dependencies. For example, say Feature A declares an activation dependency on Feature B: we can activate Feature A only after activating Feature B. If Feature A is also marked hidden, SharePoint will refuse to install it, because hidden features with activation dependencies are not supported.

If you want to check this quickly, you can try the following feature definitions.
1. Create a feature named Feature1 and give it the following feature.xml:

<Feature Id="74EE1A2F-2E2D-4a52-A674-BB16575234C7"
         Title="testActivate one"
         Description="Test one"
         Version="1.0.0.0"
         Scope="Site"
         Hidden="FALSE"
         xmlns="http://schemas.microsoft.com/sharepoint/">
</Feature>

2. Create a feature named Feature2 and give it the following feature.xml:
<Feature Id="B92139A4-04D7-4d9d-9B61-2DF6E4A59D66"
         Title="testActivate two"
         Description="Test two"
         Version="1.0.0.0"
         Scope="Site"
         Hidden="TRUE"
         xmlns="http://schemas.microsoft.com/sharepoint/">
  <ActivationDependencies>
    <ActivationDependency FeatureId="74EE1A2F-2E2D-4a52-A674-BB16575234C7" />
  </ActivationDependencies>
</Feature>
This means we can activate Feature2 only after activating Feature1. Now let's see the outcome when we install these features using STSADM.EXE.
1. STSADM.exe -o installfeature -name Feature1 -force
Operation completed successfully
2. STSADM.exe -o installfeature -name Feature2 -force
Hidden features with activation dependencies are not supported.
If you want to see a real use of activation dependencies in the out-of-the-box features, try to activate Office SharePoint Server Publishing among the site features. This feature depends on Office SharePoint Server Publishing Infrastructure among the site collection features; only once you activate that site collection feature can you activate the site feature. In this scenario, think about what would happen if the site collection feature (Office SharePoint Server Publishing Infrastructure) were hidden: then we could not activate that feature, with the result that we could not activate the site feature (Office SharePoint Server Publishing) either.
Office SharePoint Server Publishing
Create a Web page library as well as supporting libraries to create and publish pages based on page layouts.
One or more features must be turned on before this feature can be activated.
Office SharePoint Server Publishing Infrastructure
Provides centralized libraries, content types, master pages and page layouts and enables page scheduling and other publishing functionality for a site collection.
Actually, this came from one of my customers' requirements. So what I tried is this: whenever I install Feature 1, I hook an event handler that captures the activation event and, before that transaction completes, installs and activates the hidden, dependent Feature 2. Unfortunately we don't have a synchronous (FeatureActivating) event, but what I have seen is that if an exception is thrown in the FeatureActivated event, the activation is not rolled back and the feature simply remains inactive. Thus, once we activate another feature inside the FeatureActivated event, that feature's activation process finishes first, and then the feature activation process of Feature 1 completes.

In my testing scenario there were two features, Feature1 and Feature2. Feature1 is a hidden feature and Feature2 is not. Our intention is that whenever we try to activate Feature2, we must first activate Feature1; only then can Feature2 be activated.
Then I decided to do the following:
1. Install the hidden Feature1
2. Install the Feature2
3. Register a FeatureActivated event handler for Feature2
4. Whenever we activate the feature, capture that event and then activate the hidden feature Feature1.
Here is the code snippet for the feature event handler.
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
using Microsoft.SharePoint.Utilities;
using System.Diagnostics;

namespace FeatureActivationDependency
{
    public class ActivationDependency : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            SPSite oSite = properties.Feature.Parent as SPSite;
            SPFarm oFarm = oSite.WebApplication.Farm;

            SPFeatureDefinitionCollection oFeatures = oFarm.FeatureDefinitions;
            SPFeatureDefinition oFeature = oFeatures[new Guid("B92139A4-04D7-4d9d-9B61-2DF6E4A59D66")];

            if (oFeature == null)
            {
                // Install and then activate the hidden feature.
                Process oProcess = new Process();
                ProcessStartInfo oProcInfo = new ProcessStartInfo();

                // Here I am hard-coding the physical location of the STSADM command.
                oProcInfo.FileName = @"C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.EXE";
                // Here Act2 is my hidden feature's name. If you want to test this
                // functionality, first install an unhidden feature; you will then see
                // that the feature gets installed and activated in the features list.
                oProcInfo.Arguments = " -o installfeature -name Act2 -force";
                oProcess.StartInfo = oProcInfo;

                // Execute the stsadm command.
                oProcess.Start();
                oProcess.WaitForExit();

                // After installing the feature, activate it.
                SPFeatureDefinitionCollection oAllFeatures = oFarm.FeatureDefinitions;
                SPFeatureDefinition oHiddenFeature = oAllFeatures[new Guid("B92139A4-04D7-4d9d-9B61-2DF6E4A59D66")];
                oSite.Features.Add(oHiddenFeature.Id, true);
            }
            else
            {
                // Assume the feature is already installed, so just activate it.
                oSite.Features.Add(oFeature.Id, true);
            }
        }

        public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
        {
        }

        public override void FeatureInstalled(SPFeatureReceiverProperties properties)
        {
        }

        public override void FeatureUninstalling(SPFeatureReceiverProperties properties)
        {
        }
    }
}
Thanks for very useful information.
Manish
I have a small doubt; don't mind if it is wrong. Can I activate a hidden feature from the stsadm command?
@Bhargava,

You can activate the hidden feature as usual. The only restriction is that your hidden feature cannot have activation dependencies.

A hidden feature will not be displayed under "Site collection features"; this restricts the end user from activating or deactivating it there. If you have permission, you can activate the hidden feature.
Sorry if this looks a bit weird. I'm using Lynx, a text-mode web browser, right now, and I'm not sure how things will turn out.
The source is pretty simple. I just made a new file
and added the simple basic thing:
#include <stdio.h>

int main(void)
{
    printf("hello\n");
    return 0;
}
I think I'm missing a binary, but finding it is going to be like looking for a needle in a haystack.
It's worth it, though; I like this compiler a lot.
Take care
Stef | http://cboard.cprogramming.com/brief-history-cprogramming-com/8094-compiler.html | CC-MAIN-2014-41 | en | refinedweb |
Using Pointer to help send multiple AD Conversions out Serially
Hey guys,

I am trying to use DMA and pointers to help me send A/D values over serial. Right now I can only get the code to display the values coming in from ADC12MEM0 on my terminal program; the other values being dumped into my RAM are not being transmitted to the terminal. Therefore, I am trying to set up a pointer that will keep track of where I am in RAM and send every conversion to the terminal. Here is my code; I tried to use the comments to explain what I was thinking.
#include <msp430xG46x.h>

int ADCSample;
int *Pointer1;
int ADCSample2;

void main(void)
{
    WDTCTL = WDTPW + WDTHOLD;               // Stop watchdog
    P5DIR |= 0x02;
    P5OUT |= 0x02;

    // Initialization of ADC12
    P6SEL |= 0x01;                          // Enable A/D channel A0
    ADC12CTL0 &= ~ENC;                      // Disable conversions
    ADC12CTL0 = ADC12ON + SHS_1 + REFON + REF2_5V + MSC; // Turn on ADC12, set sample time
    ADC12CTL1 = SHP + CONSEQ_2;             // Use sampling timer
    ADC12MCTL0 = SREF_1 + INCH_0;           // Vr+ = VeREF+ (external)
    ADC12IFG = 0;

    // Timer A
    TACCR0 = 1500;                          // Delay to allow Ref to settle
    ADC12CTL0 |= ENC;                       // Enable conversions

    // Initialization of RS-232
    FLL_CTL0 |= XCAP14PF;                   // Configure load caps
    UCA0CTL1 |= UCSSEL_1;                   // CLK = ACLK
    UCA0BR0 = 0x03;                         // 32k/9600 - 13.65
    UCA0BR1 = 0x00;
    UCA0MCTL = 0x06;                        // Modulation
    UCA0CTL1 &= ~UCSWRST;                   // **Initialize USCI state machine**
    IE2 |= UCA0TXIE + UCA0RXIE;             // Enable RXD and TXD interrupt

    // Initialize DMA
    DMACTL0 = DMA0TSEL_6;                   // ADC12IFG set
    DMACTL1 = DMAONFETCH;
    __data16_write_addr((unsigned short) &DMA0SA, (unsigned long) &ADC12MEM0); // Source address
    __data16_write_addr((unsigned short) &DMA0DA, (unsigned long) 0x001108);   // Destination single address
    DMA0SZ = 0x0FFF;                        // Set DMA block size
    DMA0CTL = DMADT_4 + DMADSTINCR_3 + DMAIE + DMADSTBYTE + DMASRCBYTE; // Repeat single, inc dst, interrupts

    ADCSample = ADC12MEM0;                  // Set integer to value of ADC12MEM0
    Pointer1 = &ADCSample;                  // Set Pointer1 to the address of ADCSample
                                            // (not sure where this address is; I want it to be at 0x001108)
    DMA0CTL |= DMAEN;                       // Enable DMA
    ADC12CTL0 |= ADC12SC;                   // Start conversions

    // Serial loop
    while (*Pointer1 <= 0x030FF)            // Execute loop until loop reaches 0x030FF address
    {
        ADCSample2 = *Pointer1;             // Set integer to value stored in Pointer1
        UCA0TXBUF = ADCSample2 >> 8;        // Send upper byte from Pointer1 to serial
        while (!(IFG2 & UCA0TXIFG))
        {
            __delay_cycles(1000);           // Wait for first transmit
        }
        UCA0TXBUF = ADCSample2;
        *Pointer1 = (*Pointer1 + 1) & 0x030FF; // Increment Pointer1 from address 0x001108 to 0x030FF
        __bis_SR_register(LPM0_bits + GIE);
    }
}
Martin Novotny wrote:
IE2 |= UCA0TXIE + UCA0RXIE; // enable RXD and TXD interrupt;
You didn't post any ISR, so I assume you have none; the first interrupt (after GIE has been set) will jump into the void, crashing and resetting the MSP.
Martin Novotny wrote:
ADC12CTL0 |= ADC12SC; // Start conversions
Martin Novotny wrote:
while (*Pointer1 <= 0x030FF) // Execute loop until loop reaches 0x030FF address
The DMA probably does transfer 4k of words (= 8k of data) from ADC12MEM0 to addresses 0x1108 through 0x3106 (overwriting everything the linker happens to have placed there), but it would be pure coincidence if this changed *Pointer1 without also messing up Pointer1 itself.
Martin Novotny wrote:
while (!(IFG2 & UCA0TXIFG)) { __delay_cycles(1000); } // wait for first transmit
Martin Novotny wrote:
*Pointer1 = (*Pointer1 + 1) & 0x030FF; // Increment Pointer1 from address 0x001108 to 0x030FF
What this does is: it takes the value pointed to by Pointer1, increments this value by 1, then does a bit-wise AND with 0x30FF and stores the result back to the memory location Pointer1 points to (which is still the ADCSample variable, or maybe anywhere in the addressing range, depending on what damage the DMA has done).
The whole construct makes no sense, sorry. I guess you didn't really understand what pointers in C are and how they are used.
Sorry, I guess I'm not sure why you bothered to even reply to my post when all you did was tell me everything I did wrong and offered no means of assistance. Of course the code does not do what I was describing, and no, I do not have a great understanding of what pointers in C are, but nowhere in your post did you make any attempt to help me learn or suggest ways to improve my code. Therefore do not bother responding to this reply, because I won't be coming back to this forum. It's embarrassing that you are the go-to expert on these forums, as I posted a problem and instead of posting a possible fix or even some good advice you simply told me everything I already know, which is that my code is obviously flawed.
Jens-Michael Gross is one of the more tolerant gurus. You must have caught him at a bad moment.
I am guessing you are trying to DMA a number of samples from the ADC to a buffer while simultaneously sending the buffer to the serial port. I think your code is using unallocated memory as though it were an array. This is not safe, as the unallocated memory can move. Better to allocate yourself an array and ask the DMA to fill that array. The question is not really about pointers but more about buffer arrays and DMA transfers. My background is not with MSP430... here's my "straw man" version of your code. See if others will knock it down.
#include <msp430xG46x.h>

#define SAMPLES 256

volatile int ADCSamples[SAMPLES];

void main(void)
{
  int i;
  int x;

  WDTCTL = WDTPW + WDTHOLD;           // Stop watchdog
  P5DIR |= 0x02;
  P5OUT |= 0x02;

  // Initialization of ADC12
  P6SEL |= 0x01;                      // Enable A/D channel A0
  ADC12CTL0 &= ~ENC;                  // Disable conversions
  ADC12CTL0 = ADC12ON + SHS_1 + REFON + REF2_5V + MSC; // turn on ADC12, set samp time
  ADC12CTL1 = SHP + CONSEQ_2;         // Use sampling timer
  ADC12MCTL0 = SREF_1 + INCH_0;       // Vr+=VeREF+ (external)
  //ADC12IFG = 0;

  // Timer A
  TACCR0 = 1500;                      // Delay to allow Ref to settle
  ADC12CTL0 |= ENC;                   // Enable conversions

  // Initialization of RS-232
  FLL_CTL0 |= XCAP14PF;               // Configure load caps
                                      // RXD/TXD
  UCA0CTL1 |= UCSSEL_1;               // CLK = ACLK
  UCA0BR0 = 0x03;                     // 32k/9600 - 13.65
  UCA0BR1 = 0x00;                     //
  UCA0MCTL = 0x06;                    // Modulation
  UCA0CTL1 &= ~UCSWRST;               // **Initialize USCI state machine**
  IE2 |= UCA0TXIE + UCA0RXIE;         // enable RXD and TXD interrupt;

  // Initialize DMA
  // Repeat single src, inc dst, interrupts
  DMACTL0 = DMA0TSEL_6;               // ADC12IF set
  DMACTL1 = DMAONFETCH;
  __data16_write_addr((unsigned short) &DMA0SA, (unsigned long) &ADC12MEM0); // Source
  __data16_write_addr((unsigned short) &DMA0DA, (unsigned long) ADCSamples); // Dest
  DMA0SZ = SAMPLES;                   // Set DMA Block size. BYTES? ITEMS?????
  DMA0CTL = DMADT_4 + DMADSTINCR_3 + DMAIE + DMADSTBYTE + DMASRCBYTE;
  DMA0CTL |= DMAEN;                   // Enable DMA
  ADC12CTL0 |= ADC12SC;               // Start conversions

  // Need to wait for first sample to complete here. How to?

  // Serial loop
  // Send samples from DMA buffer to serial port.
  // Assumes that ADC samples faster than the serial port transmits.
  for (i = 0; i < SAMPLES; i++)
  {
    x = ADCSamples[i];                // Get a 16 bit sample
    UCA0TXBUF = x >> 8;               // Send upper byte to Serial
    while (!(IFG2 & UCA0TXIFG))
      continue;                       // Wait for first transmit
    UCA0TXBUF = x;                    // Send lower byte to Serial
    while (!(IFG2 & UCA0TXIFG))
      continue;                       // Wait for second transmit
    __bis_SR_register(LPM0_bits + GIE); // What does this do?
  }
}
I've put question marks '?' where I'm not sure of your intent. Note that the code sends binary values across the serial port. A terminal program such as Hyperterminal wants ASCII characters. Maybe your terminal program has a mode where it will print out binary values in human-readable form.
Martin Novotny wrote:
Sorry, I guess I'm not sure why you bothered to even reply to my post when all you did was tell me everything I did wrong and offered no means of assistance.
It won't help you if I just write a working code version for you. You wouldn't learn anything and would come back to the forum with your next non-working code. Telling you where you went wrong (and giving you the advice to take a class on the usage of C pointers) will, if accepted by you, increase your knowledge and therefore your ability to do it right by yourself, while at the same time decreasing the probability that you come back for more assistance soon - with the very same mistakes.
Martin Novotny wrote:
I do not have a great understanding of what pointers in C are, but nowhere in your post did you make any attempt to help me learn or suggest ways to improve my code
Martin Novotny wrote:
you simply told me everything I already know, which is that my code is obviously flawed
And I never provide complete code. First, because I don't have the time to write it and test it (I surely don't want to release untested code); second, I don't have the equipment to test the code (there are ~400 MSP derivatives, not counting the required external circuitry for each case). And finally, nobody learns walking if he's carried around all the time. I may lend a hand while someone tries to walk, but if someone hasn't even discovered that he has legs... well, I have given private lessons in the past. Mainly chemistry, physics and math. But for cash. And back then I had the time for doing so. Today I have a fulltime job, and it's not being the 'go to expert'. It's not even for TI. I do this in my spare time. Free of charge.
Norman Wong wrote:
Jens-Michael Gross is one of the more tolerant gurus. You must have caught him at a bad moment.
But now to your code. I didn't check the clock and port initialization. However, there are a few things...
Norman Wong wrote:
//__bis_SR_register(LPM3_bits + GIE); // Wait for delay, Enable interrupts
TACCTL0 &= ~CCIFG;           // clear interrupt flag
while (!(TACCTL0 & CCIFG));  // wait until TAR has counted to TACCR0
Norman Wong wrote:
IE2 |= UCA0TXIE + UCA0RXIE; // enable RXD and TXD interrupt;
Norman Wong wrote:
DMA0SZ = SAMPLES; // Set DMA Block size. BYTES? ITEMS?????
Norman Wong wrote:
DMA0CTL = DMADT_4 + DMADSTINCR_3 + DMAIE + DMADSTBYTE + DMASRCBYTE;
Norman Wong wrote:
// Need to wait for first sample to complete here. How to?
Norman Wong wrote:
__bis_SR_register(LPM0_bits + GIE); // What does this do?
However, there is no ISR at all, so this line is not doing any good. It should be replaced by something that checks DMA0SZ:
while(DMA0SZ>=(SAMPLES.
Consult the talk page or me for any questions or concerns. See the Poo Lit Archives for the results from past competitions.
The competition
What is the Poo Lit Surprise?
A writing competition held on Uncyclopedia for a small cash prize selection of incredibly sharp kitchen knives and a fabulous garden strimmer. It is designed to jump-start writing quality at Uncyclopedia. After the previous eight competitions, a significant number of featured articles for the following month(s) were from the PLS, and it could be argued that the overall quality of featured articles increased.
Who can enter and what are the rules?
All registered members of Uncyclopedia are encouraged to enter, though a few conditions and limitations apply. Judges are barred from entering the category they are judging, on pain of being chopped into small manageable pieces and served with a light salad. Entering the same article in more than one category is not permitted; such behaviour will result in both articles being disqualified. Collaborations have their own category and are not permitted in any other category. Collaborative accounts are not permitted, and a collaborative team may include a maximum of two users, who are subject to all the same rules as participants of other categories.
Sockpuppetry or deception in the PLS is unacceptable and discovery will result in the entry being disqualified along with any other entries that user has submitted. Administrators should exercise their discretion in enforcing competition rules and any concerns should be reported to Romartus.
Articles created prior to the competition cannot be submitted. Also, no plagiarism. Should we discover your work is not original, you will be disqualified. Resources such as the Reefer Desk, Image Request, or Pee Review are forbidden. Users are encouraged to use Vital, Uncyclopedia:The Creative Process, Category:Rewrite, Special:Wantedpages, Uncyclopedia:Requested Articles and/or Inspire an Article to get ideas.
When is the PLS going to be held?
- From September 20th ― October 4th, entries will be accepted.
- From October 4th ― October 11th, entries will be locked and judged.
- On October 12th, winners will be announced, articles will be moved into the mainspace, and all entries will be unlocked.
- October 12th to October 31st - Dance party
(n.b.: all times are to be measured by UTC, and all phases of the contest end at midnight on the specified day)
Where should I put my entry?
The article should be placed on your userspace (i.e. the title should be in the format User:<insert name here>/[article name]). Alternatively, you can skip a step and type the article name below:
Entries may be submitted between Sept. 20 and October 4th.
What's the fattest animal on Earth?
Sockpuppet of an unregistered user's mom.
Entries
Best Article
This category is for articles (duh) of any namespace EXCEPT UnTunes and UnPoetia.
Best Noob Article
A category for noobs (users who have been on Uncyclopedia for three months or less). May be of an alternate namespace.
Best Illustrated Article
Based on how well an article's images contribute to the humor of the article. May be of an alternate namespace (images and article must be created by the user).
Best Rewrite
For articles which are rewrites of existing articles.
Best Collaboration
For the best article written by two users. May be of an alternate namespace. | http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Poo_Lit_Surprise?diff=cur&oldid=3250495 | CC-MAIN-2014-41 | en | refinedweb |
Patent application title: Content delivery network service provider (CDNSP)-managed content delivery network (CDN) for network service provider (NSP)
Inventors:
Timothy N. Weller (Boston, MA, US)
Charles E. Leiserson (Cambridge, MA, US)
Assignees:
AKAMAI TECHNOLOGIES, INC.
IPC8 Class:
USPC Class:
705/30
Class name: Data processing: financial, business practice, management, or cost/price determination automated electrical financial or business practice or management arrangement accounting
Publication date: 2012-05-24
Patent application number: 20120130871
Abstract:
Claims:
1. Apparatus associated with a content delivery network (CDN), the CDN deployed, operated and managed by a content delivery network service provider (CDNSP), comprising: a set of processors; and computer memory associated with each processor and holding computer program instructions that when executed by the processor comprise a content server, the content servers executing on the processors together comprising a private content delivery network associated with a network service provider (NSP) entity distinct from the CDNSP, where the private content delivery network enables the NSP entity to provide content delivery to content providers associated with the NSP entity; the content servers in the private content delivery network providing the content providers associated with the entity on-net content delivery over the private content delivery network; and one or more machines associated with the private content delivery network for managing content server data collection for the private content delivery network, for managing billing for the private content delivery network, and for providing a network operations center (NOC) for the private content delivery network.
Description:
[0001] This application is a continuation of Ser. No. 12/951,091, filed Nov. 22, 2010, now U.S. Pat. No. 8,108,507, which application was a continuation of Ser. No. 12/122,764, filed May 19, 2008, now U.S. Pat. No. 7,840,667, which application was a continuation of Ser. No. 11/636,849, filed Dec. 11, 2006, now U.S. Pat. No. 7,376,727, which application was a division of Ser. No. 10/114,080, filed Apr. 2, 2002, now U.S. Pat. No. 7,149,797, which application was based on Ser. No. 60/280,953, filed Apr. 2, 2001.
BACKGROUND OF THE INVENTION
[0002] 1. Technical Field
[0003] The present invention relates generally to delivery of digital content over a distributed network.
[0004] 2. Description of the Related Art
[0005] It is well-known to deliver digital content (e.g., HTTP content, streaming media and applications) using an Internet content delivery network (ICDN).

[0006] In addition, the CDN infrastructure typically includes network monitoring systems to continuously monitor the state and health of servers and the networks they are in, a network operations command center (NOCC) that monitors the state of the network on a 24×7×365 basis, a customer-facing extranet portal through which CDN customers obtain real-time and historical usage information and access to content management provisioning tools and the like, administrative and billing systems, and other CDN infrastructure and support. Some CDN service providers provide ancillary infrastructure and services such as high performance, highly-available, persistent storage, tiered distribution through cache hierarchies, edge content assembly, content targeting, and the like.
[0007] A CDN that provides web content, media streaming and application delivery is available from Akamai Technologies, Inc. of Cambridge, Mass.
[0008] Implementation, operation and management of a global distributed network--such as an Internet CDN--is a complex, costly and difficult endeavor. A large CDN may have thousands of servers operating in hundreds of disparate networks in numerous countries worldwide. Typically, the CDN service provider (a CDNSP) does not own physical support infrastructure (i.e., networks, buildings, and the like) on which the CDN servers run, nor does the CDNSP necessarily have the capability of administering those servers that are often deployed throughout the world. Rather, the service provider must deploy and then remotely administer these services and applications as what is, in effect, a virtual network overlaid on the existing (often third party owned and controlled) physical networks and data centers.
[0009] Many network service providers desire to provide content delivery services, however, the cost of designing, installing, managing and operating a full end-to-end CDN is prohibitive.
BRIEF SUMMARY OF THE INVENTION
[0010] According to the present invention, a content delivery network service provider (CDNSP) deploys, operates and manages one or more private content delivery networks on behalf of respective network service provider (NSP) partners, enabling each NSP to offer content delivery to its own participating content providers.
[0011] The CDNSP provides its NCDN customer with content delivery service for the NCDN's participating content providers in a so-called "on-net" manner, meaning that the content that is actually available to be delivered over the CDN is transported on the NSP's own network. When the NSP's private CDN is over-loaded, however, according to the invention the CDN service provider allows the NCDN to overflow onto the global CDN so that end users can still obtain the desired content in an efficient manner. When traffic is overflowed to and delivered over the global CDN, it is said to be "off-net." Thus, according to the present invention, one or more private CDNs share infrastructure of a larger CDN, which manages the private CDNs and makes its potentially global network available to handle off-net overflow traffic.
[0012] In a conventional content delivery network implementation, the CDN service provider may already have its content servers located "on-net," i.e., in the NSP's network, which simplifies the provisioning of the private CDN.13] In a representative embodiment, a content delivery network service provider shares its infrastructure to enable network service providers to offer CDN services to participating content providers. One or more private CDNs are deployed, operated, and managed by the CDNSP on behalf of one or more respective network partners. The CDN service provider is paid to deploy, operate and manage the private CDN on behalf of the NSP. In addition, preferably the CDN service is paid by the network to deliver the network's off-net traffic to peers or to a bandwidth pool.
[0015] FIG. 1 is a diagram of a known content delivery network in which the present invention may be implemented;
[0016] FIG. 2 is a simplified block diagram of a CDN server;
[0017] FIG. 3 is simplified diagram of a streaming media overlay network;
[0018] FIG. 4 is a block diagram of the basic CDN infrastructure services that are made available to the NSP according to the present invention;
[0019] FIG. 5 is a simplified illustration of how a CDNSP provides a managed CDN on behalf of a network service provider (NSP) according to the present invention; and
[0020] FIG. 6 illustrates how NCDNs are implemented using naming schemes that differ from the CDN; and
[0021] FIG. 7 illustrates a representative mapping mechanism to enable NCDNs to overflow onto the global CDN.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0022] As seen in FIG. 1, a known content delivery network is illustrated.

[0023] In one service offering, available commercially from Akamai Technologies, Inc. of Cambridge, Mass., content is tagged for delivery from the CDN using a content migration tool or process. The tag refers to a set of control options and parameters for the object (e.g., coherence information, origin server identity information, load balancing information, customer code, other control codes, etc.), and such information may be provided to the CDN content servers in various ways.

[0024] FIG. 2 illustrates a typical machine configuration for a CDN content edge server.
[0025] As described above, the Internet CDN may also provide live and on-demand streaming media. In one approach, the CDN incorporates a fan-out technique that allows the data comprising a given stream to be sent on multiple redundant paths when required to enable a given edge server to construct a clean copy of the stream when some of the paths are down or lossy. FIG. 3 illustrates the use of a so-called "reflector" transport network that implements such a process. This network is described in U.S. Pat. No. 6,751,673, titled "Streaming media subscription mechanism for a content delivery network."
[0026] As seen in FIG. 3, a broadcast stream 300 is sent to a CDN entry point 302. An entry point, for example, comprises two servers (for redundancy), and each server can handle many streams from multiple content providers. Once the entry point receives the stream, it rebroadcasts copies of the stream to set reflectors 304a-n, which in turn rebroadcast the stream to each subscribing region 306d of a set of regions 306a-n. A subscribing region 306d is a CDN region that contains one or more streaming edge nodes 308a-n interconnected over a network 309, e.g., a local area network (LAN). Typically, an edge node, e.g., node 308d, comprises a streaming server 312 and it may include a cache 310. A region may also include a mechanism 314 to facilitate the subscription mechanism as described in the above-identified application.
[0027] Although not shown in detail, a typical content delivery network includes other infrastructure (servers, control facilities, applications, and the like) in addition to the HTTP and streaming delivery networks described above. Such infrastructure provides for an integrated suite of services and functions as set forth in the CDN "building block" diagram illustrated in FIG. 4. These services and functions are representative, of course, and a given CDN need not include all of them or be implemented according to the configurations described above and illustrated in FIGS. 1-3. For purposes of the present invention, it is assumed that a content delivery network operates a distributed infrastructure on behalf of its participating content provider customers, typically leveraging the networks of various third party network service providers. Thus, for example, it is assumed that the CDN operates servers in various large (e.g., backbone) networks such as those owned and operated by such companies as: UUNet, AT&T, Qwest Communications, Cable & Wireless, Sprint, British Telecom, Deutsche Telecom, NTT, and the like. It is also assumed that a given NSP desires to offer content delivery to its own participating content providers (which, typically, will not be the same content providers that use the services of the global CDN).
[0028] According to the present invention, and as illustrated in FIG. 5, the global CDN 500 shares its infrastructure with one or more NSPs 502a-n to enable each NSP to provide its participating content providers with a so-called "private CDN" 504. As illustrated, a given private CDN (or network content delivery network ("NCDN")) preferably uses the CDN servers provisioned in the network 506. Participating content providers 508a-n migrate their content to the NCDN using tools, techniques or instructions provided or facilitated by the CDNSP, and the CDNSP deploys, operates and manages the NCDN on behalf of the NSP partner. Participating content providers 508 have their content made available from the private CDN (instead of from their origin servers), thus providing an enhanced end user experience and flash crowd protection for participating content provider web sites. Generalizing, the Content Delivery Network for Network Service Providers (NCDN) is a CDNSP-managed service that offers network service providers (NSPs) a turnkey content delivery network (CDN). The CDNSP preferably provides the hardware and software services required to build, deploy and manage a CDN for the NCDN customer. The NCDN customer is provided with their own fully branded CDN that has similar core functional capabilities for HTTP object and streaming delivery as does the CDNSP CDN. In particular, the CDNSP preferably provides to the NCDN customer (i.e., the NSP) the functional CDN capabilities illustrated, for example, in FIG. 4. These functional components are offered as an integrated solution for the NCDN customer to bring its own content delivery services to market.
[0029] As illustrated in FIG. 6, the CDNSP typically operates its set of content servers under a global CDN namespace 600, e.g., cdnsp.net. When an end user browser makes a name query against that domain, the CDN request routing mechanism returns the IP address of a CDN edge server that is not overloaded and that is likely to host the desired content. A representative technique is described in U.S. Pat. No. 6,108,703, which is incorporated herein by reference. The present invention provides private CDNs for Network Service Providers (NSPs), in which they have their own separate namespaces 602 and 604 (e.g., ncdn1.net and ncdn2.net instead of cdnsp.net) and, optionally, their own branding. Preferably, the NSP leases its equipment from the CDNSP, and the NSP provides Tier 1 customer support to its own customers. Although not a requirement, the NSP preferably does not have root access to its CDN machines, but contracts for the CDNSP to provide CDNSP-owned software and to operate their NCDN. The NSP may have a virtual Network Operations Control Center (NOCC) that gives it the ability to monitor their NCDN, but preferably all actual operating decisions are made by the CDNSP.
[0030] The NCDN has many advantages. It enables the NSP to provide its content provider customers with flash-crowd protection, it enables the NSP to diminish traffic within its network, and it provides good performance. An NSP may offer content delivery to its content provider customers at a lower cost than a CDNSP's premium service, as well as other benefits, because many of the NSP's customers may already be hosting their web sites with the NSP. As illustrated in FIG. 6, the NSP may rely on the CDNSP to handle overflow using the CDNSP's global network whenever the NSP's own smaller network cannot handle the load. Preferably, the CDNSP network does not overflow into the NSP's NCDN, and one NCDN will not be allowed to overflow into another NCDN. The CDNSP preferably invoices the NSP on a periodic basis, e.g., a monthly service charge, for operating the NCDN, and that fee may be based on the size of deployment, traffic served, or some other metric. In addition, CDNSP may charge the NSP for overflowing into the CDNSP network according to a fee structure, e.g., a structure that encourages the NSP to deploy more servers rather than to use CDNSP bandwidth.
[0031] As noted above, preferably the NCDN uses shared CDN infrastructure. Thus, auxiliary NCDN services, such as data aggregation, collection, log delivery, content management, customer portal services, and the like, use the CDNSP tools, machines, systems and processes. Some or all of the customer-facing services, however, may be labeled with the NSP's branding, rather than with the CDNSP's. Billing data for the NSP customers is provided by the CDNSP to the NSP in any convenient format, although preferably via an XML-formatted file transfer mechanism. To account for charges due to NCDN overflow to the CDNSP, the CDNSP billing database may include appropriate fields to identify the NSP's customer for the hit and whether the NSP or the CDNSP served the content. In one embodiment, the hostname portion of an embedded object URL (e.g., ncdn1.net, ncdn2.net, cdnsp.net, or the like) is logged by CDN content server to enable these fields to be populated by billing software. In addition, log-based alerts may warn when an NCDN customer has erred in migrating its content to the CDN, which might cause an NSP customer to use the CDNSP network improperly or one NSP's customer to use a different NSP's NCDN network.
[0032] By sharing the CDN infrastructure, the NCDN preferably uses the streaming network and, thus, the NSP also provides live and video-on-demand (VOD) streaming, perhaps in multiple formats such as Real, Windows Media, and QuickTime. The NSPs preferably have at least several (e.g., three) entry points within their network for redundancy. Live streaming events preferably are managed (and charged for) by the CDNSP subject to CDNSP approval. Preferably, overflow is handled by the CDNSP network via a reflector subscription mechanism, such as the mechanism described in U.S. Pat. No. 6,751,673. In an illustrative embodiment, a limit is placed on the number of CDNSP servers on which the NCDN can overflow so that no load underestimation by the NSP can adversely impact the CDNSP's network. To handle VOD storage needs, the NSP may use its own storage network or, alternatively, the CDNSP's storage solution. An illustrative storage solution implemented within a content delivery network infrastructure may be the technique described in U.S. Pat. No. 7,340,505, titled "Content storage and replication in a managed Internet content storage environment."
[0033] As noted above, other CDN infrastructure support is provided to the NSP. Thus, for example, delivery over SSL may be provided using a generic SSL solution based on wildcard certificates and the Key Management Infrastructure. Service Level Agreement (SLA) monitoring for NCDN content delivery service may be implemented using, for example, redundant monitoring agents (a pair of servers in nearby datacenters) that download test images from both the NSP's customer and the NSP's NCDN network. The minimum time for corresponding downloads from the two servers may then be used to determine whether the SLA conditions have been met. The CDNSP preferably monitors both the SLA from the CDNSP to the NSP, as well as from the NSP to its customers. The NSP-customer SLA may be modeled after the existing CDNSP-customer SLA. In one embodiment, the CDNSP-NSP SLA offers a given uptime guarantee, perhaps rebating any fees for the duration of any delivery outage. An NSP's virtual NOCC preferably provides simplified information on the status of each of the NCDN servers. It may have status windows to show machines that need to be fixed, overflow status, and the like. Detailed information preferably is displayed in the CDNSP NOCC. Preferably, the virtual NOCC does not have control of the NCDN software. The NSP's customers preferably access the CDNSP's customer portal (preferably a secure extranet application) for real-time reporting and historical data. Secure connections to the portal are made, for example, via an NCDN-branded domain, either by setting up separate portal machines or allocating additional IP addresses on existing portal machines.
[0034] Several different techniques may be used to "map" requests for NCDN content to the NCDN. In a first approach, referred to as overlay NCDN, the NSP does not have dedicated regions or machines. In this example, the NSP's content providers are simply identifiable with separate mapping rules, so that requests are mapped to a set of machines that serve both NCDN content and other, non-NCDN content.
[0035] A second embodiment is sometimes referred to as an independent NCDN without overflow. In this embodiment, the NSP has a set of dedicated machines organized into regions, with each region being a set of CDN content servers sharing a back-end switch. These regions preferably are only used to deliver content of a specific service for private CDN requests. This type of solution may be over-provisioned to handle NCDN traffic spikes, and any overflow control mechanism in the CDN is modified to prevent overflow between the CDN and the NCDN.
[0036] In a preferred embodiment, however, an independent NCDN has overflow capability. In this embodiment, illustrated in FIG. 7, the NSP also has a separate set of one or more regions 700a-n, each of which includes content servers 704 dedicated to the NCDN. This embodiment, however, also uses an overflow controller 706. In a representative implementation, the overflow controller 706 is a mechanism for overriding the assignment of clients to regions generated by a CDN mapping process 708. In particular, it is assumed that the CDN mapping process has the capability of assigning clients (typically, an end user's local name server) to CDN regions, each of which comprises a set of one or more content servers. The overflow controller 706 modifies these assignments under certain conditions. Thus, for example, when a region is receiving more than a given percentage (e.g., 75%) of the load that it is deemed to be capable of serving, the overflow controller begins to spill traffic to other regions. Preferably, the overflow controller accomplishes this by instructing an individual low-level nameserver that is directing traffic into the region to begin directing traffic to another region. The low-level nameserver then joins a load-balancing group for the new region. If necessary, the overflow controller directs more than one low-level nameserver to direct traffic to another region, and different low-level nameservers may point to different regions. The overflow controller may comprise a group of machines, located on different backbones, each running the overflow controller process. These machines elect a leader, called the lead overflow controller, which is responsible for determining if any low-level nameservers should be redirected. The lead overflow controller determines which low-level nameservers should be redirected, and to where, e.g., by performing a minimum-cost flow calculation. The nodes in the graph are CDN regions.
[0037] Thus, in a preferred embodiment, separate maprules are used to cause overflow within an NCDN to favor the NCDN servers over CDNSP servers. A maprule is a set of rules for a type of content defining which CDNSP region (a subset of CDNSP servers that typically share a common backbone, such as a LAN, in a datacenter) may serve it. A maprule defines which regions should be used for a certain service and defines overflow preferences. In a simpler embodiment as noted above, an NCDN simply serves the NSP's customers out of the CDNSP network. As NCDN regions are added within the NSP's network, they are added to the CDNSP network as ordinary CDNSP regions. NCDN regions would then be prioritized over CDNSP regions using a maprule, but overflow from an NCDN region preferably is not prioritized to within the NCDN's other regions. In the preferred embodiment, as noted above, the NCDN is partitioned from the CDNSP network, and overflow is prioritized to within the NCDN.
[0038] The following describes additional details for an illustrative implementation.
Mapping and Load Balancing
[0039] It is required to determine proper maps for each NCDN and to use these maps to make load-balancing decisions. Because the ability of the NCDN to overflow into the CDNSP network when needed is an advantage of the NCDN product/service, an audit trail is preferably provided to justify any overflow into the CDNSP's network. In one embodiment, the NCDN uses a simple maprule that preferentially serves customers from the NCDN regions but which does not prioritize the NCDN regions during overflow. The CDNSP preferably includes a mapping mechanism that monitors the Internet and maps based on dynamic conditions. The mapping mechanism may use monitoring agents, such as servers that perform network traffic tests including traceroutes, pings, file downloads, and the like, to measure dynamic network conditions. Such data may then be used to assess which CDNSP regions are best, in terms of Quality of Service (QoS) performance, to serve content for each (NsIP, MapRule), where NsIP is the IP address of a requesting client's name server and the MapRule is a set of rules for a type of content defining which region may serve it. These maprules preferably encode the fact that content should be preferentially served from the NCDN regions. Preferably, the mapping mechanism is able to generate candidate regions (best data centers for delivering content) in the NCDN for each client name server (or group of name servers) so that a given scoring routine (based on some Q-o-S criteria) can provide sufficiently many scores within the NCDN for each group of client name servers, and so that, in turn, a region assignment can be made to map end users of NCDN content to an appropriate region. The particular algorithms and techniques for performing the actual region assignment are not part of the present invention. 
As the CDNSP deploys more NCDNs, it should ensure that the set of candidate regions for a specific client name server over all maprules does not grow too large, because otherwise too much network traffic testing may occur and/or become unwieldy.
Content Migration and Content Servers
[0040] Content providers served by an NCDN may utilize one or more content migration tools or other techniques to migrate content provider (CP) content to the NCDN. Preferably, each content provider is assigned a distinct CP-code from all other CP-codes using the same NCDN, another NCDN, or the CDNSP. A table may be used to provide the mapping from CP-code to legal domains on the content provider's origin server. Distribution of the legal domains to the CDNSP content servers may be provided by a metadata transmission system.
[0041] A CDNSP's content servers preferably are used for the NCDN. A conventional content server is a Pentium-based caching application with a large amount of RAM and disk storage.
[0042] For collecting data from CDNSP content servers regarding the content served, preferably the CDNSP uses a log delivery service that logs host headers.
Distributed Data Collection
[0043] The CDNSP's billing of the NCDN may include a preference to prescribe a large bursting rate in the case of an overflow condition, whereby the NCDN (due to network traffic congestion) is permitted to serve content from the CDNSP network. The overflow degree is the difference between the traffic served by the NCDN's content servers and the total traffic served by the CDNSP's network for the particular NCDN's CP codes. Direct measurement of the overflow degree is highly desirable from an auditing perspective (both in reviewing billing and accidental misapplication of the content migration process by the NCDN's customer). Direct measurement may be accomplished by including an additional field in a log database application to collect host header names, which allows the CDNSP to use existing SQL queries to collate this data. Unique IDs may be assigned during the provisioning process of each NCDN. The new fields in the log database may be used, for example, to indicate what NCDN was named in the rebranded portion of the URL and to show out of what network the content was served.
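The overflow degree defined above is simple arithmetic once traffic has been aggregated for an NCDN's CP codes. A hedged illustration follows; the method and field names are invented for this sketch and do not reflect the CDNSP's actual log schema:

```java
// Hypothetical computation of the overflow degree for one NCDN's CP codes:
// total bytes served across the CDNSP's network minus the bytes served by
// the NCDN's own content servers. A positive result is off-net overflow.
public class OverflowDegree {
    static long overflowBytes(long totalBytesForCpCodes, long ncdnServedBytes) {
        return totalBytesForCpCodes - ncdnServedBytes;
    }

    public static void main(String[] args) {
        long total = 1_200_000_000L;  // all traffic logged for the NCDN's CP codes
        long onNet = 1_000_000_000L;  // portion served from the NCDN's own regions
        System.out.println("overflow bytes: " + overflowBytes(total, onNet));
    }
}
```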
Logging and Billing
[0044] Log database changes can be generated as follows: parse additional fields in logs, map modified URL hostname into NCDN code, map content server IP to NCDN code, and automate delivery into a distributed data collection (DDC) function. In addition to logging this data for the host header name, logging may raise alerts when an overflow condition exists.
[0045] For the NCDN's billing of their customers, a reseller approach may be utilized. For example, the CDNSP delivers to the NCDN on a monthly basis an XML file containing summarized traffic information for each of their CP-codes. The NCDN is then responsible for filtering this data and generating branded invoices for their customers.
Streaming
[0046] A reflector network as described in U.S. Pat. No. 6,751,673 may be provided for each NCDN. This network preferably comprises 3 distribution trees (to provide fault tolerance and redundancy), with at least 3 entry points. To support the requirement for the NCDN to be able to overflow onto the CDNSP network, the NCDN reflector network may be rate limited and will be mapped to overflow into no more than a small percentage of the CDNSP network. This restriction minimizes the impact on the CDNSP reflector network that would occur if the NCDN took on more load than both networks could handle.
Storage
[0047] Preferably, the NCDN uses the NSP's existing storage solution. NSPs preferably obtain storage in several ways, e.g.: NSPs that host web sites can provide storage "behind" the customer web site, and NSPs can provide storage servers "in front" of the customer web site to which the customer uploads content. For storage "behind" the web site, the customer web server accesses the storage directly. For storage "in front" of the web site, the CDNSP may require the NSP to use a CDNSP global load balancing service such as described in U.S. Ser. No. 10/113,183, identified above. This mechanism provides IP-based access control for the NCDN storage customers. During provisioning of the NCDN, an appropriate CDNSP DNS entry for the NSP's storage servers is provided. Once this has been done, the NCDN can CNAME its customer names on their nameserver to the CDNSP DNS entry. Preferably, the NCDN is responsible for managing issues with their customers' uploads into the storage sites. This solution provides a level of anonymity to the storage servers and branding of the storage servers with the NSP's domain and customer names without any software modifications.
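The CNAME arrangement described above amounts to a single record on the NCDN's nameserver. A hypothetical zone fragment is shown below; every domain name in it is invented purely for illustration:

```dns
; the NSP aliases its customer-facing storage name to the CDNSP-provided DNS entry
storage.customer.example-nsp.net.   IN  CNAME   ncdn-storage.example-cdnsp.net.
```

Because the customer-facing name stays within the NSP's domain, the storage servers remain anonymous and NSP-branded, as the paragraph above notes.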
Performance Assurance and Agents
[0048] A basic approach to Service Level Agreements (SLAs) will be the use of two types of SLAs: Type 1--the SLA between the NSP and its customers (e.g., one for web content delivery, one for streaming media content delivery); Type 2--the SLA between the CDNSP and the NSP (e.g., one for web content delivery, one for streaming media content delivery).
[0049] Preferably, the CDNSP monitors the NCDN, alerts the NSP of any problems, and assists the NSP in developing a monitoring and alert policy with its customers.
NOCC Monitoring Applications
[0050] The monitoring of the NCDN by the CDNSP NOCC may provide region-level monitoring and region overflow monitoring. For region-level monitoring, the NOCC operating data is monitored, aggregated and used to update relevant NOCC GUI displays. The NOCC may assign the same or different priorities on NCDN and CDNSP alerts. There may also be a general prioritization of the form "CDNSP network is more important than any CDN" (or vice versa). There may be various types of alerts for NCDN operation including, e.g.: NCDN has only N % machines serving; NCDN is overflowing N % of traffic to CDNSP; NCDN not visible to NOCC (which will happen if there is a peering problem with the network), or the like. Of course, the above examples are merely representative. An indication whether the NCDN is overflowing into a CDNSP region may be provided.
NSP Virtual NOCC Applications
[0051] Preferably, the CDNSP limits the amount (and type) of data viewable by the NSP, although this is not a requirement. The data presented to the NSP's virtual NOCC may be the following: an "indicator" showing whether a particular server on the NCDN network is operational or non-operational and, by region, traffic served in megabits per second and hits per second. The definition of operational machines preferably includes suspended machines, and the only machines considered non-operational are those with hardware or connectivity problems.
Reporting
[0052] The CDNSP's realtime and historical data modules may be used. Preferably, logs are aggregated by the NCDN CP-codes and delivered to the NSP. The NSP is then responsible for filtering this data and providing it to the NCDN customers (i.e., participating content providers).
[0053] Thus, according to the present invention, a content delivery network shares its infrastructure by building and managing private content delivery (sub)-networks (an NCDN) for one or more network service providers (or other third parties). Although part of a shared infrastructure, each NCDN preferably comprises at least one private CDN region that is separate from the CDN regions that comprise the potentially global CDN. These private regions can share the same data centers and racks as some of the CDN regions but preferably have separate backend networks and may be tracked in the CDN system separately. This separation makes it possible to prevent off-net overflow from CDN regions, if desired. It might also be possible for off-net traffic to overflow into private CDN regions if the CDNSP chooses.
[0054] Sharing CDN infrastructure provides additional advantages to the CDN service provider.

[0055] The following is a more concrete example of a representative bandwidth pooling exchange as contemplated by the present invention. In this example, the CDNSP offers the following options to its NSP customers that have bought an NCDN. As noted above, an NCDN is a private CDN based on the CDN infrastructure (in whole or in part) but otherwise sharing a common interface with the CDN of the CDNSP. If the NSP wishes to deliver some content from "off-net" servers, i.e., using servers on the global CDN of the CDNSP, the NSP pays the CDNSP an agreed-upon consideration. The rate paid, for example, may be based on the rate for another NSP where the particular servers in question are located. The CDNSP then aggregates groups of NSPs (aggregating across all or some networks where the CDNSP's servers are located) and offers "tranches" of NSP pricing. Thus, for example:
[0056] 1. $350/Mbps for UUNET bandwidth
[0057] 2. $300/Mbps for any NSP in which CDNSP servers are co-located
[0058] 3. $400/Mbps for domestic Tier 1 NSPs
[0059] 4. $500/Mbps for cable company NSPs
[0060] 5. $700/Mbps for European telco NSPs
[0061] 6. etc.
[0062] Of course, the above amounts and classifications are merely representative, as the CDNSP can designate any number of classifications and pricing variables as it desires.
[0063] To facilitate a bandwidth exchange, the CDNSP can offer a particular NSP NCDN customer reduced "off-net" bandwidth prices in exchange for cheaper bandwidth on the network of that NSP (which would then be resold to other NSP NCDN customers of the CDNSP). If the NSP customer offered to others enough "on-net" bandwidth, the NSP could, in theory, buy its "off-net" cost down to zero (or even be paid). Thus, a true CDN bandwidth exchange would be provided, with buyers and sellers of bandwidth, and the CDNSP getting a "spread" between buy and sell rates. The bid-ask prices for bandwidth on each participating NSP preferably are posted by the CDNSP over an online system, e.g., through web-based access through the CDNSP extranet portal.
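The buy/sell mechanics described above reduce to simple netting arithmetic. The sketch below is a hedged illustration only: the rates, the linear netting rule, and the class name are assumptions for demonstration, not the commercial terms the CDNSP would actually negotiate.

```java
// Hypothetical netting of an NSP's off-net bandwidth bill against the credit
// earned by selling cheap on-net bandwidth back to the CDNSP for resale.
public class BandwidthExchange {
    // Monthly charge: off-net usage billed at the ask rate, minus credit for
    // on-net capacity sold at the bid rate. Can go negative ("or even be paid").
    static double monthlyNet(double offNetMbps, double askPerMbps,
                             double onNetMbpsSold, double bidPerMbps) {
        return offNetMbps * askPerMbps - onNetMbpsSold * bidPerMbps;
    }

    public static void main(String[] args) {
        // 100 Mbps off-net at $400/Mbps, offset by 200 Mbps sold at $150/Mbps
        System.out.println("net bill: $" + monthlyNet(100, 400, 200, 150));
    }
}
```

The CDNSP's "spread" is then the gap between the ask rate it charges buyers and the bid rate it pays sellers on the same network.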
[0064] The present invention may be implemented in any content delivery network. As noted above, preferably, the CDN includes mapping and load balancing software that uses dynamic monitoring of the system status (load, connectivity, etc.) and Internet conditions to direct end users (primarily via DNS) to an optimal region and then an optimal server in that region. In general, any request can be sent to any region, and the edge servers dynamically fetch content, run applications, and serve end-user requests. Additionally, the CDN may run several overlay networks to provide efficient and scalable communications between customers' web sites and the large CDN edge network. Some of these overlay networks are dynamically constructed of the edge regions themselves, while others may use specialized non-edge regions (e.g., the streaming fan-out network). A variety of side channels (e.g., the metadata transmission system) may also exist to efficiently deliver configuration and other data to the edge servers as well as to collect system state data for real-time monitoring and other functions. This CDN infrastructure is shared by a number of network service provider (NSP) partners, who each operate a private CDN for their participating content provider customers.
Patent applications by Charles E. Leiserson, Cambridge, MA US
Patent applications by AKAMAI TECHNOLOGIES, INC.
| http://www.faqs.org/patents/app/20120130871 | CC-MAIN-2014-41 | en | refinedweb |
03 February 2009 20:19 [Source: ICIS news]
(Adds David Weidman statements throughout)
HOUSTON (ICIS news)--Celanese chief executive David Weidman on Tuesday characterised his company's fourth quarter as "very chilly" as the global recession seemed to freeze demand everywhere.
"There was huge destocking in that space," Weidman said in an earnings conference call. "It was very chilly in the fourth quarter."
The freeze in the September-to-December period led the Dallas-based company to post a quarterly loss of $159m (€124m), compared with a profit of $214m during the same period last year.
Celanese's full-year profit of $278m represented a 34% decline from $426m earned in 2007.
Weidman noted that historically weak market conditions had brought about a huge change for the company late in the year, coupled with an unprecedented inventory destocking by customers.
The destocking also affected Celanese's big bet on acetic acid production in
In April 2008, Weidman said he expected acetic acid pricing in
Celanese's current pricing for acetic acid produced in
US Gulf acetic acid contract prices have dropped almost 30% since April 2008, currently in a range of $500-600/tonne during the week ended 3 February, according to global chemical intelligence service ICIS pricing.
Weidman said he expected destocking in
"We would expect volumes to remain under pressure in 2009, even with the easing of inventory destocking," Weidman said.
($1 = €0.78) | http://www.icis.com/Articles/2009/02/03/9189852/celanese-chief-calls-fourth-quarter-very-chilly.html | CC-MAIN-2014-41 | en | refinedweb |
11 July 2012 04:02 [Source: ICIS news]
SINGAPORE (ICIS)--
The producer shut down the cyclohexanone unit on 14 May for maintenance, the source added.
The restart had little impact on the spot market as its output is mainly being kept for a downstream 10,000 tonne/year caprolactam unit that will start up in August, the source said.
Cyclohexanone prices were at yuan (CNY) 10,900-11,000/tonne ($1,711-1,727/tonne) EXW (ex-works) north
| http://www.icis.com/Articles/2012/07/11/9577101/chinas-shandong-fangming-runs-cyclohexanone-unit-at-100.html | CC-MAIN-2014-41 | en | refinedweb |
The Java EE 7 Tutorial
17.13 Nonblocking I/O
Web containers in application servers normally use a server thread per client request. To develop scalable web applications, you must ensure that threads associated with client requests are never sitting idle waiting for a blocking operation to complete. Java EE provides nonblocking I/O support for servlets and filters when processing requests in asynchronous mode. To use the nonblocking I/O functionality, follow these steps inside a service method:
1. Put the request in asynchronous mode as described in Asynchronous Processing.
2. Obtain an input stream and/or an output stream from the request and response objects in the service method.
3. Assign a read listener to the input stream and/or a write listener to the output stream.
4. Process the request and the response inside the listener's callback methods.
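The contract behind these steps — read or write only while isReady() returns true, then return and wait for the container's next callback — can be simulated without a container. The sketch below is NOT the Servlet API: ToyOutput and the other names are invented stand-ins, used only to show why a listener must stop as soon as the stream is not ready instead of blocking.

```java
import java.util.*;

// A toy stand-in for a nonblocking output stream: it accepts a limited number
// of writes before isReady() goes false, like a full container write buffer.
class ToyOutput {
    private int readyBudget;
    final List<String> written = new ArrayList<>();
    ToyOutput(int budget) { this.readyBudget = budget; }
    boolean isReady() { return readyBudget > 0; }
    void write(String chunk) { written.add(chunk); readyBudget--; }
    void replenish(int budget) { readyBudget = budget; } // container drained the buffer
}

public class WritePatternSketch {
    // Mirrors a write listener's onWritePossible(): drain the queue only while
    // the stream is ready, then return; the container invokes it again later.
    static void onWritePossible(ToyOutput out, Deque<String> pending) {
        while (out.isReady() && !pending.isEmpty()) {
            out.write(pending.poll());
        }
        // Not ready, or nothing left: return without blocking the thread.
    }

    public static void main(String[] args) {
        Deque<String> pending = new ArrayDeque<>(List.of("a", "b", "c"));
        ToyOutput out = new ToyOutput(2);
        onWritePossible(out, pending);   // writes "a", "b", then must stop
        out.replenish(2);
        onWritePossible(out, pending);   // container "calls back"; writes "c"
        System.out.println(out.written); // [a, b, c]
    }
}
```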
Table 17-4 and Table 17-5 describe the methods available in the servlet input and output streams for nonblocking I/O support. Table 17-6 describes the interfaces for read listeners and write listeners.
17.13.1 Reading a Large HTTP POST Request Using Nonblocking I/O
The code in this section shows how to read a large HTTP POST request inside a servlet by putting the request in asynchronous mode (as described in Asynchronous Processing) and using the nonblocking I/O functionality from Table 17-4 and Table 17-6.
@WebServlet(urlPatterns={"/asyncioservlet"}, asyncSupported=true)
public class AsyncIOServlet extends HttpServlet {
    @Override
    public void doPost(HttpServletRequest request,
                       HttpServletResponse response)
                       throws IOException {
        final AsyncContext acontext = request.startAsync();
        final ServletInputStream input = request.getInputStream();
        input.setReadListener(new ReadListener() {
            byte buffer[] = new byte[4*1024];
            StringBuilder sbuilder = new StringBuilder();
            @Override
            public void onDataAvailable() {
                try {
                    do {
                        int length = input.read(buffer);
                        sbuilder.append(new String(buffer, 0, length));
                    } while (input.isReady());
                } catch (IOException ex) { ... }
            }
            @Override
            public void onAllDataRead() {
                try {
                    acontext.getResponse().getWriter()
                            .write("...the response...");
                } catch (IOException ex) { ... }
                acontext.complete();
            }
            @Override
            public void onError(Throwable t) { ... }
        });
    }
}
This example declares the web servlet with asynchronous support using the @WebServlet annotation parameter asyncSupported=true. The service method first puts the request in asynchronous mode by calling the startAsync() method of the request object, which is required in order to use nonblocking I/O. Then, the service method obtains an input stream associated with the request and assigns a read listener defined as an inner class. The listener reads parts of the request as they become available and then writes some response to the client when it finishes reading the request.
| http://docs.oracle.com/javaee/7/tutorial/doc/servlets013.htm | CC-MAIN-2014-41 | en | refinedweb |
This post is a follow-up on my series about validating business objects throughout different layers of a software system - domain, persistence, and (ASP.NET MVC) GUI. It demonstrates how a self-written validation can be incorporated into a web page (using a bit of JavaScript) and how this can be mapped to a custom validation on the domain side.
In the first part of the above mentioned series, we developed a simple, custom validation aspect that checks if a value really is a member of a specified enumeration. Here's the typical usage for this EnumValue attribute:
[EnumValue(typeof(Gender))]
public Gender Gender
{
get { return this.gender; }
set { this.gender = value; }
}
And this is the validation method that gets called in the end:
if (!Enum.IsDefined(EnumType, @value))
throw new ValidationException(string.Format(
"'{0}' is not a valid value for enumerations of type '{1}'.",
@value,
EnumType.FullName));
In the third part of the series then, I demonstrated how server-side validations (using the PostSharp-based ValidationAspects framework) can be "translated" to client-side validations that reside in ASP.NET MVC pages, using the xVal framework to do the mapping.
The aforementioned post showed only the mapping for the built-in aspects of the VA library. To do the same thing for our custom EnumValue attribute, we must declare another xVal RulesProvider, which is specific to this attribute, and provides a set of xVal rules that correspond to the EnumValue attribute. The xVal framework provides the PropertyAttributeRuleProviderBase<TAttribute> base class for this purpose, all we have to do is to provide an implementation of the MakeValidationRulesFromAttribute enumeration method for the specific attribute. Here's the declaration for the EnumValue validation aspect:
public class EnumValueRulesProvider : PropertyAttributeRuleProviderBase<EnumValueAttribute>
{
    protected override IEnumerable<Rule> MakeValidationRulesFromAttribute(EnumValueAttribute attribute)
    {
yield return new RequiredRule(); // field must not be empty
const string jsMethodName = "isOneOf";
var parameters = new { allowedValues = Enum.GetNames(attribute.EnumType),
secretValue = "EASTER_EGG" };
string errorMessage = string.Format("Not a valid value for the '{0}' enumeration.",
attribute.EnumType.Name);
yield return new CustomRule(jsMethodName, parameters, errorMessage); // checks actual value against enum values
}
} // class EnumValueRulesProvider
As you can see, the EnumValue validation aspect is represented by two xVal rules on the client side: The first one (xVal's RequiredRule) makes sure that there actually is some value provided on the web form, whereas the second one is a bit more interesting: An instance of the CustomRule class is used to reference a self-written JScript function ("isOneOf()") along with the required parameters ("allowedValues" and "secretValue") and the error message that should appear in case of validation failures. The function arguments must be provided in the form of an anonymous type, with the individual items as its members.
And of course, our new RulesProvider must be registered with the application on startup. Hence, the following line must be added to the Application_Start() event handler in Global.asax.cs:
xVal.ActiveRuleProviders.Providers.Add(new EnumValueRulesProvider());
Now on to the client side. A JScript function, that is referenced in an xVal CustomRule class, must take three parameters:
The function should return true or false, depending on the result of the validation operation. The above referenced isOneOf() function could look like this:
// Return true if 'value' matches one of the values in 'params.allowedValues' (case insensitive).
// Used in a custom xVal rule to map the 'EnumValue' VA aspect.
//
// Parameters
// ----------
// params.allowedValues
// Comma-separated list of possible values for the 'value' param
// params.secretValue
// A 'secret' string (for demo purposes)
function isOneOf(value, element, params) {
    if (params.secretValue == value) {
        alert("Congratulations!\nYou have found the secret value (but it won't validate...).");
    }
    var possibleValues = params.allowedValues
        .toString()
        .toLowerCase()
        .split(",");
    var valueToCheck = value.toString().toLowerCase();
    for (var i = 0; i < possibleValues.length; i++) {
        if (possibleValues[i] == valueToCheck) {
            return true;
        }
    }
    return false;
}
That's it! Quite simple and straightforward. As a result, we will have a client-side validation similar to this:
...or this, respectively:
I could think of many possible use cases for this technique of mapping C# validations (server-side) to JScript functions (client-side)... | http://geekswithblogs.net/thomasweller/archive/2009/11/26/including-custom-client-side-validations-in-your-asp.net-mvc-app.aspx | CC-MAIN-2014-41 | en | refinedweb |
Your Account
by Rick Jelliffe
And it is very WS-* centric.
More details here:
Having co-authored one XML configuration language already, and written an implementation plus test cases (The GGF CDDLM CDL language), I'm not ovewhelmed with SML. If we are staying in the XML space, we could do better with
-a simple schema that does not need to be extended by every configuration author. That is, if all you are doing are nested name/value pairs with simple types, then you can use elements with fixed names, values in nested elements, type defined as attributes which can be overwritten on demand.
-a way of adding constraints to descriptions (schematron may fit here)
-A way of validating your model with meaningful messages
Good OSS implementations, with a public test suite.
I haven't yet proposed my CDL2 language yet, but I am thinking of something very simple, very RESTy. The alternative would be to bite the bullet and go for RDF in N3 notation, which does open the tooling up quite widely.
The actual details of SML are at
"The Service Modeling Language (SML) provides a rich set of constructs for creating models of complex IT services and systems. These models typically include information about configuration, deployment, monitoring, policy, health, capacity planning, target operating range, service level agreements, and so on. "
:
1. Schemas - these are constraints on the structure and content of the documents in a model. SML uses a profile of XML Schema 1.0 [2,3] as the schema language. SML also defines a set of extensions to XML Schema to support inter-document references.
2. Rules - are Boolean expressions that constrain the structure and content of documents in a model. SML uses a profile of Schematron [4,5,6] and XPath 1.0 [9] for rules."
I haven't looked too closely at this, but I see no signs that it is at all "WS-* centric." There is no reference to SOAP or WSDL, and the only reference to any WS-* spec says "An SML reference is a link from one element to another. It can be represented by using a variety of schemes, such as Uniform Resource Identifiers (URIs) [7] and Endpoint References (EPRs) [8]." (reference 8 is to the WS-Addressing spec).
"SML supports a conforming profile of Schematron. All elements and attributes are supported." XPath 1 extended by sml namespace elements is used. The little Schematron schemas can appear in complex type declarations and global element declarations.
SML is quite interesting too in that they define their own key/uniqueness constraint language: an extended version of the XSD one but allowing cross-document checks. At ISO, DSDL has a part for path-based integrity checks, which has been dormant because we did not see anyone in industry demanding or prototyping anything: I wonder if we at DSDL should cooperate with the SML people and adopt their extended keyref language for DSDL Part ?6?. I had been thinking of adopting a StaX (that nice streaming profile of XML) language, but I didn't get any response from the instifators of StaX.
© 2014, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://archive.oreilly.com/pub/post/service_modeling_languagel_wit.html | CC-MAIN-2014-41 | en | refinedweb |
NAME
insque, remque - insert/remove an item from a queue
SYNOPSIS
#include <stdlib.h>

void insque(struct qelem *elem, struct qelem *prev);
void remque(struct qelem *elem);
DESCRIPTION
insque() and remque() are functions for manipulating queues made from doubly-linked lists. Each element in this list is of type struct qelem. The qelem structure is defined as

struct qelem {
    struct qelem *q_forw;
    struct qelem *q_back;
    char          q_data[1];
};

insque() inserts the element pointed to by elem immediately after the element pointed to by prev, which must NOT be NULL. remque() removes the element pointed to by elem from the doubly-linked list.
CONFORMING TO
SVR4
BUGS
The q_data field is sometimes defined to be type char *, and under Solaris 2.x, it doesn't appear to exist at all. The location of the prototypes for these functions differs among several versions of UNIX. Some systems place them in <search.h>, others in <string.h>. Linux places them in <stdlib.h> since that seems to make the most sense. Some versions of UNIX (like HP-UX 10.x) do not define a struct qelem but rather have the arguments to insque() and remque() be of type void *. | http://manpages.ubuntu.com/manpages/maverick/pt/man3/insque.3.html | CC-MAIN-2014-41 | en | refinedweb |
Java's single inheritance limitation is usually not a problem in the normal course of development. In fact, the need to use multiple inheritance could be a sign of a bad design. There are times, however, when developers wish they could extend more than one class. Although Java prevents a developer from extending more than one class, a simple trick can be used to simulate multiple inheritance.
I have used this technique in both a Swing application and a Web-based application. The Swing application packaged and deployed services to application servers. In that case, I needed multiple inheritance because I was adding the ability to drag and drop objects between the different components of the GUI and wanted all the GUI components to share the same drag-and-drop methods. Thus, all the GUI components needed to extend two classes: the GUI component itself (a JTree or JList) and the common drag-and-drop class. The technique described in this article simplified the drag-and-drop implementation.
To illustrate my technique for simulating multiple inheritance in Java, let's examine how to use it in a Web-based application, where the Servlet class, along with another class, needs to be extended. The application being developed is a text-based message system used to send text messages to a mobile phone from another mobile phone, the Web, a PDA, or some other device with access to the Web or the phone network.

The heart of the system is a message server that receives a message from a client and forwards it to the appropriate phone. To make client development easier, I developed a MessageClient class to contain all the common methods needed to communicate with the message server. The class facilitates client development because it can be used as the base class for all the possible clients.

The MessageClient class, shown in Listing 1, contains three methods. The sendMessage() method sends the actual message to the server. connectToServer() connects to the message server, which, in this example, is an RMI (remote method invocation) server. The last method is getServerName(), which is an abstract method because each device that uses this class has a different way of determining the name of the message server. This means that all the clients that extend MessageClient must implement the getServerName() method.
Listing 1. MessageClient
import java.rmi.Naming;

public abstract class MessageClient {
    private MessageServer messageServer;

    public MessageClient() {
        System.out.println("Initializing Message Client");
    }

    /**
     * Method used to connect to the message server
     */
    protected void connectToServer() {
        String serverName = getServerName();
        try {
            String name = "//" + serverName + "/MessageServer";
            messageServer = ((MessageServer) Naming.lookup(name));
        } catch(Exception e) {
            System.out.println("Error connecting to Message Server. Exception is " + e);
            e.printStackTrace();
        }
    }

    /**
     * Method used to send message to server
     *
     * @param phoneNum phone number to send message to
     * @param message message to send
     */
    public boolean sendMessage(String phoneNum, String message) {
        try {
            return(messageServer.sendMessage(phoneNum, message));
        } catch(Exception e) {
            System.out.println("Error Sending Message. Exception is " + e);
            e.printStackTrace();
            return(false);
        }
    }

    public abstract String getServerName();
}
We need multiple inheritance when developing the Web client that talks to the message server. Our Web client is a simple servlet used to get the message from a form and send it to the message server. To complete that task, the servlet must be both an HttpServlet and a MessageClient. Since Java does not allow such behavior, the main class extends the HttpServlet class, as shown in Listing 2. This main class contains an inner class that extends MessageClient. The outer class then creates an instance of the inner class.
Listing 2. SendMessageServlet
import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SendMessageServlet extends HttpServlet {

    private MessageClient m_messageClient;
    private String m_serverName;

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try {
            // Get server name
            m_serverName = request.getServerName();
            System.out.println("ServerName is " + m_serverName);

            // Create message client to communicate with message server
            m_messageClient = new ServletMessageClient();
            System.out.println("Created Message Client");
            m_messageClient.connectToServer();

            // Get message and phone number
            String phoneNum = request.getParameter("PhoneNum");
            String message = request.getParameter("Message");

            // Send message
            m_messageClient.sendMessage(phoneNum, message);

            // Display page to tell user message was sent
            response.setContentType("text/html");
            RequestDispatcher dispatcher =
                getServletContext().getRequestDispatcher("/SendMessageForm.jsp");
            dispatcher.include(request, response);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        doGet(request, response);
    }

    /** Inner class used to extend MessageClient */
    public class ServletMessageClient extends MessageClient {

        public ServletMessageClient() {
            super();
        }

        public String getServerName() {
            System.out.println("Returning ServerName " + m_serverName);
            return m_serverName;
        }
    }
}
This approach isn't true multiple inheritance because we used delegation (i.e., MessageClient is extended by a member of the outer class and not by the outer class itself), but the effect is the same. Although MessageClient could have been extended in a separate class, using an inner class allows it to access all the members and methods of the outer class, which makes it easier for the two classes to interact.
This example only extends two classes, but there is no reason why this technique couldn't be used to extend as many classes as needed by creating an inner class for each class that must be inherited.
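The idea generalizes as sketched below: one inner class per base class to be "inherited," each delegating back to the outer class's state. The base classes and all names here are illustrative stand-ins (think of BaseA and BaseB as HttpServlet and MessageClient), not code from the article.

```java
// Hypothetical abstract bases, stand-ins for the two classes to "inherit."
abstract class BaseA {
    abstract String nameA();
    String greetA() { return "A:" + nameA(); }
}

abstract class BaseB {
    abstract String nameB();
    String greetB() { return "B:" + nameB(); }
}

class MultiExtend {
    private final String id = "outer";

    // One inner class per base; each can read the outer class's members.
    class InnerA extends BaseA {
        String nameA() { return id; }
    }

    class InnerB extends BaseB {
        String nameB() { return id; }
    }

    private final BaseA a = new InnerA();
    private final BaseB b = new InnerB();

    // The outer class exposes the inherited behavior by delegation.
    String greetA() { return a.greetA(); }
    String greetB() { return b.greetB(); }
}
```

Because the inner classes share the outer instance's state (the id field), the outer class behaves as if it had inherited both bases.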
Note that the message server example could have been completed without multiple inheritance. If the MessageClient class had a constructor that accepted the server name, the getServerName() method would not need to be abstract, which means the Web client would not have to extend the MessageClient class; it could have used the class directly. Developers should be cautious and use multiple inheritance only when a clear reason warrants it, because it complicates design and is easily misused.
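A minimal sketch of that alternative is shown below. The class name DirectMessageClient is hypothetical, and the RMI lookup and MessageServer delegation are stubbed out so the sketch is self-contained; the point is only that with the server name supplied to the constructor, no subclass (and no inner class) is needed.

```java
// Sketch: pass the server name in the constructor so getServerName()
// need not be abstract and clients can use the class directly.
class DirectMessageClient {
    private final String serverName;

    public DirectMessageClient(String serverName) {
        // In the servlet, the caller would pass request.getServerName().
        this.serverName = serverName;
    }

    public String getServerName() {
        return serverName;
    }

    public boolean sendMessage(String phoneNum, String message) {
        // The real class would look up "//" + serverName + "/MessageServer"
        // via RMI and delegate to it; here we only validate the arguments.
        return phoneNum != null && message != null;
    }
}
```

A servlet could then simply construct the client in doGet() — new DirectMessageClient(request.getServerName()) — with no ServletMessageClient inner class at all.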
Learn more about this topic
- For more articles on multiple inheritance, browse through the following JavaWorld articles
- "Java Diamonds Are Forever," Tony Sintes (March 2001)
- "Object-Oriented Language Basics, Part 6," Jeff Friesen (September 2001)
- Browse the Core Java section of JavaWorld's Topical Index
The History And Background Of Marriott Hotels Marketing Essay
In 1927, J. Willard Marriott and his wife Alice opened a root beer stand in Washington, D.C.; the business they founded has grown into one of the largest and most powerful brands in the hospitality industry. Today, Marriott operates nearly 3,800 hotels and restaurants in the United States and in 73 other countries.
Task 1- Situation Analysis
Marriott Group of Hotels within Travel and Tourism Industry
The Marriott group of hotels is a member of the International Tourism Partnership, an organization whose membership is drawn from companies across the travel and tourism industry. The aim of the International Tourism Partnership is to provide the skills and knowledge needed to develop practical solutions for responsible business. Marriott is a founding member of the partnership and contributed to the development and sponsorship of its sustainable hotel siting and design guide. Marriott also supports and endorses the World Travel & Tourism Council. The hospitality industry underpins travel and tourism by providing tourists with the services that make their trips possible and enjoyable (Hutt & Speh 2012, 34).
Marriott hotels serve their members and customers by providing quality services at important tourist sites and destinations. As a subsector of travel and tourism, Marriott provides accommodation, food services, recreation and attractions, and the group promotes the industry most effectively where a country's economy is growing and tourists arrive in large numbers year after year. Marriott therefore plays a major part in promoting the travel and tourism industry through quality food, accommodation, recreational activities, attractions and tourist guidance, and its hotels are located worldwide in places with strong tourist appeal and high annual visitor numbers.
1.2 Main stakeholders and their interests
Stakeholders of the Marriott group of hotels play an important role in driving its success and shaping its sustainability strategy. The main stakeholders are associates, customers, communities, shareholders, associations, the supply chain, owners and franchisees, nongovernmental organizations and government. Associates are interested in programs such as Cultural Appreciation Day, Marriott Jobs & Careers, the TakeCare wellness program, Living the Gold Standards, Associate Appreciation Week and related activities. Communities are interested in community engagement programs, fundraising, volunteering, disaster relief and in-kind donations. Shareholders' interests comprise analyst meetings, the annual report, conference calls, carbon disclosure, sustainability reporting and annual shareholder meetings. Associations' interests include workshops, research, board memberships, working groups, partnerships, advisors and lobbying (Butcher 2009, 1).
Supply chain interests include supply chain screening, local supplier capacity building, strategic partnerships, supplier diversity programs, engagement workshops and sustainable procurement programs. Owners and franchisees are interested in sustainable hotel development and economic development. Nongovernmental organizations focus on working groups, strategic partnerships on global issues, executive committees and advocacy for Reduced Emissions from Deforestation and Degradation (REDD) projects. Government interests cover regulatory filings, briefings, pilot projects, research, lobbying, meetings and advocacy. The main areas where stakeholders collaborate are improving associate health with TakeCare, greening the furniture, fixtures and equipment global supply chain, engaging associates through 'Living Our Core Values', addressing global recruitment through the social media game 'My Marriott Hotel', collaborating with the industry to address human trafficking, responding to customers with an industry standard for carbon measurement, advocating for a new golden age of travel, and sourcing sustainable seafood through 'FutureFish' (Brier 2012, 1).
1.3 PESTLE analysis
1.3.1 Political factors - Marriott Hotels does not discriminate among employees at the time of hiring on the basis of gender, orientation, ethnicity or religion. The governments of the countries in which Marriott operates support the tourism industry. The company pays particular attention to the physical and psychological aspects of working conditions. Key political considerations include making travel easy for customers, the stability of the political environment, and the taxes paid in return (Kotler 2009, 43).
1.3.2 Economic factors - Marriott needs to project an image of affordability to its customers, because economic crises are now common. Marriott's image and goodwill are strong, so when the economy recovers the brand can rebound as a luxury hotelier. The cost of employment is high, as is taxation in several countries. Other economic considerations include interest rates on loans, employee benefits, GDP and inflation (Naresh 2012, 34).
1.3.3 Socio-cultural factors - Marriott Hotels needs to focus on eco-friendly activities, recycling and green issues, and to avoid waste. Another socio-cultural factor is the growing elderly population in many countries, a group that likes to spend on hotels, leisure and travel.
1.3.4 Technological factors - The facilities and services Marriott provides must be luxurious and up to date with the latest technologies, including bill payment via mobile phone, functional and easy-to-use websites, and free, fast, accessible Wi-Fi.
1.3.5 Legal factors - Because Marriott Hotels operates in many countries, developed and developing, it must comply with all the laws of each country in which it is located.
1.3.6 Environmental factors - Marriott tries to contribute to the security and environmental sustainability of the countries in which it operates (Brier 2012, 1).
1.4 Marketing Issues
Companies with a highly localized zone of influence face a tough competitive situation at the marketing level. Competitive advantages are harder to win and shorter-lived: the market is so transparent that monitoring the competition is easy, good ideas are cloned quickly, and products and services increasingly resemble those of competitors.
If products and services are equivalent to those of the competition, how can a company get chosen? By being closer to its customers, that is, with more outlets, a broader sales network and greater visibility. But opening outlets is increasingly expensive, and maintaining a wide trade network and high market visibility requires significant investment in communication and image. Marketing costs have therefore skyrocketed (Naresh 2012, 34).
As marketing costs grow, the amounts involved must be monitored and analyzed from an investment point of view; that is, the return on investment (ROI) must be measured, and that is genuinely complicated in traditional media. How many sales did a specific billboard ad generate? How much did last month's newspaper advertisement contribute? Which was more effective, the notice published on Wednesday or on Thursday, and which newspaper performed better? These questions go unanswered, and in times of crisis, when it is unclear whether an investment will be profitable, the usual decision is not to invest at all, so that those resources are not lost.
What does this problem bring? Not only the difficulty of designing multiple marketing strategies, one for each segment, but also that of finding marketing media that reach an audience as concrete and specific as each segment. A billboard ad is seen by everyone who passes it, but not all of them are of interest. The result is higher costs, lower prices, and shrinking margins.
Companies acting largely at the local level already know the benefits of promotional and advertising campaigns, but very few actually design a comprehensive and coordinated local marketing program that creates true synergies between actions; the companies that do gain a lasting competitive advantage.
Only through coordinated efforts between supplier and retailer can sufficient resources and capabilities be generated for powerful local marketing actions. For example, the value-added reseller must rely on the operators to benefit from their brand image and reputation, and these in turn must trust the outlets to adapt their actions to the micro-zone.
Actions such as mailings, telemarketing, e-mail marketing and purely promotional campaigns have the great advantage of allowing very precise measurement of the return on investment and therefore of the appropriateness of their use. Local marketing, by putting the development and adaptation of business strategy in the hands of the retail outlets, benefits from an important qualitative understanding of the micro-zone of influence and can concentrate investment where it pays off. The billboard will still be watched by everyone passing by, but the people who matter live, work or walk near where the products are sold, and local marketing has tools, such as direct mail, to put names to the communication; combined with a closer understanding of the customer, this becomes the key to success (Reid & Bojanic, 2009, 1). In particular, these factors allow a company to:
Differentiate itself from its nearest competitors.
Create synergies in the distribution channel.
Maximize its marketing investment.
Gain more visibility among customers and prospects.
Task 2- Marketing Report
2.0 Current Marketing Strategy
Marriott Hotels focuses on a social media marketing strategy; Twitter is used heavily to resolve and share customer issues. Marriott hotels in each country adapt their marketing to that country's culture and customs, although historically Marriott took a "one size fits all" approach to marketing operations wherever it was located, with the major activities consisting of marketing, advertising and branding at an international level. With growing competition and changing customer demands, Marriott adopted a customization strategy. Fulfilling diverse marketing needs required support for Marriott's properties and brands, so Marriott differentiated its marketing approach by building customized materials. For its properties on a global scale, it developed an automated marketing portal, in partnership with Pica9 and Excella Consulting, to make customization of marketing materials quick, cost-effective and easy (Taylor 2010, 1).
The portal also allows managers to view brand strategies, download marketing templates, share materials and create customized marketing collateral. By adopting the latest technologies, Marriott has been able to update its marketing strategies, offering different branding strategies for different target audiences. These applications let hotel management create positive customized pieces, supported in different languages. Marriott is also pursuing social media strategies for unique branding and positioning, aimed at travelers researching destinations, lodging options and travel plans. Brand Works, another application used by Marriott, has improved information sharing and collaboration among corporate teams and global locations, and allows Marriott's marketers to activate each brand according to location and customer (Naresh 2012, 34).
2.1 Segmentation, Targeting and Position strategies (STP)
2.1.1 Segmentation
Segmentation refers to the division of customers according to their varying needs, demands, purchasing power and expectations. Marriott provides its customers with hosting services, and its strategies must cover all segments while remaining flexible enough to change with the demands of different consumers. The first segment is the group of customers who want a comfortable stay at a reasonable price. The second segment comprises very wealthy customers who demand exclusive suites, private areas, beautiful views and exquisite food. The third segment comprises consumers from local communities, usually inhabitants of the hotel's vicinity, who do not need accommodation but use the hotel's other services. The fourth segment consists of customers who need customized services and have their own particular requirements and desires (Kotler 2009, 43).
2.1.2 Targeting
To target market segments, Marriott followed a set of criteria through which the target markets were identified: each segment had to be measurable, accessible, substantial, actionable and identifiable. Marriott's target markets are defined by geographic location, personal characteristics and demographic characteristics. Marriott's potential and current customers fall into two major groups. The first is the local community, which requires services for weddings, baptisms, business meetings and similar events. The second, identified by geography, comprises travelers and tourists, both national and international, who mainly require accommodation. By personal features, customers are divided by age, lifestyle and income (Hutt & Speh 2012, 34).
2.1.3 Positioning
Once target consumers are identified, services are positioned according to the features and characteristics of each segment. For business meetings, professionals and business people are targeted, so services must be positioned to facilitate such meetings. For tourists and travelers, Marriott's services must be positioned according to the demands of the location and its sites, with hotels located near tourist attractions.
2.2 Main products and services
Product                        | Service
Groups                         | Lodging service
Resorts                        | Facilitating service
Hotels                         | Check-in service
Hospitality Management Company |
Boutique Hotels                |
The main products and services offered by the Marriott group of hotels are hotel groups, resorts, hotels, a hospitality management company and boutique hotels. Accommodation, food and beverage (F&B) and leisure services are Marriott's major products. The main brands offered are full-service lodging, select-service lodging, extended-stay lodging, timeshare and the Great America theme parks.
For the Marriott group of hotels, the main product takes the form of a service. The different services the company offers, tangible or intangible, constitute its product. The core service is the lodging element, along with the room, valet, reception, restaurant and auxiliary services.
Marriott's core service, through which it stays in the market, is lodging. For customers to use the core service, additional services are also required. Facilitating services, such as check-in, are those customers need in order to consume the core product easily. Supporting services, unlike facilitating services, do not enable the core service; instead they increase its value and differentiate it from competitors' offerings. Facilitating services are mandatory and hold more importance than supporting services (Capon & Hulbert 2007, 37).
The augmented service is built on three basic characteristics: consumer participation, accessibility of the service, and interaction with the service organization. This augmentation has to be combined with the core product.
2.3 Pricing strategies
Marriott's pricing strategy is to sell the right product to the right consumer at the right time at the right price. It includes a few guiding principles for hotel management and its properties: proper benchmark rate positioning, rational pricing, single-image inventory, appropriate discount rates and routine performance tracking. Prices are set so that management earns revenue.
Marriott considers its full potential when deciding final prices for its customers. Prices are set to maximize revenue while maintaining the brand image. The pricing strategy governs purchase and sales orders, and prices are mainly based on fixed amounts, promotional or sales campaigns, quantity breaks, the price prevailing on entry, specific vendor quotes, combinations of multiple orders or lines, and shipment or invoice dates. Management prevents pricing errors by automating price setup and maintenance. Marriott determines pricing policy by estimating the demand curve, the quantities of products and services likely to sell at each possible price, and the costs incurred at different levels of output. Marriott can cut or raise its standard prices depending on the situation; the pricing strategy is flexible enough to respond to competitors' intent and to change (Butcher 2009, 1).
2.4 Promotion mix
With advances in technology, Marriott promotes its brand through the internet. It has developed its own site with an extensive, detailed and clear view of the company's products and services; the website lets customers book online and retrieve related information. Marriott's focus and priority is retaining existing customers, who in turn pave the way for its promotional campaigns. Promotion is mostly targeted at mass communication through trade publications, print ads and the internet, while direct mail campaigns provide a cost-effective means of reaching current and potential clients. This helps the company build long-term relationships with its local community and generate a high level of corporate activity (marriott.com).
Within the promotion mix, public relations activities play an important role. The promotion mix comprises advertising, public relations and direct marketing. Marriott's promotion is targeted at customers mostly through the internet, advertisements and branding. Brand Works helps Marriott publish its offers and deals to individual property sites and to corporate and brand sites, enabling the company to decide which tools to activate for each brand and audience and how to deliver a better customer experience with assured consistency, value and marketing data. Advertisements serve attraction and reminder purposes. Globally, the promotion mix is used to communicate with customers, raise awareness of Marriott's service offerings, and attract new customers (Kotler 2009, 43).
2.5 Distribution strategy
Marriott distributes through electronic channels connected to its central reservation system, which supports hotel operations and is linked to Direct Connect for external channels. Enhancements to this inventory distribution infrastructure have driven growth in current and new markets, increasing Marriott's speed to market and lowering overall costs. Marriott must choose its channel mix between direct channels and third-party channels; each option carries a different set of benefits and costs.
Marriott needs to choose the distribution channels that favor its revenue streams and pave the way for sustainable growth. It is vital to offer customers the right price while keeping operations profitable, which means identifying the correct pricing approach, monitoring production and benchmarking results. Marriott's distribution channels are both direct and indirect. Indirect business is conducted through online third parties, wholesalers and travel agencies, while direct channels must remain strong and competitive. Third-party channels contribute to overall revenue and serve both leisure and business customers; overall, the distribution channel plays a very important role in profitability and business mix.
2.6 Extended Marketing Mix
Figure 1 – Marriott extended marketing mix: people, process and physical evidence (field work)
The extended marketing mix, comprising process, physical evidence and people, builds on product, price and promotion and aims to serve the final customer. The extended mix is typically most suitable in the services sector; Marriott therefore focuses on all 7 Ps, which helps it plan its sales and marketing mix in more detail (Butcher 2009, 1).
Process defines how and where customers reach the hotel and what added value Marriott provides them. It shows management how customer relationships can be developed and an interactive experience provided. Marriott's process is facilitated by e-commerce: the company continuously interacts and transacts with its customers, creating value in promotion and delivery while educating and supporting customers about Marriott on an ongoing basis.
Physical evidence shapes how potential customers see Marriott: its image, goodwill and the perceptions customers hold of it. Physical evidence includes presentation, collateral support, the physical environment and packaging. Through it, Marriott's intangible services can be presented to final customers as a good image and a strong brand.
People include Marriott's management and customers, as well as employees, the press, the general public, partners, shareholders and analysts. Marriott's stakeholders are the people in the marketing mix, and through them customers have a direct experience of the hotel (Hutt & Speh 2012, 34).
Task 3- Market Research
The marketing research process is a systematic enquiry into Marriott's market. It generally follows a standardized pattern that can be adapted to the company's requirements. Managers need to be comfortable making decisions in a changing environment and to align functional strategies and operating decisions systematically with senior strategy.
Step 1: Identification and definition of the problem
The first step is for Marriott to identify and define the problem. The problem statement cannot be formulated without marketing research, and it must be set out at the start of the process because solving an ill-defined problem is costly. The techniques to be used must be determined and information collected; at this stage information can come from a preliminary investigation, including pilot studies, experience and secondary data. The main problem Marriott's management has identified is how to increase the number of customers while catering to the changing needs of diverse customers worldwide, which challenges management to make its services flexible enough to satisfy those demands. Because marketing is tied to a changing environment that continually presents new challenges, both the tasks the marketing function performs and the importance attached to each of them differ over time, in a process of continuous adaptation (Brier 2012, 1).
Since the problems companies face evolve over time, the answers they offer must adapt continuously in the search for new solutions. The economic crisis has depressed sales figures and removed the possibility of profitably gaining market share at competitors' expense, because the cost of such operations is high.
Step 2: statement of research objectives
Objectives must be formally stated for Marriott so that the identified problem can be removed or corrected. Objectives can be written as hypotheses, statements or research questions, and can be qualitative or quantitative. This clearly indicates to Marriott's management and the researcher which objectives must be fulfilled for the problems identified in the previous stage. The main research objectives for Marriott will be evaluating customers' requirements and demands, how those demands change with time and technology, which new services customers require or which old services need improvement, and how to fulfill customers' demands and requirements.
Step3: Designing the Research Study
This step develops the research design: a plan that specifies the procedures for collecting and analyzing the required information. It sets the basic framework for the research plan of action, and data must be collected in full accordance with the stated objectives. The researcher must decide which sources of information are required, the method of data collection, the sampling methodology, and the timing and cost of the research. Marriott's study should rely primarily on primary research.
Step 4: Planning the Sample
Sampling involves choosing procedures and a sample size drawn from the population. In this step the researcher must identify the target population to be sampled, the actual sample size, and the sampling units to be selected. The sample should include only the special and regular customers who often use Marriott's services (Butcher 2009, 1).
Step 5: Data Collection
To derive a solution for the population, facts and data must be gathered, so the marketing research methods for data collection must be determined. Primary data will be gathered empirically through tools such as surveys, questionnaires and interviews. Secondary data will be collected from government publications, books, articles and company publications. Sources can be internal, where the company's own data is used, such as Marriott's accounting data or salesmen's reports, or external to the company. Marriott's data should be collected through focus groups and surveys, using methods that can query a large number of customers in a short time and cost-effectively.
Step 6: Data Processing and Analysis
The collected data must be turned into a form in which it can be processed and analysed. This helps the researcher and those commissioning the research to identify and define the problem. Data processing starts with editing and coding. Inspecting the collected data is known as editing; it is done for consistency of classification, legibility and omissions. The responses collected have to be converted into tabular or graphical form so that they can be shaped into meaningful categories. Codes refer to the data storage media in which the data is categorised, recorded and transferred. Coding facilitates both computer and manual tabulation. Data processing must be done according to the rules and regulations of Marriott hotels (Capon & Hulbert 2007, 37).
Step 7: Formulating Conclusions and Preparing the Report
The final step of marketing research is to interpret the information and draw conclusions for managerial decision making. The research findings must be communicated clearly and effectively, without over-complicated statements about research methods and technical aspects. The research design and statistical analysis are less important to Marriott management than the concrete findings needed to solve the problem. The researcher's presentation must be useful, understandable and technically accurate. The final data will be presented by the researcher to Marriott hotel management with the help of the latest technology and the latest data evaluation techniques.
It is important for Marriott hotels to identify internal and secondary data. Focus groups and surveys were used for travellers who stayed extended nights in the hotel. There is huge potential for an increase in customers whose needs may change, and Marriott needs to improve its position to capture those customers and their business. Marriott management has used marketing research successfully to develop a segmentation strategy that attracts a variety of customers by providing various options, services and products. The diverse offerings provided by Marriott have appealed to a wide number of customers and have won Marriott great business. For future marketing success, Marriott depends heavily on continued reliance on marketing research (Hutt & Speh 2012, 34).
Task 4: Sustainability and Corporate Social Responsibility
The corporate social responsibility of Marriott hotels covers three main sectors: business values, the environment and society. Business values include the travel and tourism industry, which is very important for global economic development. Marriott must grow its business to increase business value for customers through partnerships and amalgamation. Marriott hotel management and its franchise operations establish fundamental business values as the basic framework for ethical behaviour and integrity. Marriott's environmental vision of corporate responsibility within the hospitality industry is to create positive economic opportunities worldwide. On the societal side, Marriott is committed to blending corporate financial contributions with volunteer services around the world. Marriott hotels support career opportunities, children's health, and the provision of food and shelter, as well as education within the hospitality industry (Kotler 2009, 43).
Marriott hotels work with stakeholders across departments and executive teams to fulfil management responsibilities. Stakeholder groups must be communicated with continuously to develop focus areas for community engagement, with the aim of serving the community. Marriott's corporate social responsibility is achieved through corporate financial contributions combined with volunteer service and in-kind giving. Marriott's aim is to strengthen the worldwide community, and its management works with various charitable organisations around the world. Marriott hotel management has been committed to preserving the environment for 25 years.
All Marriott associates are responsible for maintaining the legal, ethical and social standards outlined in the Code of Business Conduct. The Code applies to the commercial operations of hotels and businesses bearing the company's mark: all Marriott business units, offices, departments and majority-owned subsidiaries. Managers responsible for supervising other associates have a particular responsibility to ensure that their associates understand the expectations contained in the Code of Business Conduct. One of the main objectives, in addition to compliance with environmental regulations, is the reduction of operating costs (energy and water consumption). In addition, the ability to stand out from the competition and capture new customers, especially foreign ones, prompts many brands to establish charters (Hutt & Speh 2012, 34).
Another point not to be overlooked by the hotel is that corporate customers are more likely to prefer, for consistency with their own sustainable development strategies, accommodation providers whose actions follow a responsible approach. This is particularly the case for seminars or event operations. Marriott respects the privacy of associates who report potential violations of the Code of Conduct and applies a policy of no retaliation towards associates who report a concern honestly and in good faith (Butcher 2009).
XmlAnyElementAttribute Class
Specifies that the member (a field that returns an array of XmlElement or XmlNode objects) contains objects that represent any XML element that has no corresponding member in the object being serialized or deserialized.
System.Attribute
System.Xml.Serialization.XmlAnyElementAttribute
Namespace: System.Xml.Serialization
Assembly: System.Xml (in System.Xml.dll)
The XmlAnyElementAttribute type exposes the following members.

The following example deserializes a class named XClass, which contains a field named AllElements that returns an array of XmlElement objects.

public class XClass
{
   /* Apply the XmlAnyElementAttribute to the field returning
   an array of XmlElement objects. */
   [XmlAnyElement]
   public XmlElement[] AllElements;
}

public class Test
{
   public static void Main()
   {
      Test t = new Test();
      t.DeserializeObject("XFile.xml");
   }

   private void DeserializeObject(string filename)
   {
      // Create an XmlSerializer.
      XmlSerializer mySerializer = new XmlSerializer(typeof(XClass));

      // To read a file, a FileStream is needed.
      FileStream fs = new FileStream(filename, FileMode.Open);

      // Deserialize the class.
      XClass x = (XClass) mySerializer.Deserialize(fs);

      // Read the element names and values.
      foreach (XmlElement xel in x.AllElements)
         Console.WriteLine(xel.LocalName + ": " + xel.Value);
   }
}
E4/position papers/Frank Appel
From Eclipsepedia
Bio.
RAP Experience Report on E4/Work Area items
Here are some comments on some of the items on the E4/Work Areas page. These comments are based on the experience we have made with the RAP project on these topics. Note that the solutions we provide for some of these problems in RAP are only meant as a starting point for discussion.
SWT and RWT
RAP was created to give RCP developers a way to bring (at least parts of) their RCP applications to the web without having to step deep into low-level web technologies. The reason for this is cost reduction through reuse of knowledge and code. We have therefore chosen the technical approach that provides the highest possible reusability: a thin client with a rich widget set and a stateful server side on top of OSGi, reusing the Eclipse workbench paradigm.
With respect to this goal and the distributed nature of the environment, it is clear that RWT can only provide a subset of SWT's functionality, so currently missing functionality leads to missing API in RWT. Transforming SWT code into RWT ends up with compile errors if functionality is used that isn't available in RWT. This helps to identify problems at compile time, which we preferred to the error-prone process of finding out about missing functionality at runtime.
Also there are certain additional APIs for web-specifics, not available in SWT of course (some of them are mentioned in the chapters below).
It is obvious that the more SWT functionality RWT provides, the more code based on SWT can be reused. So it would be great to achieve a solution where RWT is just the 'web SWT' fragment. But there are still some difficulties, e.g. regarding missing GC support (how to draw on the web browser with JavaScript? The currently available possibilities do not scale) and the different resource management (colors, fonts and images are not system resources on the server; they are shared between sessions and therefore don't provide constructors and dispose methods).
Session Support
As RAP is inherently server-centric, it has to deal with the three scopes an object can belong to on the server: request, session and application scope.
The lifetime of objects in request scope spans at most a certain request lifecycle. Objects in request scope are only visible to the current request. Several internal data structures of RWT use this scope, e.g. the data structure that helps to determine the status delta that has to be sent to the client. ServletRequest#setAttribute(String,Object) and ServletRequest#getAttribute(String) are used for storing objects in request scope in a servlet environment.

The lifetime of objects in session scope spans at most all the requests that belong to a certain client. Objects that belong to a certain session cannot be referenced by another session, but they are visible to each request that belongs to the session. HttpSession#setAttribute(String,Object) and HttpSession#getAttribute(String) are used for storing objects in session scope in a servlet environment.

Last but not least, objects in application scope span at most the lifetime of the whole application and are accessible to each request and each session. Objects in application scope can be stored in class variables.
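As a plain-Java sketch of the difference (the class and field names below are illustrative only, not RAP or servlet API), the three scopes differ simply in which object owns the stored data:

```java
import java.util.HashMap;
import java.util.Map;

public class Scopes {
    // application scope: a class variable, one value for the whole
    // server process, visible to every request and every session
    static String applicationScoped = "shared-by-all";

    public static void main(String[] args) {
        // session scope: one map per client session; sessions cannot
        // see each other's values
        Map<String, Object> sessionA = new HashMap<>();
        Map<String, Object> sessionB = new HashMap<>();
        sessionA.put("user", "alice");
        sessionB.put("user", "bob");

        // request scope: one map per request, discarded when the
        // request lifecycle ends
        Map<String, Object> request = new HashMap<>();
        request.put("delta", "widget-updates");

        System.out.println(sessionA.get("user"));
        System.out.println(sessionB.get("user"));
        System.out.println(applicationScoped);
    }
}
```

In a real servlet environment the session and request maps correspond to the attribute stores of HttpSession and ServletRequest mentioned above.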
To handle these scopes, RAP maps a context to each request thread using a ThreadLocal. With this context in place it's possible to use request, response and session references at any place in the code, without passing them around in method parameters all the time. At the start of the request's lifecycle the context gets mapped, and at the end it gets disposed of:
try {
  ServiceContext context = new ServiceContext( request, response );
  ContextProvider.setContext( context );
  [...]
  // processing of lifecycle
  [...]
} finally {
  ContextProvider.disposeContext();
}
Access to the http session for example can be done using:
RWT.getRequest().getSession()
As RCP applications are generally single-user clients, using singletons is a quite common practice. Trying to transfer this practice to RAP, a problem arises: singletons use class variables for storing the singleton instance, but this puts the instance in application scope in RAP. To solve this problem while still providing a similar programming pattern, RAP provides a class called SessionSingletonBase, which allows access to a unique instance of a type per session.
SessionSingletonBase provides a convenience method
public static Object getInstance( final Class type )
that allows to create singletons that can be used like singletons in RCP:
static class MySingleton {
  private MySingleton() {
    // prevent instance creation
  }

  public static MySingleton getInstance() {
    return ( MySingleton )getInstance( MySingleton.class );
  }
}
This works fine as long as the singleton is accessed from the request thread. Background threads don't have a context mapped, so they will fail when accessing the singletons. In RWT there is API to deal with this too:
UICallBack#runNonUIThreadWithFakeContext(Display,Runnable)
Apart from the fact that the name isn't very good, this method executes the given runnable in the current thread while mapping a context that allows access to a certain session. The session is determined using the display instance, since in RWT display and session live in a one-to-one relationship.
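The mechanism can be sketched in plain Java with a ThreadLocal (a minimal illustration of the idea only; the class and method names below are made up and this is not the actual RAP implementation):

```java
public class FakeContext {
    // stands in for RAP's per-thread service context
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    // map a session's context into the current thread, run the
    // runnable, then dispose of the context again
    static void runWithFakeContext(String sessionId, Runnable runnable) {
        CONTEXT.set(sessionId);
        try {
            runnable.run();
        } finally {
            CONTEXT.remove();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // a background thread normally has no context mapped; the
        // wrapper makes the session visible inside the runnable
        Thread background = new Thread(() ->
            runWithFakeContext("session-42",
                () -> System.out.println("context: " + CONTEXT.get())));
        background.start();
        background.join();
    }
}
```

The real UICallBack#runNonUIThreadWithFakeContext(Display,Runnable) works analogously, except that the session is derived from the display instance.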
Multi Locale Support
Considering the points mentioned in the section on session support, it's clear that the multi-locale support of RCP doesn't work in server environments, as the translated values of the messages classes are stored in class variables.
To keep the advantage of type safety, we moved the class variables of the messages classes into instance fields. For a given locale, a specific instance of the message class is provided. This instance can be retrieved using a static get method to access the locale-aware translation:
public class WorkbenchMessages {
  private static final String BUNDLE_NAME = "org.eclipse.ui.internal.messages"; //$NON-NLS-1$
  [...]

  // --- File Menu ---
  public String NewWizardAction_text;
  [...]

  public static WorkbenchMessages get() {
    Class clazz = WorkbenchMessages.class;
    Object result = RWT.NLS.getISO8859_1Encoded( BUNDLE_NAME, clazz );
    return ( WorkbenchMessages )result;
  }
}
Usage may look like this:
page.setDescription(WorkbenchMessages.get().NewWizardNewPage_description);
RWT looks up the locale in the session; if none is set there, it evaluates the current request. If no request is available either, it uses the system locale as a fallback.
This helps with common translation in code, but it doesn't help with the translation of labels in extension points. To achieve this, we had to use a patch fragment for org.eclipse.equinox.registry. The patch fragment prevents the translation of labels in the registry code, since this would be a one-time-per-application translation. The translation now takes place at the time an extension is materialized.
May 03, 2012 01:19 PM|mcinnes01|LINK
Hi,
I have been working through various tutorials and their links to instructions, to make and deploy code first web applications with databases.
The tutorial I followed for the application was the Movie tutorial. I used this tutorial for my own purposes (I have kept it simple at the moment) and I have the application working.
I came to trying to deploy (I use one-click publish), and the first time I tried it failed as I hadn't configured the database side of things at all. So I followed a lot of instructions and I eventually got the site to deploy.
The first time I did this, I checked the target server for the database (which is different to the application server) and I couldn't see the database. The result was that when I went to the application, the site had caught the error on the site's general error.aspx page that the MVC framework produces as standard.

I looked a little further on the SQL server and noticed it had added the database under the system databases, so I deleted them from there and created a new database called "Outages".
I then changed the various bits of code to reflect this and deployed again, this time the tables were created as expected, however the site still was not working from the live server.
I'm not sure where I am going wrong or how I can trace the error as it is just getting caught on the standard error page.
All the account pages work fine it is just the data related pages that produce the error.
Any help will be much appreciated.
Andy
May 03, 2012 01:50 PM|raduenuca|LINK
Use Elmah to log unhandled errors, or simply set customErrors to off in web.config. You should see the yellow screen with the error.
May 03, 2012 02:08 PM|mcinnes01|LINK
Hi raduenuca,
Do you mean this in my web.release.config?
<customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt: <error statusCode="500" redirect="InternalError.htm"/> </customErrors>
If so this is currently commented out, does that mean it is off?
May 03, 2012 02:14 PM|mcinnes01|LINK
Hi,
I have a feeling it is something to do with this, I had the same problem once with an account model that I had customised and it was where I tried to reference something in the controller that was creating an error.
public ActionResult Details(int id = 0) { Outage outage = db.Outages.Find(id); if (outage == null) { return HttpNotFound(); } return View(outage); }
So I have a feeling it is failing on the line: Outage outage = db.Outages.Find(id);
If I could make a guess I would say that somewhere in my project I need to reference where the database is on the server that I have deployed it to, but I really haven't got a clue where.
I have followed all the instructions in terms of deploying the database, and this worked fine but I guess it must be something to do with the database connection string?
If this is so what information can I provide to help you, help me find the problem?
Thanks in advance,
Andy
May 04, 2012 09:02 AM|raduenuca|LINK
Do you have a connection string in your web.config? And on your local machine you use SQL Express?
May 04, 2012 09:10 AM|christiandev|LINK
Does your production DB have data in the tables already? Is your connection string in the web.config correct? Does your code reference the correct connection string?
May 04, 2012 09:19 AM|mcinnes01|LINK
Hi,
I have put a drop table command in, so on my production database there is no data.
This is my connection string from the main web.config
<connectionStrings> <add name="OutageDBContext" connectionString="Data Source=|DataDirectory|Outages.sdf" providerName="System.Data.SqlServerCe.4.0"/> <add name="DefaultConnection" connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\aspnet.mdf;Integrated Security=True;User Instance=True;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" /> </connectionStrings>
This is form my Web.Release.Config
<connectionStrings> <add name="OutageDBContext" connectionString="Data Source=OPADMTS;Initial Catalog=Outages;Persist Security Info=True;User ID=username;Password=password" xdt: </connectionStrings>
Am I right in excluding the standard connection string "default connection"? I also removed this when I imported my web.config on the package/Publish SQL tab. So I only have my
EDIT:
Almost forgot, locally I use SQL 2008 Express, our production database is 2008 R2.
May 04, 2012 09:28 AM|christiandev|LINK
If there's no data, that query will fail. Also, is that data source correct in the 2nd web.config? It usually has server name/instance, but this may just be for where I host?
Can you connect to the hosting db via SSMS?
May 04, 2012 09:38 AM|mcinnes01|LINK
public class OutageInitializer : DropCreateDatabaseIfModelChanges<OutageDBContext> { protected override void Seed(OutageDBContext context) { var outages = new List<Outage> { new Outage { Title = "Scheduled Maintenance", PostedOn=DateTime.Parse("2012-5-4"), CompletedBy=DateTime.Parse("2012-5-4"), Details="The system is currently offline whilst we perform some scheduled maintenance tasks. We expect the work to take no more than 2 hours. Please try back after 10:00 AM."} }; outages.ForEach(d => context.Outages.Add(d)); } }
I copied the connection string from the one in the image above (I then redid the image above to make sure the copied web.config pulled it through).
The database and webservers are local, we don't use hosted we have everything in house.
I can access the database and the default line I have added is visible:
EDIT:
I have also deleted the autoscripts and used data and schema, this populated the line seen above in the database, but it is still the same when I go to the site.
Edit:
In terms of the connection string I tested through excel (I know probably not the best but it gives me a connection string) and this is what it gives me:
Provider=SQLOLEDB.1;Password=password;Persist Security Info=True;User ID=user name;Data Source=server;Use Procedure for Prepare=1;Auto Translate=True;Packet Size=4096;Workstation ID=computer;Use Encryption for Data=False;Tag with column collation when possible=False;Initial Catalog=Outages
Does this look better, and if so, do I need to just exclude the workstation ID or is there anything else that needs adding/removing? Also, does this need to go in the release web.config only, or in the main web.config and the Package/Publish SQL tab's connection string as well? Do I need to keep the default connection string as well? When I tried to deploy that as well I got a "table exists" error for the metadata table.
May 04, 2012 10:33 AM|raduenuca|LINK
Without the actual error it is hard to tell what is going on. Please add this to the web.config on the relase server:
<configuration> <system.web> <customErrors mode="Off"/> </system.web> </configuration>
Then run the application again and reply with the error.
May 04, 2012 12:16 PM|mcinnes01|LINK
Hi Thanks, here is the error info:
Server Error in '/Gateway') +1420567 System.Data.Entity.Internal.LazyInternalConnection.TryInitializeFromAppConfig(String name) +362 System.Data.Entity.Internal.LazyInternalConnection.Initialize() +49 System.Data.Entity.Internal.LazyInternalConnection.get_ConnectionHasModel() +10 System.Data.Entity.Internal.LazyInternalContext.InitializeContext() +265 System.Data.Entity.Internal.InternalContext.GetEntitySetAndBaseTypeForType(Type entityType) +17 System.Data.Entity.Internal.Linq.InternalSet`1.Initialize() +62 System.Data.Entity.Internal.Linq.InternalSet`1.get_InternalContext() +15 System.Data.Entity.Internal.Linq.InternalSet`1.Find(Object[] keyValues) +23 Gateway.Controllers.OutagesController.Details(Int32 id) in C:\Users\amcinnes\Documents\Visual Studio 2010\Projects\Gateway\Gateway\Controllers\OutagesController.cs:29 lambda_method(Closure , ControllerBase , Object[] ) +101) +267 System.Web.Mvc.<>c__DisplayClass17.<InvokeActionMethodWithFilters>b__14() +20) +329() +8969201 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184 -------------------------------------------------------------------------------- Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.272
May 04, 2012 12:34 PM|raduenuca|LINK
try adding providerName="System.Data.SqlClient" to the connection string
May 04, 2012 12:55 PM|mcinnes01|LINK
Hi,
I have this directly off the server currently:
<connectionStrings> <add name="OutageDBContext" connectionString="Data Source=server;Initial Catalog=Outages;Persist Security Info=True;User ID=username;Password=password" providerName="System.Data.SqlServerCe.4.0" /> </connectionStrings>
Should the name be Outages or OutageDBContext and the providerName = System.Data.SqlServerCe.4.0 or System.Data.SqlServer?
Cheers
Andy
May 04, 2012 12:58 PM|raduenuca|LINK
That's the provider for SQL Server Compact Edition... I don't think you have that on the release server. Try with the provider I gave you.
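For reference, the corrected entry on the release server would then look something like this (server name, catalog and credentials are placeholders):

```xml
<connectionStrings>
  <add name="OutageDBContext"
       connectionString="Data Source=SERVER;Initial Catalog=Outages;Persist Security Info=True;User ID=username;Password=password"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```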
May 04, 2012 03:38 PM|raduenuca|LINK
I'd suggest using web.config transformations so you can have a configuration for each environment. Of course, you need to make sure you have the correct settings for each one.
16 replies
Last post May 04, 2012 03:38 PM by raduenuca
Originally posted by boyet silverio: (In a different thread/topic I had been recommended, by Ilja, a book - Agile SW Development by Martin - which may have something on this topic based on its online table of contents, but I'm still in the process of scraping for the "hefty" USD 55 needed to buy the book... meanwhile...)
Originally posted by Frank Carver: Is there something else preventing you from cutting out unused classes?
The Reuse-Release Equivalence Principle (REP)
The granule of reuse is the granule of release.
This principle is based on the idea that packages rather than individual classes are the units of reuse. If a package is to be reused, all classes in that package should be designed for reuse.

The Common Reuse Principle (CRP)
The classes in a package are reused together. If you reuse one of the classes in a package, you reuse them all.
This principle is also based on the idea that packages rather than individual classes are the units of reuse. Classes that are intended to be reused together should be put in the same package.
Originally posted by Frank Carver: A jar file should have everything that someone downloading it might need, otherwise it's really hard to tell what should be downloaded. The fact that batik seems to need such a complex diagram of its own internal code astonishes me. I'd just put all the classes from all the batik jars into one big single download.
Does anyone still do this kind of "manual linking" where some jar files are included in the classpath, and some are not. Whether your program works or crashes depends on whether any of the unsatisfied methods are accidentally called or not. Scary!
The way I work is to put all my general-purpose classes into one big jar file (currently about 220K), then run genjar against my application specific code to build a new application-specific jar file using only the utility classes that are actually used. Even my most complex application jar files are usually less than about 60K, including the bits they need from the general utilities.
This way, there's no chance of "class not found" problems (if you don't use Class.forName(), but that can always fail), and you get a small, complete application.
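The build step Frank describes might be sketched in an Ant build file roughly like this (the element and attribute names are indicative of the genjar task; check the genjar documentation for the exact syntax of your version, and the paths and class names are placeholders):

```xml
<!-- assumes the GenJar task has already been defined via <taskdef> -->
<target name="appjar" depends="compile">
  <genjar jarfile="myapp.jar">
    <!-- root class: genjar pulls in only the classes it actually references -->
    <class name="com.example.myapp.Main"/>
    <classpath>
      <pathelement location="build/classes"/>
      <pathelement location="lib/utils.jar"/> <!-- the big general-purpose jar -->
    </classpath>
  </genjar>
</target>
```

The result is an application-specific jar containing only the reachable classes, which is what makes the "class not found" failures mentioned above impossible (short of reflection).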
Originally posted by Frank Carver: As I far as I know, the "bad" things about dependencies are where they are cyclical
This article is the first of several that will describe principles that govern the macro structure of large object-oriented applications. I emphasize the word large, because these principles are most appropriate for applications that exceed 50,000 lines of C++ and require a team of engineers to write. ...

As software applications grow in size and complexity they require some kind of high level organization. The class, while a very convenient unit for organizing small applications, is too finely grained to be used as an organizational unit for large applications. Something "larger" than a class is needed to help organize large applications.

Several major methodologists have identified the need for a larger granule of organization. Booch uses the term "class category" to describe such a granule, Bertrand Meyer refers to "clusters", Peter Coad talks about "subject areas", and Sally Shlaer and Steve Mellor talk about "Domains". In this article we will use the UML 0.9 terminology, and refer to these higher order granules as "packages".

The term "package" is common in Ada and Java circles. In those languages a package is used to represent a logical grouping of declarations that can be imported into other programs. In Java, for example, one can write several classes and incorporate them into the same package. Then other Java programs can "import" that package to gain access to those classes.
Originally posted by Frank Carver:
Ooh, a lot to reply to ...

Stan: If I'm going to carve out some of my favorite goodies to give you for reuse, I think I'd like to just put the appropriate packages into a jar for you. I would not like to put some fraction of one package, and some other fraction of another.

Can you explain a little more why you wouldn't like to do this? Is it because you lack the tools to do it? Is it because you'd be worried about missing some classes?
Jar files containing classes from more than one package are so common they it would surprise me if a jar only had classes from one package. Can you point to any popular downloads which have single-package jars??
Ilja: It comes in one single zip file, containing all the jar files. The point is, if my batik-application doesn't use Swing, why should I need to deliver the batik-swing classes with it?

So it actually comes in one big "deliverable" anyway. So you (the application developer) are expected to manually split open this zip file and perform a rough, manual process for your application.
Ilja: Seriously, I would think that if I knew batik a little bit better, the diagram would make perfectly clear which jar files are needed for which application.

But why should I need to know or care? Why should I have to study and learn the result of some other developers trying to guess what I might need, when I have a tool which will quickly, reliably, and automatically do it for me?
Note, though, that it is the external jar files that matter here. My complaint with the batik diagram was that it split up its own internal classes in such a complicated way.
I suggest that a reasonable model might be one deliverable (jar) for each separately managed project with a separate deliverable timeline.
Ilja: Especially with Webstart applications, you want to have the application in as many small jar files as makes sense, so that the automatic updates use as little bandwidth as possible.

Fair enough. But I hope you are not advising that we partition every application or utility collection this way, just in case someone ever wants to deploy it by webstart!
Ilja: Class.forName() doesn't typically fail if you use it wisely, for a small amount of classes (for example in the way JAXP uses it). I guess you could always include those dynamically loaded classes as additional "root classes" to your genjar process, but it seems to me that it would make the build process unnecessarily brittle...

I guess there was a little confusion there. What I meant to say was that the "trimmed" jar files produced by genjar are as complete as hand-generated ones.
If you start allowing classes to be created from externally-entered names, then any application might fail, not just one built using genjar.
Ilja (quoting objectmentor): In Java, for example, one can write several classes and incorporate them into the same package. Then other Java programs can "import" that package to gain access to those classes.

Do you honestly still use whole-package import statements?
It really seems that many of these sources are confusing the roles of "namespace", "conceptual partitioning", and "deliverable unit", and often trying to assign all three to the poor overused "package" statement.
Note also that the objectmentor article was written before jar files existed, and when there was no real way of bundling or partitioning deliverables.
You shouldn't - that's exactly the point: If you need a package, you should need the whole package, not just disconnected parts of it.
Originally posted by Frank Carver: This still baffles me, and all the repetition of the same bald statement "If you need a package, you should need the whole package" isn't making it any clearer.
In my code I have a core abstraction class Tract, which lives in package org.stringtree.tract. In that package with it are five or six other classes for things like serializing a tract to an OutputStream, decorated versions with argument type conversion, etc. These other classes are all very dependent on Tract, but Tract itself is unaware of their existence, and in general they are unaware of each other.

In some application code, I decide that a Tract is a fine representation for some application-specific concept, so I import org.stringtree.tract.Tract. I don't need to serialize it or pass Date objects to any of its methods, so I don't import those other classes.

Yet, those of you who argue in favour of the "common reuse principle" seem to be saying that I should bring these other classes in and bloat my code just for the sake of the "principle", even though this application doesn't need them and never uses them. This sounds mad. What am I missing?
Originally posted by Frank Carver: But in an acyclic class dependency graph this force is resolved only when we achieve one class per package.
When a new release of that [reused] component [read: package/deliverable] is created, the reuser must reintroduce it into existing applications. There is a cost for this, and a risk. Therefore, the user will want each component to be as focused as possible. If the component contains classes which the user does not make use of, then the user may be placed in the position of accepting the burden of a new release which does not affect any of the classes within the component that he is using.
I suggest that this indicates a flaw in the principle.
Originally posted by Frank Carver: The package structure is used only as a namespace mechanism.
I would agree much more with some of the "principles" as expressed in this thread if the word "package" is replaced with "component", "deliverable", "library", "jar", "module", or any other word for an arbitrary lump of code.
"a delivered component should contain no unused resources". I would achieve this by using tools such as ant and genjar to construct an application-specific jar file containing only classes and resource files actually used by the application. Are we still disagreeing?
say in an mvc application, one way would be to do as follows (e.g. based on component function in a pattern): package model; package view; package controller; another way would be something based on business areas like package savings; package loans;
from Don: In the example, if you have a loans package, in it you will have entity, model & controller classes all related to loans, which will be relatively tightly coupled. There will be few interdependencies between the savings and loans packages. If using the former strategy, you have packages such as model, view, controller & entity, and in each of these packages you place the relevant classes pertaining to both savings and loans. In this instance you will have relatively many interdependencies between packages. Also the packages will contain fairly unrelated classes.
from Stan: ...you might want packages to reflect your organization...
from Stan: Any harm in slicing both ways? loan.model, loan.view, loan.controller, savings.model, etc?
Quoting Stan: "Any harm in slicing both ways? loan.model, loan.view, loan.controller, savings.model, etc?" This is similar to what we've done, which reflects more on the second case, and which I understand is Don's drift.
quote from Stan James: Look at the dependency every other package has on the shared stuff package when they extend base framework classes. Any concerns?
Putting It All Together
Now let's put everything together in a semirealistic program that calculates the primes up to, but not including, N. There are almost as many different approaches to parallel prime finders as there are primes. In this one, we use eachElem to create a number of tasks, where each task represents a range of odd integers that should be combed for primes. To test for primality, we'll try to divide each number, n, by all primes less than or equal to sqrt(n), which implies that tasks are dependent on the results of previous tasks.
Listing One is the entire code. The master instantiates a Sleigh, initializes its workers via eachWorker, creates a list of tasks, and then runs the tasks via eachElem. The two general parameters that give the chunk length (which must be even), and N, are bound to a variable in an NWS and are retrieved by the function invoked via eachWorker. Each task is represented by an integer: the beginning of a chunk of integers to check for primality. Each task returns a list of primes that were found in that chunk. Checking a particular number may require candidate factor primes that the worker doesn't already know, in which case they'll get them via find.
import sys
from nws.sleigh import Sleigh

def initPrimes():
    global chunk, chunks, limit
    limit, chunk = SleighUserNws.find('prime parameters')
    chunks = {}

def findPrimes(first):
    last = min(first+chunk, limit)
    # we need to know all the primes up to the smaller of the start of
    # this chunk or the square root of its last element.
    need = min(first-2, int(.5+(last-1)**.5))
    myPrimes = []
    for c in xrange(3, need+1, chunk):
        if not c in chunks:
            chunks[c] = SleighUserNws.find('primes%d'%c)
        myPrimes += chunks[c]
    newx = len(myPrimes)
    for x in xrange(first, last, 2):
        for p in myPrimes:
            if x%p == 0:
                break
        else:
            myPrimes.append(x)
    if first**2 < limit:
        SleighUserNws.store('primes%d'%first, myPrimes[newx:])
    return myPrimes[newx:]

def master(workers, limit, chunk):
    s = Sleigh(workerCount=workers)
    s.userNws.store('prime parameters', (limit, chunk))
    s.eachWorker(initPrimes)
    primes = [2]
    map(primes.extend, s.eachElem(findPrimes, range(3, limit, chunk)))
    return primes

if __name__=='__main__':
    primes = master(int(sys.argv[1]), int(sys.argv[2]), int(sys.argv[3]))
    print len(primes), primes[:3], '...', primes[-3:]
The workers use SleighUserNws as their NWS; this is a workspace object corresponding to a uniquely named NWS that is created by Sleigh. It is a convenient place to store variables related to the Sleigh run. Also, the workers remember which primes they've seen so that they don't have to get them for subsequent tasks.
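For readers without an NWS installation, the same chunked trial-division scheme can be sketched sequentially in plain Python (the function names are ours; each call to the chunk function plays the role of one Sleigh task):

```python
# Sequential sketch of the chunked scheme in Listing One: each call combs
# one chunk of odd numbers, using primes found by earlier chunks (and
# earlier in the same chunk) as trial divisors.
def find_primes_chunk(first, chunk, limit, known):
    last = min(first + chunk, limit)
    divisors = list(known)      # grows as primes are found in this chunk
    new = []
    for x in range(first, last, 2):
        for p in divisors:
            if x % p == 0:
                break
        else:                   # no known prime divides x, so x is prime
            divisors.append(x)
            new.append(x)
    return new

def primes_up_to(limit, chunk=10):
    # chunk must be even so that every chunk starts on an odd number
    primes = [2]
    for first in range(3, limit, chunk):
        primes += find_primes_chunk(first, chunk, limit, primes)
    return primes
```

In the parallel version, the `known` list is what each worker assembles from the `primes%d` variables in the shared workspace before combing its own chunk.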
Conclusion
Python and NetWorkSpaces make it easy to create and experiment with parallel programs without requiring specialized tools or hardware. Communication and synchronization are greatly simplified by reduction to manipulation of variables in a shared workspace.
The programs can be tested on a single CPU using multiple processes (or threads), or for actual speedup, on multicore CPUs or a collection of computers. The state of the parallel computation is captured by the variables stored in NetWorkSpace. These can be visualized via the web interface, which makes it easy to understand and debug the program, and monitor progress. Because NetWorkSpaces is language-neutral, code and idioms can be transferred to different environments, and it can even be used to create ensembles from programs written in different languages.
Acknowledgments
Thanks to Scientific Computing Associates () for supporting the development of NetWorkSpaces, and Twisted Matrix Labs for supporting Twisted (twistedmatrix.com). Sleigh was inspired in part by the R Project's SNOW package (short for "Simple Network Of Workstations"), developed by Luke Tierney, A.J. Rossini, Na Li, and H. Sevcikova (cran.r-project.org/doc/packages/snow.pdf). | http://www.drdobbs.com/architecture-and-design/python-networkspaces-and-parallel-progra/200001971?pgno=7 | CC-MAIN-2014-41 | en | refinedweb |
fs, inode — format of file system volume
#include <sys/types.h>
#include <ufs/ffs/fs.h>
#include <ufs/ufs/inode.h>
The files ⟨ufs/ffs/fs.h⟩ and ⟨ufs/ufs/inode.h⟩ declare the structures described below.
⟨ufs/ffs/fs.h⟩:
#define FS_MAGIC 0x011954

struct fs {
	int32_t	fs_firstfield;	/* historic file system linked list, */
	int32_t	fs_unused_1;	/* used for incore super blocks */
	int32_t	fs_sblkno;	/* addr of super-block */
	/* ... */
	u_char	fs_fsmnt[MAXMNTLEN];	/* name mounted on */
	u_char	fs_volname[MAXVOLLEN];	/* volume name */
	/* ... */
};

If the value of the fs_minfree element is at least 5%, fragmentation is unlikely to be
problematical, and the file system defaults to optimizing for time.
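The FS_MAGIC constant is what lets a tool recognize an FFS superblock in raw bytes. A minimal sketch in Python (the field offset within the superblock is omitted here; we assume the caller has already extracted the 4 bytes of fs_magic):

```python
import struct

FS_MAGIC = 0x011954  # from <ufs/ffs/fs.h>

def looks_like_ffs(raw_magic, byteorder='<'):
    # fs_magic is a 32-bit integer; the reader must know the on-disk
    # byte order of the volume being inspected.
    (magic,) = struct.unpack(byteorder + 'i', raw_magic)
    return magic == FS_MAGIC

sample = struct.pack('<i', FS_MAGIC)
```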
The fs_flags element specifies how the filesystem was mounted:
FS_DOSOFTDEP
FS_UNCLEAN
Each cylinder keeps track of the availability of blocks at different rotational positions, so that sequential blocks can be laid out with minimum rotational latency. With the default of 1 distinct rotational position, ⟨ufs/ufs/inode.h⟩.
A super-block structure named filsys appeared in Version 6 AT&T UNIX. The file system described in this manual appeared in 4.2BSD. | https://man.openbsd.org/OpenBSD-5.5/fs.5 | CC-MAIN-2020-45 | en | refinedweb |
To install the port: cd /usr/ports/x11-wm/i3/ && make install clean
To add the package: pkg install i3
Number of commits found: 85
Update to 4.18.2
Update to 4.18.1
Update to 4.18
Update to 4.17.1
Update WWW
Update to 4.17
x11-wm/i3: fix build with many threads
i3 needs MAKE_JOBS_UNSAFE=yes to build.
Errors:
/wrkdirs/usr/ports/x11-wm/i3/work/i3-4.16/../i3-4.16/i3-config-wizard/main.c:109:10:
fatal error: GENERATED_config_enums.h: No such file or directory
#include "GENERATED_config_enums.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
PR: 237890
Approved by: tcberner (mentor), bapt (maintainer timeout)
Differential Revision:
Update to 4.16.1
Add LICENSE_FILE
Sort dependencies
Sort USES
Add USES=localbase
PR: 235284
Submitted by: dg@syrec.org
This port needs USES=compiler:c11 to use newer GCC from ports on GCC-
based architectures.
PR: 234065
Submitted by: Piotr Kubaj 4.16
x11-wm/i3: Add CONFLICTS_INSTALL=i3-gaps
x11-wm/i3-gaps was added today.
PR: 230266
Submitted by: Dmitri Goutnik <dg@syrec.org>
Update to 4.15
x11-wm/i3: import patch from upstream, bump PORTREVISION
The patch fixes
Approved by: bapt (maintainer)
Pet portlint
PR: 222596
Submitted by: gahr
Update to 4.14.1
Update to 4.14
Update to 4.13
Changes:
Remove ${PORTSDIR}/ from dependencies, categories v, w, x, y, and z.
With hat: portmgr
Sponsored by: Absolight
Update to 4.12
Update to 4.11
Update to 4.10.4
Update to 4.10.3
Update to 4.10.2
Update to 4.10.1
Update to 4.10
Update to 4.9.1
Bugfixes release:
- i3bar: fix incorrect y-offset for text
- fix key bindings on big-endian platforms
- fix key bindings using Mode_switch
- fix keyboard layout change detection
- revert "Handle WM_CHANGE_STATE requests for iconic state" (fixes problems
with application windows disappearing, like SDL-based games when switching
workspaces)
- insert id-based match at HEAD, not TAIL (fixes window swallowing not
working when the criteria match the placeholder window)
- improve error messages on failing commands
- replace ~ in filepath when calling append_layout
- properly error out when the layout file cannot be read
Fix typo
Update to 4.9
Notable changes:
- mouse button bindings
- improved EWMH compatibility
- plenty of bugfixes and little enhancements
- new dependency on libxkbcommon which drops the last direct dependency on Xlib
Cleanup plist
Update to 4.8
Release
- Fixes resource leak in i3bar and memory leak in i3
- Convert to USES=tar:bzip2
PR: ports/187617
Submitted by: Johannes Jost Meixner <xmj@chaot.net>
Obtained from: i3 git ()
Update to 4.7.2
Update to 4.7
Changes from Upstream:
- docs/userguide: clarify variable parsing
- docs/userguide: clarify urgent_workspace
- docs/userguide: add proper quoting for rename sample command
- docs/userguide: clarify multiple criteria
- docs/userguide: userguide: explain the difference between comma and
semicolon for command chaining
- docs/hacking-howto: update to reflect parser changes
- man/i3-dump-log: document -f
- switch from libXcursor to xcb-util-cursor
- Respect workspace numbers when looking for a free workspace name
- Revert "raise fullscreen windows on top of all other X11 windows"
- i3bar: Create pixmaps using the real bar height, rather than screen height
- Add scratchpad bindings to the default config
- Remove manual creation and removal of share/applications, as it's now in the
mtree (remaining categories)
- Add note on mtree change to CHANGES
Approved by: portmgr (bdrewery)
Import a bunch of iconv fixes.
Submitted by: marino
Approved by: portmgr (bapt, implicit)
Forgot to bump portrevision on last modification
Revert back to r327939 + also removes now useless post-extract
Add the pkg-plist file forgotten in r327939
Pointyhat to: bapt
- Revert r328076 and r327939. bapt can commit proper plist tomorrow.
- Fix plist
Reported by: nox
Use stage
Install .desktop files
Install the sample configuration as default configuration (on first run it will
run a config-wizard).
Bump portrevision
Add NO_STAGE all over the place in preparation for the staging support (cat: x11)
Update to 4.6
- tab -> space for pkg-descr WWW
- Remove leading indefinite article from COMMENT
- Remove upstreamed patch
- Convert to new perl framework
Approved by: bapt@ (maintainer)
Last updated: 2020-10-19 10:27:15
8.0 release notes
Release notes for Red Hat Enterprise Linux 8.0. For the in-place upgrade from RHEL 7 to RHEL 8:
- Currently supported upgrade paths are listed in Supported in-place upgrade paths for Red Hat Enterprise Linux.
Chapter 2. Architectures
Red Hat Enterprise Linux 8.0 is distributed with the kernel version 4.18.0-80, which provides support for the following architectures:
- AMD and Intel 64-bit architectures
- The 64-bit ARM architecture
- IBM Power Systems, Little Endian
- IBM Z
Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures. For a list of available subscriptions, see Subscription Utilization on the Customer Portal.
Chapter 3. Distribution of content in RHEL 8
3.1. Installation
Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures:
- Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories.
The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image.
- Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages.
See the Performing a standard RHEL installation document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Performing an advanced RHEL installation document.
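Before writing a downloaded ISO to a DVD or USB key, it is good practice to verify it against the checksum published alongside the image. A minimal sketch (the function name is ours):

```python
import hashlib

def sha256_of(path, bufsize=1 << 20):
    # Hash the file in chunks so multi-gigabyte ISOs need not fit in memory.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(bufsize), b''):
            h.update(block)
    return h.hexdigest()
```

Compare the returned digest with the SHA-256 value listed next to the image on the download page.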
3.2. Repositories
Red Hat Enterprise Linux 8 is distributed through two main repositories:
- BaseOS
- AppStream
Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. For a list of packages available in each repository, see the Package manifest.
Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Content in AppStream is available in one of two formats - the familiar RPM format and an extension to the RPM format called modules.
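On an installed system, these repositories are ordinarily defined in /etc/yum.repos.d/*.repo files, which use INI syntax. The sketch below parses an inline example with the standard library (section names and fields are illustrative, not the exact shipped files):

```python
import configparser

# Illustrative .repo content; real files live in /etc/yum.repos.d/.
repo_text = """
[baseos]
name=Red Hat Enterprise Linux 8 - BaseOS
enabled=1

[appstream]
name=Red Hat Enterprise Linux 8 - AppStream
enabled=1
"""

cfg = configparser.ConfigParser()
cfg.read_string(repo_text)
enabled = [s for s in cfg.sections() if cfg[s].getboolean('enabled')]
```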
3.3. Application Streams
Red Hat Enterprise Linux 8.0 introduces the concept of Application Streams. Multiple versions of user-space components are now delivered and updated more frequently than the core operating system packages in RHEL 8. Each Application Stream component has a given life cycle; for details, see Red Hat Enterprise Linux 8 Application Streams.
Chapter 4. RHEL 8.0.1 release
4.1. New features
RHEL System Roles updated
The
rhel-system-roles packages, which provide a configuration interface for RHEL subsystems, have been updated. Notable changes include:
- Handling of absent profiles in the
network role has been improved. When deleting an existing NetworkManager on-disk profile configuration by setting the persistent state to
absent, only the persistent configuration for the profile is now removed, and the current runtime configuration remains unchanged. As a result, the corresponding network device is no longer brought down in the described situation.
Specifying a Maximum Transmission Unit (MTU) size for VLAN and MACVLAN interfaces in the
network role has been fixed. As a result, setting MTU size on VLAN and MACVLAN interfaces using the
network role no longer fails with the following error message:
failure: created connection failed to normalize: nm-connection-error-quark: connection.type: property is missing (6)
- The
selinux and timesync roles now include all their documented input variables in their defaults files (
defaults/main.yml). This makes it easy to determine what input variables are supported by the roles by examining the content of their respective defaults files.
- The
kdump and timesync roles have been fixed to not fail in check mode.
(BZ#1685902, BZ#1674004, BZ#1685904)
sos-collector rebased to version 1.7
The
sos-collector packages have been updated to version 1.7 in RHEL 8.0.1. Notable changes include:
sos-collector can now collect sosreports from Red Hat Enterprise Linux CoreOS (RHCOS) nodes in the same way as from regular RHEL nodes. Users do not need to make any changes to the way they run
sos-collector. Identification of when a node is RHCOS or RHEL is automatic.
- When collecting from RHCOS nodes,
sos-collector will create a temporary container on the node and use the
support-tools container to generate a sosreport. This container will be removed after completion.
- Using the
--cluster-type=none option allows users to skip all cluster-related checks or modifications to the
sosreport command that gets run on the nodes, and simply collect from a static list of nodes passed through the
--nodes parameter.
- Red Hat Satellite is now a supported cluster type to allow collecting sosreports from the Satellite and any Capsules.
(BZ#1695764)
Upgraded compiler toolsets
The following compiler toolsets, distributed as Application Streams, have been upgraded with RHEL 8.0.1:
- Rust Toolset, which provides the Rust programming language compiler
rustc, the cargo build tool and dependency manager, and required libraries, to version 1.35
- Go Toolset, which provides the Go (
golang) programming language tools and libraries, to version 1.11.6.
(BZ#1731500)
Enabling and disabling SMT
Simultaneous Multi-Threading (SMT) configuration is now available in RHEL 8. Disabling SMT in the web console allows you to mitigate a class of CPU security vulnerabilities such as Microarchitectural Data Sampling (MDS).
4.2. Known issues
Performance deterioration in IPSec tunnels
Using the
aes256_sha2 or the
aes-gcm256 IPSec cipher set in RHEL 8.0.1 has a negative performance impact on IPSec tunnels. Users with specific VPN settings will experience 10% performance deterioration for IPSec tunnels. This regression is not caused by Microarchitectural Data Sampling (MDS) mitigations; it can be observed with the mitigations both on and off.
(BZ#1731362)
Chapter 5. RHEL 8.0.0 release
5.1. New features
This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.
5.1.1. The web console
The web console’s Subscriptions page is now provided by the new
subscription-manager-cockpit package.
A firewall interface has been added to the web console
The Networking page in the RHEL 8 web console now includes a Firewall section. In this section, users can enable or disable the firewall, as well as add, remove, and modify firewall rules.
(BZ#1647110).
(JIRA:RHELPLAN-10355).
(JIRA:RHELPLAN-3010)
The web console is now compatible with mobile browsers
With this update, the web console menus and pages can be navigated on mobile browser variants. This makes it possible to manage systems using the RHEL 8 web console from a mobile device.
(JIRA:RHELPLAN-10352)
The web console front page now displays missing updates and subscriptions
If a system managed by the RHEL 8 web console has outdated packages or a lapsed subscription, a warning is now displayed on the web console front page of the system.
(JIRA:RHELPLAN-10353).
(JIRA:RHELPLAN-10354)
Virtual Machines can now be managed using the web console
The
Virtual Machines page can now be added to the RHEL 8 web console interface, which enables the user to create and manage libvirt-based virtual machines.
(JIRA:RHELPLAN-2896)
5.1.2. Installer and image creation
Installing RHEL from a DVD using SE and HMC is now fully supported on IBM Z
The installation of Red Hat Enterprise Linux 8 on IBM Z hardware from a DVD using the Support Element (SE) and Hardware Management Console (HMC) is now fully supported. This addition simplifies the installation process on IBM Z with SE and HMC.
When booting from a binary DVD, the installer prompts the user to enter additional kernel parameters. To set the DVD as an installation source, append
inst.repo=hmc to the kernel parameters. The installer then enables SE and HMC file access, fetches the images for stage2 from the DVD, and provides access to the packages on the DVD for software selection.
The new feature eliminates the requirement of an external network setup and expands the installation options.
(BZ#1500792)
Installer now supports the LUKS2 disk encryption format
Red Hat Enterprise Linux 8 installer now uses the LUKS2 format by default but you can select a LUKS version from Anaconda’s Custom Partitioning window or by using the new options in Kickstart’s
autopart,
logvol,
part, and
RAID commands.
LUKS2 provides many improvements and features, for example, it extends the capabilities of the on-disk format and provides flexible ways of storing metadata.
(BZ#1547908)
Anaconda supports System Purpose in RHEL 8
You can now set the intended purpose of the system during installation. When the installation completes, Subscription Manager uses the system purpose information when subscribing the system.
(BZ#1612060)
Pykickstart supports System Purpose in RHEL 8
Previously, it was not possible for the
pykickstart library to provide system purpose information to Subscription Manager. In Red Hat Enterprise Linux 8.0,
pykickstart parses the new
syspurpose command and records the intended purpose of the system during automated and partially-automated installation. The information is then passed to Anaconda, saved on the newly-installed system, and available for Subscription Manager when subscribing the system.
(BZ#1612061)
Anaconda supports a new kernel boot parameter in RHEL 8
(BZ#1595415)
Anaconda supports a unified ISO in RHEL 8
In Red Hat Enterprise Linux 8.0, a unified ISO automatically loads the BaseOS and AppStream installation source repositories.
This feature works for the first base repository that is loaded during installation. For example, if you boot the installation with no repository configured and have the unified ISO as the base repository.
(BZ#1610806)
Anaconda can install modular packages in Kickstart scripts
The Anaconda installer has been extended to handle all features related to application streams: modules, streams and profiles. Kickstart scripts can now enable module and stream combinations, install module profiles, and install modular packages. For more information, see Performing an advanced RHEL installation.
(JIRA:RHELPLAN-1943)
The
nosmt boot option is now available in the RHEL 8 installation options
The
nosmt boot option is available in the installation options that are passed to a newly-installed RHEL 8 system.
(BZ#1677411)
RHEL 8 supports installing from a repository on a local hard drive
Previously, installing RHEL from a hard drive required an ISO image as the installation source. However, the RHEL 8 ISO image might be too large for some file systems; for example, the FAT32 file system cannot store files larger than 4 GiB.
In RHEL 8, you can enable installation from a repository on a local hard drive. You only need to specify the directory instead of the ISO image. For example: inst.repo=hd:<device>:<path to the repository>
(BZ#1502323)
Custom system image creation with Image Builder is available in RHEL 8
The Image Builder tool enables users to create customized RHEL images. Image Builder is available in AppStream in the lorax-composer package.
With Image Builder, users can create custom system images which include additional packages. Image Builder functionality can be accessed through:
- a graphical user interface in the web console
- a command line interface in the
composer-cli tool.
Image Builder output formats include, among others:
- live ISO disk image
- qcow2 file for direct use with a virtual machine or OpenStack
- file system image file
- cloud images for Azure, VMWare and AWS
To learn more about Image Builder, see the documentation title Composing a customized RHEL system image.
(JIRA:RHELPLAN-7291, BZ#1628645, BZ#1628646, BZ#1628647, BZ#1628648)
5.1.3. Kernel
Kernel version in RHEL 8.0
Red Hat Enterprise Linux 8.0 is distributed with the kernel version 4.18.0-80.
(BZ#1797671)
ARM 52-bit physical addressing is now available
With this update, support for 52-bit physical addressing (PA) is available for the 64-bit ARM architecture. This provides a larger address space than the previous 48-bit PA.
(BZ#1643522)
The IOMMU code supports 5-level page tables in RHEL 8
The I/O memory management unit (IOMMU) code in the Linux kernel has been updated to support 5-level page tables in Red Hat Enterprise Linux 8.
(BZ#1485546)
Support for 5-level paging
New
P4d_t software page table type has been added into the Linux kernel in order to support 5-level paging in Red Hat Enterprise Linux 8.
(BZ#1485532)
Memory management supports 5-level page tables and 4 PB of physical memory capacity.
With the extended address range, the memory management in Red Hat Enterprise Linux 8 adds support for 5-level page table implementation, to be able to handle the expanded address range.
(BZ#1485525)
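The capacity figures in these notes follow directly from the bit widths involved: 5-level paging extends x86-64 virtual addresses to 57 bits and physical addresses to 52 bits. A quick sanity check:

```python
# 2**57 bytes of virtual address space and 2**52 bytes of physical memory,
# expressed in pebibytes (2**50 bytes), matching the 128 PB and 4 PB
# figures quoted in these release notes.
def to_pib(bits):
    return 2 ** bits / 2 ** 50

virtual_pib = to_pib(57)    # virtual address space
physical_pib = to_pib(52)   # physical memory capacity
```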
kernel-signing-ca.cer is moved to
kernel-core in RHEL 8
In all versions of Red Hat Enterprise Linux 7, the
kernel-signing-ca.cer public key was located in the
kernel-doc package. However, in Red Hat Enterprise Linux 8,
kernel-signing-ca.cer has been relocated to the
kernel-core package for every architecture.
(BZ#1638465)
Spectre V2 mitigation default changed from IBRS to Retpolines
The default mitigation for the Spectre V2 vulnerability (CVE-2017-5715) for systems with the 6th Generation Intel Core Processors and its close derivatives [1] has changed from Indirect Branch Restricted Speculation (IBRS) to Retpolines in Red Hat Enterprise Linux 8. Red Hat has implemented this change as a result of Intel’s recommendations to align with the defaults used in the Linux community and to restore lost performance. However, note that using Retpolines in some cases may not fully mitigate Spectre V2. Intel’s Retpoline document [2] describes any cases of exposure. This document also states that the risk of an attack is low.
For use cases where complete Spectre V2 mitigation is desired, a user can select IBRS through the kernel boot line by adding the
spectre_v2=ibrs flag.
If one or more kernel modules were not built with Retpoline support, the whole kernel is considered vulnerable.

[1] "Close derivatives" are processors based on the same core as the 6th Generation Intel Core Processors, which Intel's Retpoline document refers to as "Skylake-generation".
[2] Retpoline: A Branch Target Injection Mitigation - White Paper
(BZ#1651806)
Intel® Omni-Path Architecture (OPA) Host Software
Intel Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 8.
Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment.
For instructions on installing Intel Omni-Path Architecture documentation, see:
NUMA supports more nodes in RHEL 8
With this update, the Non-Uniform Memory Access (NUMA) node count has been increased from 4 NUMA nodes to 8 NUMA nodes in Red Hat Enterprise Linux 8 on systems with the 64-bit ARM architecture.
(BZ#1550498)
IOMMU passthrough is now enabled by default in RHEL 8
The Input/Output Memory Management Unit (IOMMU) passthrough has been enabled by default. This provides improved performance for AMD systems because Direct Memory Access (DMA) remapping is disabled for the host. This update brings consistency with Intel systems where DMA remapping is also disabled by default. Users may disable such behavior (and enable DMA remapping) by specifying either
iommu.passthrough=off or
iommu=nopt parameters on the kernel command line, including the hypervisor.
(BZ#1658391)
RHEL8 kernel now supports 5-level page tables
Red Hat Enterprise Linux kernel now fully supports future Intel processors with up to 5 levels of page tables. This enables the processors to support up to 4PB of physical memory and 128PB of virtual address space. Applications that utilize large amounts of memory can now use as much memory as possible as provided by the system without the constraints of 4-level page tables.
(BZ#1623590)
RHEL8 kernel supports enhanced IBRS for future Intel CPUs
Red Hat Enterprise Linux kernel now supports the use of enhanced Indirect Branch Restricted Speculation (IBRS) capability to mitigate the Spectre V2 vulnerability. When enabled, IBRS will perform better than Retpolines (default) to mitigate Spectre V2 and will not interfere with Intel Control-flow Enforcement technology. As a result, the performance penalty of enabling the mitigation for Spectre V2 will be smaller on future Intel CPUs.
(BZ#1614144)
The bpftool utility for inspection and manipulation of eBPF-based programs and maps has been added.
(BZ#1559607)
The
kernel-rt sources have been updated
The
kernel-rt sources have been updated to use the latest RHEL kernel source tree. The latest kernel source tree is now using the upstream v4.18 realtime patch set, which provides a number of bug fixes and enhancements over the previous version.
(BZ#1592977)
5.1.4. Software management
YUM performance improvement and support for modular content
On Red Hat Enterprise Linux 8, installing software is ensured by the new version of the YUM tool, which is based on the DNF technology (YUM v4).
YUM v4 is compatible with YUM v3 when used from the command line, or when editing or creating configuration files.
For installing software, you can use the
yum command and its particular options in the same way as on RHEL 7.
Note that the legacy Python API provided by YUM v3 is no longer available. Users are advised to migrate their plug-ins and scripts to the new API provided by YUM v4 (DNF Python API), which is stable and fully supported. The DNF Python API is available at DNF API Reference.
The Libdnf and Hawkey APIs (both C and Python) are unstable, and will likely change during Red Hat Enterprise Linux 8 life cycle.
For more details on changes of YUM packages and tools availability, see Considerations in adopting RHEL 8.
Some of the YUM v3 features may behave differently in YUM v4. If any such change negatively impacts your workflows, please open a case with Red Hat Support, as described in How do I open and manage a support case on the Customer Portal?
(BZ#1581198)
Notable RPM features in RHEL 8
Red Hat Enterprise Linux 8 is distributed with RPM 4.14. This version introduces many enhancements over RPM 4.11, which is available in RHEL 7. The most notable features include:
- The
debuginfo packages can be installed in parallel
- Support for weak dependencies
- Support for rich or boolean dependencies
- Support for packaging files above 4 GB in size
- Support for file triggers
Also, the most notable changes include:
- Stricter spec-parser
- Simplified signature checking output in non-verbose mode
- Additions and deprecations in macros
(BZ#1581990)
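Much of RPM's behavior hinges on how it orders version strings: a version is split into alternating numeric and alphabetic segments, numeric segments compare as integers, and a numeric segment sorts newer than an alphabetic one. A simplified sketch of that comparison (the real rpmvercmp also handles tilde and caret segments, which are omitted here):

```python
import re

def rpmvercmp(a, b):
    # Split into numeric and alphabetic segments; separators only delimit.
    segs = lambda s: re.findall(r'\d+|[A-Za-z]+', s)
    sa, sb = segs(a), segs(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)            # "10" is newer than "9"
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # numeric beats alphabetic
        if x != y:
            return 1 if x > y else -1
    # The string with extra trailing segments is considered newer.
    return (len(sa) > len(sb)) - (len(sa) < len(sb))
```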
RPM now validates the entire package contents before starting an installation
On Red Hat Enterprise Linux 7, the RPM utility verified payload contents of individual files while unpacking. However, this is insufficient for multiple reasons:
- If the payload is damaged, it is only noticed after executing script actions, which are irreversible.
- If the payload is damaged, upgrade of a package aborts after replacing some files of the previous version, which breaks a working installation.
- The hashes on individual files are performed on uncompressed data, which makes RPM vulnerable to decompressor vulnerabilities.
On Red Hat Enterprise Linux 8, the entire package is validated prior to the installation in a separate step, using the best available hash.
Packages built on Red Hat Enterprise Linux 8 use a new SHA-256 hash on the compressed payload. On signed packages, the payload hash is additionally protected by the signature, and thus cannot be altered without breaking the signature and other hashes on the package header. Older packages use the MD5 hash of the header and payload unless it is disabled by configuration.
The %_pkgverify_level macro can be used to additionally enable enforcing signature verification before installation, or to disable the payload verification completely. In addition, the %_pkgverify_flags macro can be used to limit which hashes and signatures are allowed. For example, it is possible to disable the use of the weak MD5 hash at the cost of compatibility with older packages.
(JIRA:RHELPLAN-10596)
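The macros above can be set in a local macro file. A minimal sketch (the file name is a common convention, not mandated by this document):

```
# Hypothetical local macro file, for example /etc/rpm/macros.verify
# Require a valid signature, in addition to digests, before installation:
%_pkgverify_level signature
# %_pkgverify_flags can further limit which hashes and signatures are
# accepted; see the macros shipped with rpm for the supported values.
```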
5.1.5. Infrastructure services
Notable changes in the recommended Tuned profile in RHEL 8
With this update, the recommended Tuned profile (reported by the tuned-adm recommend command) is selected based on a set of rules, with the first matching rule taking effect.
(BZ#1565598)
Files produced by named can be written in the working directory
Previously, the named daemon stored some data in the working directory, which is read-only in Red Hat Enterprise Linux. With this update, the paths of selected files have been changed to subdirectories where writing is allowed. Now, the default Unix and SELinux directory permissions allow writing into these subdirectories. Files distributed inside the working directory are still read-only to named.
(BZ#1588592)
Geolite Databases have been replaced by Geolite2 Databases
Geolite Databases that were present in Red Hat Enterprise Linux 7 have been replaced by Geolite2 Databases in Red Hat Enterprise Linux 8.
Geolite Databases were provided by the GeoIP package. This package, together with the legacy database, is no longer supported upstream.
(JIRA:RHELPLAN-6746)
CUPS logs are handled by journald
In RHEL 8, the CUPS logs are no longer stored in specific files within the /var/log/cups directory, as they were in RHEL 7. In RHEL 8, all types of CUPS logs are centrally logged by the systemd journald daemon together with logs from other programs. To access the CUPS logs, use the journalctl -u cups command. For more information, see Working with CUPS logs.
(JIRA:RHELPLAN-12764)
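A couple of examples of reading those logs with standard journalctl filters (a sketch; the cups unit name comes from the text above, the time and priority values are arbitrary):

```
# All CUPS messages since the last boot
journalctl -u cups -b
# Only error-level CUPS messages from the last hour
journalctl -u cups -p err --since "1 hour ago"
```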
Notable BIND features in RHEL 8
(JIRA:RHELPLAN-1820)
5.1.6. Shells and command-line tools
The nobody user replaces nfsnobody
In Red Hat Enterprise Linux 7, there was:
- the nobody user and group pair with the ID of 99, and
- the nfsnobody user and group pair with the ID of 65534, which is also the default kernel overflow ID.
Both of these have been merged into the nobody user and group pair, which uses the 65534 ID in Red Hat Enterprise Linux 8. New installations no longer create the nfsnobody pair.
This change reduces the confusion about files that are owned by nobody but have nothing to do with NFS.
(BZ#1591969)
Version control systems in RHEL 8
RHEL 8 provides the following version control systems:
Git 2.18, a distributed revision control system with a decentralized architecture.
Mercurial 4.8, a lightweight distributed version control system, designed for efficient handling of large projects.
Subversion 1.10, a centralized version control system.
Note that the Concurrent Versions System (CVS) and Revision Control System (RCS), available in RHEL 7, are not distributed with RHEL 8.
(BZ#1693775)
Notable changes in Subversion 1.10
Subversion 1.10 introduces a number of new features since the version 1.7 distributed in RHEL 7, as well as the following compatibility changes:
- Due to incompatibilities in the Subversion libraries used for supporting language bindings, Python 3 bindings for Subversion 1.10 are unavailable. As a consequence, applications that require Python bindings for Subversion are unsupported.
- Repositories based on Berkeley DB are no longer supported. Before migrating, back up repositories created with Subversion 1.7 by using the svnadmin dump command. After installing RHEL 8, restore the repositories using the svnadmin load command.
- Existing working copies checked out by the Subversion 1.7 client in RHEL 7 must be upgraded to the new format before they can be used with Subversion 1.10. After installing RHEL 8, run the svn upgrade command in each working copy.
- Smartcard authentication for accessing repositories is no longer supported.
(BZ#1571415)
Notable changes in dstat
RHEL 8 is distributed with a new version of the dstat tool. This tool is now a part of the Performance Co-Pilot (PCP) toolkit. The /usr/bin/dstat file and the dstat package name are now provided by the pcp-system-tools package.
The new version of dstat introduces the following enhancements over the dstat available in RHEL 7:
- python3 support
- Historical analysis
- Remote host analysis
- Configuration file plugins
- New performance metrics
5.1.7. Dynamic programming languages, web and database servers
For details, see Using Python in Red Hat Enterprise Linux 8.
(BZ#1580387)
Python scripts must specify major version in hashbangs at RPM build time
In RHEL 8, Python scripts must specify the major Python version in their hashbangs, for example #!/usr/bin/python3 rather than the ambiguous #!/usr/bin/python, at RPM build time at the latest.
(BZ#1583620)
Node.js new in RHEL
Node.js, a software development platform for building fast and scalable network applications in the JavaScript programming language, is provided for the first time in RHEL. It was previously available only as a Software Collection. RHEL 8 provides Node.js 10.
(BZ#1622118)
Notable changes in Apache httpd
RHEL 8 is distributed with the Apache HTTP Server 2.4.37. This version introduces the following changes over the httpd available in RHEL 7:
- HTTP/2 support is now provided by the mod_http2 package, which is a part of the httpd module.
- For configuration details, see the relevant .conf(5) man page.
For more information about changes in httpd and its usage, see Setting up the Apache HTTP web server.
(BZ#1632754, BZ#1527084, BZ#1581178).
Database servers in RHEL 8
(BZ#1647908)
5.1.8. Desktop
GNOME Shell, version 3.28 in RHEL 8
GNOME Shell, version 3.28 is available in Red Hat Enterprise Linux (RHEL) 8. Notable enhancements include:
- New GNOME Boxes features
- New on-screen keyboard
- Extended device support, most significantly integration for the Thunderbolt 3 interface
- Improvements for GNOME Software, dconf-editor and GNOME Terminal
(BZ#1649404)
Wayland is the default display server
With Red Hat Enterprise Linux 8, the GNOME session and the GNOME Display Manager (GDM) use Wayland as their default display server instead of the X.org server, which was used with the previous major version of RHEL.
Wayland provides multiple advantages and improvements over X.org. Most notably:
- Stronger security model
- Improved multi-monitor handling
- Improved user interface (UI) scaling
- The desktop can control window handling directly.
Note that the following features are currently unavailable or do not work as expected:
- Multi-GPU setups are not supported under Wayland.
- The NVIDIA binary driver does not work under Wayland.
- The xrandr utility does not work under Wayland due to its different approach to handling resolutions, rotations, and layout. Note that other X.org utilities for manipulating the screen do not work under Wayland, either.
- Screen recording, remote desktop, and accessibility do not always work correctly under Wayland.
- No clipboard manager is available.
- Wayland ignores keyboard grabs issued by X11 applications, such as virtual machine viewers.
- Wayland inside guest virtual machines (VMs) has stability and performance problems, so it is recommended to use the X11 session for virtual environments.
If you upgrade to RHEL 8 from a RHEL 7 system where you used the X.org GNOME session, your system continues to use X.org. The system also automatically falls back to X.org when the following graphics drivers are in use:
- The NVIDIA binary driver
- The cirrus driver
- The mga driver
- The aspeed driver
You can disable the use of Wayland manually:
- To disable Wayland in GDM, set the WaylandEnable=false option in the /etc/gdm/custom.conf file.
- To disable Wayland in the GNOME session, select the legacy X11 option by using the cogwheel menu on the login screen after entering your login name.
For more details on Wayland, see the upstream Wayland documentation.
(BZ#1589678)
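For illustration, the GDM option belongs in the [daemon] section of the file (a sketch of /etc/gdm/custom.conf; other sections of the file are omitted):

```
# /etc/gdm/custom.conf
[daemon]
# Fall back to the X.org session for all users
WaylandEnable=false
```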
Locating RPM packages that are in repositories not enabled by default
Additional repositories for desktop packages are not enabled by default; this is indicated by the enabled=0 line in the corresponding .repo file. If you attempt to install a package from such a repository using PackageKit, PackageKit shows an error message announcing that the application is not available. To make the package available, replace the enabled=0 line in the respective .repo file with enabled=1.
(JIRA:RHELPLAN-2878)
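A sketch of the edit (the repository ID, name, and URL are hypothetical placeholders):

```
# /etc/yum.repos.d/example-desktop.repo
[example-desktop]
name=Example desktop repository
baseurl=https://cdn.example.com/rhel8/desktop
gpgcheck=1
# changed from enabled=0 to make the repository visible to PackageKit
enabled=1
```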
GNOME Software for package management
The gnome-packagekit package that provided a collection of tools for package management in a graphical environment on Red Hat Enterprise Linux 7 is no longer available. On Red Hat Enterprise Linux 8, similar functionality is provided by the GNOME Software utility, which enables you to install and update applications and gnome-shell extensions. GNOME Software is distributed in the gnome-software package.
(JIRA:RHELPLAN-3001)
Fractional scaling available for GNOME Shell on Wayland
On a GNOME Shell on Wayland session, the fractional scaling feature is available. The feature makes it possible to scale the GUI by fractions, which improves the appearance of scaled GUI on certain displays.
Note that the feature is currently considered experimental and is, therefore, disabled by default.
To enable fractional scaling, run the following command:
# gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
5.1.9. Hardware enablement
Firmware updates using fwupd are available
RHEL 8 supports firmware updates, such as UEFI capsule, Device Firmware Upgrade (DFU), and others, using the fwupd daemon. The daemon allows session software to update device firmware on a local machine automatically.
To view and apply updates, you can use:
- A GUI software manager, such as GNOME Software
- The fwupdmgr command-line tool
The metadata files are automatically downloaded from the Linux Vendor Firmware Service (LVFS) secure portal and submitted to fwupd over D-Bus. The updates that need to be applied are downloaded, and user notifications and update details are displayed. The user must explicitly agree to the firmware update action before the update is performed.
Note that access to LVFS is disabled by default.
To enable access to LVFS, either click the slider in the sources dialog in GNOME Software, or run the fwupdmgr enable-remote lvfs command. If you use fwupdmgr to get the list of updates, you will be asked whether you want to enable LVFS.
With access to LVFS, you will get firmware updates directly from the hardware vendor. Note that such updates have not been verified by Red Hat QA.
(BZ#1504934)
Memory Mode for Optane DC Persistent Memory technology is fully supported
Intel Optane DC Persistent Memory storage devices provide data center-class persistent memory technology, which can significantly increase transaction throughput.
To use the Memory Mode technology, your system does not require any special drivers or specific certification. Memory Mode is transparent to the operating system.
(BZ#1718422)
5.1.10. Identity Management
New password syntax checks in Directory Server
This enhancement adds new password syntax checks to Directory Server. Administrators can now, for example, enable dictionary checks and allow or deny the use of character sequences and palindromes. As a result, if enabled, the password policy syntax check in Directory Server enforces more secure passwords.
(BZ#1334254)
Directory Server now provides improved internal operations logging support
Several operations in Directory Server, initiated by the server and clients, cause additional operations in the background. Previously, the server logged only the Internal connection keyword for internal operations, and the operation ID was always set to -1. With this enhancement, Directory Server logs the real connection and operation ID. You can now trace the internal operation to the server or client operation that caused it.
(BZ#1358706)
The tomcatjss library supports OCSP checking using the responder from the AIA extension
With this enhancement, the tomcatjss library supports Online Certificate Status Protocol (OCSP) checking using the responder from the Authority Information Access (AIA) extension of a certificate. As a result, administrators of Red Hat Certificate System can now configure OCSP checking that uses the URL from the AIA extension.
(BZ#1636564)
The pki subsystem-cert-find and pki subsystem-cert-show commands now show the serial number of certificates
With this enhancement, the pki subsystem-cert-find and pki subsystem-cert-show commands in Certificate System show the serial number of certificates in their output. The serial number is an important piece of information and is often required by multiple other commands. As a result, identifying the serial number of a certificate is now easier.
(BZ#1566360)
The pki user and pki group commands have been deprecated in Certificate System
With this update, the new pki <subsystem>-user and pki <subsystem>-group commands replace the pki user and pki group commands in Certificate System. The replaced commands still work, but they display a message that the command is deprecated and refer to the new commands.
(BZ#1394069)
Certificate System now supports offline renewal of system certificates
With this enhancement, administrators can use the offline renewal feature to renew system certificates configured in Certificate System. When a system certificate expires, Certificate System fails to start. As a result of the enhancement, administrators no longer need workarounds to replace an expired system certificate.
(BZ#1656856, BZ#1652719)
SSSD now allows you to select one of the multiple Smartcard authentication devices
By default, the System Security Services Daemon (SSSD) tries to detect a device for Smartcard authentication automatically. With this update, you can select one of multiple connected Smartcard devices instead.
(BZ#1620123).
Active Directory users can now administer Identity Management
With this update, RHEL 8 allows adding a user ID override for an Active Directory (AD) user as a member of an Identity Management (IdM) group. An ID override is a record describing what the properties of a specific AD user should look like within IdM. Note that currently, selected features in IdM may still be unavailable to AD users.
(JIRA:RHELPLAN-10442).
Identity Management packages are available as a module
In RHEL 8, the packages necessary for installing an Identity Management (IdM) server and client are shipped as a module. The client stream is the default stream of the idm module, and you can download the packages necessary for installing the client without enabling the stream.
The IdM server module stream is called the DL1 stream. The stream contains multiple profiles corresponding to different types of IdM servers: server, dns, adtrust, client, and default. To download the packages in a specific profile of the DL1 stream:
- Enable the stream.
- Switch to the RPMs delivered through the stream.
- Run the yum module install idm:DL1/profile_name command.
To switch to a new module stream once you have already enabled a specific stream and downloaded packages from it:
- Remove all the relevant installed content and disable the current module stream.
- Enable the new module stream.
(JIRA:RHELPLAN-10438).
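The steps above could look as follows on the command line (a sketch; server is one of the profiles listed above, and yum distro-sync is one way to switch to the RPMs of the enabled stream):

```
# Enable the DL1 stream and install the 'server' profile
yum module enable idm:DL1
yum distro-sync
yum module install idm:DL1/server

# To switch streams later: remove the content, reset, then enable the new one
yum module remove idm:DL1
yum module reset idm
```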
authselect simplifies the configuration of user authentication
This update introduces the authselect utility that simplifies the configuration of user authentication on RHEL 8 hosts, replacing the authconfig utility. authselect comes with a safer approach to PAM stack management that makes the PAM configuration changes simpler for system administrators. authselect can be used to configure authentication methods such as passwords, certificates, smart cards, and fingerprint. Note that authselect does not configure services required to join remote domains. This task is performed by specialized tools, such as realmd or ipa-client-install.
(JIRA:RHELPLAN-10445)
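For example, a profile can be selected and inspected as follows (a sketch; sssd and with-smartcard are standard authselect profile and feature names, assumed here rather than taken from the text):

```
# Switch to the sssd profile with smart-card authentication enabled
authselect select sssd with-smartcard
# List available profiles and show the current configuration
authselect list
authselect current
```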
5.1.11. Compilers and development tools
Boost updated to version 1.66
The Boost C++ library has been updated to upstream version 1.66. The version of Boost included in Red Hat Enterprise Linux 7 is 1.53. For details, see the upstream changelogs.
This update introduces the following changes breaking compatibility with previous versions:
- The bs_set_hook() function, the splay_set_hook() function from splay containers, and the bool splay = true extra parameter in the splaytree_algorithms() function in the Intrusive library have been removed.
- Comments or string concatenation in JSON files are no longer supported by the parser in the Property Tree library.
- Some distributions and special functions from the Math library have been fixed to behave as documented and raise an overflow_error instead of returning the maximum finite value.
- Some headers from the Math library have been moved into the directory libs/math/include_private.
- Behavior of the basic_regex<>::mark_count() and basic_regex<>::subexpression(n) functions from the Regex library has been changed to match their documentation.
- Use of variadic templates in the Variant library may break metaprogramming functions.
- The boost::python::numeric API has been removed. Users can use boost::python::numpy instead.
- Arithmetic operations on pointers to non-object types are no longer provided in the Atomic library.
(BZ#1494495)
Unicode 11.0.0 support
The Red Hat Enterprise Linux core C library, glibc, has been updated to support the Unicode standard version 11.0.0. As a result, all wide character and multi-byte character APIs including transliteration and conversion between character sets provide accurate and correct information conforming to this standard.
(BZ#1512004)
The boost package is now independent of Python
With this update, installing the boost package no longer installs the Boost.Python library as a dependency. In order to use Boost.Python, you need to explicitly install the boost-python3 or boost-python3-devel packages.
(BZ#1616244)
A new compat-libgfortran-48 package is available
For compatibility with Red Hat Enterprise Linux 6 and 7 applications using the Fortran library, a new compat-libgfortran-48 compatibility package is now available, which provides the libgfortran.so.3 library.
(BZ#1607227)
Retpoline support in GCC
This update adds support for retpolines to GCC. A retpoline is a software construct used by the kernel to reduce overhead of mitigating Spectre Variant 2 attacks described in CVE-2017-5715.
(BZ#1535774)
Enhanced support for the 64-bit ARM architecture in toolchain components
Toolchain components, GCC and binutils, now provide extended support for the 64-bit ARM architecture. For example:
- GCC and binutils now support Scalable Vector Extension (SVE).
- Support for the FP16 data type, provided by ARM v8.2, has been added to GCC. The FP16 data type improves the performance of certain algorithms.
- Tools from binutils now support the ARM v8.3 architecture definition, including Pointer Authentication. The Pointer Authentication feature prevents malicious code from corrupting the normal execution of a program or the kernel by crafting its own function pointers. As a result, only trusted addresses are used when branching to different places in the code, which improves security.
(BZ#1504980, BZ#1550501, BZ#1504995, BZ#1504993, BZ#1504994)
Optimizations to glibc for IBM POWER systems
This update provides a new version of glibc that is optimized for both the IBM POWER 8 and IBM POWER 9 architectures. As a result, IBM POWER 8 and IBM POWER 9 systems now automatically switch to the appropriate, optimized glibc variant at run time.
(BZ#1376834)
GNU C Library updated to version 2.28
Red Hat Enterprise Linux 8 includes version 2.28 of the GNU C Library (glibc). Notable improvements include:
Security hardening features:
- Secure binary files marked with the AT_SECURE flag ignore the LD_LIBRARY_PATH environment variable.
- Backtraces are no longer printed for stack checking failures to speed up shutdown and avoid running more code in a compromised environment.
Performance improvements:
- Performance of the malloc() function has been improved with a thread local cache.
- Addition of the GLIBC_TUNABLES environment variable to alter library performance characteristics.
- Implementation of thread semaphores has been improved and new scalable pthread_rwlock_xxx() functions have been added.
- Performance of the math library has been improved.
- Support for Unicode 11.0.0 has been added.
- Improved support for 128-bit floating point numbers as defined by the ISO/IEC/IEEE 60559:2011, IEEE 754-2008, and ISO/IEC TS 18661-3:2015 standards has been added.
Domain Name Service (DNS) stub resolver improvements related to the /etc/resolv.conf configuration file:
- Configuration is automatically reloaded when the file is changed.
- Support for an arbitrary number of search domains has been added.
- Proper random selection for the rotate option has been added.
New features for development have been added, including:
- Linux wrapper functions for the preadv2 and pwritev2 kernel calls
- New functions including reallocarray() and explicit_bzero()
- New flags for the posix_spawnattr_setflags() function, such as POSIX_SPAWN_SETSID
(BZ#1512010, BZ#1504125, BZ#506398)
CMake available in RHEL
The CMake build system version 3.11 is available in Red Hat Enterprise Linux 8 as the cmake package.
(BZ#1590139, BZ#1502802)
make version 4.2.1
Red Hat Enterprise Linux 8 is distributed with the make build tool version 4.2.1. Notable changes include:
- When a recipe fails, the name of the makefile and line number of the recipe are shown.
- The --trace option has been added to enable tracing of targets. When this option is used, every recipe is printed before invocation even if it would be suppressed, together with the file name and line number where the recipe is located, and also with the prerequisites causing it to be invoked.
- Mixing explicit and implicit rules no longer causes make to terminate execution. Instead, a warning is printed. Note that this syntax is deprecated and may be completely removed in the future.
- The $(file …) function has been added to write text to a file. When called without a text argument, it only opens and immediately closes the file.
- A new option, --output-sync or -O, causes output from multiple jobs to be grouped per job and enables easier debugging of parallel builds.
- The --debug option now also accepts the n (none) flag to disable all currently enabled debugging settings.
- The != shell assignment operator has been added as an alternative to the $(shell …) function to increase compatibility with BSD makefiles. For more details and differences between the operator and the function, see the GNU make manual.
Note that as a consequence, variables with a name ending in an exclamation mark and immediately followed by an assignment, such as variable!=value, are now interpreted as the new syntax. To restore the previous behavior, add a space after the exclamation mark, such as variable! =value.
- The ::= assignment operator defined by the POSIX standard has been added.
- When the .POSIX variable is specified, make observes the POSIX standard requirements for handling backslashes and new lines. In this mode, any trailing space before the backslash is preserved, and each backslash followed by a new line and white space characters is converted to a single space character.
- Behavior of the MAKEFLAGS and MFLAGS variables is now more precisely defined.
- A new variable, GNUMAKEFLAGS, is parsed for make flags identically to MAKEFLAGS. As a consequence, GNU make-specific flags can be stored outside MAKEFLAGS and portability of makefiles is increased.
- A new variable, MAKE_HOST, containing the host architecture has been added.
- The new variables, MAKE_TERMOUT and MAKE_TERMERR, indicate whether make is writing standard output and error to a terminal.
- Setting the -r and -R options in the MAKEFLAGS variable inside a makefile now works correctly and removes all built-in rules and variables, respectively.
- The .RECIPEPREFIX setting is now remembered per recipe. Additionally, variables expanded in that recipe also use that recipe prefix setting.
- The .RECIPEPREFIX setting and all target-specific variables are displayed in the output of the -p option as if in a makefile, instead of as comments.
(BZ#1641015)
SystemTap version 4.0
Red Hat Enterprise Linux 8 is distributed with the SystemTap instrumentation tool version 4.0. Notable improvements include:
- The extended Berkeley Packet Filter (eBPF) backend has been improved.
(BZ#1641032)
Improvements in binutils version 2.30
Red Hat Enterprise Linux 8 includes version 2.30 of the binutils package. Notable improvements include:
- Support for new IBM Z architecture extensions has been improved.
Linkers:
- The linker now puts code and read-only data into separate segments by default. As a result, the created executable files are bigger but safer to run, because the dynamic loader can disable execution of any memory page containing read-only data.
- Support for GNU Property notes, which provide hints to the dynamic loader about the binary file, has been added.
- Previously, the linker generated invalid executable code for the Intel Indirect Branch Tracking (IBT) technology. As a consequence, the generated executable files could not start. This bug has been fixed.
- Previously, the gold linker merged property notes improperly. As a consequence, wrong hardware features could be enabled in the generated code, and the code could terminate unexpectedly. This bug has been fixed.
- Previously, the gold linker created note sections with padding bytes at the end to achieve alignment according to architecture. Because the dynamic loader did not expect the padding, it could unexpectedly terminate the program it was loading. This bug has been fixed.
Other tools:
- The readelf and objdump tools now have options to follow links into separate debug information files and display the information in them, too.
- The new --inlines option extends the existing --line-numbers option of the objdump tool to display nesting information for inlined functions.
- The nm tool gained a new option, --with-version-strings, to display version information of a symbol after its name, if present.
- Support for the ARMv8-R architecture and Cortex-R52, Cortex-M23, and Cortex-M33 processors has been added to the assembler.
(BZ#1641004, BZ#1637072, BZ#1501420, BZ#1504114, BZ#1614908, BZ#1614920)
Performance Co-Pilot version 4.3.0
Red Hat Enterprise Linux 8 is distributed with Performance Co-Pilot (PCP) version 4.3.0.
(BZ#1641034)
Memory Protection Keys
This update enables hardware features which allow per-thread page protection flag changes. The new glibc system call wrappers have been added for the pkey_alloc(), pkey_free(), and pkey_mprotect() functions. In addition, the pkey_set() and pkey_get() functions have been added to allow access to the per-thread protection flags.
(BZ#1304448)
GCC now defaults to z13 on IBM Z
With this update, by default GCC on the IBM Z architecture builds code for the z13 processor, and the code is tuned for the z14 processor. This is equivalent to using the
-march=z13 and
-mtune=z14 options. Users can override this default by explicitly using options for target architecture and tuning.
(BZ#1571124)
elfutils updated to version 0.174
In Red Hat Enterprise Linux 8, the elfutils package is available in version 0.174. Notable changes include:
- Previously, the eu-readelf tool could show a variable with a negative value as if it had a large unsigned value, or show a large unsigned value as a negative value. This has been corrected, and eu-readelf now looks up the size and signedness of constant value types to display them correctly.
- A new function, dwarf_next_lines(), for reading .debug_line data lacking CU has been added to the libdw library. This function can be used as an alternative to the dwarf_getsrclines() and dwarf_getsrcfiles() functions.
- Previously, files with more than 65280 sections could cause errors in the libelf and libdw libraries and all tools using them. This bug has been fixed. As a result, extended shnum and shstrndx values in ELF file headers are handled correctly.
(BZ#1641007)
Valgrind updated to version 3.14
Red Hat Enterprise Linux 8 is distributed with the Valgrind executable code analysis tool version 3.14. Notable changes include:
- A new --keep-debuginfo option has been added to enable retention of debug info for unloaded code. As a result, saved stack traces can include file and line information for code that is no longer present in memory.
- Suppressions based on source file name and line number have been added.
- The Helgrind tool has been extended with the --delta-stacktrace option to specify computation of full history stack traces. Notably, using this option together with --history-level=full can improve Helgrind performance by up to 25%.
- The false positive rate in the Memcheck tool for optimised code on the Intel and AMD 64-bit architectures and the ARM 64-bit architecture has been reduced. Note that you can use the --expensive-definedness-checks option to control the handling of definedness checks and improve the rate at the expense of performance.
- Valgrind can now recognize more instructions of the little-endian variant of IBM Power Systems.
- Valgrind can now process most of the integer and string vector instructions of the IBM Z architecture z13 processor.
For more information about the new options and their known limitations, see the valgrind(1) manual page.
(BZ#1641029, BZ#1501419)
GDB version 8.2
Red Hat Enterprise Linux 8 is distributed with the GDB debugger version 8.2. Notable changes include:
- The IPv6 protocol is supported for remote debugging with GDB and gdbserver.
- Debugging without debug information has been improved.
- Symbol completion in the GDB user interface has been improved to offer better suggestions by using more syntactic constructions such as ABI tags or namespaces.
- Commands can now be executed in the background.
- Debugging programs created in the Rust programming language is now possible.
- Debugging the C and C++ languages has been improved with parser support for the _Alignof and alignof operators, C++ rvalue references, and C99 variable-length automatic arrays.
- GDB extension scripts can now use the Guile scripting language.
- The Python scripting language interface for extensions has been improved with new API functions, frame decorators, filters, and unwinders. Additionally, scripts in the .debug_gdb_scripts section of GDB configuration are loaded automatically.
- GDB now uses Python version 3 to run its scripts, including pretty printers, frame decorators, filters, and unwinders.
- The ARM and 64-bit ARM architectures have been improved with process execution record and replay, including Thumb 32-bit and system call instructions.
- GDB now supports the Scalable Vector Extension (SVE) on the 64-bit ARM architecture.
- Support for Intel PKU register and Intel Processor Trace has been added.
- Record and replay functionality has been extended to include the rdrand and rdseed instructions on Intel-based systems.
- Functionality of GDB on the IBM Z architecture has been extended with support for tracepoints and fast tracepoints, vector registers and ABI, and the Catch system call. Additionally, GDB now supports more recent instructions of the architecture.
- GDB can now use the SystemTap static user space probes (SDT) on the 64-bit ARM architecture.
(BZ#1641022, BZ#1497096, BZ#1505346, BZ#1592332, BZ#1550502)
glibc localization for RHEL is distributed in multiple packages
In RHEL 8, glibc locales and translations are no longer provided by the single glibc-common package. Instead, every locale and language is available in a glibc-langpack-CODE package. Additionally, in most cases not all locales are installed by default; only those selected in the installer are. Users must install any further locale packages that they need separately, or they can install glibc-all-langpacks to get the locales archive containing all the glibc locales, as before.
For more information, see Using langpacks.
(BZ#1512009)
GCC version 8.2
In Red Hat Enterprise Linux 8, the GCC toolchain is based on the GCC 8.2 release series. Notable changes include:
- Numerous general optimizations have been added, such as alias analysis, vectorizer improvements, identical code folding, inter-procedural analysis, store merging optimization pass, and others.
- The Address Sanitizer has been improved. The Leak Sanitizer and Undefined Behavior Sanitizer have been added.
- Debug information can now be produced in the DWARF5 format. This capability is experimental.
- The source code coverage analysis tool GCOV has been extended with various improvements.
- New warnings and improved diagnostics have been added for static detection of more programming errors.
- GCC has been extended to provide tools to ensure additional hardening of the generated code. Improvements related to security include built-ins for overflow checking, additional protection against stack clash, checking target addresses of control-flow instructions, warnings for bounded string manipulation functions, and warnings to detect out-of-bounds array indices.
Improvements to architecture and processor support include:
- Multiple new architecture-specific options for the Intel AVX-512 architecture, a number of its microarchitectures, and Intel Software Guard Extensions (SGX) have been added.
- Code generation can now target the 64-bit ARM architecture LSE extensions, ARMv8.2-A 16-bit Floating-Point Extensions (FPE), and ARMv8.2-A, ARMv8.3-A, and ARMv8.4-A architecture versions.
- Support for the z13 and z14 processors of the IBM Z architecture has been added.
Notable changes related to languages and standards include:
- The default standard used when compiling code in the C language has changed to C17 with GNU extensions.
- The default standard used when compiling code in the C++ language has changed to C++14 with GNU extensions.
- The C++ runtime library now supports the C++11 and C++14 standards.
- The C++ compiler now implements the C++14 standard.
- Support for the C language standard C11 has been improved.
- The new __auto_type GNU C extension provides a subset of the functionality of the C++11 auto keyword in the C language.
- The _FloatN and _FloatNx type names specified by the ISO/IEC TS 18661-3:2015 standard are now recognized by the C front end.
- Passing an empty class as an argument now takes up no space on the Intel 64 and AMD64 architectures, as required by the platform ABI.
- The value returned by the C++11 alignof operator has been corrected to match the C _Alignof operator and return the minimum alignment. To find the preferred alignment, use the GNU extension __alignof__.
- The main version of the libgfortran library for Fortran language code has been changed to 5.
- Support for the Ada (GNAT), GCC Go, and Objective C/C++ languages has been removed. Use the Go Toolset for Go code development.
(JIRA:RHELPLAN-7437, BZ#1512593, BZ#1512378)
The Go cryptographic library FIPS mode now honors system settings
Previously, the Go standard cryptographic library always used its FIPS mode unless it was explicitly disabled at build time of the application using the library. As a consequence, users of Go-based applications could not control whether the FIPS mode was used. With this change, the library does not default to FIPS mode when the system is not configured in FIPS mode. As a result, users of Go-based applications on RHEL systems have more control over the use of the FIPS mode of the Go cryptographic library.
(BZ#1633351)
strace updated to version 4.24
Red Hat Enterprise Linux 8 is distributed with the strace tool version 4.24. Notable changes include:
- System call tampering features have been added with the -e inject= option. This includes injection of errors, return values, delays, and signals.
System call qualification syntax has been improved:
- The -e trace=/regex option has been added to filter system calls with regular expressions.
- Prepending a question mark to a system call qualification in the -e trace= option lets strace continue even if the qualification does not match any system call.
- Personality designation has been added to system call qualifications in the -e trace option.
- Decoding of kvm vcpu exit reasons has been added. To do so, use the -e kvm=vcpu option.
- The libdw library from elfutils is now used for stack unwinding when the -k option is used. Additionally, symbol demangling is performed using the libiberty library.
- Previously, the -r option caused strace to ignore the -t option. This has been fixed, and the two options are now independent.
- The -A option has been added for opening output files in append mode.
- The -X option has been added for configuring xlat output formatting.
- Decoding of socket addresses with the -yy option has been improved. Additionally, block and character device number printing in -yy mode has been added.
- It is now possible to trace both 64-bit and 32-bit binaries with a single strace tool on the IBM Z architecture. As a consequence, the separate strace32 package no longer exists in RHEL 8.
Additionally, decoding of the following items has been added, improved, or updated:
- netlink protocols, messages, and attributes
- arch_prctl, bpf, getsockopt, io_pgetevent, keyctl, prctl, pkey_alloc, pkey_free, pkey_mprotect, ptrace, rseq, setsockopt, socket, statx, and other system calls
- Multiple commands for the ioctl system call
- Constants of various types
- Path tracing for the execveat, inotify_add_watch, inotify_init, symlink, and symlinkat system calls, and for mmap system calls with indirect arguments
- Lists of signal codes
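For example, the new qualification and tampering syntax can be used as follows (a sketch; ls is an arbitrary traced program and the injected error is illustrative):

```shell
# trace only system calls whose names match a regular expression
strace -e trace=/^open ls

# make every openat call of the traced program fail with ENOENT
strace -e inject=openat:error=ENOENT ls
```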
(BZ#1641014)
Compiler toolsets in RHEL 8
RHEL 8.0 provides the following compiler toolsets as Application Streams:
- Clang and LLVM Toolset 7.0.1, which provides the LLVM compiler infrastructure framework, the Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code analysis. See the Using Clang and LLVM Toolset document.
- Rust Toolset 1.31, which provides the Rust programming language compiler rustc, the cargo build tool and dependency manager, the cargo-vendor plugin, and required libraries. See the Using Rust Toolset document.
- Go Toolset 1.11.5, which provides the Go programming language tools and libraries. Go is alternatively known as golang. See the Using Go Toolset document.
(BZ#1695698, BZ#1613515, BZ#1613516, BZ#1613518).
(BZ#1699535)
C++ ABI change in std::string and std::list
The Application Binary Interface (ABI) of the std::string and std::list classes from the libstdc++ library changed between RHEL 7 (GCC 4.8) and RHEL 8 (GCC 8) to conform to the C++11 standard. The libstdc++ library supports both the old and new ABI, but some other C++ system libraries do not. As a consequence, applications that dynamically link against these libraries will need to be rebuilt. This affects all C++ standard modes, including C++98. It also affects applications built with Red Hat Developer Toolset compilers for RHEL 7, which kept the old ABI to maintain compatibility with the system libraries.
(BZ#1704867)
5.1.12. File systems and storage
XFS now supports shared copy-on-write data extents
The XFS file system supports shared copy-on-write data extent functionality. This feature enables two or more files to share a common set of data blocks. When either of the files sharing common blocks changes, XFS breaks the link to common blocks and creates a new file. This is similar to the copy-on-write (COW) functionality found in other file systems.
Shared copy-on-write data extents are:
- Fast: creating shared copies does not utilize disk I/O.
- Space-efficient: shared blocks do not consume additional disk space.
- Transparent: files sharing common blocks act like regular files.
Userspace utilities can use shared copy-on-write data extents for:
- Efficient file cloning, such as with the cp --reflink command
- Per-file snapshots
This functionality is also used by kernel subsystems such as Overlayfs and NFS for more efficient operation.
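The cp --reflink behavior can be sketched as follows; with --reflink=auto, cp attempts a lightweight reflink clone and falls back to a regular copy on file systems without shared copy-on-write extents, so the commands are safe to run anywhere:

```shell
# create a file and clone it
echo "shared data" > original.txt
cp --reflink=auto original.txt clone.txt

# both files have identical contents
cmp original.txt clone.txt && echo "files match"
```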
Shared copy-on-write data extents are now enabled by default when creating an XFS file system, starting with the xfsprogs package version 4.17.0-2.el8.
Note that Direct Access (DAX) devices currently do not support XFS with shared copy-on-write data extents. To create an XFS file system without this feature, use the following command:
# mkfs.xfs -m reflink=0 block-device
Red Hat Enterprise Linux 7 can mount XFS file systems with shared copy-on-write data extents only in the read-only mode.
(BZ#1494028)
Maximum XFS file system size is 1024 TiB
The maximum supported size of an XFS file system has been increased from 500 TiB to 1024 TiB.
File systems larger than 500 TiB require that:
- the metadata CRC feature and the free inode btree feature are both enabled in the file system format, and
- the allocation group size is at least 512 GiB.
In RHEL 8, the mkfs.xfs utility creates file systems that meet these requirements by default.
Growing a smaller file system that does not meet these requirements to a new size greater than 500 TiB is not supported.
(BZ#1563617)
ext4 file system now supports metadata checksums
With this update, ext4 metadata is protected by checksums. This enables the file system to recognize corrupt metadata, which avoids damage and increases file system resilience.
VDO now supports all architectures
Virtual Data Optimizer (VDO) is now available on all of the architectures supported by RHEL 8.
For the list of supported architectures, see Chapter 2, Architectures.
(BZ#1534087)
(BZ#1649582)
(BZ#1564540)
- kdump is not supported with NVMe/FC.
- Booting from Storage Area Network (SAN) NVMe/FC is not supported.
(BZ#1649497)
(BZ#1643294)
- Extension of Anaconda with commands for handling NVDIMM devices.
- The ability of the grub2, efibootmgr, and efivar system components to handle and boot from NVDIMM devices.
(BZ#1499442)
(BZ#1643550)
Multiqueue scheduling on block devices
Block devices now use multiqueue scheduling in Red Hat Enterprise Linux 8. This enables the block layer performance to scale well with fast solid-state drives (SSDs) and multi-core systems.
The traditional schedulers, which were available in RHEL 7 and earlier versions, have been removed. RHEL 8 supports only multiqueue schedulers.
(BZ#1647612)
5.1.13. High availability and clusters
New pcs commands to list available watchdog devices and test watchdog devices
In order to configure SBD with Pacemaker, a functioning watchdog device is required. This release supports the pcs stonith sbd watchdog list command to list available watchdog devices on the local node, and the pcs stonith sbd watchdog test command to test a watchdog device. For information on the sbd command-line tool, see the sbd(8) man page.
(BZ#1578891)
The pcs command now supports filtering resource failures by an operation and its interval
Pacemaker now tracks resource failures per resource operation, in addition to the resource name and node. The pcs resource failcount show command now allows filtering failures by resource, node, operation, and interval. It provides an option to display failures aggregated per resource and node, or detailed per resource, node, operation, and interval. Additionally, the pcs resource cleanup command now allows filtering failures by resource, node, operation, and interval.
(BZ#1591308)
Timestamps enabled in corosync log
The corosync log did not previously contain timestamps, which made it difficult to relate it to logs from other nodes and daemons. With this release, timestamps are present in the corosync log.
(BZ#1615420)
New formats for pcs cluster setup, pcs cluster node add, and pcs cluster node remove commands
In Red Hat Enterprise Linux 8, pcs fully supports Corosync 3, knet, and node names.
(BZ#1158816)
New pcs commands
Red Hat Enterprise Linux 8 introduces the following new commands.
- RHEL 8 introduces a new command, pcs cluster node add-guest | remove-guest, which replaces the pcs cluster remote-node add | remove command in RHEL 7.
- RHEL 8 introduces a new command, pcs quorum unblock, which replaces the pcs cluster quorum unblock command in RHEL 7.
- The pcs resource failcount reset command has been removed as it duplicates the functionality of the pcs resource cleanup command.
RHEL 8 introduces new commands which replace the pcs resource [show] command in RHEL 7:
- The pcs resource [status] command in RHEL 8 replaces the pcs resource [show] command in RHEL 7.
- The pcs resource config command in RHEL 8 replaces the pcs resource [show] --full command in RHEL 7.
- The pcs resource config <resource id> command in RHEL 8 replaces the pcs resource show <resource id> command in RHEL 7.
RHEL 8 introduces new commands which replace the pcs stonith [show] command in RHEL 7:
- The pcs stonith [status] command in RHEL 8 replaces the pcs stonith [show] command in RHEL 7.
- The pcs stonith config command in RHEL 8 replaces the pcs stonith [show] --full command in RHEL 7.
- The pcs stonith config <resource id> command in RHEL 8 replaces the pcs stonith show <resource id> command in RHEL 7.
(BZ#1654280)
Pacemaker 2.0.0 in RHEL 8
The pacemaker packages have been upgraded to the upstream version of Pacemaker 2.0.0, which provides a number of bug fixes and enhancements over the previous version:
- The Pacemaker detail log is now /var/log/pacemaker/pacemaker.log by default (not directly in /var/log or combined with the corosync log under /var/log/cluster).
- The Pacemaker daemon processes have been renamed to make reading the logs more intuitive. For example, pengine has been renamed to pacemaker-schedulerd.
- Support for the deprecated default-resource-stickiness and is-managed-default cluster properties has been dropped. The resource-stickiness and is-managed properties should be set in resource defaults instead. Existing configurations (though not newly created ones) with the deprecated syntax will automatically be updated to use the supported syntax.
- For a more complete list of changes, see Pacemaker 2.0 upgrade in Red Hat Enterprise Linux 8.
It is recommended that users who are upgrading an existing cluster using Red Hat Enterprise Linux 7 or earlier run pcs cluster cib-upgrade on any cluster node before and after upgrading RHEL on all cluster nodes.
(BZ#1542288)
(BZ#1549535)
The pcs commands now support display, cleanup, and synchronization of fencing history
Pacemaker's fence daemon tracks a history of all fence actions taken (pending, successful, and failed). With this release, the pcs commands allow users to access the fencing history in the following ways:
- The pcs status command shows failed and pending fencing actions
- The pcs status --full command shows the entire fencing history
- The pcs stonith history command provides options to display and clean up fencing history
- Although fencing history is synchronized automatically, the pcs stonith history command now supports an update option that allows a user to manually synchronize fencing history should that be necessary
(BZ#1620190, BZ#1615891)
5.1.14. Networking
(BZ#1644030)
(BZ#1562998)
firewalld uses nftables by default
With this update, the nftables filtering subsystem is the default firewall backend for the firewalld daemon. To change the backend, use the FirewallBackend option in the /etc/firewalld/firewalld.conf file.
This change introduces the following differences in behavior when using nftables:
- iptables rule executions always occur before firewalld rules
- DROP in iptables means a packet is never seen by firewalld
- ACCEPT in iptables means a packet is still subject to firewalld rules
- firewalld direct rules are still implemented through iptables while other firewalld features use nftables
- Direct rule execution occurs before firewalld generic acceptance of established connections
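The backend selection is a single option in the main configuration file, for example:

```
# /etc/firewalld/firewalld.conf
# "nftables" is the RHEL 8 default; set to "iptables" to switch back
FirewallBackend=nftables
```

Restart the firewalld service for a backend change to take effect.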
(BZ#1509026)
Notable change in wpa_supplicant in RHEL 8.
(BZ#1582538)
NetworkManager now supports SR-IOV virtual functions
In Red Hat Enterprise Linux 8.0, NetworkManager supports configuring SR-IOV virtual functions.
(BZ#1555013)
IPVLAN virtual network drivers are now supported
In Red Hat Enterprise Linux 8.0, the kernel includes support for IPVLAN virtual network drivers. With this update, IPVLAN virtual Network Interface Cards (NICs) enable network connectivity for multiple containers while exposing a single MAC address to the local network. This allows a single host to run many containers, overcoming possible limitations on the number of MAC addresses supported by the peer networking equipment.
(BZ#1261167)
(BZ#1555012)
Improvements in the networking stack 4.18
Red Hat Enterprise Linux 8.0 includes the networking stack upgraded to upstream version 4.18, which provides several bug fixes and enhancements. Notable changes include:
- Introduced new offload features, such as UDP_GSO and, for some device drivers, GRO_HW.
- Significantly improved scalability for the User Datagram Protocol (UDP).
- Improved the generic busy polling code.
- Improved scalability for the IPv6 protocol.
- Improved scalability for the routing code.
- Added a new default transmit queue scheduling algorithm, fq_codel, which improves transmission delay.
- Improved scalability for some transmit queue scheduling algorithms. For example, pfifo_fast is now lockless.
- Improved scalability of the IP reassembly unit by removing the garbage collection kernel thread; IP fragments now expire only on timeout. As a result, CPU usage under DoS is much lower, and the maximum sustainable fragment drop rate is limited by the amount of memory configured for the IP reassembly unit.
(BZ#1562987)
(BZ#1564596)
New features added to VPN using NetworkManager
In Red Hat Enterprise Linux 8.0, NetworkManager provides the following new features to VPN:
- Support for the Internet Key Exchange version 2 (IKEv2) protocol.
- Added more Libreswan options, such as the rightid, leftcert, narrowing, rekey, and fragmentation options. For more details on the supported options, see the nm-settings-libreswan man page.
- Updated the default ciphers. This means that when the user does not specify the ciphers, the NetworkManager-libreswan plugin allows the Libreswan application to choose the system default cipher. The only exception is when the user selects an IKEv1 aggressive mode configuration. In this case, the ike = aes256-sha1;modp1536 and esp = aes256-sha1 values are passed to Libreswan.
(BZ#1557035)
(BZ#1273139)
(BZ#1335409)
(BZ#1515987)
lksctp-tools, version 1.0.18 in RHEL 8
The lksctp-tools package, version 1.0.18, is available in Red Hat Enterprise Linux (RHEL) 8. Notable enhancements and bug fixes include:
- Integration with Travis CI and Coverity Scan
- Support for the sctp_peeloff_flags function
- Indication of which kernel features are available
- Fixes on Coverity Scan issues
(BZ#1568622)
Blacklisting SCTP module by default in RHEL 8
To increase security, a set of kernel modules have been moved to the
kernel-modules-extra package. These are not installed by default. As a consequence, non-root users cannot load these components as they are blacklisted by default. To use one of these kernel modules, the system administrator must install
kernel-modules-extra and explicitly remove the module blacklist. As a result, non-root users will be able to load the software component automatically.
(BZ#1642795)
Notable changes in driverctl 0.101
Red Hat Enterprise Linux 8.0 is distributed with driverctl 0.101. This version includes the following bug fixes:
- The shellcheck warnings have been fixed.
- The bash-completion file is installed as driverctl instead of driverctl-bash-completion.sh.
- The load_override function for non-PCI buses has been fixed.
- The driverctl service loads all overrides before it reaches the basic.target systemd target.
(BZ#1648411)
Added rich rules priorities to firewalld
The priority option has been added to rich rules. This allows users to define the desired order of rule execution and provides more advanced control over rich rules.
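For example, a rich rule with an explicit priority might look like this (a sketch; the source network is a documentation example address range):

```shell
# rules with lower priority values are executed first
firewall-cmd --add-rich-rule='rule priority="-10" family="ipv4" source address="192.0.2.0/24" drop'
```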
(BZ#1648497)
NVMe over RDMA is supported in RHEL 8
In Red Hat Enterprise Linux (RHEL) 8, Nonvolatile Memory Express (NVMe) over Remote Direct Memory Access (RDMA) supports Infiniband, RoCEv2, and iWARP only in initiator mode.
Note that Multipath is supported in failover mode only.
Additional restrictions:
- Kdump is not supported with NVMe/RDMA.
- Booting from NVMe device over RDMA is not supported.
The nf_tables back end does not support debugging using dmesg
Red Hat Enterprise Linux 8.0 uses the nf_tables back end for firewalls, which does not support debugging the firewall using the output of the dmesg utility. To debug firewall rules, use the xtables-monitor -t or nft monitor trace commands to decode rule evaluation events.
(BZ#1645744)
Red Hat Enterprise Linux supports VRF
The kernel in RHEL 8.0 supports virtual routing and forwarding (VRF). VRF devices, combined with rules set using the
ip utility, enable administrators to create VRF domains in the Linux network stack. These domains isolate the traffic on layer 3 and, therefore, the administrator can create different routing tables and reuse the same IP addresses within different VRF domains on one host.
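A VRF domain can be sketched with the ip utility as follows (requires root; vrf-blue, table 10, and eth1 are illustrative names):

```shell
# create a VRF device bound to routing table 10
ip link add vrf-blue type vrf table 10
ip link set dev vrf-blue up

# move an interface into the VRF domain
ip link set dev eth1 master vrf-blue
```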
(BZ#1440031)
iproute, version 4.18 in RHEL 8
The iproute package version 4.18 is distributed in Red Hat Enterprise Linux (RHEL) 8. The most notable change is that interface aliases of the form ethX:Y, such as eth0:1, are no longer supported. To work around this problem, remove the alias suffix, which is the colon and the number that follows it, before entering ip link show.
(BZ#1589317)
5.1.15. Security
SWID tag of the RHEL 8.0 release
To enable identification of RHEL 8.0 installations using the ISO/IEC 19770-2:2015 mechanism, software identification (SWID) tags are installed in the files /usr/lib/swidtag/redhat.com/com.redhat.RHEL-8-<architecture>.swidtag and /usr/lib/swidtag/redhat.com/com.redhat.RHEL-8.0-<architecture>.swidtag. The parent directory of these tags can also be found by following the /etc/swid/swidtags.d/redhat.com symbolic link.
The XML signature of the SWID tag files can be verified using the
xmlsec1 verify command, for example:
xmlsec1 verify --trusted-pem /etc/pki/swid/CA/redhat.com/redhatcodesignca.cert /usr/share/redhat.com/com.redhat.RHEL-8-x86_64.swidtag
The certificate of the code signing certification authority can also be obtained from the Product Signing Keys page on the Customer Portal.
(BZ#1636338)
System-wide cryptographic policies are applied by default
Crypto-policies is a component in Red Hat Enterprise Linux 8 that configures the core cryptographic subsystems, covering the TLS, IPsec, DNSSEC, Kerberos, and SSH protocols.
(BZ#1591620)
(BZ#1622511)
The automatic OpenSSH server key generation is now handled by sshd-keygen@.service
OpenSSH creates RSA, ECDSA, and ED25519 server host keys automatically if they are missing. To configure host key creation in RHEL 8, use the sshd-keygen@.service instantiated service.
For example, to disable the automatic creation of the RSA key type:
# systemctl mask sshd-keygen@rsa.service
See the /etc/sysconfig/sshd file for more information.
(BZ#1228088)
ECDSA keys are supported for SSH authentication
This release of the
OpenSSH suite introduces support for ECDSA keys stored on PKCS #11 smart cards. As a result, users can now use both RSA and ECDSA keys for SSH authentication.
(BZ#1645038)
libssh implements SSH as a core cryptographic component
This change introduces
libssh as a core cryptographic component in Red Hat Enterprise Linux 8. The
libssh library implements the Secure Shell (SSH) protocol.
Note that the client side of
libssh follows the configuration set for
OpenSSH through system-wide crypto policies, but the configuration of the server side cannot be changed through system-wide crypto policies.
(BZ#1485241)
(BZ#1516728)
(BZ#1489094)
PKCS #11 support for smart cards and HSMs is now consistent across the system
With this update, using smart cards and Hardware Security Modules (HSM) with PKCS #11 cryptographic token interface becomes consistent. This means that the user and the administrator can use the same syntax for all related tools in the system. Notable enhancements include:
- Support for the PKCS #11 Uniform Resource Identifier (URI) scheme that ensures a simplified enablement of tokens on RHEL servers both for administrators and application writers.
- A system-wide registration method for smart cards and HSMs using the pkcs11.conf file.
- Consistent support for HSMs and smart cards is available in NSS, GnuTLS, and OpenSSL (through the openssl-pkcs11 engine) applications.
- The Apache HTTP server (httpd) now seamlessly supports HSMs.
For more information, see the pkcs11.conf(5) man page.
(BZ#1516741)
Firefox now works with system-wide registered PKCS #11 drivers
The Firefox web browser automatically loads the p11-kit-proxy module, and every smart card that is registered system-wide in p11-kit through the pkcs11.conf file is automatically detected. For TLS client authentication, no additional setup is required, and keys from a smart card are automatically used when a server requests them.
(BZ#1595638)
RSA-PSS is now supported in OpenSC
This update adds support for the RSA-PSS cryptographic signature scheme to the OpenSC smart card driver. The new scheme enables a secure cryptographic algorithm required for TLS 1.3 support in the client software.
(BZ#1595626)
Notable changes in Libreswan in RHEL 8
The libreswan packages have been upgraded to upstream version 3.27, which provides many bug fixes and enhancements over the previous versions. Most notable changes include:
- Support has been added for the IKEv2 protocol: RSA-PSS (RFC 7427) through authby=rsa-sha2, ECDSA (RFC 7427) through authby=ecdsa-sha2, CURVE25519 through the dh31 keyword, and CHACHA20-POLY1305 for IKE and ESP through the chacha20_poly1305 encryption keyword.
- Support for the alternative KLIPS kernel module has been removed from Libreswan, as upstream has deprecated KLIPS entirely.
- The Diffie-Hellman groups DH22, DH23, and DH24 are no longer supported (as per RFC 8247).
Note that authby=rsasig has been changed to always use the RSA v1.5 method, and the authby=rsa-sha2 option uses the RSASSA-PSS method. The authby=rsa-sha1 option is not valid as per RFC 8247; for that reason, Libreswan no longer supports SHA-1 with digital signatures.
(BZ#1566574)
System-wide cryptographic policies change the default IKE version in Libreswan to IKEv2
The default IKE version in the Libreswan IPsec implementation has been changed from IKEv1 (RFC 2409) to IKEv2 (RFC 7296). The default IKE and ESP/AH algorithms for use with IPsec have been updated to comply with system-wide crypto policies, RFC 8221, and RFC 8247. Encryption key sizes of 256 bits are now preferred over key sizes of 128 bits.
The default IKE and ESP/AH ciphers now include AES-GCM, CHACHA20POLY1305, and AES-CBC for encryption. For integrity checking, they provide AEAD and SHA-2. The Diffie-Hellman groups now contain DH19, DH20, DH21, DH14, DH15, DH16, and DH18.
The following algorithms have been removed from the default IKE and ESP/AH policies: AES_CTR, 3DES, SHA1, DH2, DH5, DH22, DH23, and DH24. With the exceptions of DH22, DH23, and DH24, these algorithms can be enabled using the ike= or phase2alg=/esp=/ah= options in IPsec configuration files.
To configure IPsec VPN connections that still require the IKEv1 protocol, add the ikev2=no option to connection configuration files. See the ipsec.conf(5) man page for more information.
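A minimal connection stanza forcing IKEv1 might look like this (a sketch; the connection name and addresses are illustrative):

```
# /etc/ipsec.d/legacy.conf
conn legacy-tunnel
    ikev2=no
    authby=rsasig
    left=192.0.2.1
    right=198.51.100.1
```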
(BZ#1645606)
IKE version-related changes in Libreswan
With this enhancement, Libreswan handles internet key exchange (IKE) settings differently:
- The default internet key exchange (IKE) version has been changed from 1 to 2.
- Connections can now either use the IKEv1 or IKEv2 protocol, but not both.
The interpretation of the ikev2 option has been changed:
- The value insist is interpreted as IKEv2-only.
- The values no and never are interpreted as IKEv1-only.
- The values propose, yes, and permit are no longer valid and result in an error, because it was not clear which IKE versions resulted from these values.
(BZ#1648776)
New features in
OpenSCAP in RHEL 8
The
OpenSCAP suite has been upgraded to upstream version 1.3.0, which introduces many enhancements over the previous versions. The most notable features include:
- API and ABI have been consolidated - updated, deprecated and/or unused symbols have been removed.
- The probes are not run as independent processes, but as threads within the oscap process.
- The command-line interface has been updated.
- Python 2 bindings have been replaced with Python 3 bindings.
(BZ#1614273)
SCAP Security Guide now supports system-wide cryptographic policies
The
scap-security-guide packages have been updated to use predefined system-wide cryptographic policies for configuring the core cryptographic subsystems. The security content that conflicted with or overrode the system-wide cryptographic policies has been removed.
Note that this change applies only on the security content in
scap-security-guide, and you do not need to update the OpenSCAP scanner or other SCAP components.
(BZ#1618505)
OpenSCAP command-line interface has been improved
The verbose mode is now available in all
oscap modules and submodules. The tool output has improved formatting.
Deprecated options have been removed to improve the usability of the command-line interface.
The following options are no longer available:
- The --show option in oscap xccdf generate report has been completely removed.
- The --probe-root option in oscap oval eval has been removed. It can be replaced by setting the OSCAP_PROBE_ROOT environment variable.
- The --sce-results option in oscap xccdf eval has been replaced by --check-engine-results.
- The validate-xml submodule has been dropped from the CPE, OVAL, and XCCDF modules. The validate submodules can be used instead to validate SCAP content against XML schemas and XSD schematrons.
- The oscap oval list-probes command has been removed; the list of available probes can be displayed using oscap --version instead.
OpenSCAP now allows evaluating all rules in a given XCCDF benchmark, regardless of the profile, by using --profile '(all)'.
(BZ#1618484)
SCAP Security Guide PCI-DSS profile aligns with version 3.2.1
The
scap-security-guide packages provide the PCI-DSS (Payment Card Industry Data Security Standard) profile for Red Hat Enterprise Linux 8 and this profile has been updated to align with the latest PCI-DSS version - 3.2.1.
(BZ#1618528)
SCAP Security Guide supports OSPP 4.2
The
scap-security-guide packages provide a draft of the OSPP (Protection Profile for General Purpose Operating Systems) profile version 4.2 for Red Hat Enterprise Linux 8. This profile reflects mandatory configuration controls identified in the NIAP Configuration Annex to the Protection Profile for General Purpose Operating Systems (Protection Profile Version 4.2). SCAP Security Guide provides automated checks and scripts that help users to meet requirements defined in the OSPP.
(BZ#1618518)
Notable changes in
rsyslog in RHEL 8
The
rsyslog packages have been upgraded to upstream version 8.37.0, which provides many bug fixes and enhancements over the previous versions. Most notable changes include:
- Enhanced processing of rsyslog internal messages; possibility of rate-limiting them; fixed possible deadlock.
- Enhanced rate-limiting in general; the actual spam source is now logged.
- Improved handling of oversized messages - the user can now set how to treat them both in the core and in certain modules with separate actions.
mmnormalizerule bases can now be embedded in the
configfile instead of creating separate files for them.
- All
configvariables, including variables in JSON, are now case-insensitive.
- Various improvements of PostgreSQL output.
- Added a possibility to use shell variables to control
configprocessing, such as conditional loading of additional configuration files, executing statements, or including a text in
config. Note that an excessive use of this feature can make it very hard to debug problems with rsyslog.
- 4-digit file creation modes can be now specified in
config.
- Reliable Event Logging Protocol (RELP) input can now bind also only on a specified address.
- The default value of the
enable.bodyoption of mail output is now aligned to documentation
- The user can now specify insertion error codes that should be ignored in MongoDB output.
- Parallel TCP (pTCP) input now has a configurable backlog for better load-balancing.
- To avoid duplicate records that might appear when journald rotated its files, the imjournal option has been added. Note that use of this option can affect performance.
Note that the system with
rsyslog can be configured to provide better performance as described in the Configuring system logging without journald or with minimized journald usage Knowledgebase article.
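Several of the non-legacy configuration features mentioned above can be sketched in a short fragment; the paths, variable name, and parameter values below are illustrative, not defaults:

```
# Fragment of /etc/rsyslog.conf (non-legacy format); names are illustrative.
# Conditionally include extra configuration based on a shell variable:
include(file="/etc/rsyslog.d/debug.conf" config.enabled=`echo $ENABLE_DEBUG`)

# 4-digit file creation modes can now be specified directly:
module(load="builtin:omfile" fileCreateMode="0644")

# Oversized messages can be diverted to a separate error file:
global(oversizemsg.errorfile="/var/log/rsyslog-oversize.log")
```

See rsyslog.conf(5) and the upstream rsyslog documentation for the exact parameter names supported by your version.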
(BZ#1613880)
New rsyslog module:
omkafka
To enable Kafka centralized data storage scenarios, you can now forward logs to the Kafka infrastructure using the new
omkafka module.
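A minimal forwarding action using this module might look as follows; the broker address and topic are placeholders:

```
# Fragment of an rsyslog configuration; broker and topic are placeholders.
module(load="omkafka")
action(
    type="omkafka"
    broker=["kafka.example.com:9092"]
    topic="rsyslog"
)
```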
(BZ#1542497)
rsyslog
imfile now supports symlinks
With this update, the rsyslog
imfile module delivers better performance and more configuration options. This allows you to use the module for more complicated file monitoring use cases. For example, you can now use file monitors with glob patterns anywhere along the configured path and rotate symlink targets with increased data throughput.
(BZ#1614179)
The default
rsyslog configuration file format is now non-legacy
The configuration files in the
rsyslog packages now use the non-legacy format by default. The legacy format can still be used, although mixing current and legacy configuration statements has several constraints. Configurations carried over from previous RHEL releases should be revised. See the
rsyslog.conf(5) man page for more information.
(BZ#1619645)
Audit 3.0 replaces
audispd with
auditd
With this update, functionality of
audispd has been moved to
auditd. As a result,
audispd configuration options are now part of
auditd.conf. In addition, the
plugins.d directory has been moved under
/etc/audit. The current status of
auditd and its plug-ins can now be checked by running the
service auditd state command.
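Settings that previously lived in audispd.conf are now part of auditd.conf; the fragment below is a sketch, and the exact option names should be checked against auditd.conf(5) on your system:

```
# Fragment of /etc/audit/auditd.conf (Audit 3.0); values are examples.
# Former audispd event-queue settings:
q_depth = 1200
overflow_action = SYSLOG
# Plug-in configuration files now live under /etc/audit/plugins.d/
```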
(BZ#1616428)
See the Configuring automated unlocking of encrypted volumes using policy-based decryption section for more information.
New SELinux booleans
This update of the SELinux system policy introduces the following booleans:
- colord_use_nfs
- mysql_connect_http
- pdns_can_network_connect_db
- ssh_use_tcpd
- sslh_can_bind_any_port
- sslh_can_connect_any_port
- virt_use_pcscd
To get a list of booleans including their meaning, and to find out if they are enabled or disabled, install the
selinux-policy-devel package and use:
# semanage boolean -l
(JIRA:RHELPLAN-10347)
SELinux now supports systemd
No New Privileges
This update introduces the
nnp_nosuid_transition policy capability that enables SELinux domain transitions under
No New Privileges (NNP) or
nosuid if
nnp_nosuid_transition is allowed between the old and new contexts. The
selinux-policy packages now contain a policy for systemd services that use the
NNP security feature.
The following rule describes allowing this capability for a service:
allow source_domain target_type:process2 { nnp_transition nosuid_transition };
For example:
allow init_t fprintd_t:process2 { nnp_transition nosuid_transition };
The distribution policy now also contains an m4 macro interface, which can be used in SELinux security policies for services that use the
init_nnp_daemon_domain() function.
(BZ#1594111)
Support for a new map permission check on the
mmap syscall
The SELinux
map permission has been added to control memory mapped access to files, directories, sockets, and so on. This allows the SELinux policy to prevent direct memory access to various file system objects and ensure that every such access is revalidated.
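In policy source, this means a domain needs the map permission in addition to read before it can mmap a file; the domain and type names below are hypothetical:

```
# Hypothetical domain and type names; 'map' is required for mmap access.
allow myapp_t myapp_data_t:file { read map };
```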
(BZ#1592244)
SELinux now supports
getrlimit permission in the
process class
This update introduces a new SELinux access control check,
process:getrlimit, which has been added for the
prlimit() function. This enables SELinux policy developers to control when one process attempts to read and then modify the resource limits of another process using the
process:setrlimit permission. Note that SELinux does not restrict a process from manipulating its own resource limits through
prlimit(). See the
prlimit(2) and
getrlimit(2) man pages for more information.
(BZ#1549772)
selinux-policy now supports VxFS labels
This update introduces support for Veritas File System (VxFS) security extended attributes (xattrs). This enables to store proper SELinux labels with objects on the file system instead of the generic vxfs_t type. As a result, systems with VxFS with full support for SELinux are more secure.
(BZ#1483904)
Compile-time security hardening flags are applied more consistently
Compile-time security hardening flags are applied more consistently on RPM packages in the RHEL 8 distribution, and the
redhat-rpm-config package now automatically provides security hardening flags. The applied compile-time flags also help to meet Common Criteria (CC) requirements. The following security hardening flags are applied:
- For detection of buffer-overflow errors: -D_FORTIFY_SOURCE=2
- Standard library hardening that checks for C++ arrays, vectors, and strings: -D_GLIBCXX_ASSERTIONS
- For Stack Smashing Protector (SSP): -fstack-protector-strong
- For exception hardening: -fexceptions
- For Control-Flow Integrity (CFI): -fcf-protection=full (only on AMD and Intel 64-bit architectures)
- For Address Space Layout Randomization (ASLR): -fPIE (for executables) or -fPIC (for libraries)
- For protection against the Stack Clash vulnerability: -fstack-clash-protection (except on ARM)
- Link flags to resolve all symbols on startup: -Wl,-z,now
See the
gcc(1) man page for more information.
(JIRA:RHELPLAN-2306)
5.1.16. Virtualization
qemu-kvm 2.12 in RHEL 8
Red Hat Enterprise Linux 8 is distributed with
qemu-kvm 2.12. This version fixes multiple bugs and adds a number of enhancements over version 1.5.3, available in Red Hat Enterprise Linux 7.
Notably, the following features have been introduced:
- Q35 guest machine type
- UEFI guest boot
- NUMA tuning and pinning in the guest
- vCPU hot plug and hot unplug
- guest I/O threading
Note that some of the features available in
qemu-kvm 2.12 are not supported on Red Hat Enterprise Linux 8. For detailed information, see "Feature support and limitations in RHEL 8 virtualization" on the Red Hat Customer Portal.
(BZ#1559240)
Also note that the previously default PC machine type has become deprecated and should only be used when virtualizing older operating systems that do not support Q35.
(BZ#1599777)
KVM supports UMIP in RHEL 8
KVM virtualization now supports the User-Mode Instruction Prevention (UMIP) feature, which can help prevent user-space applications from accessing system-wide settings. This reduces the potential vectors for privilege escalation attacks, and thus makes the KVM hypervisor and its guest machines more secure.
(BZ#1494651)
Additional information in KVM guest crash reports
The crash information that KVM hypervisor generates if a guest terminates unexpectedly or becomes unresponsive has been expanded. This makes it easier to diagnose and fix problems in KVM virtualization deployments.
(BZ#1508139)
NVIDIA vGPU is now compatible with the VNC console
When using the NVIDIA virtual GPU (vGPU) feature, it is now possible to use the VNC console to display the visual output of the guest.
(BZ#1497911)
Ceph is supported by virtualization
With this update, Ceph storage is supported by KVM virtualization on all CPU architectures supported by Red Hat.
(BZ#1578855)
Interactive boot loader for KVM virtual machines on IBM Z
When booting a KVM virtual machine on an IBM Z host, the QEMU boot loader firmware can now present an interactive console interface of the guest OS. This makes it possible to troubleshoot guest OS boot problems without access to the host environment.
(BZ#1508137)
IBM z14 ZR1 supported in virtual machines
The KVM hypervisor now supports the CPU model of the IBM z14 ZR1 server. This enables using the features of this CPU in KVM virtual machines that run on an IBM Z system.
(BZ#1592337)
KVM supports Telnet 3270 on IBM Z
When using RHEL 8 as a host on an IBM Z system, it is now possible to connect to virtual machines on the host using Telnet 3270 clients.
(BZ#1570029)
QEMU sandboxing has been added
In Red Hat Enterprise Linux 8, the QEMU emulator introduces the sandboxing feature. QEMU sandboxing provides configurable limitations on what system calls QEMU can perform, and thus makes virtual machines more secure. Note that this feature is enabled and configured by default.
(JIRA:RHELPLAN-10628)
New machine types for KVM virtual machines on IBM POWER
Multiple new rhel-pseries machine types have been enabled for KVM hypervisors running on IBM POWER 8 and IBM POWER 9 systems. This makes it possible for virtual machines (VMs) hosted on RHEL 8 on an IBM POWER system to correctly use the CPU features of these machine types. In addition, this allows for migrating VMs on IBM POWER to a more recent version of the KVM hypervisor.
(BZ#1585651, BZ#1595501)
ARM 64 systems now support virtual machines with up to 384 vCPUs
When using the KVM hypervisor on an ARM 64 system, it is now possible to assign up to 384 virtual CPUs (vCPUs) to a single virtual machine (VM).
Note that the number of physical CPUs on the host must be equal to or greater than the number of vCPUs attached to its VMs, because RHEL 8 does not support vCPU overcommitting.
(BZ#1422268)
GFNI and CLDEMOT instruction sets enabled for Intel Xeon SnowRidge
Virtual machines (VMs) running in a RHEL 8 host on an Intel Xeon SnowRidge system are now able to use the GFNI and CLDEMOT instruction sets. This may significantly increase the performance of such VMs in certain scenarios.
(BZ#1494705)
IPv6 enabled for OVMF
The IPv6 protocol is now enabled on Open Virtual Machine Firmware (OVMF). This makes it possible for virtual machines that use OVMF to take advantage of a variety of network boot improvements that IPv6 provides.
(BZ#1536627)
A VFIO-based block driver for NVMe devices has been added
The QEMU emulator introduces a driver based on virtual function I/O (VFIO) for Non-volatile Memory Express (NVMe) devices. The driver communicates directly with NVMe devices attached to virtual machines (VMs) and avoids using the kernel system layer and its NVMe drivers. As a result, this enhances the performance of NVMe devices in virtual machines.
(BZ#1519004)
Multichannel support for the Hyper-V Generic UIO driver
RHEL 8 now supports the multichannel feature for the Hyper-V Generic userspace I/O (UIO) driver. This makes it possible for RHEL 8 VMs running on the Hyper-V hypervisor to use the Data Plane Development Kit (DPDK) Netvsc Poll Mode driver (PMD), which enhances the networking capabilities of these VMs.
Note, however, that the Netvsc interface status currently displays as Down even when it is running and usable.
(BZ#1650149)
Improved huge page support
When using RHEL 8 as a virtualization host, users can modify the size of pages that back memory of a virtual machine (VM) to any size that is supported by the CPU. This can significantly improve the performance of the VM.
To configure the size of VM memory pages, edit the VM’s XML configuration and add the <hugepages> element to the <memoryBacking> section.
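For example, to back the VM's memory with 1 GiB huge pages, the relevant XML fragment might look like this (the page size chosen must be supported by the host CPU):

```
<memoryBacking>
  <hugepages>
    <page size='1' unit='GiB'/>
  </hugepages>
</memoryBacking>
```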
(JIRA:RHELPLAN-14607)
5.1.17. Supportability
sosreport can report eBPF-based programs and maps
The sosreport tool has been enhanced to report any loaded extended Berkeley Packet Filter (eBPF) programs and maps in Red Hat Enterprise Linux 8.
(BZ#1559836)
5.2. Bug fixes
This part describes bugs fixed in Red Hat Enterprise Linux 8.0 that have a significant impact on users.
5.2.1. Desktop
PackageKit can now operate on rpm packages
With this update, the support for operating on
rpm packages has been added into PackageKit.
(BZ#1559414)
5.2.2. Graphics infrastructures
QEMU does not handle 8-byte
ggtt entries correctly
QEMU occasionally splits an 8-byte
ggtt entry write into two consecutive 4-byte writes. Each of these partial writes can trigger a separate host
ggtt write. Sometimes the two
ggtt writes are combined incorrectly. Consequently, translation to a machine address fails, and an error is logged.
(BZ#1598776)
5.2.3. Identity Management
The Enterprise Security Client uses the
opensc library for token detection
Red Hat Enterprise Linux 8.0 only supports the
opensc library for smart cards. With this update, the Enterprise Security Client (ESC) uses
opensc for token detection instead of the removed
coolkey library. As a result, applications correctly detect supported tokens.
(BZ#1538645)
Certificate System now supports rotating debug logs
Previously, Certificate System used a custom logging framework, which did not support log rotation. As a consequence, debug logs such as
/var/log/pki/instance_name/ca/debug grew indefinitely. With this update, Certificate System uses the
java.util.logging framework, which supports log rotation. As a result, you can configure log rotation in the
/var/lib/pki/instance_name/conf/logging.properties file.
For further information on log rotation, see documentation for the
java.util.logging package.
(BZ#1565073)
Certificate System no longer logs
SetAllPropertiesRule operation warnings when the service starts
Previously, Certificate System logged warnings on the
SetAllPropertiesRule operation in the
/var/log/messages log file when the service started. The problem has been fixed, and the mentioned warnings are no longer logged.
(BZ#1424966)
The Certificate System KRA client parses
Key Request responses correctly
Previously, Certificate System switched to a new JSON library. As a consequence, serialization for certain objects differed, and the Python key recovery authority (KRA) client failed to parse
Key Request responses. The client has been modified to support responses using both the old and the new JSON library. As a result, the Python KRA client parses
Key Request responses correctly.
(BZ#1623444)
5.2.4. Compilers and development tools
GCC no longer produces false positive warnings about out-of-bounds access
Previously, when compiling with the
-O3 optimization level option, the GNU Compiler Collection (GCC) occasionally returned a false positive warning about an out-of-bounds access, even if the compiled code did not contain it. The optimization has been fixed and GCC no longer displays the false positive warning.
(BZ#1246444)
ltrace displays large structures correctly
Previously, the
ltrace tool could not correctly print large structures returned from functions. Handling of large structures in
ltrace has been improved and they are now printed correctly.
(BZ#1584322)
GCC built-in function
__builtin_clz returns correct values on IBM Z
Previously, the
FLOGR instruction of the IBM Z architecture was incorrectly folded by the GCC compiler. As a consequence, the
__builtin_clz function using this instruction could return wrong results when the code was compiled with the
-funroll-loops GCC option. This bug has been fixed and the function now provides correct results.
(BZ#1652016)
GDB provides nonzero exit status when last command in batch mode fails
Previously, GDB always exited with status
0 when running in batch mode, regardless of errors in the commands. As a consequence, it was not possible to determine whether the commands succeeded. This behavior has been changed and GDB now exits with status
1 when an error occurs in the last command. This preserves compatibility with the previous behavior where all commands are executed. As a result, it is now possible to determine if GDB batch mode execution is successful.
(BZ#1491128)
5.2.5. File systems and storage
Higher print levels no longer cause
iscsiadm to terminate unexpectedly
Previously, the
iscsiadm utility terminated unexpectedly when the user specified a print level higher than 0 with the
-P option. This problem has been fixed, and all print levels now work as expected.
(BZ#1582099)
multipathd no longer disables the path when it fails to get the WWID of a path
Previously, the
multipathd service treated a failed attempt at getting a path’s WWID as getting an empty WWID. If
multipathd failed to get the WWID of a path, it sometimes disabled that path.
With this update,
multipathd keeps the old WWID if a new attempt to read the WWID fails while checking whether it has changed. As a result,
multipathd no longer disables paths when it fails to get the WWID.
5.2.6. High availability and clusters
With this update, the setting
PCSD_SSL_OPTIONS in the
/etc/sysconfig/pcsd configuration file accepts the
OP_NO_RENEGOTIATION option to reject renegotiations. Note that the client can still open multiple connections to a server with a handshake performed in all of them.
(BZ#1566430)
A removed cluster node is no longer displayed in the cluster status
Previously, when a node was removed with the
pcs cluster node remove command, the removed node remained visible in the output of a
pcs status display. With this fix, the removed node is no longer displayed in the cluster status.
(BZ#1595829)
Fence agents can now be configured using either newer, preferred parameter names or deprecated parameter names
A large number of fence agent parameters have been renamed while the old parameter names are still supported as deprecated. Previously,
pcs was not able to set the new parameters unless used with the
--force option. With this fix,
pcs now supports the renamed fence agent parameters while maintaining support for the deprecated parameters.
(BZ#1436217)
The
pcs command now correctly reads the XML status of a cluster for display
The
pcs command runs the
crm_mon utility to get the status of a cluster in XML format. The
crm_mon utility prints XML to standard output and warnings to standard error output. Previously,
pcs mixed XML and warnings into one stream and was then unable to parse it as XML. With this fix, standard and error outputs are separated in
pcs and reading the XML status of a cluster works as expected.
(BZ#1578955)
Users no longer advised to destroy clusters when creating new clusters with nodes from existing clusters
Previously, when a user specified nodes from an existing cluster when running the
pcs cluster setup command or when creating a cluster with the
pcsd Web UI, pcs reported that as an error and suggested that the user destroy the cluster on the nodes. As a result, users would destroy the cluster on the nodes, breaking the cluster the nodes were part of as the remaining nodes would still consider the destroyed nodes to be part of the cluster. With this fix, users are instead advised to remove nodes from their cluster, better informing them of how to address the issue without breaking their clusters.
(BZ#1596050)
pcs commands no longer interactively ask for credentials
When a non-root user runs a
pcs command that requires root permission,
pcs connects to the locally running
pcsd daemon and passes the command to it, since the
pcsd daemon runs with root permissions and is capable of running the command. Previously, if the user was not authenticated to the local
pcsd daemon,
pcs asked for a user name and a password interactively. This was confusing to the user and required special handling in scripts running
pcs. With this fix, if the user is not authenticated then
pcs exits with an error advising what to do: Either run
pcs as root or authenticate using the new
pcs client local-auth command. As a result,
pcs commands do not interactively ask for credentials, improving the user experience.
(BZ#1554310)
The
pcsd daemon now starts with its default self-generated SSL certificate when
crypto-policies is set to
FUTURE
A
crypto-policies setting of
FUTURE requires RSA keys in SSL certificates to be at least 3072 bits long. Previously, the
pcsd daemon would not start when this policy was set, since it generates SSL certificates with a 2048-bit key. With this update, the key size of
pcsd self-generated SSL certificates has been increased to 3072 bits, and
pcsd now starts with its default self-generated SSL certificate.
(BZ#1638852)
The
pcsd service now starts when the network is ready
Previously, when a user configured
pcsd to bind to a specific IP address and the address was not ready during boot when
pcsd attempted to start up, then
pcsd failed to start and a manual intervention was required to start
pcsd. With this fix,
pcsd.service depends on
network-online.target. As a result,
pcsd starts when the network is ready and is able to bind to an IP address.
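In systemd terms, the ordering described above corresponds to directives like the following; this is a sketch of the mechanism, not a verbatim copy of the shipped unit file:

```
# Fragment of a pcsd.service unit
[Unit]
Wants=network-online.target
After=network-online.target
```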
(BZ#1640477)
5.2.7. Networking
Weak TLS algorithms are no longer allowed for
glib-networking
Previously, the
glib-networking package was not compatible with RHEL 8 System-wide Crypto Policy. As a consequence, applications using the
glib library for networking might allow Transport Layer Security (TLS) connections using weaker algorithms than the administrator intended. With this update, the system-wide crypto policy is applied, and now applications using
glib for networking allow only TLS connections that are acceptable according to the policy.
(BZ#1640534)
5.2.8. Security
SELinux policy now allows
iscsiuio processes to connect to the discovery portal
Previously, SELinux policy was too restrictive for
iscsiuio processes and these processes were not able to access
/dev/uio* devices using the
mmap system call. As a consequence, connection to the discovery portal failed. This update adds the missing rules to the SELinux policy and
iscsiuio processes work as expected in the described scenario.
(BZ#1626446)
5.2.9. Subscription management
dnf and
yum can now access the repos regardless of
subscription-manager values
Previously, the
dnf and
yum commands ignored the
https:// prefix of a URL added by the subscription-manager service. The updated
dnf and
yum commands no longer ignore invalid
https:// URLs. As a consequence,
dnf and
yum failed to access the repos. To fix the problem, a new configuration variable,
proxy_scheme, has been added to the
/etc/rhsm/rhsm.conf file, and its value can be set to either
http or
https. If no value is specified, subscription-manager sets
http by default, which is more commonly used.
Note that if the proxy uses
http, most users should not change anything in the configuration in
/etc/rhsm/rhsm.conf. If the proxy uses
https, users should update the value of
proxy_scheme to
https. Then, in both cases, users need to run the
subscription-manager repos --list command or wait for the
rhsmcertd daemon process to regenerate the
/etc/yum.repos.d/redhat.repo properly.
5.2.10. Virtualization
Mounting ephemeral disks on Azure now works more reliably
Previously, mounting an ephemeral disk on a virtual machine (VM) running on the Microsoft Azure platform failed if the VM was "stopped(deallocated)" and then started. This update ensures that reconnecting disks is handled correctly in the described circumstances, which prevents the problem from occurring.
(BZ#1615599)
5.3. Technology previews
This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.0.
For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope.
5.3.1. Kernel
(BZ#1559616)
(BZ#1548302)
You can refer to the kernel documentation to obtain more information about particular Control Group v2 controllers.
(BZ#1401552)
early kdump available as a Technology Preview in Red Hat Enterprise Linux 8
The
early kdump feature allows the crash kernel and initramfs to load early enough to capture the
vmcore information even for early crashes. For more details about
early kdump, see the
/usr/share/doc/kexec-tools/early-kdump-howto.txt file.
(BZ#1520209)
The
ibmvnic device driver available as a Technology Preview
With Red Hat Enterprise Linux 8.0, the IBM Virtual Network Interface Controller (vNIC) driver for IBM POWER architectures,
ibmvnic, is available as a Technology Preview.
(BZ#1524683)
5.3.2. Graphics infrastructures
VNC remote console available as a Technology Preview for the 64-bit ARM architecture
On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture.
(BZ#1698565)
5.3.3. Hardware enablement
The cluster-aware MD RAID1 is available as a Technology Preview
RAID1 clustering is not enabled by default in the kernel. To try clustered RAID1, build the kernel with RAID1 cluster support as a module using the following steps:
- Enter the make menuconfig command.
- Enter the make && make modules && make modules_install && make install command.
- Enter the reboot command.
5.3.4. Identity Management
Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW)
5.3.5. File systems and storage
(JIRA:RHELPLAN-1212)
OverlayFS
OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. See the Linux kernel documentation for additional information.
OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated.
Full support is available for OverlayFS when used with supported container engines (
podman,
cri-o, or
buildah) under the following restrictions:
- OverlayFS is supported for use only as a container engine graph driver. Its use is supported only for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. Only the default container engine configuration can be used; that is, one level of overlay, one lowerdir, and both lower and upper levels are on the same file system.
- Only XFS is currently supported for use as a lower layer file system.
Additionally, the following rules and limitations apply to using OverlayFS:
- The OverlayFS kernel ABI and userspace behavior are not considered stable, and might see changes in future updates.
OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant:
- Lower files opened with O_RDONLY do not receive st_atime updates when the files are read.
- Lower files opened with O_RDONLY, then mapped with MAP_SHARED are inconsistent with subsequent modification.
Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option.
To get consistent inode numbering, use the xino=on mount option.
You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on, unmount the overlay, then mount the overlay without these options.
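For consistent inode numbering, an overlay mount entry might look like the following; all paths here are placeholders:

```
# Example /etc/fstab entry (placeholder paths); xino=on enables
# consistent inode numbering across the overlay layers.
overlay  /merged  overlay  lowerdir=/lower,upperdir=/upper,workdir=/work,xino=on  0 0
```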
- An XFS file system must be created with the -n ftype=1 option enabled for use as an overlay lower layer.
- SELinux security labels are enabled by default in all supported container engines with OverlayFS.
- There are several known issues associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation.
(BZ#1690207)
File system DAX is now available for ext4 and XFS as a Technology Preview
In Red Hat Enterprise Linux 8.0, file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a file system must be mounted with the dax mount option. Then, an
mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application’s address space.
(BZ#1627455)
5.3.6. High availability and clusters
Pacemaker
podman bundles available as a Technology Preview
Pacemaker container bundles now run on the
podman container platform, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat Openstack.
(BZ#1619620)
5.3.7. Networking
XDP available as a Technology Preview
The eXpress Data Path (XDP) feature, which is available as a Technology Preview, provides a means to attach extended Berkeley Packet Filter (eBPF) programs for high-performance packet processing at an early point in the kernel ingress data path, allowing efficient programmable packet analysis, filtering, and manipulation.
(BZ#1503672)
eBPF for tc available as a Technology Preview
As a Technology Preview, the Traffic Control (tc) kernel subsystem and the tc tool can attach extended Berkeley Packet Filter (eBPF) programs as packet classifiers and actions for both ingress and egress queueing disciplines. This enables programmable packet processing inside the kernel network data path.
AF_XDP available as a Technology Preview
Address Family eXpress Data Path (
AF_XDP) socket is designed for high-performance packet processing. It accompanies
XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing.
(BZ#1633143)
KTLS available as a Technology Preview
In Red Hat Enterprise Linux 8, Kernel Transport Layer Security (KTLS) is provided as a Technology Preview.
TIPC available as a Technology Preview
The Transparent Inter Process Communication (
TIPC) is a protocol specially designed for efficient communication within clusters of loosely coupled nodes. It works as a kernel module and provides a
tipc tool in the
iproute2 package to allow designers to create applications that can communicate quickly and reliably with other applications regardless of their location within the cluster. This feature is available as a Technology Preview.
(BZ#1581898)
5.3.8. Red Hat Enterprise Linux System Roles
The
postfix role of RHEL System Roles available as a Technology Preview
The
rhel-system-roles packages are distributed through the AppStream repository.
The
postfix role is available as a Technology Preview.
The following roles are fully supported:
kdump
network
selinux
timesync
For more information, see the Knowledgebase article about RHEL System Roles.
(BZ#1812552)
5.3.9. Virtualization
AMD SEV for KVM virtual machines
As a Technology Preview, RHEL 8 introduces the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts VM memory so that the host cannot access data on the VM. This increases the security of the VM if the host is successfully infected by malware.
Note that the number of VMs that can use this feature at a time on a single host is determined by the host hardware. Current AMD EPYC processors support up to 15 running VMs using SEV.
Also note that for VMs with SEV configured to be able to boot, you must also configure the VM with a hard memory limit. To do so, add the following to the VM’s XML configuration:
<memtune> <hard_limit unit='KiB'>N</hard_limit> </memtune>
The recommended value for N is equal to or greater than the guest RAM + 256 MiB. For example, if the guest is assigned 2 GiB RAM, N should be 2359296 or greater.
(BZ#1501618, BZ#1501607)
Intel vGPU
As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as
mediated devices. These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU.
Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, assigning a physical GPU to VMs makes it impossible for the host to use the GPU, and may prevent graphical display output on the host from working.
(BZ#1528684)
Nested virtualization now available on IBM POWER 9
As a Technology Preview, it is now possible to use the nested virtualization features on RHEL 8 host machines running on IBM POWER 9 systems. Nested virtualization enables KVM virtual machines (VMs) to act as hypervisors, which allows for running VMs inside VMs.
Note that nested virtualization also remains a Technology Preview on AMD64 and Intel 64 systems.
Also note that for nested virtualization to work on IBM POWER 9, the host, the guest, and the nested guests currently all need to run one of the following operating systems:
- RHEL 8
- RHEL 7 for POWER 9
(BZ#1505999, BZ#1518937)
KVM virtualization is usable in RHEL 8 Hyper-V virtual machines
As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host.
Note that currently, this feature only works on Intel systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation:
(BZ#1519039)
5.4. Deprecated functionality
This part provides an overview of functionality that has been deprecated in Red Hat Enterprise Linux 8.0.
Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 8. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation.
Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible.
A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations.
For information regarding functionality that is present in RHEL 7 but has been removed in RHEL 8, see Considerations in adopting RHEL 8.
5.4.1. Installer and image creation
The --interactive option of the ignoredisk Kickstart command has been deprecated
Using the --interactive option in future releases of Red Hat Enterprise Linux will result in a fatal installation error. It is recommended that you modify your Kickstart file to remove the option.
(BZ#1637872)
Several Kickstart commands and options have been deprecated
Using the following commands and options in RHEL 8 Kickstart files will print a warning in the logs.
auth
authconfig
device
deviceprobe
dmraid
install
lilo
lilocheck
mouse
multipath
bootloader --upgrade
ignoredisk --interactive
partition --active
reboot --kexec
Where only specific options are listed, the base command and its other options are still available and not deprecated.
For more details and related changes in Kickstart, see the Kickstart changes section of the Considerations in adopting RHEL 8 document.
(BZ#1642765)
5.4.2. File systems and storage
NFSv3 over UDP has been disabled
The NFS server no longer opens or listens on a User Datagram Protocol (UDP) socket by default. This change affects only NFS version 3 because version 4 requires the Transmission Control Protocol (TCP).
NFS over UDP is no longer supported in RHEL 8.
(BZ#1592011)
The elevator kernel command line parameter is deprecated
The elevator kernel command line parameter was used in earlier RHEL releases to set the disk scheduler for all devices. In RHEL 8, the parameter is deprecated.
The upstream Linux kernel has removed support for the elevator parameter, but it is still available in RHEL 8 for compatibility reasons.
Note that the kernel selects a default disk scheduler based on the type of device. This is typically the optimal setting. If you require a different scheduler, Red Hat recommends that you use udev rules or the Tuned service to configure it. Match the selected devices and switch the scheduler only for those devices.
For more information, see the following article: Why does the 'elevator=' parameter no longer work in RHEL8.
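For example, a udev rule along these lines switches the scheduler for one specific device only. The rule file name, device name, and scheduler here are illustrative, not prescriptive; on a real system the rule would be placed in /etc/udev/rules.d/:

```shell
# Write a udev rule that sets the I/O scheduler for one matched device only.
# Writing to a scratch file here; on a real system this would live in
# /etc/udev/rules.d/ and take effect after a udev trigger or reboot.
cat > 60-scheduler.rules <<'EOF'
ACTION=="add|change", KERNEL=="sdb", ATTR{queue/scheduler}="mq-deadline"
EOF
cat 60-scheduler.rules
```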
(BZ#1665295)
The VDO Ansible module in VDO packages
The VDO Ansible module is currently provided by the vdo RPM package. In a future release, the VDO Ansible module will be moved to the Ansible RPM packages.
5.4.3. Networking
Network scripts are deprecated in RHEL 8
Network scripts are deprecated in Red Hat Enterprise Linux 8 and they are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts, which call the NetworkManager service through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and ifdown scripts, NetworkManager must be running.
Note that the ifup and ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation.
(BZ#1647725)
5.4.4. Security
(BZ#1646541, BZ#1645153)
5.4.5. Virtualization
Virtual machine snapshots are not properly supported in RHEL 8
The current mechanism of creating virtual machine (VM) snapshots has been deprecated, as it is not working reliably. As a consequence, it is recommended not to use VM snapshots in RHEL 8.
Note that a new VM snapshot mechanism is under development and will be fully implemented in a future minor release of RHEL 8.
The Cirrus VGA virtual GPU type has been deprecated
With a future major update of Red Hat Enterprise Linux, the Cirrus VGA GPU device will no longer be supported in KVM virtual machines. Therefore, Red Hat recommends using the stdvga, virtio-vga, or qxl devices instead of Cirrus VGA.
(BZ#1651994)
virt-manager has been deprecated
The Virtual Machine Manager application, also known as virt-manager, has been deprecated. The RHEL 8 web console, also known as Cockpit, is intended to become its replacement in a subsequent release. It is, therefore, recommended that you use the web console for managing virtualization in a GUI. However, in Red Hat Enterprise Linux 8.0, some features may only be accessible from either virt-manager or the command line.
(JIRA:RHELPLAN-10304)
5.4.6. Deprecated packages
The following packages have been deprecated and will probably not be included in a future major release of Red Hat Enterprise Linux:
- 389-ds-base-legacy-tools
- authd
- custodia
- hostname
- libidn
- net-tools
- network-scripts
- nss-pam-ldapd
- sendmail
- yp-tools
- ypbind
- ypserv
5.5. Known issues
This part describes known issues in Red Hat Enterprise Linux 8.
5.5.1. The web console
Logging in to the RHEL web console with the session_recording shell is not possible
Currently, RHEL web console logins fail for tlog recording-enabled users. The RHEL web console requires a user's shell to be listed in the /etc/shells file to allow a successful login. However, if tlog-rec-session is added to /etc/shells, a recorded user is able to disable recording by changing the shell from tlog-rec-session to another shell from /etc/shells, using the chsh utility. For this reason, Red Hat does not recommend adding tlog-rec-session to /etc/shells.
(BZ#1631905)
5.5.2. Installer and image creation
The xorg-x11-drv-fbdev, xorg-x11-drv-vesa, and xorg-x11-drv-vmware video drivers are not installed by default
Workstations with specific models of NVIDIA graphics cards and workstations with specific AMD accelerated processing units will not display the graphical login window after a RHEL 8.0 Server installation.
To work around this problem, perform a RHEL 8.0 Workstation installation on a workstation machine. If a RHEL 8.0 Server installation is required on the workstation, manually install the base-x package group after installation by running the yum -y groupinstall base-x command.
In addition, virtual machines relying on EFI for graphics support, such as Hyper-V, are also affected. If you selected the Server with GUI base environment, apply the same workaround.
Installation fails when using the reboot --kexec command
The RHEL 8 installation fails when using a Kickstart file that contains the reboot --kexec command. To avoid the problem, use the reboot command instead of reboot --kexec in your Kickstart file.
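The fix can be scripted. This sketch rewrites the offending line in a sample Kickstart file (the file name and contents are examples):

```shell
# Replace 'reboot --kexec' with plain 'reboot' in a sample Kickstart file.
printf 'install\nreboot --kexec\n' > ks-sample.cfg
sed -i 's/^reboot --kexec$/reboot/' ks-sample.cfg
cat ks-sample.cfg
```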
Copying the content of the Binary DVD.iso file to a partition omits the .treeinfo and .discinfo files
During local installation, while copying the content of the RHEL 8 Binary DVD.iso image file to a partition, the * in the cp <path>/\* <mounted partition>/dir command fails to copy the .treeinfo and .discinfo files. These files are required for a successful installation. As a result, the BaseOS and AppStream repositories are not loaded, and a debug-related log message in the anaconda.log file is the only record of the problem.
To work around the problem, copy the missing .treeinfo and .discinfo files to the partition.
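The behavior is easy to reproduce with scratch directories: a plain glob copy skips dotfiles, so the two hidden metadata files must be copied explicitly. The paths here are examples, not the real mount points:

```shell
# A glob copy misses dotfiles; .treeinfo and .discinfo must be copied
# explicitly. Demonstrated with scratch directories instead of real mounts.
mkdir -p iso-content target
touch iso-content/media.repo iso-content/.treeinfo iso-content/.discinfo
cp iso-content/* target/          # copies media.repo only
cp iso-content/.treeinfo iso-content/.discinfo target/
ls -A target
```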
(BZ#1692746)
Anaconda does not warn about insufficient minimal resources
Anaconda starts the installation on systems that have only the minimal required resources available and does not display a warning about the resources required to perform the installation successfully. As a result, the installation can fail, and the output errors do not provide clear messages for debugging and recovery. To work around this problem, make sure that the system has the minimal resources required for installation: 2 GB of memory on PPC64(LE) and 1 GB on x86_64. As a result, it should be possible to perform a successful installation.
(BZ#1696609)
5.5.3. Kernel
The i40iw module does not load automatically on boot
Because many i40e NICs do not support iWarp, and because the i40iw module does not fully support suspend and resume, this module is not loaded automatically by default to ensure that suspend and resume work properly. To work around this problem, manually edit the /lib/udev/rules.d/90-rdma-hw-modules.rules file to enable automated loading of i40iw.
Also note that if another RDMA device is installed together with an i40e device on the same machine, the non-i40e RDMA device triggers the rdma service, which loads all enabled RDMA stack modules, including the i40iw module.
(BZ#1623712)
The system sometimes becomes unresponsive when many devices are connected
When Red Hat Enterprise Linux 8 configures a large number of devices, a large number of console messages occurs on the system console. This happens, for example, when there are a large number of logical unit numbers (LUNs), with multiple paths to each LUN. The flood of console messages, in addition to other work the kernel is doing, might cause the kernel watchdog to force a kernel panic because the kernel appears to be hung.
Because the scan happens early in the boot cycle, the system becomes unresponsive when many devices are connected. This typically occurs at boot time.
If kdump is enabled on your machine during the device scan event after boot, the hard lockup results in a capture of a vmcore image.
To work around this problem, increase the watchdog lockup timer. To do so, add the watchdog_thresh=N option to the kernel command line. Replace N with the number of seconds:
- If you have fewer than a thousand devices, use 30.
- If you have more than a thousand devices, use 60.
For storage, the number of devices is the number of paths to all the LUNs: generally, the number of /dev/sd* devices.
After applying the workaround, the system no longer becomes unresponsive when configuring a large amount of devices.
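The rule of thumb above can be expressed as a small helper; the function name is made up for illustration:

```shell
# choose_thresh: map a device-path count to the suggested watchdog_thresh
# value (30 below a thousand devices, 60 above), per the guidance above.
choose_thresh() {
    if [ "$1" -lt 1000 ]; then
        echo 30
    else
        echo 60
    fi
}
choose_thresh 800    # → 30
choose_thresh 2500   # → 60
```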
(BZ#1598448)
KSM sometimes ignores NUMA memory policies
When the kernel shared memory (KSM) feature is enabled with the merge_across_nodes=1 parameter, KSM ignores memory policies set by the mbind() function, and may merge pages from some memory areas to Non-Uniform Memory Access (NUMA) nodes that do not match the policies.
To work around this problem, disable KSM or set the merge_across_nodes parameter to 0 if using NUMA memory binding with QEMU. As a result, NUMA memory policies configured for the KVM VM will work as expected.
(BZ#1153521)
The qede driver hangs the NIC and makes it unusable
Due to a bug, the qede driver for the QLogic 41000 and 45000 series NICs can cause firmware upgrade and debug data collection operations to fail, leaving the NIC unusable or hung until a reboot (PCI reset) of the host makes the NIC operational again.
This issue has been detected in all of the following scenarios:
- when upgrading the firmware of the NIC using the inbox driver
- when collecting debug data by running the ethtool -d ethX command
- when running the sosreport command, as it includes ethtool -d ethX
- when the inbox driver initiates automatic debug data collection, such as on an IO timeout, a Mailbox Command timeout, or a Hardware Attention
A future erratum from Red Hat will be released via Red Hat Bug Advisory (RHBA) to address this issue. To work around this problem, create a support case to request a supported fix until the RHBA is released.
(BZ#1697310)
Radix tree symbols were added to kernel-abi-whitelists
The following radix tree symbols have been added to the kernel-abi-whitelists package in Red Hat Enterprise Linux 8:
__radix_tree_insert
__radix_tree_next_slot
radix_tree_delete
radix_tree_gang_lookup
radix_tree_gang_lookup_tag
radix_tree_next_chunk
radix_tree_preload
radix_tree_tag_set
The symbols above were not supposed to be present and will be removed from the RHEL8 whitelist.
(BZ#1695142)
podman fails to checkpoint a container in RHEL 8
The version of the Checkpoint and Restore In Userspace (CRIU) package is outdated in Red Hat Enterprise Linux 8. As a consequence, CRIU does not support container checkpoint and restore functionality, and the podman utility fails to checkpoint containers. When running the podman container checkpoint command, the following error message is displayed: 'checkpointing a container requires at least CRIU 31100'.
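The '31100' in the message encodes a minimum CRIU version of 3.11.0. A rough version check might look like the following sketch; the sample version string and the major/minor/patch encoding are assumptions based on the message format:

```shell
# Encode a CRIU version string the way the error message does
# (e.g. 3.11.0 -> 31100) and compare it against the required minimum.
ver="3.6.0"            # e.g. taken from: criu --version
num=$(echo "$ver" | awk -F. '{ printf "%d%02d%02d", $1, $2, $3+0 }')
if [ "$num" -ge 31100 ]; then
    echo "checkpoint supported"
else
    echo "CRIU too old"
fi
```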
(BZ#1689746)
early-kdump and standard kdump fail if the add_dracutmodules+=earlykdump option is used in dracut.conf
Currently, an inconsistency occurs between the kernel version being installed for early-kdump and the kernel version that the initramfs is generated for. As a consequence, booting with early-kdump enabled fails. In addition, if early-kdump detects that it is being included in a standard kdump initramfs image, it forces an exit. Therefore, the standard kdump service also fails when trying to rebuild the kdump initramfs if early-kdump is added as a default dracut module. As a consequence, both early-kdump and standard kdump fail. To work around this problem, do not add add_dracutmodules+=earlykdump or any equivalent configuration in the dracut.conf file. As a result, early-kdump is not included by dracut by default, which prevents the problem from occurring. However, if an early-kdump image is required, it has to be created manually.
(BZ#1662911)
Debug kernel fails to boot in crash capture environment in RHEL 8
Due to memory-demanding nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel, and a stack trace is generated instead. To work around this problem, increase the crash kernel memory accordingly. As a result, the debug kernel successfully boots in the crash capture environment.
(BZ#1659609)
Network interface is renamed to kdump-<interface-name> when fadump is used
When firmware-assisted dump (fadump) is utilized to capture a vmcore and store it to a remote machine using the SSH or NFS protocol, the network interface is renamed to kdump-<interface-name> if <interface-name> is generic, for example, eth# or net#. This problem occurs because the vmcore capture scripts in the initial RAM disk (initrd) add the kdump- prefix to the network interface name to secure persistent naming. The same initrd is also used for a regular boot, so the interface name is changed for the production kernel too.
(BZ#1745507)
5.5.4. Software management
Running yum list under a non-root user causes YUM to crash
When running the yum list command under a non-root user after the libdnf package has been updated, YUM can terminate unexpectedly. If you hit this bug, run yum list under root to resolve the problem. As a result, subsequent attempts to run yum list under a non-root user no longer cause YUM to crash.
(BZ#1642458)
YUM v4 skips unavailable repositories by default
YUM v4 defaults to the skip_if_unavailable=True setting for all repositories. As a consequence, if a required repository is not available, packages from that repository are not considered in install, search, or update operations. Subsequently, some yum commands and yum-based scripts succeed with exit code 0 even if there are unavailable repositories.
Currently, there is no workaround available other than updating the libdnf package.
5.5.5. Infrastructure services
The nslookup and host utilities ignore replies from name servers with recursion not available
If more name servers are configured and recursion is not available for a name server, the nslookup and host utilities ignore replies from such a name server unless it is the last one configured. In the case of the last configured name server, the answer is accepted even without the recursion available flag. However, if the last configured name server is not responding or unreachable, name resolution fails.
To work around the problem:
- Ensure that configured name servers always reply with the recursion available flag set.
- Allow recursion for all internal clients.
To troubleshoot the problem, you can also use the dig utility to detect whether recursion is available or not.
(BZ#1599459)
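With dig, recursion availability appears as the ra flag in the response header. This sketch parses a sample header line the way you might parse real dig +noall +comments output; the sample line is fabricated for illustration:

```shell
# Detect the "ra" (recursion available) flag in a dig header line.
# The sample line mimics output of: dig @<server> <name> +noall +comments
line=';; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1'
flags=${line#*flags:}      # ' qr rd ra; QUERY: ...'
flags=${flags%%;*}         # ' qr rd ra'
case " $flags " in
    *" ra "*) echo "recursion available" ;;
    *)        echo "recursion not available" ;;
esac
```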
5.5.6. Shells and command-line tools
(BZ#1584510)
systemd in debug mode produces unnecessary log messages
The systemd system and service manager in debug mode produces unnecessary log messages that start with:
"Failed to add rule for system call ..."
List the messages by running:
journalctl -b _PID=1
These debug messages are harmless, and you can safely ignore them.
Currently, there is no workaround available.
5.5.7. Dynamic programming languages, web and database servers
(BZ#1566048)
Problems in mod_cgid logging
If the mod_cgid Apache httpd module is used under a threaded multi-processing module (MPM), which is the default situation in RHEL 8, the following logging problems occur:
- The stderr output of the CGI script is not prefixed with standard timestamp information.
- The stderr output of the CGI script is not correctly redirected to a log file specific to the VirtualHost, if configured.
(BZ#1633224)
The IO::Socket::SSL Perl module does not support TLS 1.3
New features of the TLS 1.3 protocol, such as session resumption or post-handshake authentication, were implemented in the RHEL 8 OpenSSL library but not in the Net::SSLeay Perl module, and thus are unavailable in the IO::Socket::SSL Perl module. Consequently, client certificate authentication might fail and reestablishing sessions might be slower than with the TLS 1.2 protocol.
To work around this problem, disable usage of TLS 1.3 by setting the SSL_version option to the !TLSv1_3 value when creating an IO::Socket::SSL object.
(BZ#1632600)
Generated Scala documentation is unreadable
When generating documentation using the
scaladoc command, the resulting HTML page is unusable due to missing JavaScript resources.
(BZ#1641744)
5.5.8. Desktop
qxl does not work on VMs based on Wayland
The qxl driver is not able to provide kernel mode setting features on certain hypervisors. Consequently, graphics based on the Wayland protocol are not available to virtual machines (VMs) that use qxl, and the Wayland-based login screen does not start.
To work around the problem, use either:
- The Xorg display server instead of GNOME Shell on Wayland on VMs based on QXL graphics.
Or
- The virtio driver instead of the qxl driver for your VMs.
(BZ#1641763)
The console prompt is not displayed when running systemctl isolate multi-user.target
When running the systemctl isolate multi-user.target command from GNOME Terminal in a GNOME Desktop session, only a cursor is displayed, and not the console prompt. To work around the problem, press the Ctrl+Alt+F2 keys. As a result, the console prompt appears.
The behavior applies both to GNOME Shell on Wayland and X.Org display server.
5.5.9. Graphics infrastructures
Desktop running on X.Org hangs when changing to low screen resolutions
When using the GNOME desktop with the X.Org display server, the desktop becomes unresponsive if you attempt to change the screen resolution to low values. To work around the problem, do not set the screen resolution to a value lower than 800 × 600 pixels.
(BZ#1655413)
radeon fails to reset hardware correctly
The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail.
To work around this problem, blacklist radeon in kdump by adding the following lines to the /etc/kdump.conf file:
dracut_args --omit-drivers "radeon"
force_rebuild 1
Restart the machine and kdump. After starting kdump, the force_rebuild 1 line may be removed from the configuration file.
Note that in this scenario, no graphics will be available during kdump, but kdump will work successfully.
(BZ#1694705)
5.5.10. Hardware enablement
Backup slave MII status does not work when using the ARP link monitor
By default, devices managed by the i40e driver do source pruning, which drops packets whose source Media Access Control (MAC) address matches one of the receive filters. As a consequence, backup slave Media Independent Interface (MII) status does not work when using Address Resolution Protocol (ARP) monitoring in channel bonding. To work around this problem, disable source pruning using the following command:
# ethtool --set-priv-flags <ethX> disable-source-pruning on
As a result, the backup slave MII status will work as expected.
(BZ#1645433)
The HP NMI watchdog in some cases does not generate a crash dump
The hpwdt driver for the HP NMI watchdog is sometimes not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver.
As a consequence, hpwdt in some cases cannot call a panic to generate a crash dump.
(BZ#1602962)
5.5.11. Identity Management
The KCM credential cache is not suitable for a large number of credentials in a single credential cache
The Kerberos Credential Manager (KCM) can handle ccache sizes of up to 64 kB. If it contains too many credentials, Kerberos operations, such as kinit, fail due to a hardcoded limit on the buffer used to transfer data between the sssd-kcm component and the underlying database.
To work around this problem, add the ccache_storage = memory option in the [kcm] section of the /etc/sssd/sssd.conf file. This instructs the kcm responder to store the credential caches only in memory, not persistently. If you do this, restarting the system or sssd-kcm clears the credential caches.
(BZ#1448094)
Changing /etc/nsswitch.conf requires a manual system reboot
Any change to the /etc/nsswitch.conf file, for example running the authselect select profile_id command, requires a system reboot so that all relevant processes use the updated version of the /etc/nsswitch.conf file. If a system reboot is not possible, restart the service that joins your system to Active Directory, which is the System Security Services Daemon (SSSD) or winbind.
Conflicting timeout values prevent SSSD from connecting to servers
Some of the default timeout values related to the failover operations used by the System Security Services Daemon (SSSD) conflict. Consequently, the timeout value reserved for SSSD to talk to a single server prevents SSSD from trying other servers before the connection operation as a whole times out. To work around the problem, set the value of the ldap_opt_timeout parameter higher than the value of the dns_resolver_timeout parameter, and set the value of the dns_resolver_timeout parameter higher than the value of the dns_resolver_op_timeout parameter.
(BZ#1382750)
SSSD can look up only unique certificates in ID overrides
When multiple ID overrides contain the same certificate, the System Security Services Daemon (SSSD) is unable to resolve queries for the users that match the certificate. An attempt to look up these users does not return any user. Note that looking up users by using their user name or UID works as expected.
(BZ#1446101)
SSSD does not correctly handle multiple certificate matching rules with the same priority
If a given certificate matches multiple certificate matching rules with the same priority, the System Security Services Daemon (SSSD) uses only one of the rules. As a workaround, use a single certificate matching rule whose LDAP filter consists of the filters of the individual rules concatenated with the | (or) operator. For examples of certificate matching rules, see the sss-certmap(5) man page.
(BZ#1447945)
SSSD returns incorrect LDAP group membership for local users
If the System Security Services Daemon (SSSD) serves users from the local files, the files provider does not include group memberships from other domains. As a consequence, if a local user is a member of an LDAP group, the id local_user command does not return the user's LDAP group membership. To work around the problem, either reverse the order of the databases where the system looks up the group membership of users in the /etc/nsswitch.conf file, replacing sss files with files sss, or disable the implicit files domain by adding
enable_files_domain=False
to the [sssd] section in the /etc/sssd/sssd.conf file.
As a result, id local_user returns correct LDAP group membership for local users.
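The first workaround is a one-line edit. This sketch applies it to a copy of the file rather than /etc/nsswitch.conf itself:

```shell
# Swap the lookup order 'sss files' to 'files sss' in a sample
# nsswitch-style file (editing a scratch copy, not /etc/nsswitch.conf).
printf 'group:  sss files\n' > nsswitch.sample
sed -i 's/sss files/files sss/' nsswitch.sample
cat nsswitch.sample
```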
Sudo rules might not work with id_provider=ad if sudo rules reference group names
The System Security Services Daemon (SSSD) does not resolve Active Directory group names during the initgroups operation because of an optimization of communication between AD and SSSD by using a cache. The cache entry contains only a Security Identifier (SID) and not group names until the group is requested by name or ID. Therefore, sudo rules do not match the AD group unless the groups are fully resolved prior to running sudo.
To work around this problem, disable the optimization: open the /etc/sssd/sssd.conf file and add the ldap_use_tokengroups = false parameter in the [domain/example.com] section.
Default PAM settings for systemd-user have changed in RHEL 8, which may influence SSSD behavior
The Pluggable Authentication Modules (PAM) stack has changed in Red Hat Enterprise Linux 8. For example, the systemd user session now starts a PAM conversation using the systemd-user PAM service. This service now recursively includes the system-auth PAM service, which may include the pam_sss.so interface. This means that the SSSD access control is always called.
Be aware of the change when designing access control rules for RHEL 8 systems. For example, you can add the systemd-user service to the allowed services list.
Note that for some access control mechanisms, such as IPA HBAC or AD GPOs, the systemd-user service has been added to the allowed services list by default and you do not need to take any action.
IdM server does not work in FIPS mode
Due to an incomplete implementation of the SSL connector for Tomcat, an Identity Management (IdM) server with a certificate server installed does not work on machines with FIPS mode enabled.
Samba denies access when using the sss ID mapping plug-in
To use Samba as a file server on a RHEL host joined to an Active Directory (AD) domain, the Samba Winbind service must be running even if SSSD is used to manage users and groups from AD. If you join the domain using the realm join --client-software=sssd command or without specifying the --client-software parameter in this command, realm creates only the /etc/sssd/sssd.conf file. When you run Samba on the domain member with this configuration and add a configuration that uses the sss ID mapping back end to the /etc/samba/smb.conf file to share directories, changes in the ID mapping back end can cause errors. Consequently, Samba denies access to files in certain cases, even if the user or group exists and is known by SSSD.
If you plan to upgrade from a previous RHEL version and the ldap_id_mapping parameter in the /etc/sssd/sssd.conf file is set to True, which is the default, no workaround is available. In this case, do not upgrade the host to RHEL 8 until the problem has been fixed.
Possible workarounds in other scenarios:
- For new installations, join the domain using the realm join --client-software=winbind command. This configures the system to use Winbind instead of SSSD for all user and group lookups. In this case, Samba uses the rid or ad ID mapping plug-in in /etc/samba/smb.conf, depending on whether you set the --automatic-id-mapping option to yes (default) or no. If you plan to use SSSD in the future or on other systems, using --automatic-id-mapping=no allows an easier migration, but requires that you store POSIX UIDs and GIDs in AD for all users and groups.
- When upgrading from a previous RHEL version, if the ldap_id_mapping parameter in the /etc/sssd/sssd.conf file is set to False and the system uses the uidNumber and gidNumber attributes from AD for ID mapping:
- Change the idmap config <domain> : backend = sss entry in the /etc/samba/smb.conf file to idmap config <domain> : backend = ad
- Use the systemctl restart winbind command to restart Winbind.
The nuxwdog service fails in HSM environments and requires installing the keyutils package in non-HSM environments
The nuxwdog watchdog service has been integrated into Certificate System. As a consequence, nuxwdog is no longer provided as a separate package. To use the watchdog service, install the pki-server package.
Note that the nuxwdog service has the following known issues:
- The nuxwdog service does not work if you use a hardware security module (HSM). For this issue, no workaround is available.
- In a non-HSM environment, Red Hat Enterprise Linux 8.0 does not automatically install the keyutils package as a dependency. To install the package manually, use the dnf install keyutils command.
Adding ID overrides of AD users works only in the IdM CLI
Currently, adding ID overrides of Active Directory (AD) users to Identity Management (IdM) groups for the purpose of granting access to management roles fails in the IdM Web UI. To work around the problem, use the IdM command-line interface (CLI) instead.
Note that if you installed the ipa-idoverride-memberof-plugin package on the IdM server after previously performing certain operations using the ipa utility, Red Hat recommends cleaning up the ipa utility's cache to force it to refresh its view of the IdM server metadata.
To do so, remove the content of the ~/.cache/ipa directory for the user under which the ipa utility is executed. For example, for root:
# rm -r /root/.cache/ipa
No information about required DNS records displayed when enabling support for AD trust in IdM
When enabling support for Active Directory (AD) trust in a Red Hat Enterprise Linux Identity Management (IdM) installation with external DNS management, no information about required DNS records is displayed. Forest trust to AD is not successful until the required DNS records are added. To work around this problem, run the ipa dns-update-system-records --dry-run command to obtain a list of all DNS records required by IdM. When the external DNS for the IdM domain defines the required DNS records, establishing forest trust to AD is possible.
5.5.12. Compilers and development tools
Synthetic functions generated by GCC confuse SystemTap
GCC optimization can generate synthetic functions for partially inlined copies of other functions. Tools such as SystemTap and GDB cannot distinguish these synthetic functions from real functions. As a consequence, SystemTap can place probes on both synthetic and real function entry points, and thus register multiple probe hits for a single real function call.
To work around this problem, SystemTap scripts must be adapted with measures such as detecting recursion and suppressing probes related to inlined partial functions. For example, a script
probe kernel.function("can_nice").call { }
can try to avoid the described problem as follows:
global in_can_nice%
probe kernel.function("can_nice").call {
  in_can_nice[tid()] ++;
  if (in_can_nice[tid()] > 1) { next }
  /* code for real probe handler */
}
probe kernel.function("can_nice").return {
  in_can_nice[tid()] --;
}
Note that this example script does not take into account all possible scenarios, such as missed kprobes or kretprobes, or genuine intended recursion.
(BZ#1169184)
The ltrace tool does not report function calls
Because of improvements to binary hardening applied to all RHEL components, the ltrace tool can no longer detect function calls in binary files coming from RHEL components. As a consequence, ltrace output is empty because it does not report any detected calls when used on such binary files. There is no workaround currently available.
Note that ltrace can correctly report calls in custom binary files built without the respective hardening flags.
(BZ#1618748, BZ#1655368)
5.5.13. File systems and storage
Unable to discover an iSCSI target using the iscsiuio package
Red Hat Enterprise Linux 8 does not allow concurrent access to PCI register areas. As a consequence, a could not set host net params (err 29) error was set and the connection to the discovery portal failed. To work around this problem, set the kernel parameter iomem=relaxed in the kernel command line for the iSCSI offload. This specifically involves any offload using the bnx2i driver. As a result, connection to the discovery portal is now successful and the iscsiuio package now works correctly.
(BZ#1626629)
VDO volumes lose deduplication advice after moving to a different-endian platform
Virtual Data Optimizer (VDO) writes the Universal Deduplication Service (UDS) index header in the endian format native to your platform. VDO considers the UDS index corrupt and overwrites it with a new, blank index if you move your VDO volume to a platform that uses a different endianness.
As a consequence, any deduplication advice stored in the UDS index prior to being overwritten is lost. VDO is then unable to deduplicate newly written data against the data that was stored before you moved the volume, leading to lower space savings.
The XFS DAX mount option is incompatible with shared copy-on-write data extents
An XFS file system formatted with the shared copy-on-write data extents feature is not compatible with the -o dax mount option. As a consequence, mounting such a file system with -o dax fails.
To work around the problem, format the file system with the reflink=0 metadata option to disable shared copy-on-write data extents:

# mkfs.xfs -m reflink=0 block-device

As a result, mounting the file system with -o dax is successful.
For more information, see Creating a file system DAX namespace on an NVDIMM.
(BZ#1620330)
Certain SCSI drivers might sometimes use an excessive amount of memory
Certain SCSI drivers use a larger amount of memory than in RHEL 7. In certain cases, such as vPort creation on a Fibre Channel host bus adapter (HBA), the memory usage might be excessive, depending upon the system configuration.
The increased memory usage is caused by memory preallocation in the block layer. Both the multiqueue block device scheduling (BLK-MQ) and the multiqueue SCSI stack (SCSI-MQ) preallocate memory for each I/O request in RHEL 8, leading to the increased memory usage.
(BZ#1733278)
5.5.14. Networking
nftables does not support multi-dimensional IP set types
The nftables packet-filtering framework does not support set types with concatenations and intervals. Consequently, you cannot use multi-dimensional IP set types, such as hash:net,port, with nftables.
To work around this problem, use the iptables framework with the ipset tool if you require multi-dimensional IP set types.
(BZ#1593711)
The TRACE target in the iptables-extensions(8) man page does not refer to the nf_tables variant
The description of the TRACE target in the iptables-extensions(8) man page refers only to the compat variant, but Red Hat Enterprise Linux (RHEL) 8.0 uses the nf_tables variant. The nftables-based iptables utility in RHEL uses the meta nftrace expression internally. Therefore, the kernel does not print TRACE events in the kernel log but sends them to the user space instead. However, the man page does not reference the xtables-monitor command-line utility to display these events.
The ebtables command does not support the broute table
The nftables-based ebtables command in Red Hat Enterprise Linux 8.0 does not support the broute table. Consequently, users cannot use this feature.
(BZ#1649790)
IPsec network traffic fails during IPsec offloading when GRO is disabled
IPsec offloading is not expected to work when Generic Receive Offload (GRO) is disabled on the device. If IPsec offloading is configured on a network interface and GRO is disabled on that device, IPsec network traffic fails.
To work around this problem, keep GRO enabled on the device.
(BZ#1649647)
(BZ#1571655)
Advanced options of IPsec-based VPN cannot be changed using gnome-control-center
When configuring an IPsec-based VPN connection using the gnome-control-center application, the Advanced dialog only displays the configuration and does not allow any changes. As a consequence, users cannot change any advanced IPsec options. To work around this problem, use the nm-connection-editor or nmcli tools to configure the advanced properties.
The /etc/hosts.allow and /etc/hosts.deny files contain inaccurate information
The tcp_wrappers package is removed in Red Hat Enterprise Linux (RHEL) 8, but not its files, /etc/hosts.allow and /etc/hosts.deny. As a consequence, these files contain outdated information, which is not applicable for RHEL 8.
To work around this problem, use firewall rules for filtering access to the services. For filtering based on usernames and hostnames, use the application-specific configuration.
IP defragmentation cannot be sustained under network traffic overload
In Red Hat Enterprise Linux 8, the garbage collection kernel thread has been removed and IP fragments expire only on timeout. As a result, CPU usage under Denial of Service (DoS) is much lower, and the maximum sustainable fragment drop rate is limited by the amount of memory configured for the IP reassembly unit. With the default settings, workloads that require fragmented traffic in the presence of packet drops, packet reordering, or many concurrent fragmented flows may incur a significant performance regression.
In this case, you can tune the IP fragmentation cache in the /proc/sys/net/ipv4 directory by setting the ipfrag_high_thresh variable to limit the amount of memory and the ipfrag_time variable to set how long, in seconds, an IP fragment is kept in memory. For example:

echo 419430400 > /proc/sys/net/ipv4/ipfrag_high_thresh
echo 1 > /proc/sys/net/ipv4/ipfrag_time

The above applies to IPv4 traffic. For IPv6, the relevant tunables are ip6frag_high_thresh and ip6frag_time in the /proc/sys/net/ipv6/ directory.
Note that any workload relying on high-speed fragmented traffic can cause stability and performance issues, especially with packet drops, and such deployments are highly discouraged in production.
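As a quick sanity check on the example above, the 419430400 value written to ipfrag_high_thresh is a byte count corresponding to exactly 400 MiB; this short snippet (mine, not part of the release notes) confirms the arithmetic:

```python
# ipfrag_high_thresh is specified in bytes; the example allots 400 MiB
# to the IPv4 reassembly unit.
mib = 1024 * 1024
threshold_bytes = 400 * mib
print(threshold_bytes)  # 419430400
```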
(BZ#1597671)
Network interface name changes in RHEL 8
In Red Hat Enterprise Linux 8, the same consistent network device naming scheme is used by default as in RHEL 7. However, some kernel drivers, such as e1000e, nfp, qede, sfc, tg3, and bnxt_en, changed their consistent names on a fresh installation of RHEL 8. The names are preserved on upgrade from RHEL 7.
5.5.15. Security
libselinux-python is available only through its module
The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the dnf install libselinux-python command.
To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands:

# dnf module enable libselinux-python
# dnf install libselinux-python

Alternatively, install libselinux-python using its install profile with a single command:

# dnf module install libselinux-python:2.8/common

As a result, you can install libselinux-python using the respective module.
(BZ#1666328)
libssh does not comply with the system-wide crypto policy
The libssh library does not follow system-wide cryptographic policy settings. As a consequence, the set of supported algorithms is not changed when the administrator changes the crypto policies level using the update-crypto-policies command.
To work around this problem, the set of advertised algorithms needs to be set individually by every application that uses libssh. As a result, when the system is set to the LEGACY or FUTURE policy level, applications that use libssh behave inconsistently when compared to OpenSSH.
(BZ#1646563)
Certain rsyslog priority strings do not work correctly
Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog:

NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL

To work around this problem, use only correctly working priority strings:

NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL

As a result, current configurations must be limited to the strings that work correctly.
(JIRA:RHELPLAN-10431)
OpenSCAP rpmverifypackage does not work correctly
The chdir and chroot system calls are called twice by the rpmverifypackage probe. Consequently, an error occurs when the probe is utilized during an OpenSCAP scan with custom Open Vulnerability and Assessment Language (OVAL) content.
To work around this problem, do not use the rpmverifypackage_test OVAL test in your content, or use only the content from the scap-security-guide package, where rpmverifypackage_test is not used.
(BZ#1646197)
SCAP Workbench fails to generate results-based remediations from tailored profiles
The following error occurs when trying to generate results-based remediation roles from a customized profile using the SCAP Workbench tool:
Error generating remediation role .../remediation.sh: Exit code of oscap was 1: [output truncated]
To work around this problem, use the oscap command with the --tailoring-file option.
(BZ#1640715)
Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8
Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap, which might cause confusion. This is done to preserve backward compatibility with Red Hat Enterprise Linux 7.
(BZ#1665082)
OpenSCAP rpmverifyfile does not work
The OpenSCAP scanner does not correctly change the current working directory in offline mode, and the fchdir function is not called with the correct arguments in the OpenSCAP rpmverifyfile probe. Consequently, scanning arbitrary file systems using the oscap-chroot command fails if rpmverifyfile_test is used in SCAP content. As a result, oscap-chroot aborts in the described scenario.
(BZ#1636431)
OpenSCAP does not provide offline scanning of virtual machines and containers
Refactoring of the OpenSCAP codebase caused certain RPM probes to fail to scan VM and container file systems in offline mode. For that reason, the following tools were removed from the openscap-utils package: oscap-vm and oscap-chroot. Also, the openscap-containers package was completely removed.
(BZ#1618489)
A utility for security and compliance scanning of containers is not available
In Red Hat Enterprise Linux 7, the oscap-docker utility can be used for scanning of Docker containers based on Atomic technologies. In Red Hat Enterprise Linux 8, the Docker- and Atomic-related OpenSCAP commands are not available. As a result, oscap-docker or an equivalent utility for security and compliance scanning of containers is not available in RHEL 8 at the moment.
(BZ#1642373)
Apache httpd fails to start if it uses an RSA private key stored in a PKCS#11 device and an RSA-PSS certificate
The PKCS#11 standard does not differentiate between RSA and RSA-PSS key objects and uses the CKK_RSA type for both. However, OpenSSL uses different types for RSA and RSA-PSS keys. As a consequence, the openssl-pkcs11 engine cannot determine which type should be provided to OpenSSL for PKCS#11 RSA key objects. Currently, the engine sets the key type as RSA keys for all PKCS#11 CKK_RSA objects. When OpenSSL compares the type of an RSA-PSS public key obtained from the certificate with the type contained in an RSA private key object provided by the engine, it concludes that the types are different. Therefore, the certificate and the private key do not match. The check performed in the X509_check_private_key() OpenSSL function returns an error in this scenario. The httpd web server calls this function in its startup process to check whether the provided certificate and key match. Since this check always fails for a certificate containing an RSA-PSS public key and an RSA private key stored in the PKCS#11 module, httpd fails to start with this configuration. There is no workaround available for this issue.
httpd fails to start if it uses an ECDSA private key without a corresponding public key stored in a PKCS#11 device
Unlike RSA keys, ECDSA private keys do not necessarily contain public key information; in this case, you cannot obtain the public key from an ECDSA private key. For this reason, a PKCS#11 device stores public key information in a separate object, whether it is a public key object or a certificate object. OpenSSL expects the EVP_PKEY structure provided by an engine for a private key to contain the public key information. When filling the EVP_PKEY structure to be provided to OpenSSL, the engine in the openssl-pkcs11 package tries to fetch the public key information only from matching public key objects and ignores any present certificate objects.
When OpenSSL requests an ECDSA private key from the engine, the provided EVP_PKEY structure does not contain the public key information if the public key is not present in the PKCS#11 device, even when a matching certificate that contains the public key is available. As a consequence, since the Apache httpd web server calls the X509_check_private_key() function, which requires the public key, in its start-up process, httpd fails to start in this scenario. To work around the problem, store both the private and the public key in the PKCS#11 device when using ECDSA keys. As a result, httpd starts correctly when ECDSA keys are stored in the PKCS#11 device.
OpenSSH does not handle PKCS #11 URIs for keys with mismatching labels correctly
The OpenSSH suite can identify key pairs by a label. The label might differ on private and public keys stored on a smart card. Consequently, specifying PKCS #11 URIs with the object part (key label) can prevent OpenSSH from finding appropriate objects in PKCS #11.
To work around this problem, specify PKCS #11 URIs without the object part. As a result, OpenSSH is able to use keys on smart cards referenced using PKCS #11 URIs.
(BZ#1671262)
curve25519-sha256 is not supported by default in OpenSSH
The curve25519-sha256 SSH key exchange algorithm is missing in the system-wide crypto policies configurations for the OpenSSH client and server even though it is compliant with the default policy level. As a consequence, if a client or a server uses curve25519-sha256 and this algorithm is not supported by the host, the connection might fail.
To work around this problem, you can manually override the configuration of system-wide crypto policies by modifying the openssh.config and opensshserver.config files in the /etc/crypto-policies/back-ends/ directory for the OpenSSH client and server. Note that this configuration is overwritten with every change of system-wide crypto policies. See the update-crypto-policies(8) man page for more information.
SSH connections with VMware-hosted systems do not work
The current version of the OpenSSH suite introduces a change of the default IP Quality of Service (IPQoS) flags in SSH packets, which is not correctly handled by the VMware virtualization platform. Consequently, it is not possible to establish an SSH connection with systems on VMware.
To work around this problem, include the IPQoS=throughput option in the ssh_config file. As a result, SSH connections with VMware-hosted systems work correctly.
See the RHEL 8 running in VMWare Workstation unable to connect via SSH to other hosts Knowledgebase solution article for more information.
(BZ#1651763)
5.5.16. Subscription management
No message is printed for the successful setting and unsetting of service-level
When the candlepin service does not have the 'syspurpose' functionality, subscription manager uses a different code path to set the service-level argument. This code path does not print the result of the operation. As a consequence, no message is displayed when the service level is set by subscription manager. This is especially problematic when the service-level value has a typo or is not actually available.
syspurpose addons have no effect on the subscription-manager attach --auto output
In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role, usage, service_level_agreement, and addons. Currently, only role, usage, and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values for the addons argument will not observe any effect on the subscriptions that are auto-attached.
5.5.17. Virtualization
ESXi virtual machines that were customized using cloud-init and cloned boot very slowly
Currently, if the cloud-init service is used to modify a virtual machine (VM) that runs on the VMware ESXi hypervisor to use a static IP and the VM is then cloned, the new cloned VM in some cases takes a very long time to reboot. This is caused by cloud-init rewriting the VM's static IP to DHCP and then searching for an available datasource.
To work around this problem, you can uninstall cloud-init after the VM is booted for the first time. As a result, the subsequent reboots will not be slowed down.
(BZ#1666961, BZ#1706482)
Enabling nested virtualization blocks live migration
Currently, the nested virtualization feature is incompatible with live migration. Therefore, enabling nested virtualization on a RHEL 8 host prevents migrating any virtual machines (VMs) from the host, as well as saving VM state snapshots to disk.
Note that nested virtualization is currently provided as a Technology Preview in RHEL 8, and is therefore not supported. In addition, nested virtualization is disabled by default. If you want to enable it, use the kvm_intel.nested or kvm_amd.nested module parameters.
Using cloud-init to provision virtual machines on Microsoft Azure fails
Currently, it is not possible to use the cloud-init utility to provision a RHEL 8 virtual machine (VM) on the Microsoft Azure platform. To work around this problem, use one of the following methods:
- Use the WALinuxAgent package instead of cloud-init to provision VMs on Microsoft Azure.
- Add the following setting to the [main] section in the /etc/NetworkManager/NetworkManager.conf file:

[main]
dhcp=dhclient
(BZ#1641190)
Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts
When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log:
The guest operating system reported that it failed with the following error code: 0x1E
This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 as the host.
(BZ#1583445)
virsh iface-* commands do not work consistently
Currently, virsh iface-* commands, such as virsh iface-start and virsh iface-destroy, frequently fail due to configuration dependencies. Therefore, it is recommended not to use virsh iface-* commands for configuring and managing host network connections. Instead, use the NetworkManager program and its related management applications.
(BZ#1664592)
Linux virtual machine extensions for Azure sometimes do not work
RHEL 8 does not include the python2 package by default. As a consequence, running Linux virtual machine extensions for Azure, also known as azure-linux-extensions, on a RHEL 8 VM in some cases fails.
To increase the probability that azure-linux-extensions will work as expected, install python2 on the RHEL 8 VM manually:

# yum install python2
(BZ#1561132)
5.5.18. Supportability
redhat-support-tool does not collect sosreport automatically from opencase
The redhat-support-tool command cannot create a sosreport archive. To work around this problem, run the sosreport command separately and then enter the redhat-support-tool addattachment -c command to upload the archive, or use the web UI on the Customer Portal. As a result, a case will be created and sosreport will be uploaded.
Note that the findkerneldebugs, btextract, analyze, and diagnose commands do not work as expected and will be fixed in future releases.
Chapter 6. Notable changes to containers
A set of container images is available for Red Hat Enterprise Linux (RHEL) 8.0.
Chapter 7. Internationalization
7.1. Red Hat Enterprise Linux
- glibc localization for RHEL is distributed in multiple packages.
- The glibc package updates for multiple locales are now synchronized with the Common Locale Data Repository (CLDR).
Appendix A. List of tickets by component
Acknowledgements
Thank you to everyone who provided feedback as part of the RHEL 8 Readiness Challenge. The top 3 winners are:
- Sterling Alexander
- John Pittman
- Jake Hunsaker
Appendix B. Revision History
0.0-4
Tue Apr 28 2020, Lenka Špačková (lspackova@redhat.com)
- Updated information about in-place upgrades in Overview.
0.0-3
Thu Mar 12 2020, Lenka Špačková (lspackova@redhat.com)
- Added the missing postfix RHEL system role to Technology Previews.
0.0-2
Wed Feb 12 2020, Jaroslav Klech (jklech@redhat.com)
- Provided a complete kernel version to Architectures and New Features chapters.
0.0-1
Tue Jul 30 2019, Lucie Maňásková (lmanasko@redhat.com)
- Release of the Red Hat Enterprise Linux 8.0.1 Release Notes.
0.0-0
Tue May 07 2019, Ioanna Gkioka (igkioka@redhat.com)
- Release of the Red Hat Enterprise Linux 8.0 Release Notes.
Decorators are an experimental feature of TypeScript that can be attached to a class declaration, method, accessor, property, or parameter in the form
@expression. Read more about decorators here.
Although decorators are an experimental feature, TypeScript-based server-side frameworks such as the following use them heavily:
If you have used NestJS, you may have seen decorators in controllers:
import { Controller, Get } from '@nestjs/common';

@Controller('/cats')
export class CatsController {
  @Get()
  findAll(): string {
    return 'This action returns all cats';
  }
}
You can do a lot of things in decorators. One particularly useful thing is to add metadata to the decorator target using the Reflection API.
Frameworks like NestJS expose many framework APIs as decorators, which attach metadata to the decorator target; the Reflection API is then used to access the attached metadata.
For example, the @Controller() decorator on a class registers the class in the metadata as a controller for the specified HTTP route. The metadata is read by the framework to know which controller is responsible for which route.
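As a language-agnostic sketch of this attach-then-read pattern (Python is used here purely as an analogy — this is not NestJS code, and the names are mine), a decorator can attach routing metadata to a class and a tiny "framework" can read it back:

```python
def controller(path):
    """Attach routing metadata to the decorated class."""
    def decorate(cls):
        cls._route_path = path  # stands in for Reflect.defineMetadata
        return cls
    return decorate

@controller('/cats')
class CatsController:
    pass

def build_routes(*controllers):
    """The 'framework': read the metadata back to build a routing table."""
    return {cls._route_path: cls.__name__ for cls in controllers}

print(build_routes(CatsController))  # {'/cats': 'CatsController'}
```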
Most of the time, I find myself using the same set of decorators together, like this:
import { Controller, Get } from '@nestjs/common';
import { AuthenticationGuard, AccessControlGuard } from 'app/decorators';

@Controller({ path: '/admin/dashboard' })
@AuthenticationGuard()
@AccessControlGuard({ role: 'admin' })
export class AdminDashboardController {
  @Get()
  index(): string {
    return 'dashboard data for admin';
  }
}

@Controller({ path: '/admin/posts' })
@AuthenticationGuard()
@AccessControlGuard({ role: 'admin' })
export class AdminPostsController {
  @Get()
  findPaginated(): string {
    return 'all posts for the page';
  }
}
It may be fine if you have just a few controllers, but it gets hard to maintain if you have lots of controllers.
A neat way to organize a group of decorators that come together in your application is to expose a new decorator that composes the group of decorators internally.
Imagine how neat it would be if we could do something like this for the above two controller classes:
import { Controller, Get } from '@nestjs/common';
import { AdminController, AccessControlGuard } from 'app/decorators';

@AdminController({ path: '/dashboard' })
export class AdminDashboardController {
  @Get()
  index(): string {
    return 'dashboard data for admin';
  }
}

@AdminController({ path: '/posts' })
export class AdminPostsController {
  @Get()
  findPaginated(): string {
    return 'all posts for the page';
  }
}
Let’s try to implement the new AdminController decorator. In theory, pseudo-code for the decorator could be something like this:
@AdminController(options) {
  @Controller({ path: '/admin' + options.path })
  @AuthenticationGuard()
  @AccessControlGuard({ role: 'admin' })
}
Like any decorator, it has to implement a certain type. I am specifically creating a class decorator in this post, but the idea of decorator composition should apply to other types of decorators as well. Here’s the actual implementation of the fictional AdminController decorator that we used in the gists above.

import { Controller, ControllerOptions } from '@nestjs/common';
import { join } from 'path';
import { AuthenticationGuard, AccessControlGuard } from 'app/guards';

export function AdminController(options: ControllerOptions) {
  return function (target: Function) {
    Controller(join('/admin', options.path))(target);
    AuthenticationGuard()(target);
    AccessControlGuard({ role: 'admin' })(target);
  };
}
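The same composition idea works outside TypeScript too. Here is a hedged Python analogy (the compose helper and the toy tag decorator are mine, not NestJS APIs) that folds several class decorators into one:

```python
def compose(*decorators):
    """Return a single class decorator that applies the given decorators."""
    def decorate(cls):
        for dec in reversed(decorators):  # apply innermost decorator first
            cls = dec(cls)
        return cls
    return decorate

# A toy class decorator standing in for @AuthenticationGuard and friends.
def tag(name):
    def decorate(cls):
        cls.tags = getattr(cls, 'tags', ()) + (name,)
        return cls
    return decorate

admin_controller = compose(tag('auth'), tag('admin'))

@admin_controller
class AdminDashboardController:
    pass

print(AdminDashboardController.tags)  # ('admin', 'auth')
```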
Final Thoughts
Decorators are currently in stage 2 proposal for JavaScript, but they are already used heavily in the TypeScript community.
Please feel free to play around with decorator patterns like this and share tips and tricks with the awesome JavaScript community.
If you like the decorator composition pattern, you may also like this library that I recently authored based on the pattern.
In this article, I present the simple idea of exponentiation by squaring. This idea saves computation time in determining the value of large integer powers by “splitting” the exponentiation in a clever way into a series of squaring operations. The technique is based on the fact that, for \(n\) even,
\[x^n=(x^2)^\frac{n}{2}.\]
For \(n\) odd you can simply decrease \(n\) by one and do an extra multiplication by \(x\) outside of the exponent. This leads to the full expression,
\[x^n=\begin{cases}
x(x^2)^\frac{n-1}{2},&n \textrm{ odd}\\[0.3em]
(x^2)^\frac{n}{2},&n \textrm{ even}
\end{cases}.\]
You then apply this recursively, meaning that you don’t just compute \(x^2\) and raise that number to the power of \(n/2\), but keep using the same trick again and again.
For example, if \(n=6\), we first get \(x^6=(x^2)^3\). If we define \(y=x^2\) for clarity, we then get \(y^3=y(y^2)^1=(x^2)(x^2)^2\). This means that \(x^6\) can be computed using only three multiplications instead of six; one to compute \(x^2\), one to compute \((x^2)^2\), and a third one to multiply both together. Of course, for a large \(n\) the gains are much bigger.
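The count of three multiplications for \(x^6\) can be verified with a short instrumented sketch (the counter wrapper is mine, not part of the article's program):

```python
def pow_counted(x, n, counter):
    """Exponentiation by squaring, counting every multiplication."""
    if n == 0:
        return 1
    if n == 1:
        return x
    counter[0] += 1            # the x * x squaring
    if n % 2:
        sub = pow_counted(x * x, (n - 1) // 2, counter)
        counter[0] += 1        # the extra multiplication by x
        return x * sub
    return pow_counted(x * x, n // 2, counter)

count = [0]
print(pow_counted(2, 6, count), count[0])  # 64 3
```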
For simple numbers and small exponents, computing \(x^n\) directly is not a problem, but when a single multiplication takes a lot of time (e.g., for matrices), or when the exponent is large, it can make a big difference.
Python Code
Below is a small Python program that computes \(x^n\) recursively.
from __future__ import print_function

def exponentiation_by_squaring(x, n):
    if n == 0:
        return 1
    elif n == 1:
        return x
    elif n % 2:
        return x * exponentiation_by_squaring(x * x, (n - 1) // 2)
    else:
        return exponentiation_by_squaring(x * x, n // 2)

x = 2
n = 20
print(exponentiation_by_squaring(x, n))
Remember this simple trick and keep it in your algorithm toolbox!
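For completeness, the same idea also works iteratively by scanning the bits of \(n\); this non-recursive variant is my own sketch, not part of the original program:

```python
def power_iterative(x, n):
    """Iterative exponentiation by squaring."""
    result = 1
    while n > 0:
        if n % 2:        # current low bit set: fold x into the result
            result *= x
        x *= x           # square for the next bit
        n //= 2
    return result

print(power_iterative(2, 20))  # 1048576
```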
CKEDITOR.editor
Filtering
Properties
activeEnterMode : Number
since 4.3.0 readonly
The dynamic Enter mode which should be used in the current context (selection location). By default it equals the enterMode and it can be changed by the setActiveEnterMode method.
See also the setActiveEnterMode method for an explanation of dynamic settings.
activeFilter : filter
since 4.3.0 readonly
The active filter instance which should be used in the current context (selection location). This instance is used to decide which commands, buttons, and other features can be enabled.
By default it equals the filter and it can be changed by the setActiveFilter method.
editor.on( 'activeFilterChange', function() {
    if ( editor.activeFilter.check( 'cite' ) )
        // Do something when <cite> was enabled - e.g. enable a button.
    else
        // Otherwise do something else.
} );
See also the setActiveEnterMode method for an explanation of dynamic settings.
activeShiftEnterMode : Number
since 4.3.0 readonly
See the activeEnterMode property.
balloonToolbars : contextManager
since 4.8.0 readonly
The balloon toolbar manager for a given editor instance. It ensures that there is only one toolbar visible at a time.
Use the CKEDITOR.plugins.balloontoolbar.contextManager.create method to register a new toolbar context.
The following example will add a toolbar containing Link and Unlink buttons for any anchor or image:
editor.balloonToolbars.create( {
    buttons: 'Link,Unlink',
    cssSelector: 'a[href], img'
} );
Indicates that the editor is running in an environment where no block elements are accepted inside the content. This can be, for example, an inline editor based on an <h1> element.
The configuration for this editor instance. It inherits all settings defined in CKEDITOR.config, combined with settings loaded from custom configuration files and those defined inline in the page when creating the editor.
var editor = CKEDITOR.instances.editor1; alert( editor.config.skin ); // e.g. 'moono'
The outermost element in the DOM tree in which the editable element resides. It is provided by a specific editor creator after the editor UI is created and is not intended to be modified.
var editor = CKEDITOR.instances.editor1; alert( editor.container.getName() ); // 'span'
contextMenu : contextMenu
readonly
copyFormatting : state
since 4.6.0
Current state of the Copy Formatting plugin in this editor instance.
If defined, points to the data processor which is responsible for translating and transforming the editor data on input and output. Generally it will point to an instance of CKEDITOR.htmlDataProcessor, which handles HTML data. The editor may also handle other data formats by using different data processors provided by specific plugins.
The document that stores the editor content.
- For the classic (iframe-based) editor it is equal to the document inside the iframe containing the editable element.
- For the inline editor it is equal to CKEDITOR.document.
The document object is available after the contentDom event is fired and may be invalidated when the contentDomUnload event is fired (classic editor only).
editor.on( 'contentDom', function() { console.log( editor.document ); } );
The original host page element upon which the editor is created. It is only supposed to be provided by the particular editor creator and is not subject to be modified.
elementMode : Number
readonly
This property indicates the way this instance is associated with the element. See also CKEDITOR.ELEMENT_MODE_INLINE and CKEDITOR.ELEMENT_MODE_REPLACE.
The main (static) Enter mode which is a validated version of the CKEDITOR.config.enterMode setting. Currently only one rule exists — blockless editors may have Enter modes set only to CKEDITOR.ENTER_BR.
The main filter instance used for input data filtering, data transformations, and activation of features.
It points to a CKEDITOR.filter instance set up based on editor configuration.
focusManager : focusManager
readonly
Controls the focus state of this editor instance. This property is rarely used for normal API operations. It is mainly targeted at developers adding UI elements to the editor interface.
A unique random string assigned to each editor instance on the page.
keystrokeHandler : keystrokeHandler
readonly
Controls keystroke typing in this editor instance.
An object that contains all language strings used by the editor interface.
alert( editor.lang.basicstyles.bold ); // e.g. 'Negrito' (if the language is set to Portuguese)
The code for the language resources that have been loaded for the user interface elements of this editor instance.
alert( editor.langCode ); // e.g. 'en'
The current editing mode. An editing mode basically provides different ways of editing or viewing the editor content.
alert( CKEDITOR.instances.editor1.mode ); // (e.g.) 'wysiwyg'
A unique identifier of this editor instance.
pasteFilter : filter
since 4.5.0 readonly
Content filter which is used when external data is pasted or dropped into the editor or a forced paste as plain text occurs.
This object might be used on the fly to define rules for pasted external content. This object is available and used if the clipboard plugin is enabled and CKEDITOR.config.pasteFilter or CKEDITOR.config.forcePasteAsPlainText was defined.
To enable the filter:
var editor = CKEDITOR.replace( 'editor', { pasteFilter: 'plain-text' } );
You can also modify the filter on the fly later on:
editor.pasteFilter = new CKEDITOR.filter( 'p h1 h2; a[!href]' );
Note that the paste filter is only applied to external data. There are three data sources:
- copied and pasted in the same editor (internal),
- copied from one editor and pasted into another (cross-editor),
- coming from all other sources like websites, MS Word, etc. (external).
If Advanced Content Filter is not disabled, then it will also be applied to pasted and dropped data. The paste filter job is to "normalize" external data which often needs to be handled differently than content produced by the editor.
An object that contains references to all plugins used by this editor instance.
alert( editor.plugins.dialog.path ); // e.g. ''

// Check if a plugin is available.
if ( editor.plugins.image ) { ... }
Indicates the read-only state of this editor. This is a read-only property. See also setReadOnly.
shiftEnterMode : Number
since 4.3.0 readonly
See the enterMode property.
Indicates editor initialization status. The following statuses are available:
- unloaded: The initial state — the editor instance was initialized, but its components (configuration, plugins, language files) are not loaded yet.
- loaded: The editor components were loaded — see the loaded event.
- ready: The editor is fully initialized and ready — see the instanceReady event.
- destroyed: The editor was destroyed — see the destroy method.
Defaults to
'unloaded'
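The status transitions described above can be traced with a small sketch. FakeEditor below is a hypothetical minimal event emitter, used only so the flow can be followed outside a browser; a real CKEDITOR instance drives these transitions itself.

```javascript
// Stand-in sketch of the editor lifecycle: unloaded -> loaded -> ready.
// FakeEditor is NOT part of CKEditor; it only mimics the on/fire contract.
function FakeEditor() {
    this.status = 'unloaded';
    this._listeners = {};
}
FakeEditor.prototype.on = function( name, fn ) {
    ( this._listeners[ name ] = this._listeners[ name ] || [] ).push( fn );
};
FakeEditor.prototype.fire = function( name ) {
    ( this._listeners[ name ] || [] ).forEach( function( fn ) { fn(); } );
};

var editor = new FakeEditor();
editor.on( 'loaded', function() { console.log( 'status: ' + editor.status ); } );
editor.on( 'instanceReady', function() { console.log( 'status: ' + editor.status ); } );

// Simulate what the real editor does internally:
editor.status = 'loaded';
editor.fire( 'loaded' );        // prints "status: loaded"
editor.status = 'ready';
editor.fire( 'instanceReady' ); // prints "status: ready"
```

In a real page you would attach the same `loaded` and `instanceReady` listeners to a CKEDITOR instance instead of constructing anything yourself.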
The tabbing navigation order determined for this editor instance. This can be set by the CKEDITOR.config.tabIndex setting or taken from the tabindex attribute of the element associated with the editor.
alert( editor.tabIndex ); // e.g. 0
Defaults to
0
Contains all UI templates created for this editor instance.
Defaults to
{}
Indicates the human-readable title of this editor. Although this is a read-only property, it can be initialized with CKEDITOR.config.title.
Note: Please do not confuse this property with editor.name which identifies the instance in the CKEDITOR.instances literal.
The toolbar definition used by the editor. It is created from the CKEDITOR.config.toolbar option if it is set or automatically based on CKEDITOR.config.toolbarGroups.
The namespace containing UI features related to this editor instance.
uploadRepository : uploadRepository
since 4.5.0 readonly
An instance of the upload repository. It allows you to create and get file loaders.
var loader = editor.uploadRepository.create( file );
loader.loadAndUpload( '' );
widgets : repository
since 4.3.0 readonly
An instance of widget repository. It contains all registered widget definitions and initialized instances.
editor.widgets.add( 'someName', {
    // Widget definition...
} );

editor.widgets.registered.someName; // -> Widget definition
The window instance related to the document property.
Static properties
useCapture : Boolean
mixed static
Methods
constructor( [ instanceConfig ], [ element ], [ mode ] ) → editor
Creates an editor class instance.
addCommand( commandName, commandDefinition )
Adds a command definition to the editor instance. Commands added with this function can be executed later with the execCommand method.
editorInstance.addCommand( 'sample', {
    exec: function( editor ) {
        alert( 'Executing a command for the editor name "' + editor.name + '"!' );
    }
} );
Since 4.10.0 this method also accepts a CKEDITOR.command instance as a parameter.
Parameters
commandName : String
The identifier name of the command.
commandDefinition : commandDefinition | command
The command definition or a CKEDITOR.command instance.
addContentsCss( cssPath )
since 4.4.0
Adds the path to a stylesheet file to the existing CKEDITOR.config.contentsCss value.
Note: This method is available only with the wysiwygarea plugin and only affects classic editors based on it (so it does not affect inline editors).
editor.addContentsCss( 'assets/contents.css' );
Parameters
cssPath : String
The path to the stylesheet file which should be added.
addFeature( feature ) → Boolean
since 4.1.0
Shorthand for CKEDITOR.filter.addFeature.
Parameters
feature : feature
See CKEDITOR.filter.addFeature.
Returns
Boolean
See CKEDITOR.filter.addFeature.
addMenuGroup( name, [ order ] )
Registers an item group to the editor context menu in order to make it possible to associate it with menu items later.
Parameters
name : String
Specify a group name.
[ order ] : Number
Define the display sequence of this group inside the menu. A smaller value gets displayed first.
Defaults to
100
addMenuItem( name, definition )
Adds an item from the specified definition to the editor context menu.
Parameters
name : String
The menu item name.
definition : Object
The menu item definition.
addMenuItems( definitions )
Adds one or more items from the specified definition object to the editor context menu.
Parameters
definitions : Object
Object where keys are used as itemName and corresponding values as the definition for an addMenuItem call.
addMode( mode, exec )
Registers an editing mode. This function is to be used mainly by plugins.
Parameters
mode : String
The mode name.
exec : Function
The function that performs the actual mode change.
addRemoveFormatFilter( func )
since 3.3.0
Adds a function to the collection used to decide whether a specific element should be considered a formatting element and thus could be removed by the removeFormat command.
Note: Only available when the removeformat plugin is loaded.
// Don't remove empty span.
editor.addRemoveFormatFilter( function( element ) {
    return !( element.is( 'span' ) && CKEDITOR.tools.isEmpty( element.getAttributes() ) );
} );
Parameters
applyStyle( style )
Applies the style upon the editor's current selection. Shorthand for CKEDITOR.style.apply.
attachStyleStateChange( style, callback )
Registers a function to be called whenever the selection position changes in the editing area. The current state is passed to the function. The possible states are CKEDITOR.TRISTATE_ON and CKEDITOR.TRISTATE_OFF.
// Create a style object for the <b> element.
var style = new CKEDITOR.style( { element: 'b' } );
var editor = CKEDITOR.instances.editor1;
editor.attachStyleStateChange( style, function( state ) {
    if ( state == CKEDITOR.TRISTATE_ON )
        alert( 'The current state for the B element is ON' );
    else
        alert( 'The current state for the B element is OFF' );
} );
Parameters
Registers an event handler under the capturing stage on a supported target.
checkDirty() → Boolean
Checks whether the current editor content contains changes when compared to the content loaded into the editor at startup, or to the content available in the editor when resetDirty was called.
function beforeUnload( evt ) {
    if ( CKEDITOR.instances.editor1.checkDirty() )
        return evt.returnValue = "You will lose the changes made in the editor.";
}

if ( window.addEventListener )
    window.addEventListener( 'beforeunload', beforeUnload, false );
else
    window.attachEvent( 'onbeforeunload', beforeUnload );
Returns
Boolean
true if the content contains changes.
createFakeElement( realElement, className, realElementType, isResizable ) → element
Creates a fake CKEDITOR.dom.element based on a real element. The fake element is an img with special attributes which keep the real element's properties.
createFakeParserElement( realElement, className, realElementType, isResizable ) → element
Creates a fake CKEDITOR.htmlParser.element based on a real element.
createRange() → range
Shortcut to create a CKEDITOR.dom.range instance from the editable element.
Returns
range
The DOM range created, if the editable element is present. See also CKEDITOR.dom.range.
Predefines some intrinsic properties on a specific event name.
Parameters
name : String
The event name
meta : Object
- Properties
[ errorProof ]
Whether the event firing should catch errors thrown from individual listener calls.
Defaults to
false
destroy( [ noUpdate ] )
Destroys the editor instance, releasing all resources used by it. If the editor replaced an element, the element will be recovered.
alert( CKEDITOR.instances.editor1 ); // e.g. object
CKEDITOR.instances.editor1.destroy();
alert( CKEDITOR.instances.editor1 ); // undefined
Parameters
[ noUpdate ] : Boolean
If the instance is replacing a DOM element, this parameter indicates whether or not to update the element with the instance content.
editable( [ elementOrEditable ] ) → editable
Creates, retrieves or detaches an editable element of the editor. This method should always be used instead of calling CKEDITOR.editable directly.
Parameters
[ elementOrEditable ] : element | editable
The DOM element to become the editable or a CKEDITOR.editable object.
Returns
elementPath( [ startNode ] ) → elementPath
Returns an element path for the selection in the editor.
Parameters
[ startNode ] : node
The node from which the path should start. If not specified, it defaults to the editor selection's start element yielded by CKEDITOR.dom.selection.getStartElement.
Returns
execCommand( commandName, [ data ] ) → Boolean
Executes a command associated with the editor.
editorInstance.execCommand( 'bold' );
Parameters
commandName : String
The identifier name of the command.
[ data ] : Object
The data to be passed to the command. It defaults to an empty object starting from 4.7.0.
Returns
Boolean
true if the command was executed successfully, false otherwise. See also CKEDITOR.editor.addCommand.
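The register-then-execute contract behind addCommand and execCommand can be illustrated with a minimal stand-in registry. The Editor class below is a simplified hypothetical sketch, not CKEditor's implementation; it only mirrors the command lookup and the Boolean return value described above.

```javascript
// Hypothetical minimal command registry mirroring addCommand/execCommand.
function Editor( name ) {
    this.name = name;
    this.commands = {};
}
Editor.prototype.addCommand = function( commandName, definition ) {
    // Store the definition under its identifier name.
    this.commands[ commandName ] = definition;
};
Editor.prototype.execCommand = function( commandName, data ) {
    var command = this.commands[ commandName ];
    if ( !command )
        return false; // Unknown command: execution fails.
    command.exec( this, data || {} );
    return true;      // Command found and executed.
};

var editor = new Editor( 'editor1' );
editor.addCommand( 'sample', {
    exec: function( editor ) {
        console.log( 'Executing "sample" for editor "' + editor.name + '"' );
    }
} );
console.log( editor.execCommand( 'sample' ) );  // true
console.log( editor.execCommand( 'missing' ) ); // false
```

The real editor additionally wraps definitions into CKEDITOR.command instances and fires beforeCommandExec/afterCommandExec events around the call.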
extractSelectedHtml( [ toString ], [ removeEmptyBlock ] ) → documentFragment | String | null
since 4.5.0
Gets the selected HTML (it is returned as a document fragment or a string) and removes the selected part of the DOM. This method is designed to work as the user would expect the cut and delete functionalities to work.
See also:
- the getSelectedHtml method,
- the CKEDITOR.editable.extractHtmlFromRange method.
Parameters
[ toString ] : Boolean
If true, then stringified HTML will be returned.
[ removeEmptyBlock ] : Boolean
Default false means that the function will keep an empty block (if the entire content was removed) or create it (if a block element was removed) and set the selection in that block. If true, the empty block will be removed or not created; in this case the function will not handle the selection.
Defaults to
false
Returns
documentFragment | String | null
focus()
Moves the selection focus to the editing area space in the editor.
getClipboardData( callbackOrOptions, callback )
Gets clipboard data by directly accessing the clipboard (IE only) or opening the paste dialog window.
editor.getClipboardData( function( data ) {
    if ( data )
        alert( data.type + ' ' + data.dataValue );
} );
Parameters
callbackOrOptions : Function | Object
For a function, see the callback parameter documentation. The object was used before 4.7.0 with the title property, to set the paste dialog's title.
callback : Function
A function that will be executed with the data property of the paste event, or null if none of the capturing methods succeeded. Since 4.7.0 the callback should be provided as the first argument, just like in the example above. This parameter will be removed in an upcoming major release.
getColorFromDialog( callback, [ scope ] )
Opens the color dialog and receives the selected color.
Parameters
callback : Function
The callback invoked when the color dialog is closed.
- Properties
color : String
The color value received if selected on the dialog.
[ scope ] : Object
The scope in which the callback will be bound.
getCommand( commandName ) → command
Gets one of the registered commands. Note that after registering a command definition with addCommand, it is transformed internally into an instance of CKEDITOR.command, which will then be returned by this function.
getCommandKeystroke( command, [ all ] ) → Number | Number[] | null
since 4.6.0
Returns the keystroke that is assigned to a specified CKEDITOR.command. If no keystroke is assigned, it returns null.
Since version 4.7.0 this function also accepts a command parameter as a string.
Parameters
command : command | String
The CKEDITOR.command instance or a string with the command name.
[ all ] : Boolean
If true, the function will return an array of assigned keystrokes. Available since 4.11.0.
Defaults to
false
Returns
Number | Number[] | null
Depending on the all parameter value:
- false – The first keystroke assigned to the provided command, or null if there is no keystroke.
- true – An array of all assigned keystrokes, or an empty array if there is no keystroke.
getData( internal ) → String
Gets the editor data. The data will be in "raw" format. It is the same data that is posted by the editor.
if ( CKEDITOR.instances.editor1.getData() == '' )
    alert( 'There is no data available.' );
Parameters
internal : Boolean
If set to true, it will prevent firing the beforeGetData and getData events, so the real content of the editor will not be read and cached data will be returned. The method will work much faster, but this may result in the editor returning data that is not up to date. This parameter should thus only be set to true when you are certain that the cached data is up to date.
Returns
String
The editor data.
getMenuItem( name ) → Object
Retrieves a particular menu item definition from the editor context menu.
Parameters
name : String
The name of the desired menu item.
Returns
Object
-
getResizable( forContents ) → element
Gets the element that can be used to check the editor size. This method is mainly used by the Editor Resize plugin, which adds a UI handle that can be used to resize the editor.
getSelectedHtml( [ toString ] ) → documentFragment | String
since 4.5.0
Gets the selected HTML (it is returned as a document fragment or a string). This method is designed to work as the user would expect the copy functionality to work. For instance, if the following selection was made:
<p>a<b>b{c}d</b>e</p>
The following HTML will be returned:
<b>c</b>
As you can see, the information about the bold formatting was preserved, even though the selection was anchored inside the <b> element.
See also:
- the extractSelectedHtml method,
- the CKEDITOR.editable.getHtmlFromRange method.
Parameters
[ toString ] : Boolean
If true, then stringified HTML will be returned.
Returns
documentFragment | String
-
getSelectedRanges( [ onlyEditables ] ) → Array
since 4.14.0
Retrieves the CKEDITOR.dom.range instances that represent the current selection.
Note: This function is an alias for the CKEDITOR.dom.selection.getRanges method.
Parameters
[ onlyEditables ] : Boolean
If set to true, this function retrieves editable ranges only.
Returns
Array
Range instances that represent the current selection.
getSelection( forceRealSelection ) → selection
Retrieves the editor selection in the scope of the editable element.
Note: Since the native browser selection provides only one selection at a time per document, this method will return a null value if the editor's editable element has lost focus, unless lockSelection was called beforehand so that the saved selection is retrieved.
var selection = CKEDITOR.instances.editor1.getSelection();
alert( selection.getType() );
Parameters
forceRealSelection : Boolean
Returns the real selection, instead of a saved or fake one.
Returns
getSnapshot() → String
Gets the "raw data" currently available in the editor. This is a fast method which returns the data as is, without processing, so it is not recommended to use it on resulting pages. Instead it can be used combined with the loadSnapshot method in order to automatically save the editor data from time to time while the user is using the editor, to avoid data loss, without risking performance issues.
getStylesSet( callback )
Gets the current stylesSet for this instance.
getUiColor() → String
Gets the color of the editor user interface.
CKEDITOR.instances.editor1.getUiColor();
Returns
String
uiColor – The editor UI color, or undefined if the UI color is not set.
insertElement( element )
Inserts an element into the currently selected position in the editor in WYSIWYG mode.
var element = CKEDITOR.dom.element.createFromHtml( '<img src="hello.png" border="0" title="Hello" />' ); CKEDITOR.instances.editor1.insertElement( element );
Fires the insertElement event. The element is inserted in the listener with a default priority (10), so you can add listeners with lower or higher priorities in order to execute some code before or after the element is inserted.
Parameters
insertHtml( html, [ mode ], [ range ] )
Inserts HTML code into the currently selected position in the editor in WYSIWYG mode.
Example:
CKEDITOR.instances.editor1.insertHtml( '<p>This is a new paragraph.</p>' );
Fires the insertHtml and afterInsertHtml events. The HTML is inserted in the insertHtml event's listener with a default priority (10) so you can add listeners with lower or higher priorities in order to execute some code before or after the HTML is inserted.
Parameters
html : String
HTML code to be inserted into the editor.
[ mode ] : String
The mode in which the HTML code will be inserted. One of the following:
- 'html' – The inserted content will completely override the styles at the selected position.
- 'unfiltered_html' – Like 'html' but the content is not filtered with CKEDITOR.filter.
- 'text' – The inserted content will inherit the styles applied in the selected position. This mode should be used when inserting "htmlified" plain text (HTML without inline styles and styling elements like <b>, <strong>, <span style="...">).
Defaults to
'html'
[ range ] : range
If specified, the HTML will be inserted into the range instead of into the selection. The selection will be placed at the end of the insertion (like in the normal case). Introduced in CKEditor 4.5.
insertText( text )
since 3.5.0
Inserts text content into the currently selected position in the editor in WYSIWYG mode. The styles of the selected element will be applied to the inserted text. Spaces around the text will be left untouched.
CKEDITOR.instances.editor1.insertText( ' line1 \n\n line2' );
Fires the insertText and afterInsertHtml events. The text is inserted in the insertText event's listener with a default priority (10) so you can add listeners with lower or higher priorities in order to execute some code before or after the text is inserted.
Parameters
text : String
Text to be inserted into the editor.
isDestroyed() → Boolean
since 4.13.0
Determines if the current editor instance is destroyed.
Returns
Boolean
true if the editor is destroyed.
isDetached() → Boolean
since 4.13.0
Provides information whether the editor's container is detached.
Returns
Boolean
true if the editor's container is detached.
loadSnapshot( snapshot )
Loads "raw data" into the editor. The data is loaded with processing straight to the editing area. It should not be used as a way to load any kind of data, but instead in combination with getSnapshot-produced data.
var data = editor.getSnapshot();
editor.loadSnapshot( data );
Parameters
snapshot : Object
-
lockSelection( [ sel ] ) → Boolean
Locks the selection made in the editor in order to make it possible to manipulate it without browser interference. A locked selection is cached and remains unchanged until it is released with the unlockSelection method.
openDialog( dialogName, callback, [ forceModel ] ) → dialog
Loads and opens a registered dialog.
CKEDITOR.instances.editor1.openDialog( 'smiley' );
Parameters
dialogName : String
The registered name of the dialog.
callback : Function
The function to be invoked after a dialog instance is created.
[ forceModel ] : element | widget | Object
Forces opening the dialog using the given model as a subject. The forced model will take precedence before the CKEDITOR.dialog.definition.getModel method. Available since 4.13.0.
Returns
dialog
The dialog object corresponding to the dialog displayed, or null if the dialog name is not registered. See also CKEDITOR.dialog.add.
popup( url, [ width ], [ height ], [ options ] )
Opens a browser window in a popup. The width and height parameters accept numbers (pixels) or percent (of screen size) values.
Parameters
url : String
The url of the external file browser.
[ width ] : Number | String
Popup window width.
Defaults to
'80%'
[ height ] : Number | String
Popup window height.
Defaults to
'70%'
[ options ] : String
Popup window features.
Defaults to
'location=no,menubar=no,toolbar=no,dependent=yes,minimizable=no,modal=yes,alwaysRaised=yes,resizable=yes,scrollbars=yes'.
removeMenuItem( name )
since 3.6.1
Removes a particular menu item added before from the editor context menu.
Parameters
name : String
The name of the desired menu item.
removeStyle( style )
Removes the style from the editor's current selection. Shorthand for CKEDITOR.style.remove.
resetDirty()
Resets the "dirty state" of the editor so subsequent calls to checkDirty will return false if the user has not made further changes to the content.
alert( editor.checkDirty() ); // e.g. true
editor.resetDirty();
alert( editor.checkDirty() ); // false
resetUndo()
Resets the undo stack.
resize( width, height, [ isContentHeight ], [ resizeInner ] )
Resizes the editor interface.
Note: Since 4.14.1 this method accepts numeric or absolute CSS length units.
editor.resize( 900, 300 );
editor.resize( '5in', 450, true );
Parameters
width : Number | String
The new width. It can be an integer denoting a value in pixels or a CSS size value with unit.
height : Number | String
The new height. It can be an integer denoting a value in pixels or a CSS size value with unit.
[ isContentHeight ] : Boolean
Indicates that the provided height is to be applied to the editor content area, and not to the entire editor interface. Defaults to
false.
[ resizeInner ] : Boolean
Indicates that it is the inner interface element that must be resized, not the outer element. The default theme defines the editor interface inside a pair of <span> elements (<span><span>...</span></span>). By default the first, outer <span> element receives the sizes. If this parameter is set to true, the second, inner <span> is resized instead.
restoreRealElement( fakeElement ) → element | null
Creates CKEDITOR.dom.element from fake element.
selectionChange( [ checkNow ] )
Checks the selection change in the editor and potentially fires the selectionChange event.
Parameters
[ checkNow ] : Boolean
Force the check to happen immediately instead of coming with a timeout delay (default).
Defaults to
false
setActiveEnterMode( enterMode, shiftEnterMode )
since 4.3.0
Sets the active Enter modes: (enterMode and shiftEnterMode). Fires the activeEnterModeChange event.
Prior to CKEditor 4.3.0 Enter modes were static and it was enough to check CKEDITOR.config.enterMode and CKEDITOR.config.shiftEnterMode when implementing a feature which should depend on the Enter modes. Since CKEditor 4.3.0 these options are source of initial:
- static enterMode and shiftEnterMode values,
- dynamic activeEnterMode and activeShiftEnterMode values.
However, the dynamic Enter modes can be changed during runtime by using this method, to reflect the selection context. For example, if selection is moved to the widget's nested editable which is a blockless one, then the active Enter modes should be changed to CKEDITOR.ENTER_BR (in this case Widget System takes care of that).
Note: This method should not be used to configure the editor – use CKEDITOR.config.enterMode and CKEDITOR.config.shiftEnterMode instead. This method should only be used to dynamically change Enter modes during runtime based on selection changes. Keep in mind that a changed Enter mode may be overwritten by another plugin or feature when it decides that the changed context requires this.
Note: In case of blockless editor (inline editor based on an element which cannot contain block elements — see blockless) only CKEDITOR.ENTER_BR is a valid Enter mode. Therefore this method will not allow to set other values.
Note: Changing the active filter may cause the Enter mode to change if default Enter modes are not allowed by the new filter.
Parameters
enterMode : Number
One of CKEDITOR.ENTER_P, CKEDITOR.ENTER_DIV, CKEDITOR.ENTER_BR. Pass a falsy value (e.g. null) to reset the Enter mode to the default value (enterMode and/or shiftEnterMode).
shiftEnterMode : Number
See the enterMode argument.
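The blockless rule described above (only CKEDITOR.ENTER_BR is valid for a blockless editor) can be sketched as a tiny resolver. The numeric constant values below are assumed from the CKEDITOR namespace; the resolver itself is an illustrative stand-in, not CKEditor code.

```javascript
// Enter mode constants (assumed values of CKEDITOR.ENTER_P / ENTER_BR / ENTER_DIV).
var ENTER_P = 1, ENTER_BR = 2, ENTER_DIV = 3;

// Hypothetical sketch: a blockless editor may only use ENTER_BR,
// so any other requested mode is forced to it.
function resolveActiveEnterMode( requested, blockless ) {
    return blockless ? ENTER_BR : requested;
}

console.log( resolveActiveEnterMode( ENTER_P, false ) ); // 1 (ENTER_P kept)
console.log( resolveActiveEnterMode( ENTER_P, true ) );  // 2 (forced to ENTER_BR)
```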
setActiveFilter( filter )
since 4.3.0
Sets the active filter (activeFilter). Fires the activeFilterChange event.
// Set active filter which allows only 4 elements.
// Buttons like Bold, Italic will be disabled.
var filter = new CKEDITOR.filter( 'p strong em br' );
editor.setActiveFilter( filter );
Setting a new filter will also change the active Enter modes to the first values allowed by the new filter (see CKEDITOR.filter.getAllowedEnterMode).
Parameters
setData( data, [ options ], [ internal ] )
Sets the editor data. The data must be provided in the "raw" format (HTML).
Note that this method is asynchronous. The callback parameter must be used if interaction with the editor is needed after setting the data.
CKEDITOR.instances.editor1.setData( '<p>This is the editor data.</p>' );

CKEDITOR.instances.editor1.setData( '<p>Some other editor data.</p>', {
    callback: function() {
        this.checkDirty(); // true
    }
} );
Note: In CKEditor 4.4.2 the signature of this method has changed. All arguments except
data were wrapped into the options object. However, backward compatibility was preserved and it is still possible to use the data, callback, internal arguments.
Parameters
data : String
The HTML code to replace current editor content.
[ options ] : Object
- Properties
[ internal ] : Boolean
Whether to suppress any event firing when copying data internally inside the editor.
Defaults to
false
[ callback ] : Function
Function to be called after setData is completed (on dataReady).
[ noSnapshot ] : Boolean
If set to true, it will prevent recording an undo snapshot. Introduced in CKEditor 4.4.2.
Defaults to
false
[ internal ] : Boolean
Old equivalent of the options.internal parameter. It is only available to provide backwards compatibility for calls with the data, callback, internal parameters. It is recommended to use the options.internal parameter instead.
Defaults to
false
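The backward-compatible argument handling described above can be sketched as a normalization step: a legacy ( data, callback, internal ) call shape folded into the options object. normalizeSetDataArgs is a hypothetical helper for illustration only, not part of the CKEditor API.

```javascript
// Hypothetical sketch of setData's argument normalization.
// A function in the options slot is treated as the legacy callback argument,
// and a trailing legacy `internal` flag is folded into options.internal.
function normalizeSetDataArgs( options, internal ) {
    if ( typeof options == 'function' )
        options = { callback: options };
    options = options || {};
    if ( internal !== undefined && options.internal === undefined )
        options.internal = internal;
    return options;
}

var cb = function() {};
var normalized = normalizeSetDataArgs( cb, true );
console.log( typeof normalized.callback ); // 'function'
console.log( normalized.internal );        // true
```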
setKeystroke( keystroke, [ behavior ] )
since 4.0.0
Assigns keystrokes associated with editor commands.
editor.setKeystroke( CKEDITOR.CTRL + 115, 'save' ); // Assigned Ctrl+S to the "save" command.
editor.setKeystroke( CKEDITOR.CTRL + 115, false ); // Disabled Ctrl+S keystroke assignment.
editor.setKeystroke( [
    [ CKEDITOR.ALT + 122, false ],
    [ CKEDITOR.CTRL + 121, 'link' ],
    [ CKEDITOR.SHIFT + 120, 'bold' ]
] );
This method may be used in the following cases:
- By plugins (like link or basicstyles) to set their keystrokes when plugins are being loaded.
- During the runtime to modify existing keystrokes.
The editor handles keystroke configuration in the following order:
- Plugins use this method to define default keystrokes.
- Editor extends default keystrokes with CKEDITOR.config.keystrokes.
- Editor blocks keystrokes defined in CKEDITOR.config.blockedKeystrokes.
You can then set new keystrokes using this method during the runtime.
Parameters
keystroke : Number | Array
A keystroke or an array of keystroke definitions.
[ behavior ] : String | Boolean
A command to be executed on the keystroke.
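Keystroke values are plain numbers: a modifier constant added to a key code. The constant values below are assumed from the CKEDITOR namespace (CKEDITOR.CTRL and friends); they are reproduced here so the arithmetic can be followed without loading the editor.

```javascript
// Assumed modifier constant values from the CKEDITOR namespace.
var CTRL  = 0x110000; // CKEDITOR.CTRL
var SHIFT = 0x220000; // CKEDITOR.SHIFT
var ALT   = 0x440000; // CKEDITOR.ALT

// A keystroke is a modifier (or sum of modifiers) plus a key code.
var ctrlS      = CTRL + 83;         // Ctrl+S (83 is the key code for "S")
var ctrlShiftZ = CTRL + SHIFT + 90; // Ctrl+Shift+Z

console.log( ctrlS.toString( 16 ) );      // '110053'
console.log( ctrlShiftZ.toString( 16 ) ); // '33005a'
```

Because the modifiers occupy disjoint high bits, any combination of them plus a key code yields a unique number, which is what setKeystroke stores.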
setMode( [ newMode ], [ callback ] )
Changes the editing mode of this editor instance.
Note: The mode switch could be asynchronous depending on the mode provider. Use the callback to hook subsequent code.
// Switch to "source" view.
CKEDITOR.instances.editor1.setMode( 'source' );

// Switch to "wysiwyg" view and be notified on completion.
CKEDITOR.instances.editor1.setMode( 'wysiwyg', function() {
    alert( 'wysiwyg mode loaded!' );
} );
Parameters
[ newMode ] : String
If not specified, the CKEDITOR.config.startupMode will be used.
[ callback ] : Function
Optional callback function which is invoked once the mode switch has succeeded.
setReadOnly( [ isReadOnly ] )
since 3.6.0
Puts or restores the editor into the read-only state. When in read-only, the user is not able to change the editor content, but can still use some editor features. This function sets the readOnly property of the editor, firing the readOnly event.
Note: The current editing area will be reloaded.
Parameters
[ isReadOnly ] : Boolean
Indicates that the editor must go read-only (true, default) or be restored and made editable (false).
setUiColor( color )
Sets the color of the editor user interface. This method accepts a color value in hexadecimal notation, with a # character (e.g. #ffffff).
CKEDITOR.instances.editor1.setUiColor( '#ff00ff' );
Parameters
color : String
The desired editor UI color in hexadecimal notation.
showNotification( message, [ type ], [ progressOrDuration ] ) → notification
since 4.5.0
Shows a notification to the user.
If the Notification plugin is not enabled, this function shows a normal alert with the given message. The type and progressOrDuration parameters are supported only by the Notification plugin.
If the Notification plugin is enabled, this method creates and shows a new notification. By default the notification is shown over the editor content, in the viewport if it is possible.
See CKEDITOR.plugins.notification.
Parameters
message : String
The message displayed in the notification.
[ type ] : String
The type of the notification. Can be 'info', 'warning', 'success' or 'progress'.
Defaults to
'info'
[ progressOrDuration ] : Number
If the type is 'progress', the third parameter may be a progress from 0 to 1 (defaults to 0). Otherwise the third parameter may be a notification duration denoting after how many milliseconds the notification should be closed automatically. 0 means that the notification will not close automatically and the user needs to close it manually. See CKEDITOR.plugins.notification.duration. Note that 'warning' notifications will not be closed automatically.
Returns
notification
Created and shown notification.
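The dual meaning of the third argument can be sketched as a small dispatcher. interpretThirdArg is a hypothetical helper for illustration only, not part of the CKEditor API (and it does not model the special case that 'warning' notifications never auto-close).

```javascript
// Hypothetical sketch of how showNotification's third argument is read.
function interpretThirdArg( type, value ) {
    if ( type === 'progress' )
        return { progress: value || 0 };              // a fraction from 0 to 1
    if ( value === 0 )
        return { duration: 0, autoClose: false };     // user must close it manually
    return { duration: value, autoClose: true };      // closes after `value` ms
}

console.log( interpretThirdArg( 'progress', 0.5 ) ); // treated as progress
console.log( interpretThirdArg( 'info', 5000 ) );    // treated as a duration in ms
```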
unlockSelection( [ restore ] )
Unlocks the selection made in the editor and locked with the lockSelection method. An unlocked selection is no longer cached and can be changed.
Parameters
[ restore ] : Boolean
If set to
true, the selection is restored back to the selection saved earlier by using the CKEDITOR.dom.selection.lock method.
updateElement()
Updates the <textarea> element that was replaced by the editor with the current data available in the editor.
Note: This method will only affect those editor instances created with the CKEDITOR.ELEMENT_MODE_REPLACE element mode or inline instances bound to <textarea> elements.
CKEDITOR.instances.editor1.updateElement();
alert( document.getElementById( 'editor1' ).value ); // The current editor data.
_attachToForm()
private
Attaches the editor to a form to call updateElement before form submission. This method is called by both creators (replace and inline), so there is no reason to call it manually.
_getEditorElement( elementOrId ) → element | null
since 4.12.0 private static
Gets the element from the DOM and checks if the editor can be instantiated on it. This function is available for internal use only.
Events
activeEnterModeChange( evt )
since 4.3.0
Event fired by the setActiveEnterMode method when any of the active Enter modes is changed. See also the activeEnterMode and activeShiftEnterMode properties.
activeFilterChange( evt )
since 4.3.0
Event fired by the setActiveFilter method when the activeFilter is changed.
afterCommandExec( evt )
Event fired after the command execution when execCommand is called.
afterInsertHtml( evt )
since 4.5.0
Event fired after data insertion using the insertHtml, CKEDITOR.editable.insertHtml, or CKEDITOR.editable.insertHtmlIntoRange methods.
Parameters
evt : eventInfo
- Properties
data : Object
- Properties
[ intoRange ] : range
If set, the HTML was not inserted into the current selection, but into the specified range. This property is set if the CKEDITOR.editable.insertHtmlIntoRange method was used, but not if for the CKEDITOR.editable.insertHtml method.
afterPaste( evt )
Fired after the paste event if content was modified. Note that if the paste event does not insert any data, the afterPaste event will not be fired.
afterPasteFromWord( evt )
since 4.6.0
Fired after the Paste from Word filters have been applied.
afterSetData( evt )
Event fired at the end of the setData call execution. Usually it is better to use the dataReady event.
afterUndoImage( evt )
since 3.5.3
Fired after an undo image is created. An undo image represents the editor state at some point. It is saved into the undo store, so the editor is able to recover the editor state on undo and redo operations.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance. See also CKEDITOR.editor.beforeUndoImage.
ariaEditorHelpLabel( evt )
since 4.4.3
Event fired by the editor in order to get accessibility help label. The event is responded to by a component which provides accessibility help (i.e. the a11yhelp plugin), hence the editor is notified whether accessibility help is available.
ariaWidget( evt )
Fired when some elements are added to the document.
Fired when the Auto Grow plugin is about to change the size of the editor.
Parameters
beforeCommandExec( evt )
Event fired before the command execution when execCommand is called.
beforeDestroy( evt )
Event fired when the destroy method is called, but before destroying the editor.
beforeGetData( evt )
Internal event to get the current data.
beforeModeUnload( evt )
Fired before changing the editing mode. See also beforeSetMode and mode.
beforePaste( evt )
deprecated
Fired before the paste event. Allows to preset data type.
beforeSetMode( evt )
since 3.5.3
Fired before the editor mode is set. See also mode and beforeModeUnload.
beforeUndoImage( evt )
since 3.5.3
Fired before an undo image is to be created. An undo image represents the editor state at some point. It is saved into the undo store, so the editor is able to recover the editor state on undo and redo operations.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance. See also CKEDITOR.editor.afterUndoImage.
Fired when the editor instance loses the input focus.
Note: This event will NOT be triggered when focus is moved internally, e.g. from an editable to another part of the editor UI like a dialog window. If you are interested only in the focus state of the editable, listen to the focus and blur events of the CKEDITOR.editable instead.
editor.on( 'blur', function( e ) {
    alert( 'The editor named ' + e.editor.name + ' lost the focus' );
} );
Parameters
Fired when the content of the editor is changed.
Due to performance reasons, it is not verified if the content really changed. The editor instead watches several editing actions that usually result in changes. This event may thus in some cases be fired when no changes happen or may even get fired twice.
If it is important not to get the change event fired too often, you should compare the previous and the current editor content inside the event listener. It is not recommended to do that on every change event.
Please note that the change event is only fired in the wysiwyg mode. In order to implement similar functionality in the source mode, you can listen for example to the key event or the native input event (not supported by Internet Explorer 8).
editor.on( 'mode', function() {
    if ( this.mode == 'source' ) {
        var editable = editor.editable();
        editable.attachListener( editable, 'input', function() {
            // Handle changes made in the source mode.
        } );
    }
} );
Parameters
configLoaded( evt )
Event fired once the editor configuration is ready (loaded and processed).
contentDirChanged( evt )
Fired when the language direction in the specific cursor position is changed.
contentDom( evt )
Event fired when the editor content (its DOM structure) is ready. It is similar to the native DOMContentLoaded event, but it applies to the editor content. It is also the first event fired after the CKEDITOR.editable is initialized.
This event is particularly important for the classic (iframe-based) editor, because on editor initialization and every time the data are set (by setData) the content DOM structure is rebuilt. Thus, e.g. you need to attach DOM event listeners on the editable one more time.
For inline editor this event is fired only once — when the editor is initialized for the first time. This is because setting editor content does not cause editable destruction and creation.
The contentDom event goes along with contentDomUnload which is fired before the content DOM structure is destroyed. This is the right moment to detach content DOM event listener. Otherwise browsers like IE or Opera may throw exceptions when accessing elements from the detached document.
Note: CKEDITOR.editable.attachListener is a convenient way to attach listeners that will be detached on contentDomUnload.
editor.on( 'contentDom', function() {
    var editable = editor.editable();
    editable.attachListener( editable, 'click', function() {
        console.log( 'The editable was clicked.' );
    } );
} );
Parameters
contentDomInvalidated( evt )
since 4.3.0
Event fired when the content DOM changes and some of the references as well as the native DOM event listeners could be lost. This event is useful when it is important to keep track of references to elements in the editable content from code.
contentDomUnload( evt )
Event fired before the content DOM structure is destroyed. See contentDom documentation for more details.
contentPreview( evt )
Event fired when executing the preview command that allows for additional data manipulation. With this event, the raw HTML content of the preview window to be displayed can be altered or modified.
Note This event should also be used to sanitize HTML to mitigate possible XSS attacks. Refer to the Validate preview content section of the Best Practices article to learn more.
Parameters
customConfigLoaded( evt )
Event fired when a custom configuration file is loaded, before the final configuration initialization.
Custom configuration files can be loaded through the CKEDITOR.config.customConfig setting. Several files can be loaded by changing this setting.
Parameters
dataFiltered( evt )
since 4.1.0
This event is fired when CKEDITOR.filter has stripped some content from the data that was loaded (e.g. by setData method or in the source mode) or inserted (e.g. when pasting or using the insertHtml method).
This event is useful when testing whether the CKEDITOR.config.allowedContent setting is sufficient and correct for a system that is migrating to CKEditor 4.1.0 (where the Advanced Content Filter was introduced).
Parameters
Event fired as an indicator of the editor data loading. It may be the result of calling setData explicitly or an internal editor function, like the editor editing mode switching (move to Source and back).
Event fired when this editor instance is destroyed. The editor at this point is not usable and this event should be used to perform the clean-up in any plugin.
dialogHide( evt )
Event fired when a dialog is hidden.
dialogShow( evt )
Event fired when a dialog is shown.
dirChanged( evt )
Fired when the language direction of an element is changed.
doubleclick( evt )
Event fired when the user double-clicks in the editable area. The event allows to open a dialog window for a clicked element in a convenient way:
editor.on( 'doubleclick', function( evt ) {
    var element = evt.data.element;
    if ( element.is( 'table' ) )
        evt.data.dialog = 'tableProperties';
} );
Note: To handle double-click on a widget use CKEDITOR.plugins.widget.doubleclick.
Parameters
Facade for the native dragend event. Fired when the native dragend event occurs.
Parameters
evt : eventInfo
- Properties
data : Object
- Properties
$ : Object
Native dragend event.
target : node
Drag target.
dataTransfer : dataTransfer
DataTransfer facade.
Facade for the native dragstart event. Fired when the native dragstart event occurs.
This event can be canceled in order to block the drag start operation. It can also be fired to mimic the start of the drag and drop operation. For instance, the widget plugin uses this option to integrate its custom block widget drag and drop with the entire system.
Parameters
evt : eventInfo
- Properties
data : Object
- Properties
$ : Object
Native dragstart event.
target : node
Drag target.
dataTransfer : dataTransfer
DataTransfer facade.
Facade for the native drop event. Fired when the native drop event occurs.
Note: To manipulate dropped data, use the paste event. Use the drop event only to control drag and drop operations (e.g. to prevent the ability to drop some content).
Read more about integration with drag and drop in the Clipboard Deep Dive guide.
See also:
- The paste event,
- The dragstart and dragend events,
- The CKEDITOR.plugins.clipboard.dataTransfer class.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
data : Object
- Properties
$ : Object
Native drop event.
target : node
Drop target.
dataTransfer : dataTransfer
DataTransfer facade.
dragRange : range
Drag range, lets you manipulate the drag range. Note that dragged HTML is saved as text/html data on dragstart, so if you change the drag range on drop, dropped HTML will not change. You need to change it manually using dataTransfer.setData.
dropRange : range
Drop range, lets you manipulate the drop range.
elementsPathUpdate( evt )
Fired when the contents of the elementsPath are changed.
Event fired when executing the exportPdf command that allows for additional data manipulation. With this event, the raw HTML content of the editor which will be sent to the HTML to PDF converter service can be altered or modified. It also allows to modify CSS rules and conversion options.
It is possible by adding listeners with the different priorities:
- 1-14: The data is available in the original string format.
- 15: The data is processed by the plugin.
- 16-19: The data that will be sent to the endpoint can be modified.
- 20: The data is sent to the endpoint.
Read more in the documentation.
Parameters
fileUploadRequest( evt )
since 4.5.0
Event fired when the file loader should send XHR. If the event is not stopped or canceled, the default request will be sent. Refer to the Uploading Dropped or Pasted Files article for more information.
Parameters
evt : eventInfo
- Properties
data : Object
- Properties
fileLoader : fileLoader
A file loader instance.
requestData : Object
An object containing all data to be sent to the server.
fileUploadResponse( evt )
since 4.5.0
Event fired when the file upload response is received and needs to be parsed. If the event is not stopped or canceled, the default response handler will be used. Refer to the Uploading Dropped or Pasted Files article for more information.
Parameters
evt : eventInfo
- Properties
data : Object
All data will be passed to CKEDITOR.fileTools.fileLoader.responseData.
- Properties
fileLoader : fileLoader
A file loader instance.
message : String
The message from the server. Needs to be set in the listener — see the example above.
fileName : String
The file name on server. Needs to be set in the listener — see the example above.
url : String
The URL to the uploaded file. Needs to be set in the listener — see the example above.
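The three fields above are typically set in a fileUploadResponse listener. The sketch below assumes a hypothetical server that replies with JSON such as {"fileName": "a.png", "url": "/uploads/a.png", "error": null}; only evt.data.message, evt.data.fileName, evt.data.url, evt.cancel() and fileLoader.xhr are part of the documented contract, the response shape is an assumption:

```javascript
// Hedged sketch of a fileUploadResponse listener for a JSON-replying server.
function onFileUploadResponse( evt ) {
    var data = evt.data,
        // fileLoader.xhr holds the XMLHttpRequest used for the upload.
        response = JSON.parse( data.fileLoader.xhr.responseText );

    if ( response.error ) {
        data.message = response.error; // shown to the user
        evt.cancel();                  // mark the upload as failed
    } else {
        data.fileName = response.fileName;
        data.url = response.url;
    }
}
// Real usage: editor.on( 'fileUploadResponse', onFileUploadResponse );

// Simulated invocation with stand-in objects, for illustration only:
var evt = {
    cancelled: false,
    cancel: function() { this.cancelled = true; },
    data: {
        fileLoader: {
            xhr: { responseText: '{"fileName":"a.png","url":"/uploads/a.png","error":null}' }
        }
    }
};
onFileUploadResponse( evt );
```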
floatingSpaceLayout( evt )
since 4.5.0
Fired when the viewport or editor parameters change and the floating space needs to check and eventually update its position and dimensions.
Fired when the editor instance receives the input focus.
Event fired before the getData call returns, allowing for additional manipulation.
getSnapshot( evt )
Internal event to perform the getSnapshot call.
insertElement( evt )
Event fired by the insertElement method. See the method documentation for more information about how this event can be used.
insertHtml( evt )
Event fired by the insertHtml method. See the method documentation for more information about how this event can be used.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
data : Object
- Properties
mode : String
The mode in which the data is inserted (see insertHtml).
dataValue : String
The HTML code to insert.
[ range ] : range
See insertHtml's range parameter.
insertText( evt )
Event fired by the insertText method. See the method documentation for more information about how this event can be used.
instanceReady( evt )
Event fired when the CKEDITOR instance is completely created, fully initialized and ready for interaction.
Fired when any keyboard key (or a combination thereof) is pressed in the editing area.
editor.on( 'key', function( evt ) {
    if ( evt.data.keyCode == CKEDITOR.CTRL + 90 ) {
        // Do something...
        // Cancel the event, so other listeners will not be executed and
        // the keydown's default behavior will be prevented.
        evt.cancel();
    }
} );
Usually you will want to use the setKeystroke method or the CKEDITOR.config.keystrokes option to attach a keystroke to some command. Key event listeners are useful when some action should be executed conditionally, based for example on the precise selection location.
Parameters
evt : eventInfo
- Properties
data : Object
- Properties
keyCode : Number
A number representing the key code (or a combination thereof). It is the sum of the current key code and the CKEDITOR.CTRL, CKEDITOR.SHIFT and CKEDITOR.ALT constants, if those are pressed.
domEvent : event
A keydown DOM event instance. Available since CKEditor 4.4.1.
editor : editor
This editor instance.
langLoaded( evt )
since 3.6.1
Event fired when the language is loaded into the editor instance.
loadSnapshot( evt )
Internal event to perform the loadSnapshot call.
Event fired when editor components (configuration, languages and plugins) are fully loaded and initialized. However, the editor will be fully ready for interaction on instanceReady.
lockSnapshot( evt )
since 4.0.0
Locks the undo manager to prevent any save/update operations.
It is convenient to lock the undo manager before performing DOM operations that should not be recorded (e.g. auto paragraphing).
See CKEDITOR.plugins.undo.UndoManager.lock for more details.
Note: In order to unlock the undo manager, unlockSnapshot has to be fired the same number of times that lockSnapshot has been fired.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
data : Object
- Properties
[ dontUpdate ] : Boolean
When set to true, the last snapshot will not be updated with the current content and selection. Read more in the CKEDITOR.plugins.undo.UndoManager.lock method.
[ forceUpdate ] : Boolean
When set to true, the last snapshot will always be updated with the current content and selection. Read more in the CKEDITOR.plugins.undo.UndoManager.lock method.
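The lock/unlock pairing can be sketched as follows. The editor object here is a minimal stand-in that only counts lock depth; in a real editor you would call editor.fire( 'lockSnapshot' ) and editor.fire( 'unlockSnapshot' ) on a CKEDITOR.editor instance:

```javascript
// Stand-in editor tracking the lock depth, to illustrate that unlockSnapshot
// must be fired as many times as lockSnapshot was.
var editor = {
    lockDepth: 0,
    fire: function( name, data ) {
        if ( name === 'lockSnapshot' )
            this.lockDepth++;
        else if ( name === 'unlockSnapshot' )
            this.lockDepth = Math.max( 0, this.lockDepth - 1 );
    }
};

editor.fire( 'lockSnapshot' );
// A nested lock; the dontUpdate option mirrors the documented event data.
editor.fire( 'lockSnapshot', { dontUpdate: true } );

// ... perform DOM changes that should not create an undo step ...

editor.fire( 'unlockSnapshot' );
editor.fire( 'unlockSnapshot' ); // depth back to 0: undo manager unlocked
```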
Event fired when the maximize command is called. It also indicates whether an editor is maximized or not.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
data : Number
Current state of the command. See CKEDITOR.TRISTATE_ON and CKEDITOR.TRISTATE_OFF.
Fired when a menu is shown.
Fired after setting the editing mode. See also beforeSetMode and beforeModeUnload.
notificationHide( evt )
since 4.5.0
Event fired when the CKEDITOR.plugins.notification.hide method is called, before the notification is hidden. If this event is canceled, the notification will not be hidden.
Using this event allows you to fully customize how a notification will be hidden. It may be used to integrate the CKEditor notification system with your web page notifications.
Parameters
evt : eventInfo
- Properties
data : Object
- Properties
notification : notification
Notification which will be hidden.
editor : editor
The editor instance.
notificationShow( evt )
since 4.5.0
Event fired when the CKEDITOR.plugins.notification.show method is called, before the notification is shown. If this event is canceled, the notification will not be shown.
Using this event allows you to fully customize how a notification will be shown. It may be used to integrate the CKEditor notification system with your web page notifications.
Parameters
evt : eventInfo
- Properties
data : Object
- Properties
notification : notification
Notification which will be shown.
editor : editor
The editor instance.
notificationUpdate( evt )
since 4.5.0
Event fired when the CKEDITOR.plugins.notification.update method is called, before the notification is updated. If this event is canceled, the notification will not be shown even if the update was important, but the object will be updated anyway. Note that canceling this event does not prevent updating element attributes, but if notificationShow was canceled as well, this element is detached from the DOM.
Using this event allows you to fully customize how a notification will be updated. It may be used to integrate the CKEditor notification system with your web page notifications.
Parameters
evt : eventInfo
- Properties
data : Object
- Properties
notification : notification
Notification which will be updated. Note that it contains the data that has not been updated yet.
options : Object
Update options, see CKEDITOR.plugins.notification.update.
editor : editor
The editor instance.
Fired after the user initiated a paste action, but before the data is inserted into the editor. The listeners to this event are able to process the content before its insertion into the document.
Read more about the integration with clipboard in the Clipboard Deep Dive guide.
See also:
- the CKEDITOR.config.pasteFilter option,
- the drop event,
- the CKEDITOR.plugins.clipboard.dataTransfer class.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
data : Object
- Properties
type : String
The type of data in data.dataValue. Usually 'html' or 'text', but for listeners with a priority smaller than 6 it may also be 'auto', which means that the content type has not been recognised yet (this will be done by the content type sniffer that listens with priority 6).
dataValue : String
HTML to be pasted.
method : String
Indicates the data transfer method. It could be drag and drop or copy and paste. Possible values: 'drop', 'paste'. Introduced in CKEditor 4.5.
dataTransfer : dataTransfer
Facade for the native dataTransfer object which provides access to various data types and files, and passes some data between linked events (like drag and drop). Introduced in CKEditor 4.5.
[ dontFilter ] : Boolean
Whether the paste filter should not be applied to data. This option has no effect when data.type equals 'text', which means that for instance CKEDITOR.config.forcePasteAsPlainText has a higher priority. Introduced in CKEditor 4.5.
Defaults to
false
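A paste listener can rewrite evt.data.dataValue before the content is inserted. The transformation below (normalizing <b> to <strong>) is purely illustrative; the documented contract is only that listeners may inspect evt.data.type and modify evt.data.dataValue:

```javascript
// Sketch of a paste listener that rewrites pasted HTML before insertion.
function onPaste( evt ) {
    if ( evt.data.type === 'html' ) {
        // Hypothetical transformation: replace <b> tags with <strong>.
        evt.data.dataValue = evt.data.dataValue
            .replace( /<b>/g, '<strong>' )
            .replace( /<\/b>/g, '</strong>' );
    }
}
// Real usage: editor.on( 'paste', onPaste );

// Simulated event object, for illustration only:
var evt = { data: { type: 'html', dataValue: '<b>Hi</b>' } };
onPaste( evt );
```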
pasteDialog( evt )
private
Internal event to open the Paste dialog window.
This event was not available in 4.7.0-4.8.0 versions.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
[ data ] : Function
Callback that will be passed to openDialog.
pasteDialogCommit( evt )
private
Internal event to pass the paste dialog data to the listeners.
pasteFromWord( evt )
since 4.6.0
Fired when the pasted content was recognized as Microsoft Word content.
pluginsLoaded( evt )
Event fired when all plugins are loaded and initialized into the editor instance.
Event fired after the readOnly property changes.
removeFormatCleanup( evt )
Fired after an element was cleaned by the removeFormat plugin.
Fired when the editor (replacing a <textarea> which has a required attribute) is empty during form submission.
This event replaces native required fields validation that the browsers cannot perform when CKEditor replaces <textarea> elements.
You can cancel this event to prevent the page from submitting data.
editor.on( 'required', function( evt ) {
    alert( 'Article content is required.' );
    evt.cancel();
} );
Parameters
Fired after the editor instance is resized through the CKEDITOR.resize method.
Parameters
evt : eventInfo
- Properties
Fired when the user clicks the Save button on the editor toolbar. This event allows to overwrite the default Save button behavior.
saveSnapshot( evt )
Fired when the editor is about to save an undo snapshot. This event can be fired by plugins and customizations to make the editor save undo snapshots.
selectionChange( evt )
Fired when the selection inside the editor has been changed. Note that this event is fired only when the selection's start element (the container of a selection start) changes, not on every possible selection change. Thanks to that, selectionChange is fired less frequently, but on every context (the elements path holding the selection's start) change.
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
data : Object
- Properties
selection : selection
-
path : elementPath
-
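A selectionChange listener typically inspects evt.data.path. In the sketch below the elements path is a hypothetical stand-in object; elementsPath.contains is the documented API call, and the "is the selection inside a link" check is an illustrative use case:

```javascript
// Sketch: reacting to selectionChange by checking the elements path.
function onSelectionChange( evt ) {
    // contains() returns the matched element (truthy) or null.
    var inLink = !!evt.data.path.contains( 'a' );
    return inLink;
}
// Real usage: editor.on( 'selectionChange', onSelectionChange );

// Stand-in event object mimicking a selection whose start is inside <a>:
var evt = {
    data: {
        path: {
            elements: [ 'a', 'p', 'body' ],
            contains: function( name ) {
                return this.elements.indexOf( name ) != -1 ? name : null;
            }
        }
    }
};
var result = onSelectionChange( evt );
```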
Event fired before the setData call is executed, allowing for additional manipulation.
Event fired when the styles set is loaded. During the editor initialization phase the getStylesSet method returns only styles that are already loaded, which may not include e.g. styles parsed by the stylesheetparser plugin. Thus, to be notified when all styles are ready, you can listen on this event.
Event fired when a UI template is added to the editor instance. It makes it possible to bring customizations to the template source.
toDataFormat( evt )
since 4.1.0
This event is fired when CKEDITOR.htmlDataProcessor is converting internal HTML to output data HTML. By adding listeners with different priorities it is possible to process the data at various stages:
- 1-4: Data is available in the original string format.
- 5: Data is parsed to a CKEDITOR.htmlParser.fragment.
- 5-9: Data is available in the parsed format, but CKEDITOR.htmlDataProcessor.htmlFilter is not applied yet.
- 10: Data is filtered with CKEDITOR.htmlDataProcessor.htmlFilter.
- 11: Data is filtered with the CKEDITOR.filter content filter (on output the content filter makes only transformations, without filtering).
- 10-14: Data is available in the parsed format and CKEDITOR.htmlDataProcessor.htmlFilter has already been applied.
- 15: Data is written back to an HTML string.
- 15-*: Data is available in an HTML string.
For example, to be able to process parsed and already processed data, add a listener this way:
editor.on( 'toDataFormat', function( evt ) {
    evt.data.dataValue; // -> CKEDITOR.htmlParser.fragment instance
}, null, null, 12 );
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
data : Object
- Properties
dataValue : String | fragment | element
Output data to be prepared.
context : String
See the context argument of CKEDITOR.htmlDataProcessor.toDataFormat.
filter : Boolean
See the filter argument of CKEDITOR.htmlDataProcessor.toDataFormat.
enterMode : Boolean
See the enterMode argument of CKEDITOR.htmlDataProcessor.toDataFormat.
This event is fired by the CKEDITOR.htmlDataProcessor when input HTML is to be purified by the CKEDITOR.htmlDataProcessor.toHtml method. By adding listeners with different priorities it is possible to purify the data at various stages:
- 1-4: Data is available in the original string format.
- 5: Data is parsed to a CKEDITOR.htmlParser.fragment.
- 5-9: Data is available in the parsed format, but CKEDITOR.htmlDataProcessor.dataFilter is not applied yet.
- 6: Data is filtered with the content filter.
- 10: Data is processed with CKEDITOR.htmlDataProcessor.dataFilter.
- 10-14: Data is available in the parsed format and CKEDITOR.htmlDataProcessor.dataFilter has already been applied.
- 15: Data is written back to an HTML string.
- 15-*: Data is available in an HTML string.
For example, to be able to process parsed, but not yet filtered data, add a listener this way:
editor.on( 'toHtml', function( evt ) {
    evt.data.dataValue; // -> CKEDITOR.htmlParser.fragment instance
}, null, null, 7 );
Parameters
evt : eventInfo
- Properties
editor : editor
This editor instance.
data : Object
- Properties
dataValue : String | fragment | element
Input data to be purified.
context : String
See the context argument of CKEDITOR.htmlDataProcessor.toHtml.
fixForBody : Boolean
See the fixForBody argument of CKEDITOR.htmlDataProcessor.toHtml.
dontFilter : Boolean
See the dontFilter argument of CKEDITOR.htmlDataProcessor.toHtml.
filter : Boolean
See the filter argument of CKEDITOR.htmlDataProcessor.toHtml.
enterMode : Boolean
See the enterMode argument of CKEDITOR.htmlDataProcessor.toHtml.
[ protectedWhitespaces ] : Boolean
See the protectedWhitespaces argument of CKEDITOR.htmlDataProcessor.toHtml.
Fired when the UI space is created. This event allows to modify the top bar or the bottom bar with additional HTML.
For example, it is used in the Editor Resize plugin to add the HTML element used to resize the editor.
Parameters
unlockSnapshot( evt )
since 4.0.0
Unlocks the undo manager and updates the latest snapshot.
updateSnapshot( evt )
Amends the top of the undo stack (last undo image) with the current DOM changes.
widgetDefinition( evt )
An event fired when a widget definition is registered by the CKEDITOR.plugins.widget.repository.add method. It is possible to modify the definition being registered.
Parameters
evt : eventInfo
- Properties
data : definition
Widget definition. | https://ckeditor.com/docs/ckeditor4/latest/api/CKEDITOR_editor.html | CC-MAIN-2020-45 | en | refinedweb |
02-03-2020 12:26 PM
Announcing Cloudera Enterprise 6.3.3

We are pleased to announce the release of Cloudera Enterprise 6.3.3, the world’s leading data platform for machine learning and analytics, optimized for the cloud. This maintenance release includes all the features and capabilities of the previous minor release (Cloudera Enterprise 6.3) with the addition of numerous important bug fixes. Also note the current list of known issues with Cloudera Enterprise 6. We also encourage you to review the new Upgrade Guide that now includes the ability to create customized upgrade instructions based on your specific environment and upgrade path.

New Features & Capabilities
- Cloudera Enterprise 6.3.3 now supports running on RedHat Enterprise Linux 7.7 and CentOS 7.7
- Cloudera Manager provides new tuning options to increase the scalability of monitoring services for large CDH deployments

Useful links:
- Cloudera Installation Guide
- Cloudera Manager Upgrade Guide
- CDH Upgrade Guide

Important Note About Cloudera Express

Beginning with release 6.3.3, Cloudera Express has been discontinued and is no longer available. Upgrades to Cloudera Manager 6.3.3 or CDH 6.3.3 and higher are not supported while running Cloudera Express. A valid Cloudera Enterprise or CDP license must be in place before upgrading to Cloudera Manager 6.3.3 or 7.x, or the upgrade will not be completed. This also applies to trial installations of Cloudera Enterprise and CDP Data Center. Downgrading from Cloudera Enterprise to Cloudera Express is also no longer supported in Cloudera Manager 6.3.3 and later. An expired Cloudera Enterprise or CDP license, or an expired trial license, will disable the Cloudera Manager UI until a valid Cloudera Enterprise or CDP license key is provided.

As always, we value your feedback and remain committed to your success! Please provide any comments and suggestions through our Community Forums.
11-15-2019 08:39 AM
Announcing Cloudera Enterprise 6.3.2

We are happy to announce a new Cloudera Enterprise Runtime release, CDH 6.3.2.

What's New?

This is a patch release. It cumulatively includes the base 6.3 features plus one important fix, and carries forward the existing known limitations from previous releases. Cloudera Manager 6.3.1, the recommended tool for installing Cloudera Enterprise, is compatible and has no updates this time.

Quick overview of Runtime fixes:

Upstream fix for Kudu
- KUDU-2990 - Kudu cannot distribute libnuma (dependency of memkind). In this release the NVM cache implementation in Kudu has been changed to dynamically link memkind at runtime using dlopen().

Thank you for using Cloudera software!
10-11-2019 11:57 AM
Announcing Cloudera Enterprise 6.3.1

We are happy to announce a new Cloudera Enterprise release, CDH 6.3.1.

What's New?

This is a regular maintenance release. It cumulatively includes the base 6.3 features, adds new Dell EMC Isilon support along with the latest patches and fixes, and carries forward the existing known limitations. Cloudera Manager 6.3.1, the recommended tool for installing Cloudera Enterprise, has its own feature additions, fixes and known limitations updated.

Quick overview of Runtime fixes:

Cloudera Distribution fix for Kafka
- Idempotent and Transactional Capabilities of Kafka that are Incompatible with Sentry

Upstream fixes

Apache Avro
- HIVE-17829 - Fixed ArrayIndexOutOfBoundsException that occurred when using HBASE-backed tables with Avro schema in Hive2

Apache Hadoop
- HADOOP-16018 - DistCp does not reassemble chunks when the value of blocks per chunk is greater than zero.

HDFS
- HDFS-12828 - OIV ReverseXML Processor fails with escaped characters.
- HDFS-13101 - An fsimage corruption related to snapshots.
- HDFS-13709 - Report bad block to NameNode when transfer block encounters EIO exception
- HDFS-14148 - HDFS OIV ReverseXML SnapshotSection parser throws exception when there is more than one snapshottable directory.
- HDFS-14687 - Standby Namenode does not come out of safemode when EC files are being written.
- HDFS-14706 - Checksums are not checked if the block meta file size is less than 7 bytes.

MapReduce 2
- MAPREDUCE-7225 - Fix broken current folder expansion during MR job start

YARN
- YARN-9667 - Container-executor.c duplicates messages to stdout
- YARN-9833 - Race condition when DirectoryCollection.checkDirs() runs during container launch

Apache HBase
- HBASE-19893 - restore_snapshot is broken in master branch when region splits
- HBASE-20305 - adding options to skip deletes/puts on target when running SyncTable
- HBASE-22169 - Open region failed cause memory leak
- HBASE-22539 - WAL corruption due to early DBBs re-use when Durability.ASYNC_WAL is used
- HBASE-22617 - Recovered WAL directories not getting cleaned up
- HBASE-22690 - Deprecate / Remove OfflineMetaRepair in hbase-2+
- HBASE-22759 - Extended grant and revoke audit events with caller info

Apache Hive
- HIVE-17829 - Fixed ArrayIndexOutOfBoundsException that occurred when using HBASE-backed tables with Avro schema in Hive2

Hue
- HUE-8922 - [frontend] Show dates and times in local format with timezone offset details
- HUE-8933 - [editor] Results are not properly cleared in multi-statement execution
- HUE-8950 - [core] Saving newly copied Oozie workflow throws an exception
- HUE-8979 - [jb] Oozie spark jobs display a NoneType object that is not iterable

Apache Impala
- IMPALA-8549 - Added support for scanning DEFLATE text files.
- IMPALA-8820 - Fixed an issue where the catalogd process was not found when Impala starts in a cluster.
- IMPALA-8847 - The event based automatic metadata invalidation can now correctly ignore empty partition lists generated for certain Hive queries.

Apache Oozie
- OOZIE-3397 - Improve logging in NotificationXCommand.
- OOZIE-3542 - Handle better HDFS implementations in ECPolicyDisabler.

Apache Sentry
- SENTRY-2276 - Sentry-Kafka integration does not support Kafka's Alter/DescribeConfigs and IdempotentWrite operations
- SENTRY-2528 - Format exception when fetching a full snapshot

Apache Spark
- SPARK-18364 - [YARN] Expose metrics for YarnShuffleService
- SPARK-24352 - [CORE][TESTS] De-flake StandaloneDynamicAllocationSuite blacklist test
- SPARK-24355 - Spark external shuffle server improvement to better handle block fetch requests.
- SPARK-25139 - [SPARK-18406][CORE][2.4] Avoid NonFatals to kill the Executor in PythonRunner
- SPARK-25641 - Change the spark.shuffle.server.chunkFetchHandlerThreadsPercent default to 100
- SPARK-25642 - [YARN] Adding two new metrics to record the number of registered connections as well as the number of active connections to YARN Shuffle Service
- SPARK-25692 - [CORE] Remove static initialization of worker eventLoop handling chunk fetch requests within TransportContext. This fixes ChunkFetchIntegrationSuite as well.
- SPARK-26615 - [CORE] Fixing transport server/client resource leaks in the core unittests
- SPARK-27021 - [CORE] Cleanup of Netty event loop group for shuffle chunk fetch requests
- SPARK-28150 - [CORE][FOLLOW-UP] Don't try to log in when impersonating.
- SPARK-28150 - [CORE] Log in user before getting delegation tokens.
- SPARK-28261 - [CORE] Fix client reuse test
- SPARK-28335 - [DSTREAMS][TEST] DirectKafkaStreamSuite wait for Kafka async commit
- SPARK-28584 - [CORE] Fix thread safety issue in blacklist timer, tests

Thank you for using Cloudera software!
09-18-2019 11:41 PM
09-18-2019 11:41 PM
We are happy to announce Cloudera Enterprise 6.2.1 ! What's New? It is a regular maintenance release, including base release features, additionally latest patches and Upstream fixes packaged. Apache Hadoop HADOOP-16011 - OsSecureRandom very slow compared to other SecureRandom implementations HADOOP-16018 - DistCp won't reassemble chunks when blocks per chunk > 0. HADOOP-16167 - Fixed Hadoop shell script for Ubuntu 18. HADOOP-16238 - Add the possbility to set SO_REUSEADDR in IPC Server Listener HDFS HDFS-10477 - Stop decommission a rack of DataNodes caused NameNode fail over to standby HDFS-12781 - After Datanode down, In Namenode UI Datanode tab is throwing warning message. HDFS-13101 - Yet another fsimage corruption related to snapshot HDFS-13244 - Add stack, conf, metrics links to utilities dropdown in NN webUI HDFS-13677 - Dynamic refresh Disk configuration results in overwriting VolumeMap HDFS-14111 - hdfsOpenFile on HDFS causes unnecessary IO from file offset 0 HDFS-14314 - fullBlockReportLeaseId should be reset after registering to NN HDFS-14359 - Inherited ACL permissions masked when parent directory does not exist HDFS-14389 - getAclStatus returns incorrect permissions and owner when an iNodeAttributeProvider is configured HDFS-14687 - Standby Namenode never come out of safemode when EC files are being written HDFS-14746 - Trivial test code update after HDFS-14687 MapReduce 2 MAPREDUCE-7225 - Fix broken current folder expansion during MR job start YARN YARN-9552 - FairScheduler: NODE_UPDATE can cause NoSuchElementException YARN-9667 - Use setbuf with line buffer to reduce fflush complexity in container-executor. 
Apache HBase HBASE-19893 - restore_snapshot is broken in master branch when region splits HBASE-21736 - Remove the server from online servers before scheduling SCP for it in hbck HBASE-21800 - RegionServer aborted due to NPE from MetaTableMetrics coprocessor HBASE-21960 - RESTServletContainer not configured for REST Jetty server HBASE-21978 - Should close AsyncRegistry if we fail to get cluster id when creating AsyncConnection HBASE-21991 - Fix MetaMetrics issues - [Race condition, Faulty remove logic], few improvements HBASE-22128 - Move namespace region then master crashed make deadlock HBASE-22144 - Correct MultiRowRangeFilter to work with reverse scans HBASE-22169 - Open region failed cause memory leak HBASE-22200 - WALSplitter.hasRecoveredEdits should use same FS instance from WAL region dir HBASE-22581 - User with "CREATE" permission can grant, but not revoke permissions on created table HBASE-22615 - Make TestChoreService more robust to timing HBASE-22617 - Recovered WAL directories not getting cleaned up HBASE-22690 - Deprecate / Remove OfflineMetaRepair in hbase-2+ HBASE-22759 - Extended grant and revoke audit events with caller info - ADDENDUM Apache Hive HIVE-16811 - Estimate statistics in absence of stats, partially backported, when a query fails with IllegalArgumentException Size requested for unknown type: java.util.Collection HIVE-13278 - Avoid FileNotFoundException when map/reduce.xml is not available Hue HUE-4327 - [editor] Turn off batch mode for query editors HUE-8140 - [editor] Additional improvements to multi statement execution HUE-8691 - [useradmin] Fix group sync fail to import member HUE-8717 - [oozie] Fix Sqoop1 editor fail to execute HUE-8720 - [importer] Fix importer with custom separator HUE-8727 - [frontend] Prevent Chrome from autofilling user name in various input elements HUE-8734 - [editor] Fix zero width column filter in the results HUE-8746 - [pig] Add hcat support in the Pig Editor in Hue HUE-8759 - [importer] Fix import to 
index, importing to hive instead HUE-8802 - [assist] Fix js exception on assist index refresh HUE-8829 - [core] Fix redirect stops at /hue/accounts/login HUE-8860 - [beeswax] Truncate column size to 5000 if too large HUE-8878 - [oozie] Fix Hive Document Action variable with prefilled value HUE-8879 - [core] Fix ldaptest not allow space in user_filter HUE-8880 - [oozie] Fix KeyError when execute coordinator HUE-8922 - [frontend] Show dates and times in local format with timezone offset details HUE-8933 - [editor] Make sure to clear any previous result when the execute call returns HUE-8950 - [core] Fix error of saving copied document Apache Impala IMPALA-7800 - Impala now times out new connections after it reaches the maximum number of concurrent client connections. The limit is specified by the --fe_service_threads startup flag. The default value is 64 with which 64 queries can run simultaneously. Previously the connection attempts that could not be serviced were hanging infinitely. IMPALA-7802 - Idle client connections are now closed to conserve front-end service threads. IMPALA-8469 - Fixed the issue where Impala clusters with dedicated coordinators incorrectly rejected queries destined for memory pools with configured limits. IMPALA-8549 - Added support for scanning DEFLATE text files. IMPALA-8595 - Impala supports TLS v1.2 with the Python version 2.7.9 and older in impala-shell. IMPALA-8673 - Added the DEFAULT_HINTS_INSERT_STATEMENT query option for setting the default hints for the INSERT statements when no optimizer hint was specified. Apache Kafka KAFKA-7697 - Process DelayedFetch without holding leaderIsrUpdateLock Apache Kudu KUDU-2807 - The system doesn’t crash when a flush or a compaction overlaps with another compaction. KUDU-2871 - (part 1): A temporary fix that pegs maximum TLS version to TLSv1.2. Apache Oozie OOZIE-3365 - Workflow and coordinator action status remain as RUNNING after rerun. OOZIE-3397 - Improve logging in NotificationXCommand. 
OOZIE-3478 - Oozie needs execute permission on the submitting user's home directory. Apache Sentry SENTRY-2276 - Sentry-Kafka integration does not support Kafka's Alter/DescribeConfigs and IdempotentWrite operations SENTRY-2511 - Debug level logging on HMSPaths significantly affects performance SENTRY-2528 - Format exception when fetching a full snapshot Apache Spark SPARK-25139 - [SPARK-18406][CORE][2.4] Avoid NonFatals to kill the Executor in PythonRunner SPARK-25429 - [SQL] Use Set instead of Array to improve lookup performance SPARK-26003 - Improve SQLAppStatusListener.aggregateMetrics performance SPARK-26089 - [CORE] Handle corruption in large shuffle blocks SPARK-26349 - [PYSPARK] Forbid insecure py4j gateways SPARK-27094 - [YARN] Work around RackResolver swallowing thread interrupt. SPARK-27112 - [CORE] : Create a resource ordering between threads to resolve the deadlocks encountered ... SPARK-28150 - [CORE][FOLLOW-UP] Don't try to log in when impersonating. SPARK-28150 - [CORE] Log in user before getting delegation tokens. SPARK-28335 - [DSTREAMS][TEST] DirectKafkaStreamSuite wait for Kafka async commit Apache Zookeeper ZOOKEEPER-1392 - Request READ or ADMIN permission for getAcl() ZOOKEEPER-2141 - ACL cache in DataTree never removes entries See all fixes in this release, Cloudera Proprietary software fixes included. How You can get it? You can download the parcel(s) from our archive and apply it directly to provisioned clusters without disrupting Your currently running CDH workloads. Full software download also available at Cloudera Homepage, with support and documentation. Archive continues to provide access to both variants. Thank You for using Cloudera Software! ... View more
We are pleased to announce the general availability of Cloudera Enterprise 6.3.0, the world’s leading data platform for machine learning and analytics, optimized for the cloud. This release delivers a number of new capabilities, improved usability, better performance, and support for more modern Java and identity management infrastructure software. New capabilities include: Platform highlights: Support for the OpenJDK 11 runtime : all components and tools in Cloudera Enterprise 6 support both the JDK8 and JDK11 Java Virtual Machine (JVM). Updated versions of platform components , including packaging & upgrade support for the following new Apache project versions: Kafka 2.2.1, HBase 2.1.4, Impala 3.2.0 and Kudu 1.10.0. Support for Free IPA and Red Hat IPA Manager: Cloudera Manager & CDH now supports the use of FeeIPA/RedHat IDM as the Kerberos KDC provider for CDH, and for LDAP authentication to Cloudera Enterprise Support for zstd compression with Parquet files, a fast real-time compression algorithm offering an improved speed to compression ratio as well as fast decompression speeds. Both Impala and Spark have been certified with zstd and Parquet. Management highlights: SPNEGO/Kerberos support for Cloudera Manager Admin Console and API, Cloudera Manager now supports strong authentication via Kerberos (using SPNEGO). See Configuring External Authentication and Authorization for Cloudera Manager . 
TLS certificate expiry monitoring and alerting: Cloudera Manager now alerts you 60 days before the Cloudera Manager Server’s TLS certificate expires, prompting you to rotate (re-generate) the certificates used by Cloudera Manager’s push-button wire encryption system (‘AutoTLS’) Network Performance Inspector now includes a bandwidth test, for verifying sufficient network performance between independent compute and storage clusters Kafka support in Compute Clusters, Independently managed Kafka ‘compute’ clusters can now share a single Sentry in a base CDH cluster for common authorization across all services Governance highlights: Auditing in Virtual Private Clusters. Cloudera Navigator now extracts audit events from all relevant activity within Compute Clusters in addition to collecting audit events upon creation of Compute Clusters and Data Contexts. No lineage or metadata is extracted from services running on Compute clusters. The new behavior is described in detail in Virtual Private Clusters and Cloudera SDX . Search, query, access highlights: Data Cache for Remote Reads (preview feature, off by default): To improve performance on environments with separate storage & compute clusters as well as on object store environments, Impala now caches data for non-local reads (e.g. S3, ABFS, ADLS) on local storage. See Impala Remote Data Cache for the information and steps to enable remote data cache. Automatic Invalidate/Refresh Metadata (preview feature, off by default): When other CDH services update the Hive metastore, Impala users no longer have to issue INVALIDATE/REFRESH in a number of scenarios. See Impala Metadata Management for the information and steps to enable the Zero Touch Metadata feature. Support for Kudu integrated with Hive Metastore, metadata for Kudu tables can now be managed via HMS and shared between Impala and Spark. See Using the Hive Metastore with Kudu for upgrading existing tables. 
Kudu supports both full and incremental table backups, via a job implemented using Apache Spark. Additionally, it supports restoring tables from full and incremental backups via a restore job implemented using Apache Spark. Kudu can use HDFS, S3 and any Spark-compatible destination to store backups. See the backup documentation for more details. Kudu’s web UI now supports SPNEGO , a protocol for securing HTTP requests with Kerberos by passing negotiation through the HTTP headers. To enable authorization using SPNEGO, set the --webserver_require_spnego command line flag. Query Profile, output enhanced for better monitoring and troubleshooting of query performance. See Impala Query Profile for generating and reading query profile. Security highlights: Kudu now supports native, fine-grained authorization via integration with Apache Sentry , Kudu can now enforce role-based access control policies defined in Sentry. When this feature is turned on, access control is enforced for all clients accessing Kudu, including Impala, Spark and native Kudu clients. See the authorization documentation for more details. Product software download available, with updated documentation . Please refer to the release notes for a complete list of features, same for base release . We also encourage you to review the new Upgrade Guide that now includes the ability to create a customized document based on your unique upgrade path. As always, we love your feedback and remain committed to Your success! Please provide any comments and suggestions through Our Community forums. ... View more
We are happy to announce Cloudera Enterprise 5.16.2. You can download the parcel(s) and apply it directly to provisioned clusters without disrupting Your currently running CDH workloads. What's New? Upstream and other faults fixed: Apache Flume FLUME-2973 - Deadlock in hdfs sink. FLUME-3223 - Flume HDFS Sink should retry close prior recover lease. Apache Hadoop HADOOP-15442 - ITestS3AMetrics.testMetricsRegister should not know the name of the metrics source. HDFS-11751 - DFSZKFailoverController daemon exits with the wrong status code. HDFS-12683 - DFSZKFailOverController re-order logic for logging exception. HDFS-14111 - hdfsOpenFile on HDFS causes unnecessary IO from file offset 0 MAPREDUCE-6382 - HTML links in the Diagnostics in JHS job overview must not be escaped. MAPREDUCE-7125 - JobResourceUploader creates LocalFileSystem when it's not necessary. MAPREDUCE-7131 - Job History Server has race condition where it moves files from intermediate to finished but thinks file is in intermediate. YARN-4227 - Ignore expired containers from the removed nodes in FairScheduler. YARN-4677 - RMNodeResourceUpdateEvent update from scheduler can lead to race condition. 
Apache HBase HBASE-16810 - HBase Balancer throws ArrayIndexOutOfBoundsException when regionservers are in /hbase/draining znode and unloaded HBASE-17510 - DefaultMemStore gets the wrong heap size after rollback HBASE-19730 - Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt checking HBASE-20604 - ProtobufLogReader#readNext can incorrectly loop to the same position in the stream until the the WAL is rolled HBASE-21275 - Disable TRACE HTTP method for thrift http server HBASE-21546 - ConnectException in TestThriftHttpServer Apache Hive HIVE-12476 - Metastore NPE on Oracle with Direct SQL HIVE-13278 - Avoid FileNotFoundException when map/reduce.xml is not available HIVE-13394 - Analyze table fails in Tez on empty partitions HIVE-13592 - metastore calls map is not thread safe HIVE-14557 - Nullpointer When both SkewJoin and Mapjoin Enabled HIVE-14560 - Support exchange partition between S3 and HDFS tables HIVE-14690 - Query fail when hive.exec.parallel=true, with conflicting session dir HIVE-16839 - Unbalanced calls to openTransaction/commitTransaction when altering the same partition concurrently HIVE-18778 - Needs to capture input/output entities in explain HIVE-20331 - Query with union all, lateral view and Join fails with "cannot find parent in the child operator" HIVE-20678 - HiveHBaseTableOutputFormat should implement HiveOutputFormat to ensure compatibility HIVE-20695 - HoS Query fails with hive.exec.parallel=true HIVE-21028 - get_table_meta should use a fetch plan to avoid race conditions ending up in NucleusObjectNotFoundException HIVE-21044 - Improvments to HMS metrics HIVE-21045 - Add connection pool info and rolling performance info to the metrics system Hue HUE-8388 - [oozie] Make Hue create a new workspace when importing an Oozie workflow instead of using the "deployment_dir" field HUE-8450 - [editor] Embedded mode improvements for previous Hue version HUE-8458 - [frontend] Improve application loading performance HUE-8468 - [frontend] 
Dynamically adding styles in embedded mode fails in Internet Explorer (throws a Java script exception) HUE-8584 - [useradmin] Errors returned for Add Sync Ldap Group HUE-8585 - [useradmin] Errors returned for Add Sync Ldap Users HUE-8631 - HBase is not accessible by way of the Hue server; instead returns "API Error." HUE-8660 - [core] File browser cannot view files containing a hash (#) in the name HUE-8691 - [useradmin] Attempting to add/sync group will not add users if the objectClass posixGroup exists in the LDAP entry HUE-8692 - [useradmin] Group sync fails if all group members are not found with error "No such object" HUE-8693 - [useradmin] Security application only displays 100 users in the impersonation list HUE-8705 - [oozie] Hidden popup window is blocking the Query drop-down menu and the search box HUE-8709 - [useradmin] Black transparent screen remains after confirmation modal is hidden HUE-8746 - [pig] Add hcat support to the Pig Editor in Hue Apache Impala IMPALA-6323 - Impala now supports a constant in the window specifications.-8058 - Fixed cardinality estimates for HBase queries, which could sometimes yield hugely high numbers. IMPALA-8109 - Impala can now read the gzip files bigger than 2 GB. IMPALA-8212 - Fixed a race condition in the Kerberos authentication code. Kite KITE-1185 - Make root temp directory path configurable in HiveAbstractDatasetRepository Apache Kudu KUDU-1678 - Fixed a crash caused by a race condition between altering tablet schemas and deleting tablet replicas. KUDU-2195 - Now you can use the ‑‑cmeta_force_fsync flag to fsync Kudu’s consensus metadata more aggressively. Setting this to true may decrease Kudu’s performance, but will improve its durability in the face of power failures and forced shutdowns. The issue was much more likely to happen when Kudu was running on XFS. KUDU-2463 - Fixed an issue in which incorrect results would be occasionally returned in scans following a server restart. 
CDH-76920 - Fixed the issue where the Kudu CLI crashes when running the 'kudu cluster rebalance' sub-command on some platforms. Apache Oozie OOZIE-3382 - Implement and backportOptimize SshActionExecutor's drainBuffers method Apache Pig PIG-5373 - InterRecordReader might skip records if certain sync markers are used PIG-5374 - Use CircularFifoBuffer in InterRecordReader Apache Sentry SENTRY-2205 - Improve Sentry NN Logging. SENTRY-2301 - Log where sentry stands in the snapshot fetching process, periodically SENTRY-2372 - SentryStore should not implement grantOptionCheck SENTRY-2419 - Log where sentry stands in the process of persisting the snpashot SENTRY-2427 - PortUse Hadoop KerberosName class to derive shortName SENTRY-2428 - Skip null partitions or partitions with null sds entries SENTRY-2437 - PortWhen granting privileges a single transaction per grant causes long delays SENTRY-2490 - PortWhen building a full perm update for each object we only build 1 privilege per role SENTRY-2498 - Exception while deleting paths that does't exist SENTRY-2502 - Modified BackportSentry NN plug-in stops fetching updates from sentry server. SENTRY-2511 - Debug level logging on HMSPaths significantly affects performance Apache Spark SPARK-4224 - [CORE][YARN] Support group acls SPARK-19019 - [PYTHON][BRANCH-1.6] Fix hijacked `collections.namedtuple` and port cloudpickle changes for PySpark to work with Python 3.6.0 Apache Zookeeper ZOOKEEPER-1392 - Request READ or ADMIN permission for getAcl() ZOOKEEPER-2141 - ACL cache in DataTree never removes entries See all fixes in this release. See documentation here. Thank You for using Cloudera Software! ... View more
We are happy to announce CDS 2.4 Release 2? - A new Streamlined API - Performance Improvements - Stream Processing using Dataframes - New machine learning algorithms and model persistence See CDS Powered By Apache Spark Fixed Issues for the list of fixed issues and enhancements. See documentation and full download available here. -------------- Want to become a pro Spark user? Sign up for Apache Spark Training. ... View more
04-09-2019 01:39 AM
04-09-2019 01:39 AM
We are happy to announce CDS 2.4 Release 1? - A new Streamlined API - Performance Improvements - Stream Processing using Dataframes - New machine learning algorithms and model persistence See CDS Powered By Apache Spark Fixed Issues for the list of fixed issues. See documentation and full download available. Want to become a pro Spark user? Sign up for Apache Spark Training. ... View more
01-22-2019 01:00 PM
01-22-2019 01:00 PM
We are happy to announce CDS 2.1 Release 4 This is purely a maintenance release. See CDS Powered By Apache Spark Fixed Issues for the list of fixed issues. Download CDS 2.1 Release 4 Powered By Apache Spark. Read the documentation . Want to become a pro Spark user? Sign up for Apache Spark Training . ... View more | https://community.cloudera.com/t5/user/viewprofilepage/user-id/31727 | CC-MAIN-2020-45 | en | refinedweb |
configureAsRemoved
How to get configureAsRemoved
import configureAsRemoved from 'fontoxml-families/src/configureAsRemoved.js'
Type: Function
Use this family to hide an element from the author.
The element and their content will not be rendered in the editor. The associated XML will be preserved, until one of its ancestors is being deleted.
You could use this for metadata.
Arguments
Was this page helpful? | https://documentation.fontoxml.com/api/latest/configureasremoved-33443117.html | CC-MAIN-2020-45 | en | refinedweb |
Scala is a general-purpose, high-level, multi-paradigm programming language. It is a pure object-oriented programming language which also provides support for the functional programming approach. Scala programs compile to Java bytecode and run on the JVM (Java Virtual Machine). Scala stands for Scalable Language. Scala doesn’t provide any support for the .NET Framework. Scala was designed by Martin Odersky, a German computer scientist and professor of programming methods at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Scala was first released publicly on the Java platform in June 2004. The latest version of Scala is 2.12.6, released on 27-Apr-2018.
Topics:
- Features and Application
- Installation Guide
- Run Scala Code
- Variables
- Operators
- Decision Making
- Loops
- Arrays
- Strings
- Functions
Scala has many reasons for being popular and in demand. A few of the reasons are mentioned below:
- Object-Oriented: Every value in Scala is an object, so it is a purely object-oriented programming language. The behavior and type of objects are described by classes and traits in Scala.
- Functional: It is also a functional programming language, as every function is a value and every value is an object. It provides support for higher-order functions, nested functions, anonymous functions, etc.
- Statically Typed: The process of verifying and enforcing the constraints of types is done at compile time in Scala. Unlike other statically typed programming languages like C++, C, etc., Scala doesn’t expect redundant type information from the user. In most cases, the user has no need to specify a type.
- Extensible: New language constructs can be added to Scala in the form of libraries. Scala is designed to interoperate with the JRE (Java Runtime Environment).
- Concurrent & Synchronized Processing: Scala allows the user to write code in an immutable manner, which makes it easy to apply parallelism (synchronization) and concurrency.
Scala is a very compatible language and can easily be installed on both Windows and Unix-based operating systems.

Since Scala is syntactically similar to other widely used languages, it is easier to code and learn in Scala. Scala programs can be written in any plain text editor like Notepad, Notepad++, or anything of that sort. One can also use an online IDE for writing Scala code, or install one on their system to make writing code more feasible, because IDEs provide a lot of features like an intuitive code editor, debugger, compiler, etc.
To begin writing Scala code and performing various intriguing and useful operations, one must have Scala installed on their system. This can be done by following the step-by-step instructions provided below:
- Verifying Java Packages
The first thing we need is a Java Software Development Kit (SDK) installed on the computer. We need to verify these SDK packages and, if they are not installed, install them.
- Now install Scala
We are done with installing Java; now let’s install the Scala packages. The best option is to download these packages from the official site only. The download is approximately 100 MB. Once the packages are downloaded, open the downloaded .msi file.
- Testing and Running the Scala Commands
Open the command prompt now and type the following command.
C:\Users\Your_PC_username>scala
We will receive an output as shown below:
Output of the command.
Let’s consider a simple Hello World Program.
Output:
Hello, World!
Generally, there are two ways to Run a Scala program-
- Using Online IDEs: We can use various online IDEs to run Scala programs without installing anything.
- Using Command-Line: We can also use command line options to run a Scala program. Below steps demonstrate how to run a Scala program on Command line in Windows/Unix Operating System:
Open the command line, and then to compile and run the code, type scala Hello.scala. If your code has no errors, it will execute properly and the output will be displayed.
Fundamentals of Scala
Variables are simply storage locations. Every variable has a name and stores some piece of information known as its value. In Scala there are two types of variables:
- Mutable Variables: These variables allow us to change their value after declaration. Mutable variables are defined using the “var” keyword.
- Immutable Variables: These variables do not allow us to change their value after declaration. Immutable variables are defined using the “val” keyword.
Example:
// Mutable Variable var name: String = "geekforgeeks"; // Immutable Variable val name: String = "geekforgeeks";
To know more about Scala Variables refer – Variables in Scala, Scope of Variable in Scala.
An operator is a symbol that represents an operation to be performed on one or more operands. Operators allow us to perform different kinds of operations on operands. Scala provides different types of operators, such as arithmetic, relational, logical, bitwise, and assignment operators.
Example :
Addition is: 14 Subtraction is: 6 Equal To Operator is False Logical Or of a || b = true Bitwise AND: 0 Addition Assignment Operator: ()
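The original listing for this example is not shown above, so here is a sketch that produces output along the same lines (the object name and the operand values 10 and 4 are assumptions):

```scala
// A sketch of common Scala operators; x and y are assumed values
object OperatorDemo {
  val x = 10
  val y = 4

  def addition: Int     = x + y          // arithmetic operator
  def subtraction: Int  = x - y
  def isEqual: Boolean  = x == y         // relational operator
  def orResult: Boolean = true || false  // logical operator
  def bitwiseAnd: Int   = 2 & 1          // bitwise operator

  def main(args: Array[String]): Unit = {
    println("Addition is: " + addition)
    println("Subtraction is: " + subtraction)
    println("Equal To Operator is " + isEqual)
    println("Logical Or of a || b = " + orResult)
    println("Bitwise AND: " + bitwiseAnd)
  }
}
```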
Decision Making in programming is similar to decision making in real life. Scala uses control statements to control the flow of execution of the program based on certain conditions. These are used to cause the flow of execution to advance and branch based on changes to the state of a program.
Decision Making Statements in Scala :
- If
- If – else
- Nested – If
- if-else if ladder
Example 1: To illustrate use of if and if-else
Even Number Sudo Placement
Example 2: To illustrate the use of Nested-if
Number is divisible by 2 and 5
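The nested-if listing itself is missing above, so here is a hedged reconstruction that yields the Example 2 output (the input value 100 and the object name are assumptions):

```scala
object DecisionDemo {
  // Returns a message describing divisibility, using a nested if
  def check(n: Int): String =
    if (n % 2 == 0) {            // outer if
      if (n % 5 == 0)            // nested if
        "Number is divisible by 2 and 5"
      else
        "Number is divisible by 2 but not 5"
    } else "Number is not divisible by 2"

  def main(args: Array[String]): Unit =
    println(check(100))
}
```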
To know more about Decision Making please refer to Decision making in Scala
Looping in programming languages is a feature which facilitates the execution of a set of instructions/functions repeatedly while some condition evaluates to true. Loops make the programmer’s task simpler. Scala provides different types of loops to handle condition-based situations in a program. The loops in Scala are:
- for loop. Output:
Value of y is: 1 Value of y is: 2 Value of y is: 3 Value of y is: 4
- While loop. Output:
Value of x: 1 Value of x: 2 Value of x: 3 Value of x: 4
- do-while loop. Output:
10 9 8 7 6 5 4 3 2 1
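The loop listings themselves were lost in extraction; a sketch along these lines reproduces the three outputs shown above (variable names and ranges are assumptions):

```scala
object LoopDemo {
  // Builds the sequence a do-while countdown produces (start down to 1)
  def countdown(start: Int): List[Int] = {
    var a = start
    val buf = scala.collection.mutable.ListBuffer[Int]()
    do {
      buf += a
      a -= 1
    } while (a > 0)
    buf.toList
  }

  def main(args: Array[String]): Unit = {
    // for loop: prints y = 1 to 4
    for (y <- 1 to 4) println("Value of y is: " + y)

    // while loop: prints x = 1 to 4
    var x = 1
    while (x <= 4) { println("Value of x: " + x); x += 1 }

    // do-while loop: prints 10 down to 1
    countdown(10).foreach(println)
  }
}
```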
To know more about Loops please refer to Loops in Scala
An array is a special kind of collection in Scala. It is a fixed-size data structure that stores elements of the same data type. It is a collection of mutable values. Below is the syntax.
Syntax :
var arrayname = new Array[datatype](size)
For example, var number = Array(40, 55, 63, 17) creates an array of integers containing the values 40, 55, 63 and 17. Below is the syntax to access a single element of an array.
number(0)
It will produce the output as 40.
Iterating through an Array:
second element of an array is: geeks
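The iteration listing is missing above; a sketch like the following matches the outputs in this section (the string array contents are assumptions inferred from the "geeks" output):

```scala
object ArrayDemo {
  // Example arrays; the element values are assumptions
  val number = Array(40, 55, 63, 17)
  val words  = Array("gfg", "geeks", "portal")

  def main(args: Array[String]): Unit = {
    println(number(0))  // first element: 40
    println("second element of an array is: " + words(1))

    // iterating through an array
    for (w <- words) println(w)
  }
}
```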
To know more about arrays please refer to Arrays in Scala
A string is a sequence of characters. In Scala, objects of String are immutable, which means constant and cannot be changed once created. In Scala, the String type need not be specified before the string literal: when the compiler meets a string literal, it creates a String object (here, str).
Syntax :
var str = "Hello! GFG" or val str = "Hello! GFG" var str: String = "Hello! GFG" or val str: String = "Hello! GFG"
Hello! GFG GeeksforGeeks.
String 1:Welcome! GeeksforGeeks String 2: to Portal New String :Welcome! GeeksforGeeks to Portal This is the tutorial of Scala language on GFG portal
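The concatenation listing is not shown; a hedged sketch that produces the second output above could look like this (the object name is an assumption):

```scala
object StringDemo {
  val str1 = "Welcome! GeeksforGeeks"
  val str2 = " to Portal"

  // concatenation via the concat method (str1 + str2 is equivalent)
  def concatenated: String = str1.concat(str2)

  def main(args: Array[String]): Unit = {
    println("String 1:" + str1)
    println("String 2:" + str2)
    println("New String :" + concatenated)
  }
}
```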
To know more about Strings please refer to Strings in Scala
A function is a collection of statements that performs a certain task. Scala is considered a functional programming language, so functions play an important role. They make it easier to debug and modify code. Scala functions are first-class values. Below is the syntax of Scala functions.
Syntax:
def function_name ([parameter_list]) : [return_type] = { // function body }
The def keyword is used to declare a function in Scala.
Output :
Sum is: 8
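The listing producing "Sum is: 8" is missing; a minimal sketch, assuming the operands 5 and 3 and a hypothetical function name, would be:

```scala
object FuncDemo {
  // Function with two Int parameters and an Int return type
  def functionToAdd(a: Int, b: Int): Int = {
    a + b
  }

  def main(args: Array[String]): Unit =
    println("Sum is: " + functionToAdd(5, 3))
}
```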
Anonymous Functions in Scala :
In Scala, an anonymous function is also known as a function literal. A function which does not have a name is known as an anonymous function.
Syntax :
(z:Int, y:Int) => z*y Or (_:Int)*(_:Int)
Output :
Geeks12Geeks GeeksforGeeks
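The anonymous-function listing itself is missing; the output above suggests string concatenation, so a hedged reconstruction (input strings assumed) is:

```scala
object AnonDemo {
  // Anonymous function assigned to a value
  val join = (a: String, b: String) => a + b

  // Equivalent anonymous function using placeholder syntax
  val join2 = (_: String) + (_: String)

  def main(args: Array[String]): Unit = {
    println(join("Geeks12", "Geeks"))
    println(join2("Geeksfor", "Geeks"))
  }
}
```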
Scala Nested Functions:
A function definition inside an another function is known as Nested Function. In Scala, we can define functions inside a function and functions defined inside other functions are called nested or local functions.
Syntax :
def functionName1(parameter1, parameter2, ..) = {
    def functionName2() = {
        // code
    }
}
Output:
Min and Max from 5, 7 Max is: 7 Min is: 5
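The nested-function listing is not shown; a sketch matching the min/max output above (function names are assumptions) could be:

```scala
object NestedDemo {
  // Outer function containing two nested (local) functions
  def minMax(a: Int, b: Int): (Int, Int) = {
    def min(x: Int, y: Int): Int = if (x < y) x else y // nested function
    def max(x: Int, y: Int): Int = if (x > y) x else y // nested function
    (min(a, b), max(a, b))
  }

  def main(args: Array[String]): Unit = {
    val (mn, mx) = minMax(5, 7)
    println("Min and Max from 5, 7")
    println("Max is: " + mx)
    println("Min is: " + mn)
  }
}
```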
Currying Functions in Scala :
Currying in Scala is the technique of transforming a function that takes multiple arguments into a chain of functions that each take a single argument.
Syntax :
def functionName(argument1)(argument2) = operation
Output:
39.
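The original curried listing is missing; a sketch using multiple parameter lists (the operands 20 and 19 are assumptions chosen to give 39) is:

```scala
object CurryDemo {
  // Curried function: two parameter lists instead of one
  def add(a: Int)(b: Int): Int = a + b

  def main(args: Array[String]): Unit = {
    val addTwenty: Int => Int = add(20) // partially applied
    println(addTwenty(19))              // prints 39
  }
}
```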
OOPs Concepts:
Creation of a Class and an Object:
Classes and Objects are basic concepts of Object Oriented Programming which revolve around the real-life entities. A class is a user-defined blueprint or prototype from which objects are created.
Name of the company : Apple Total number of Smartphone generation: 16
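The class listing is not shown above; the output suggests a class holding a company name and a generation count, so a hedged reconstruction (class and field names are assumptions) is:

```scala
// A sketch of a class whose objects match the output above
class Company(var name: String, var generations: Int) {
  def display(): Unit = {
    println("Name of the company : " + name)
    println("Total number of Smartphone generation: " + generations)
  }
}

object ClassDemo {
  def main(args: Array[String]): Unit = {
    val obj = new Company("Apple", 16) // creating an object from the class
    obj.display()
  }
}
```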
Traits are like interfaces in Java, but they are more powerful than Java interfaces because traits allow you to implement members. Traits can have methods (both abstract and non-abstract) and fields as their members.
Creating a trait –
Output :
Pet: Dog Pet_color: White Pet_name: Dollar
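The trait listing is missing; a sketch with one abstract and one non-abstract member that matches the output above (trait and member names are assumptions) is:

```scala
// Trait with a concrete and an abstract member
trait Pet {
  def petName: String         // abstract method
  def petKind: String = "Dog" // non-abstract (implemented) method
}

class MyPet extends Pet {
  def petName: String = "Dollar" // implementing the abstract member
  val color: String = "White"
}

object TraitDemo {
  def main(args: Array[String]): Unit = {
    val p = new MyPet
    println("Pet: " + p.petKind)
    println("Pet_color: " + p.color)
    println("Pet_name: " + p.petName)
  }
}
```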
To know more about Traits please refer to Traits in Scala
A regular expression describes a pattern used to match a series of input data, so it is helpful for pattern matching in numerous programming languages. In Scala, regular expressions are generally termed Scala Regex.
Output :
Some(GeeksforGeeks)
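The regex listing is not shown; a sketch that returns Some(GeeksforGeeks) (the pattern and input string are assumptions) is:

```scala
object RegexDemo {
  // Build a Regex from a string with .r; findFirstIn returns an Option
  val pattern = "Geeks(forGeeks)?".r

  def firstMatch(s: String): Option[String] = pattern.findFirstIn(s)

  def main(args: Array[String]): Unit =
    println(firstMatch("GeeksforGeeks is a portal")) // Some(GeeksforGeeks)
}
```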
To know more about tuple please refer to Regular Expressions in Scala.
An exception is an unwanted or unexpected event which occurs during the execution of a program, i.e. at run time. These events change the flow of control of the program in execution.

In Scala, all exceptions are unchecked; there is no concept of a checked exception. Scala facilitates a great deal of flexibility in terms of the ability to choose whether to catch an exception.
The Throwing Exceptions :
Output :
You are eligible for internship
Try-Catch Exceptions :
Output :
Arithmetic Exception occurred.
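The try-catch listing itself is missing; the output suggests a division by zero being caught, so a hedged sketch (operands assumed) is:

```scala
object ExceptionDemo {
  // Divides two Ints, translating ArithmeticException into a message
  def safeDivide(a: Int, b: Int): String =
    try {
      (a / b).toString
    } catch {
      case _: ArithmeticException => "Arithmetic Exception occurred."
    }

  def main(args: Array[String]): Unit =
    println(safeDivide(10, 0))
}
```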
File handling is a way to store fetched information in a file. Scala provides packages with which we can create, open, read and write files. For writing to a file in Scala we borrow java.io._ from Java, because the Scala standard library does not have a class for writing to a file. We could also import java.io.File and java.io.PrintWriter.
- Creating a new file :
- java.io.File defines classes and interfaces for the JVM to access files, file systems and attributes.
- File(String pathname) converts the parameter string to an abstract pathname, creating a new File instance.
- Writing to the file :
java.io.PrintWriter includes all the printing methods included in PrintStream.
Below is the implementation for creating a new file and writing into it.
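The code block itself did not survive extraction; a minimal write sketch (the file name and message are assumptions):

```scala
import java.io.{File, PrintWriter}

object Main {
  def main(args: Array[String]): Unit = {
    // PrintWriter creates the file if it does not exist
    val writer = new PrintWriter(new File("demo.txt"))
    writer.write("Hello, Scala file handling!")
    writer.close()   // always close to flush the contents to disk
  }
}
```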
- Reading a File :
Below is an example of reading a file.
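The reading example was also lost; a minimal sketch using scala.io.Source (the file name "demo.txt" is an assumption):

```scala
import scala.io.Source

object Main {
  def main(args: Array[String]): Unit = {
    val source = Source.fromFile("demo.txt")
    for (line <- source.getLines())   // iterate over the file line by line
      println(line)
    source.close()
  }
}
```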
To know more about various different File Handling, please refer to File Handling in Scala
A list is a collection which contains immutable data. List represents a linked list in Scala. The Scala List class holds a sequenced, linear list of items. Lists are immutable whereas arrays are mutable in Scala. In a Scala list, each element must be of the same type. List is defined in the scala.collection.immutable package.
Syntax :
val variable_name: List[type] = List(item1, item2, item3)
or
val variable_name = List(item1, item2, item3)
Create and initialize Scala List
Example :
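The example code was stripped during extraction; a minimal sketch consistent with the output shown below:

```scala
object Main {
  // a typed list and a type-inferred list
  val mylist1: List[String] = List("Geeks", "GFG", "GeeksforGeeks", "Geek123")
  val mylist2 = List("C", "C#", "Java", "Scala", "PHP", "Ruby")

  def main(args: Array[String]): Unit = {
    println("List 1: " + mylist1)
    print("List 2: ")
    for (lang <- mylist2) print(lang + " ")   // iterate element by element
    println()
  }
}
```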
Output :
List 1: List(Geeks, GFG, GeeksforGeeks, Geek123) List 2: C C# Java Scala PHP Ruby
To know more about List please refer to List in Scala.
Map is a collection of key-value pairs; in other words, it is similar to a dictionary. Keys are always unique while values need not be unique. In order to use a mutable Map, we must import the scala.collection.mutable.Map class explicitly.
Creating a map and accessing the value
Example :
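The example code was stripped during extraction; a minimal sketch consistent with the output shown below (the keys and values are assumptions):

```scala
object Main {
  // key-value pairs; looking a key up returns its value
  val marks = Map("maths" -> 30, "science" -> 45)

  def main(args: Array[String]): Unit =
    println(marks("maths"))   // prints 30
}
```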
Output :
30
To know more about Map please refer to Map in Scala.
An iterator is a way to access the elements of a collection one by one. It resembles a collection in terms of syntax but works differently in terms of functionality. To access elements we can use hasNext() to check if there are elements available and next() to retrieve the next element.
Syntax:
val v = Iterator(5, 1, 2, 3, 6, 4)
// checking for availability of next element
while (v.hasNext)
  // printing the element
  println(v.next)
Example :
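The runnable example was stripped during extraction; the syntax above can be wrapped into a complete program like this:

```scala
object Main {
  def main(args: Array[String]): Unit = {
    val v = Iterator(5, 1, 2, 3, 6, 4)
    // an iterator can be traversed only once
    while (v.hasNext)
      print(v.next() + " ")   // 5 1 2 3 6 4
    println()
  }
}
```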
Output:
5 1 2 3 6 4
To know more, please refer to Iterators in Scala.
A set is a collection which only contains unique items. The uniqueness of a set is defined by the == method of the type that the set holds. If you try to add a duplicate item to the set, the set quietly discards your request.
Syntax :
// Immutable set
val variable_name: Set[type] = Set(item1, item2, item3)
or
val variable_name = Set(item1, item2, item3)

// Mutable Set
var variable_name: Set[type] = Set(item1, item2, item3)
or
var variable_name = Set(item1, item2, item3)
Creating and initializing Immutable set
Example :
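The example code was stripped during extraction; a minimal sketch consistent with the output shown below (note that set iteration order is unspecified, so the second line may print in any order):

```scala
object Main {
  val myset1: Set[String] = Set("Geeks", "GFG", "GeeksforGeeks", "Geek123")
  val myset2 = Set("C", "C#", "Java", "Scala", "PHP", "Ruby")

  def main(args: Array[String]): Unit = {
    println("Set 1: " + myset1)
    print("Set 2: ")
    for (item <- myset2) print(item + " ")   // order is not guaranteed
    println()
  }
}
```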
Output :
Set 1: Set(Geeks, GFG, GeeksforGeeks, Geek123) Set 2: Scala C# Ruby PHP C Java
Creating and initializing mutable set
Example :
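The mutable-set example was stripped during extraction; a minimal sketch using the values from the output shown below (sorted here for a deterministic print, since set order is unspecified):

```scala
import scala.collection.mutable

object Main {
  // a mutable set can be updated in place
  val myset = mutable.Set(10, 100, 1000, 10000, 100000)

  def main(args: Array[String]): Unit = {
    myset += 100   // duplicate item: the set quietly discards it
    println(myset.toList.sorted.mkString(" "))
  }
}
```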
Output :
Set 1: Set(Geeks, GFG, GeeksforGeeks, Geek123) Set 2: 10 100000 10000 1000 100
To know more about Set please refer to Set in Scala | Set-1 and Set in Scala | Set-2.
Tuple is a collection of elements. Tuples are heterogeneous data structures, i.e., they can store elements of different data types. A tuple is immutable, unlike an array in Scala, which is mutable.
Creating a tuple and accessing an element
Example :
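The example code was stripped during extraction; a minimal sketch consistent with the output shown below:

```scala
object Main {
  // heterogeneous elements: Int, String and Boolean in one tuple
  val info = (15, "chandan", true)

  def main(args: Array[String]): Unit =
    // tuple elements are accessed with _1, _2, _3, ...
    println(info._1 + " " + info._2 + " " + info._3)   // 15 chandan true
}
```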
Output :
15 chandan true
To know more about tuple please refer to tuple in Scala.
The data types available in Scala are similar to those available in Java and other modern programming languages. To use data types effectively in Scala, there are four key concepts that need to be understood: literals, values, variables and types. This tutorial assumes you have a Scala development environment set up in Eclipse or your favorite IDE; if you have not, please refer to the tutorial on setting up a Scala development environment in Eclipse. Each concept will be briefly explained. A literal refers to data that is placed directly in code. For example, look at the code shown below.
object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("Hello, world!")
  }
}
This is the well-known hello world program that prints "Hello, world!" on the screen. In this code, "Hello, world!" is a literal. Start Eclipse and create a new Scala project, add a HelloWorld object, and add the code above. Save and run it as a Scala program.
Beginning with Scala 2.10.0 you are able to create literals by directly referencing variables. Such a mechanism is referred to as string interpolation and literals created this way are called processed literals. There are three interpolation methods available in Scala. These are s, f and raw. To process a string you just add any of the three methods before the string.
The s method allows you to reference a variable and add it to the string. For example if we expand the hello world program and create a value then we can reference it in the string literal. The expanded program is shown below.
object HelloWorld {
  def main(args: Array[String]): Unit = {
    val tutor = "Eduonix learning"
    println(s"Hello world!, $tutor")
  }
}
The output from our revised program will be: Hello world!, Eduonix learning.
The f method is used to format strings using a specified style. For example, expanding on the previous example, we can create a number and display it with no decimals.
object HelloWorld {
  def main(args: Array[String]): Unit = {
    val tutor = "Eduonix learning"
    val daysYear = 365.0
    println(f"Hello world!, $tutor, $daysYear%.0f")
  }
}
A unit of storage that is immutable is referred to as a value. When the unit of storage is defined it is possible to assign data but it cannot be reassigned. Simply put it cannot be changed once it has been defined. The keyword val is used to define a value. To demonstrate an example start Scala in interactive mode by running scala at a terminal. Below are examples of defining values.
val name = "sammy"
val age = 78
val weight = 34.8
A unit of storage that is mutable is referred to as a variable. When this unit of storage is defined you assign data which you can reassign later. In simple terms a definition can be changed. The keyword var is used to define a variable. Below are examples of how to define variables
var tutor = "eduonix"
var courseSeries = 1
var durationHours = 2.5
The various kinds of data available in Scala are referred to as data types. Everything in Scala is an object and there are no primitive types like those found in Java. The types available in Scala are discussed below.
Byte is used to store signed values that range from -128 to 127. Short is used to store signed values in the range -32768 to 32767. Int is used to store values in the range -2147483648 to 2147483647, while Long stores values from -9223372036854775808 to 9223372036854775807. Single-precision floating-point storage is provided by the Float type, and double precision by Double. Storage of character sequences is provided by the String type. True and false storage is provided by the Boolean type. These are some of the commonly used types in Scala.
From the output when creating the values and variables above, observe that Scala was able to determine the data type. This is referred to as type inference. Specifying the data type explicitly is still a valid way of defining variables. To demonstrate that, let us repeat the variable definitions, specifying types. Using this approach you specify the data type after the variable name.
var tutor: String = "eduonix"
var courseSeries: Int = 1
var durationHours: Float = 2.5f
Variable scoping exists at three levels, depending on where the definition happens. Variables defined at an object level are referred to as fields and can be accessed by all methods in the object. It is possible for fields to be accessed outside of an object if the access modifiers defining them allow it. Fields are allowed to be mutable or immutable. Variables defined within a method have a local scope, so they are only accessible within the method. They can be defined as mutable or immutable. Method parameters are the last scope type.
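The three levels described above can be illustrated with a small sketch (all names are arbitrary):

```scala
object ScopeDemo {
  // field: visible to every method in the object
  val field = "object-level"

  def describe(param: String): Unit = {   // param: method parameter scope
    val local = "method-level"            // local: visible only inside describe
    println(field + ", " + local + ", " + param)
  }

  def main(args: Array[String]): Unit = describe("argument")
}
```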
Operators in Scala are categorized into assignment, arithmetic, relational, bitwise and logical operators. We begin by looking at arithmetic operators. The arithmetic operators available in Scala are addition (+), subtraction (-), multiplication (*), division (/) and modulus (%). Below are examples of their use in the Scala shell.
val weight = 45.9
val height = 190
val total = weight + height
val product = weight * height
val divide = weight / height
The relational operators available are equal to (==), not equal to (!=), greater than (>), less than (<), greater than or equal to (>=) and less than or equal to (<=). The results of their operations are stored in a Boolean data type. Examples of their use are shown below.
weight == height
weight > height
weight != height
The logical operators available are AND (&&), OR (||) and NOT (!). The results of such operations are stored in a Boolean type. Use cases are shown below. In the previous example we created weight and height values. If we require that both values be greater than certain thresholds, we use &&. To check that at least one of the values is greater than a certain value, we use ||. To check that weight is not a specific value, we use !.
val wt = weight > 47
val ht = height > 120
wt && ht
wt || ht
The = operator assigns the value on its right to the variable on its left. It is possible to combine = with an arithmetic operator to perform an arithmetic operation and an assignment in one step. Examples are shown below.
val score1 = 89
var score2 = 89
score2 += score1 // this is the same as score2 = score1 + score2
This tutorial introduced you to the concepts required to effectively use data types in Scala. Variable definition, mutability and scope were defined. The types of operators available were also discussed.
5 Certificates and CRLs 215
216
Demystifying the IPsec Puzzle
currency of a certificate. OCSP provides more up-to-the-minute information, but that protocol has its own complications for IKE, because an IKE
negotiation can time out while waiting for an OCSP response.
10.6 Certificate Formats
For certificates and CRLs to be universally useful, it is important to establish
a standard, unambiguous way in which to describe their components. Ideally, that is the function of Abstract Syntax Notation One [18, 19], generally
referred to as ASN.1. It is a symbolic language, consisting of a series of
rules that, together, definitively describe a composite object; in our case, the
ultimate objects we want to define are certificates, CRs, and CRLs. That is
accomplished in an iterative manner. The initial ASN.1 rule describes the
highest level object in terms of a series of components. Successive rules refine
the definition of each component in an increasingly concrete manner, until
the lowest level, that of digits and characters, is reached.
Figure 10.2 shows the ASN.1 representation of two portions of a
certificate. The first rule defines the general structure of a certificate, which
consists of a tbscertificate, the portion of the certificate that will be digitally
signed, an identifier for the algorithm used to create the digital signature, and
the signature itself. The second rule defines the time-related validity period
of the certificate.
Now that we have an abstract way to describe certificates, we need to
be able to translate this structure into an encoding that consists of bits and
bytes. That is where basic encoding rules (BER) and distinguished encoding
rules (DER) come in [20]. Each ASN.1 component is assigned a unique
identifier, a numeric object identifier (OID). BER and DER are used to
Certificate ::= SEQUENCE {
    tbsCertificate       TBSCertificate,
    signatureAlgorithm   AlgorithmIdentifier,
    signatureValue       BIT STRING }

Validity ::= SEQUENCE {
    notBefore   Time,
    notAfter    Time }

Figure 10.2 Sample ASN.1 rules.
The Framework: Public Key Infrastructure (PKI)
217
translate the abstract definition, using OIDs and the specific data appropriate to an individual case, into an encoded certificate. Figure 10.3 shows the
DER encoding of a sample certificate field, the email address jdoe@bb.gov,
along with two BER alternative encodings. The first example is the DER
encoding; the second is an alternative BER encoding; and the third shows a
BER encoding with the e-mail address broken up into three components:
jdoe, @, and bb.gov.
Why two alternative encodings? The BER rules often allow the same
object to be encoded in several different ways, while the DER rules define
a single encoding for each case. BER can be more efficient to implement,
because its alternative formats generally allow a program to encode or decode
an object in a single pass, without the necessity to look ahead for coming
attractions. Using DER may necessitate some lookahead, but a single standard encoding is essential to ensure that the verifying signature is computed
over the same entity. Now that we have presented samples of ASN.1, DER,
and BER for the reader's edification and mystification, we will not delve further into their minutiae.
Here's where it gets even more complicated, if possible. DER-encoded certificates need to be stored in repositories and transmitted over the network. Some transmission methods, such as email, cannot handle binary objects. That gave rise to the Privacy Enhanced Mail (PEM) [21] encoding of the DER encoding of an ASN.1 certificate. PEM-encoded certificates and
CRLs thus can be sent as email attachments. PKCS#10 CRs and PKCS#7
cryptographic objects are defined over the DER format of a certificate. They
can be transformed into, but are not equivalent to, the PEM-encoded versions. And let us not forget the PKCS#7-wrapped version of PKCS#10
objects. As if that were not confusing enough, the ASN.1 definitions and
16 0a 6a 64 6f 65 40 62 62 2e 67 6f 76

16 81 0a 6a 64 6f 65 40 62 62 2e 67 6f 76

36 13 16 04 6a 64 6f 65 16 01 40 16 06 62 62 2e 67 6f 76

Figure 10.3 Sample DER and BER encodings.
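To make the tag-length-value structure of these encodings concrete, here is a small Python sketch (an illustration only, not a full DER library: it handles just the IA5String tag 0x16 and short-form lengths):

```python
def der_ia5string(s):
    """DER-encode s as an ASN.1 IA5String: tag octet 0x16, then the
    length of the content in octets, then the ASCII content octets."""
    body = s.encode("ascii")
    assert len(body) < 128  # this sketch supports short-form lengths only
    return bytes([0x16, len(body)]) + body

# tag, length and content octets for the sample e-mail address
print(der_ia5string("jdoe@bb.gov").hex(" "))
```

The first octet is the tag, the second is the content length, and the rest are the ASCII codes of the string.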
OIDs for various certificate pieces are defined in numerous documents, some
intended for the universal PKI domain and some aimed at specific subsets of
that domain.
IPsec implementers have tried to do an end run around some of this
confusion by holding periodic IPsec interoperability workshops, also known
as bake-offs. That allows developers to compare certificate contents and formats. At the end of each workshop, a list of issues that cropped up during the
workshop is compiled. Solutions are discussed on the IPsec email list, and,
once consensus is reached, those solutions are publicized in Internet Drafts.
For vendors who are latecomers to the process, the email list archives supply
a record of previously discussed issues, the array of proposed solutions, and
the rationale behind the ultimate consensual solution.
10.7 Certificate Contents
For an end user of IPsec, it would be nice to treat certificates as opaque
entities that merely serve as grist for the IPsec mill. If that were the case, the
fortunate end user would not need to be aware of the fields and the data
contained within the certificate. Unfortunately, the literature and standards
are replete with quaint compound terms such as subjectAltName and
Distinguished Name.
X.509 certificates consist of a number of basic fields found in all certificates and a number of optional extensions, added in X.509 version 3. In
addition, communities of certificate users can agree on the definition, format, applicability, and use of other extensions. The basic fields are as follows.
• Version. Identifies whether the X.509 conventions used in the cer-
tificate conform to version 1, 2, or 3. For PKIX and IPsec, version 3
certificates are used.
• Serial number. A number assigned by the CA that is unique among
all the CA's certificates.
• Signature. The identifier (OID) of the algorithms used by the CA
to hash and digitally sign the certificate. Two examples mentioned in the IKE PKI profile are id-dsa-with-sha1 and
sha-1WithRSAEncryption. The IKE PKIX profile suggests that all
IKE implementations should be able to handle both RSA signatures
and DSA signatures using the SHA-1 hash algorithm. As mentioned
in Chapter 4, DSA can be computed only over a SHA-1 hash, but RSA
can use a variety of hash algorithms, including MD5 and SHA-1.
• Issuer. The distinguished name (DN) of the CA. It generally is made
up of a series of fields that uniquely characterize the CA. Figure 10.4
contains two DNs, the first of which could apply to a CA. Following
are some of the fields that can be used within the DN and examples
of their use.
• Country (C): C = United States
• Organization (O): O = Bureau of the Budget
• Organizational unit (OU): OU = Red Ink Department
• Validity. The start and end dates that delineate the certificate's lifetime. If an IKE SA is authenticated via a certificate, or an IPsec SA is generated using this type of IKE SA, the IKE PKI profile does not allow either SA to expire any later than the certificate's expiration date. It also requires IKE to check that no certificates in the path from the peer's certificate up to the issuing CA have been revoked.
• Subject. The DN of the certificate's holder. The second distinguished name in Figure 10.4 could appear as a certificate's subject. All the fields shown for a CA's DN can also be used for a certificate holder's DN. Some additional DN fields appropriate only for the holder's DN are these:
• Common name (CN): CN=Joe Smith
• Surname (SN): SN=Smith
• Given name (GN): GN=Joe
• Personal name (PN): PN=SN=Smith, GN=Joe
The DN was originally intended to place its subject at a unique node in the X.500 directory information tree (DIT), which was supposed to organize the whole world into a uniform, hierarchical framework. Because a unified framework has not been established, this field is of dubious value, and some of its lesser used components (such as organizationalUnitName, localityName, and stateOrProvinceName) are applied differently, if at all, in different domains.
• Subject's public key information. The public key algorithm to be used in conjunction with the certificate holder's public key and the key itself.
C=US, O=Bureau of the Budget, CN=Federal PKI, L=Baltimore
C=US, O=Bureau of the Budget, OU=Red Ink Department, CN=John Doe, L=Gaithersburg
Figure 10.4 Sample distinguished names (DNs).
• Unique subject and issuer (CA) identifiers. These fields are intended to ensure that a CA cannot issue multiple certificates that have the same owner's name but were actually issued to disparate entities. They also guard against the problem of multiple CAs with the same issuer name. PKIX disapproves of this approach and recommends careful use of issuer and subject namespace instead.
• Signature algorithm. The identifiers of the algorithms used by the CA to hash and digitally sign the certificate. This field is not cryptographically protected by the digital signature, to enable the certificate's users to verify the signature. It is duplicated in the signature field mentioned above, which is included in the digital signature; that ensures that an attacker cannot disable use of the certificate by altering this field.
• Signature value. A hash of the DER-encoded form of the certificate's contents, digitally signed with the CA's private key.
The X.509 data definitions include multiple extensions, some of which are
necessary for Internet-related communications. To interoperate, there must
be agreement on support for those extensions. The handling of optional
extensions also must be defined. That is an important step toward the
interoperation of two implementations, one of which includes optional
extensions but does not necessarily expect the peer to process them, and the
other of which can ignore those extensions without rejecting the peer's whole data object. Extensions to the basic certificate fields can be processed in several different ways. If they are marked as critical fields within the certificate,
certificate users must be capable of processing and acting on the extension
fields information; otherwise, the certificate must be ignored. Extension
fields not marked as critical can be ignored by certificate users that do not
accept or understand that particular extension. Some of the more commonly
accepted extensions are the following.
• CA. This extension includes the cA bit, used to identify a CA's public key certificate, whose private key can be used to sign other certificates as well as its own. When this extension is used and the cA bit is
on, the maximum nesting depth of lower-level CA certificates may
be specified. This extensions official name is basic constraints. PKIX
requires this extension to be present and to be marked as critical in
all CA certificates. The cA bit cannot be on for certificates whose
owner is not a CA.
• Alternative name. This GeneralName (GN) contains any identifying names of the certificate's holder that do not fit the DN format,
for example, email address, fully qualified domain name (FQDN),
IP address, or URL. If the certificate holder does not have a DN,
this field must be present and is considered a critical field. The
DN and any alternative names are the identities that are bound to
the certificates keys. This field is formally labeled subjectAltName, a
term that is often found in the PKI literature and commonly used
by PKI aficionados. To add to the confusion, PKI documents frequently refer to email addresses as RFC822 [22] names.
For IKE, one of the names in the certificate must match exactly the peer's phase 1 ID payload; the ID types and content must be identical. For example, if the initiator's phase 1 ID is a DN, it must match the DN in the certificate presented to the responder. If the responder's phase 1 ID is an email address, one of the names that constitute the certificate's subjectAltName field must be the same.
The IKE PKI profile allows (but does not require) an IKE participant to terminate an IKE negotiation if this field contains an IP address or DNS domain name that is deemed unacceptable in the context of the current negotiation. When a peer's certificate is accessed and examined prior to an IKE negotiation, that information can be used by an initiator to generate the appropriate proposals or by a responder to evaluate the initiator's proposals. If the certificate is sent as part of an IKE negotiation, an unfortunate situation
can occur. In the digital signature mode, the certificates are
exchanged after the protection suite has been negotiated. Thus, a
proposal may have been proposed or accepted based on the IP
address from which the peer sent the packet, which may not correspond to the address or other identity information found in the certificate. In the public key encryption modes, when a responder has
multiple certificates, the relevant one is identified after the exchange
of proposals, with the responder possibly facing the same dilemma as
in the digital signature mode. In such a case, the only possible solution might be to terminate peremptorily the current phase 1 negotiation, optionally starting a new negotiation that takes into account
the ID information that has been gleaned from the certificate.
• Key usage. Suggests or mandates the uses to which the certificate's public-private key pair can be put, including digital signature, key
encipherment (i.e., transport of symmetric session keys), data encipherment (i.e., encryption), and certificate signing (found only in a
CA's certificate). If this is a critical field, the key can be used only for
one of the designated purposes. To limit the exposure of the private
key, a single entity could have several certificates, each one used for
a different purpose. If this field is marked as critical, that specialization is enforced; otherwise, it is suggested but not enforced. The
PKIX profile requires the certificate signing bit to be in accord with
the basic constraints extension. For a CA, both the cA bit and the
certificate signing bit must be on; for a non-CA, both must be either
omitted or turned off.
• Extended key usage. In addition to the standardized key usage fields,
additional ones may be defined for special-purpose use. One such is
iKEIntermediate, proposed in the IKE PKI profile to designate a key
that can be used for phase 1 IKE authentication. (In the early days of
IKE, PKIX [23] listed several other IKE-related extended key usage
values, but they were rejected by the proponents of IPsec.) This
field also can be marked critical. If both the key usage field and the
extended key usage fields are critical, the certificate's key can be used
only in situations that satisfy both fields.
• CRL distribution points. A pointer to the location of the CRL. This is
useful in cases where the CRLs are not colocated with the certificates.
There is a subtle interplay among flexibility, interoperability, and security
in the use and interpretation of many of the certificate fields [24], notably
the key usage and extended key usage bits. If an IKE implementation is
extremely demanding and limiting in the use, interpretation, and validation
of certificate fields, security is enhanced but interoperability may be impossible. At the other end of the spectrum, too much flexibility maximizes
interoperability at the expense of meaningful security.
A CR has the same format as a certificate, but the only fields that contain data are those whose values are required to be matched by the certificate
sent by the IKE peer or generated by the CA in response to the request. The
CR's format specification currently is up to version 2.
10.8 IKE and IPsec Considerations
Standards written for general certificate and PKI use do not always fulfill the
specific needs of IKE and IPsec users. Pieces of the solution are contained in
the PKIX roadmap [2], the PKIX profile [23], and the IKE PKI profile [13].
At times, the PKIX profile and the IKE PKI profile are at odds; in such
a situation, IKE wins hands down. An IPsec PKI profile has not yet been
written, so its relationship to its fellow travelers is as yet undefined. On the
other hand, with continued use and experimentation, new issues continue to
crop up.
In phase 1, peers' certificates can be requested through the use of a CR payload and transmitted using a certificate payload. In addition to the peer's certificate, the certificate payload can include the certificate of the CA whose private key was used to sign the peer's certificate; a whole chain of intermediate CA certificates used to sign and validate the peer's certificate; and/or the CA's latest CRL. Clearly, those payloads can contain data that would be of interest to an attacker. In particular, if the certificate's identity is not identical to the peer's IP address, revealing that information defeats IKE's phase 1 identity protection. Thus, the phase 1 messages in which it makes sense
to include either CRs or certificates vary, depending on the type of phase 1
negotiation and the peer authentication method that is used.
When IKE peers use digital signatures for authentication, the certificate's public key is only needed by the initiator in Main Mode message 5 and
by the responder in Main Mode message 6. Thus, to preserve identity protection, certificate payloads should be included only in Main Mode messages 5
or 6 if the identity is a value other than the peers IP address or domain
name. A CR can include a specific CA or certificate type, limiting the types
of certificates that will be accepted by the requester. If an IKE initiator does
not want to reveal this type of information, it can send its CR payload as part
of an encrypted Main Mode message 5. Because the responder's only
encrypted message is the last Main Mode message, message 6, there is no way
for a responder to send a protected CR payload. In Aggressive Mode, because
identity protection is not an issue, the CR payload can be part of message 1
or 2; in Base Mode it can appear in messages 1, 2, or 3. In those two modes,
because the public key is used in only the last two messages, the certificate
payload can appear in any message.
With preshared secret key authentication, certificates can be requested
and exchanged for use in future PKI-based negotiations. The messages in
which they can be used are identical for those in digital signature mode.
When the authentication method is public key encryption, the initiator
and responder public keys are used in Main Mode messages 3 and 4, respectively; in Aggressive Mode and Base Mode, they are required in messages 1
and 2. Thus, for Aggressive Mode and Base Mode, CRs are not useful.
The initiator must obtain the responder's certificate before the negotiation
Hi there folks.
I'm trying to create a program that takes 2 numbers from command line arguments and uses the second value as a power, so e.g. java Power 10 2 should give 100. But I want to do this without using the Math.pow method, and using a for loop. I have the following so far but I'm not sure what the problem is and it only seems to be doing square numbers (i.e. to the power of 2)
Code java:
public class Power {
    public static void main(String [] args) {
        int mantissa = Integer.parseInt(args[0]);
        int exponent = Integer.parseInt(args[1]);
        int answer;
        for(int i=0; (mantissa*mantissa) < exponent; i++)
            exponent = exponent + i;
        answer = mantissa*mantissa;
        System.out.println("Result is " + answer);
    }
}
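In case it helps: the loop above only ever assigns mantissa*mantissa to answer, which is why every input comes out squared regardless of the exponent. A corrected sketch that multiplies the base into the result once per unit of the exponent might look like this:

```java
public class Power {
    public static void main(String[] args) {
        int base = Integer.parseInt(args[0]);
        int exponent = Integer.parseInt(args[1]);
        int answer = 1;
        // multiply by the base once per unit of the exponent
        for (int i = 0; i < exponent; i++) {
            answer = answer * base;
        }
        System.out.println("Result is " + answer);
    }
}
```

Running `java Power 10 2` with this version prints `Result is 100`.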
This is a Java Program to Find the Area of a Triangle Given Three Sides.
Semiperimeter = (a+b+c)/2
Area = sqrt(sp*(sp-a)*(sp-b)*(sp-c))
Enter the lengths of the three sides of the triangle. We then use Heron's formula to get the area of the triangle.
Here is the source code of the Java Program to Find the Area of a Triangle Given Three Sides. The Java program is successfully compiled and run on a Windows system. The program output is also shown below.
import java.util.Scanner;

public class Triangle
{
public static void main(String args[])
{
double s1, s2, s3, s4, area;
Scanner s = new Scanner(System.in);
System.out.print("Enter the first side :");
s1 = s.nextDouble();
System.out.print("Enter the second side :");
s2 = s.nextDouble();
System.out.print("Enter the third side :");
s3 = s.nextDouble();
s4 = (s1 + s2 + s3 )/ 2 ;
area = Math.sqrt(s4 * (s4 - s1) * (s4 - s2) * (s4 - s3));
System.out.print("Area of Triangle is:"+area+" sq units");
}
}
Output:
$ javac Triangle.java
$ java Triangle
Enter the first side :3
Enter the second side :4
Enter the third side :5
Area of Triangle is:6.0 sq units
Approach #1: Brute Force [Time Limit Exceeded]
Intuition and Algorithm
For each substring S[i: j+1], let's check if it is a palindrome. If it is, we add 1 to our answer.
To check whether a substring S[i: j+1] is a palindrome, we check whether every (i+d)-th character equals the corresponding (j-d)-th character, for d up to half the length of the substring.
Notably, our brute force method avoids creating new substrings which would use unnecessary space.
Python
class Solution(object):
    def countSubstrings(self, S):
        def is_palindrome(i, j):
            # Is S[i: j+1] a palindrome?
            for d in range((j - i) // 2 + 1):
                if S[i + d] != S[j - d]:
                    return False
            return True

        ans = 0
        for i in range(len(S)):
            for j in range(i, len(S)):
                if is_palindrome(i, j):
                    ans += 1
        return ans
Complexity Analysis
Time Complexity: $$O(N^3)$$, where $$N$$ is the length of the string. After our for i, j loops of complexity $$O(N^2)$$, we spend $$O(N)$$ work to check whether S[i: j+1] is a palindrome.
Space Complexity: We need $$O(1)$$ additional space to store the answer.
Approach #2: Center Expansion [Accepted]
Intuition
Every palindrome has a unique center position. For each possible center, we determine all the palindromes that could have that center by scanning from left to right.
Algorithm
Let N = len(S). There are 2N-1 possible centers for the palindrome: we could have a center at S[0], between S[0] and S[1], at S[1], between S[1] and S[2], at S[2], etc.
To iterate over each of the 2N-1 centers, we move the left pointer once every two centers, and the right pointer once every two centers starting from the second (index 1). Hence, left = center // 2 and right = center // 2 + center % 2.
From here, finding every palindrome starting with that center is straightforward: while the ends are valid and have equal characters, record the answer and expand.
Python
def countSubstrings(self, S):
    ans = 0
    for center in range(2 * len(S) - 1):
        left = center // 2
        right = left + center % 2
        while left >= 0 and right < len(S) and S[left] == S[right]:
            ans += 1
            left -= 1
            right += 1
    return ans
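As a quick sanity check, the same center-expansion logic can be rewritten as a free function (for illustration only) and run directly:

```python
def count_substrings(s):
    # Center expansion, identical to the method above but standalone.
    ans = 0
    for center in range(2 * len(s) - 1):
        left = center // 2
        right = left + center % 2
        # Expand while the window stays in bounds and remains a palindrome.
        while left >= 0 and right < len(s) and s[left] == s[right]:
            ans += 1
            left -= 1
            right += 1
    return ans
```

For "aaa" this counts 6 palindromic substrings (a, a, a, aa, aa, aaa), and for "abc" it counts 3.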
Complexity Analysis
Time Complexity: $$O(N^2)$$, where $$N$$ is the length of the string. For each center, we could expand potentially up to the bounds of the string. Thus, both the outer loop and the inner loop have $$O(N)$$ complexity.
Space Complexity: We need $$O(1)$$ additional space to store the answer.
Approach #3: Manacher's Algorithm [Accepted]
Intuition
If we knew Manacher's algorithm, a linear time algorithm for determining Z[i], the longest half-length of a palindrome with center i, then computing the number of palindromes would simply be sum((z + 1) // 2 for z in Z).
Algorithm
We focus on an explanation of Manacher's algorithm.
To make our implementation easier, we will add '#' characters between each character of S. We also add '@' and '$' characters to the beginning and end of S, to form a working string A. We assume these characters do not exist originally in S. The problem of finding the longest palindrome at a given center is the same for string S as it is for A, so this transformation is justified.
From here on, our position indices will always be in terms of "half indices", represented as full indices in A.
As we scan centers from left to right, we remember the palindrome with the largest right boundary seen so far (its center and boundary are center and right). For a new center i inside that boundary, the mirror position 2*center - i has already been computed, so Z[i] is at least min(right - i, Z[2*center - i]). Beyond that interval, we can manually expand the palindrome at interval [i - Z[i], i + Z[i]] (centered at i with radius Z[i]) while appropriate. After, we update our knowledge of the palindrome with the largest right-boundary if appropriate.
Python
def countSubstrings(self, S):
    def manachers(S):
        A = '@#' + '#'.join(S) + '#$'
        Z = [0] * len(A)
        center = right = 0
        for i in range(1, len(A) - 1):
            if i < right:
                Z[i] = min(right - i, Z[2 * center - i])
            while A[i - Z[i] - 1] == A[i + Z[i] + 1]:
                Z[i] += 1
            if i + Z[i] > right:
                center, right = i, i + Z[i]
        return Z

    return sum((v + 1) // 2 for v in manachers(S))
Complexity Analysis
Time Complexity: $$O(N)$$, where $$N$$ is the length of the string. The outer loop for i in range(...) is $$O(N)$$. The while loop only checks the condition more than once when Z[i] = right - i. In that case, for each time Z[i] += 1, it increments right. As right can only be incremented up to 2*N+2 times, in total the algorithm is $$O(N)$$.
Space Complexity: We used $$O(N)$$ additional space to store A and Z.
The following logic is used in the solution:
Traverse through every element of the left subtree of the root.
If both p and q are found, move left. If neither is found, move right. If only one of them is found, the root is the answer.
Assuming that tree is sufficiently balanced, traversing takes n/2 time and we repeat this logn time.
A good catch is to compare the TreeNode references and not their values, since multiple nodes can have the same value and we may get a false positive! Below is the code:
public class Solution {
    boolean foundP = false;
    boolean foundQ = false;

    public TreeNode lowestCommonAncestor(TreeNode root, TreeNode p, TreeNode q) {
        if (root == p || root == q) return root;
        prefix(root.left, p, q);
        if (foundP && foundQ) {
            foundP = foundQ = false;
            return lowestCommonAncestor(root.left, p, q);
        } else if (!foundP && !foundQ)
            return lowestCommonAncestor(root.right, p, q);
        return root;
    }

    void prefix(TreeNode root, TreeNode p, TreeNode q) {
        if (root == null) return;
        if (root == p) foundP = true;
        else if (root == q) foundQ = true;
        prefix(root.left, p, q);
        prefix(root.right, p, q);
    }
}
if (root == p || root == q) return root;
This is an important part of your solution that you don't address in your summary. Without it, [[p],[1,q]] will not return correctly.
On average you are good, but your worst case is O(n^2): each node has left children only and p and q are at the lowest levels. I think O(n) worst/avg time is possible.
@TWiStErRob: I am not very sure regarding the runtime complexity of this problem. But let's see. If we consider the tree to be balanced, we could traverse n/2 nodes in the 1st pass, then n/4, n/8, .., so on. So, summing up, n(1 + 1/2 + 1/4 + ... 1/(2^lgn) ) which is less than 2n (using GP). This gives us O(n).
However, if the tree is totally unbalanced (one node on each level), we could be looking (n -1) + (n - 2) + ... 1 nodes, which gives us O(n^2)
You actually visit each node only once regardless of input in the lowestCommonAncestor recursion; only the prefix method skews your runtime a lot. I suggest you try to work those into a single method and a single left-right recursion. Btw, what's the runtime score (in ms) you were given? link
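For the O(n) approach mentioned above, the classic single-pass recursion looks like this. A sketch, not the poster's code; TreeNode is re-declared here only for self-containment:

```java
class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class LcaSketch {
    // A node is the LCA when p and q are found in different subtrees,
    // or when the node itself is p or q; each node is visited once.
    static TreeNode lowestCommonAncestor(TreeNode root, TreeNode p, TreeNode q) {
        if (root == null || root == p || root == q) return root;
        TreeNode left = lowestCommonAncestor(root.left, p, q);
        TreeNode right = lowestCommonAncestor(root.right, p, q);
        if (left != null && right != null) return root;  // split point
        return left != null ? left : right;
    }
}
```

This also compares TreeNode references, not values, which matches the reference-comparison advice above.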
#include <OMX_Audio.h>
Audio volume adjustment for a port.

Data fields (member names per the OpenMAX IL specification):

bLinear: is the volume to be set on a linear (0..100) or logarithmic (mB) scale?
nPortIndex: port index indicating which port to set. Select the input port to set just that port's volume. Select the output port to adjust the master volume.
nSize: size of the structure in bytes.
nVersion: OMX specification version information.
sVolume: volume linear setting in the 0..100 range, OR volume logarithmic setting for this port. The values for volume are in mB (millibels = 1/100 dB) relative to a gain of 1 (e.g. the output is the same as the input level). Values are in mB from nMax (maximum volume) to nMin mB (typically negative). Since the volume is a "voltage" and not a "power", it takes a setting of -600 mB to decrease the volume by 1/2. If a component cannot accurately set the volume to the requested value, it must set the volume to the closest value BELOW the requested value. When getting the volume setting, the current actual volume must be returned.
Hi
I am new to coding and I have been given an example of a function similar to the built-in function len().
I do understand
def fct(s):
    if not s: return 0
    return 1 + fct(s[1:])
However I am having a hard time understanding the logic of the one below, which is equivalent as far as I can see.

def fct(s): return s and 1 + fct(s[1:]) or 0  # I do not understand the "return s" part
could someone explain that to me?
Many thanks
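The and/or version works because of Python's short-circuit rules: X and Y evaluates to X when X is falsy, otherwise to Y; X or Z evaluates to X when X is truthy, otherwise to Z. A commented sketch of the same function:

```python
def fct(s):
    # If s is empty (falsy): `s and ...` evaluates to s WITHOUT recursing,
    # and `... or 0` then replaces that falsy value with 0.
    # If s is non-empty (truthy): `s and X` evaluates to X = 1 + fct(s[1:]),
    # which is >= 1 (truthy), so `or 0` leaves it unchanged.
    return s and 1 + fct(s[1:]) or 0
```

One caveat: the and/or idiom breaks when the middle value can itself be falsy; here 1 + fct(...) is always at least 1, so it is safe. In modern Python you would write a conditional expression instead: return 1 + fct(s[1:]) if s else 0.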
Bad AssertionError "Cannot mix positive and negative values in BarSeries"
Unfortunately this assert does not seem to work as expected and causes errors (in dev mode)
Code:
public class BarSeries<M> extends MultipleColorSeries<M> {
    private void calculatePaths() {
        ...
        double value = yFields.get(j).getValue(store.get(i)).doubleValue();
        assert value * minY >= 0 : "Cannot mix positive and negative values in BarSeries.";
        ...
    }
}
In my data I see values ranging between 0 and 5587930998; could it also be because I sometimes have a chart full of 0's?
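To the zeros question: the assert only fires for strictly opposite signs, since a product involving zero is never negative, so an all-zero series on its own should pass it. A standalone sketch of the same condition (illustrative, not the Ext GWT source):

```java
class SignCheck {
    // Mirrors `assert value * minY >= 0`: zero on either side passes;
    // only strictly opposite signs of value and minY fail.
    static boolean passes(double value, double minY) {
        return value * minY >= 0;
    }
}
```

Even the large values mentioned (up to 5587930998) are well within double range, so overflow of the product is not an issue either.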
Success! Looks like we've fixed this one. According to our records the fix was applied for a bug in our system in a recent build.
Go! (Score:5, Informative)
Re:Go! (Score:5, Funny)
Even funnier is the fact that they're hosting their language on code.google.com
Perhaps we shouldn't worry that much about them harvesting our data after all?
Re:Go! (Score:5, Funny)
There once was a language named "Go"
By Google it's made to help the Pro
But there's a claim the name
it sounds quite the same
as another fellow's lingo
This other lingo named "Go!"
"It was earlier" it's inventor says so.
"Why didn't you look
on a webpage or in my book,
it's even google search result two!"
"So Google, rename your thing!
Or in front of a judge you i bring!
Lots of users agree
it was disgraceful by thee
just be sorry and give me a ring!"
So the question arise
allthough google might despise
"what new name shall we be giving
to the lingo that's not yet living
and has not yet seen this world with it's own eyes?"
One fella proposed the name "Goo"
Which is similar to pythons clone "Boo"
But also this name is taken
and not yet forsaken
and honestly sounds close to "Poo".
Another said "Lango" is cool,
He would take such thing as a tool.
But a lingo named "Lango"
Only rhymes "Jango" or "Tango"
This is real, not Star Wars, you fool!
Lots of other names were called
some were boring, some others were bold
The question still remain
Will google act or refrain
from renaming it's lingo as told?
The remainder of my little piece
Is the ironic issue of this
Why did you, google miss
to google "go" before release
You would have known it's not your name, but his'!
Re: (Score:3, Insightful)
I don't know if there's a Poet Laureate position for Slashdot, but either way I nominate this guy. Brilliant!
Re: (Score:3, Informative)
But it does kind of fly in the face of the "Don't be evil" slogan.
Not really. There was no malice here anywhere. Nobody tried to be evil, nobody is trying to be evil this moment and nobody is trying to be evil in the future.
Some dude had an idea a couple years back that was so utterly obscure that no Wikipedia page existed for it. Let that sink in: There's a page on Wikipedia for every actor that was ever seen in the background of any Star Trek episode; yet this supposed "Go language" was so unknown that nobody ever bothered to make a page for it (until yesterday).
Re:Go! (Score:5, Informative)
He published in "Annals of Mathematics and Artificial Intelligence" and it's cited [acm.org] in the ACM portal. Who cares what Wiki has or doesn't have.
This wasn't some geocities page with talk about a language that was never developed.
Re:Go! (Score:4, Funny)
Plus every source file would be a .gog [merriam-webster.com]!
Re:Go! (Score:4, Funny)
Wouldn't Go! be pronounced Go(bang)?
Maybe we should use "Gang!" as the name, then.
So what? (Score:3, Insightful)
"From what I've read, Go! was pretty much unknown to anyone outside a very small group 2 years ago."
From what I've read, Go was pretty much unknown outside of Google until about a week ago.
I said it yesterday, but... (Score:5, Funny)
Two "Go"'s considered harmful.
Re: (Score:2)
Do not pass Go! Do not collect £200
Re: (Score:2)
Better stated as:
GOGO considered harmful
Re: (Score:2)
Not a chance, as its common knowledge that goto's cause the apocalypse.
Re: (Score:2)
(Why did they name it Go? According to the FAQ, they thought "Go Ogle" would be a great name for the debugger. "Goo Ogle" would be just as gooooood.)
Is Go! alive? (Score:2)
Re: (Score:2)
And as far as I can tell, the wiki entry was created yesterday.
(I'm wiki challenged, so I may be wrong)
Re: (Score:2)
You are correct, it was added by somebody after reading about the go vs go! thing, before then ther wasn't even a reference on "go" disambiguation
Re: (Score:3, Interesting)
Excellent find,
I'm sure the author is relishing in the Streisand Effect right now.
How far down the page was Go! two days ago if you googled the name?
Someone is getting fired... (Score:3, Funny)
I bet someone at Google will get fired soon...
Either 1 of 2 things may have happened:
1) They used Microsoft Bing to search for potential trademark violations
2) They were too lazy and didn't check at all.
Normal for this crew (Score:2)
If Ken Thompson and Rob Pike were designing it, they probably didn't care about getting fired / marketing implications / public backlash etc. They have a history of choosing provocative names, just look at the plan9 stuff.
Re: (Score:2)
Fired? Isn't that exaggerating things a little? ;)
So? (Score:2, Insightful)
Re: (Score:2, Informative)
The way I see it, TM or copyright are really useful so you don't have to demonstrate that you were using that name before... he doesn't have it, so he has to show that he had a book, that the language was published in 2003 with that name, etc.
Re:So? (Score:5, Insightful)
Some things are ethically questionable even when there is no legal problem involved. A concept often forgotten in the corporate world.
Re: (Score:3, Insightful)
"Like reusing the name of an obscure project that seemingly died years ago and nobody here has even heard of?"
Right. If Slashdotters haven't heard of it, there's no ethical issue.
Goo (Score:3, Funny)
Re: (Score:3, Funny)
Good idea! No namespace crash there. Everyone can just call it GPL for short, and...
...d'oh!
Re: (Score:3)
"Goo" is a dialect of Lisp, so "Gooo" it is!
Personally, I think Google should rename it "Giggity"..
"Under fire"? (Score:2)
Tag this one !news.
Since when is a gazillion-dollar company considered "under fire" because one dude with no legal status is annoyed at them?
By that logic, "McDonald's has come under fire this week for serving goodmanj a batch of stale fries last time he went there."
Google should rename Go to Issue 9 (Score:5, Interesting)
Re: (Score:2)
Issue 9 is kind of a mouthful to pronounce, plus it might be weird in some other languages (like in french where issue means exit)
That said I agree that another name than go could be good if only to make it easier to google.
Re:Google should rename Go to Issue 9 (Score:5, Funny)
Issue 9 is kind of a mouthful to pronounce, plus it might be weird in some other languages (like in french where issue means exit)
Meh, in conversation just shorten it to I9 and you're good to... *cough*. Yeah.
Re:Google should rename Go to Issue 9 (Score:4, Funny)
I think they should name it Issue Express 9 or IE9 for short. Preemptive naming.
Re: (Score:2)
Especially with some guys behind this, also behind Plan 9.
Re: (Score:3, Interesting)
Why don't they just call it "g". Then later, others can invent g++ and g# languages. This won't be gonfusing at all.
Easily avoidable (Score:2)
Couldn't they have googled the name first? You'd kind of expect at least that from them..
Not like Go is such a great name anyway. They should run a poll to decide the name. With enough luck it'll get called Marblecake or Colbert++.
They should plan better (Score:2, Insightful)
Re:They should plan better (Score:5, Insightful)
As someone stated before, this is not a legal issue. It's just about basic politeness.
Google simply does not care. (Score:2, Insightful)
People! Punctuation is IMPORTANT! (Score:2, Interesting)
It originates from the paper by Dijkstra [arizona.edu] where he argued GoTo statements should be banned. That resulted in many structured programming languages becoming mainstream in computer science.
Re:People! Punctuation is IMPORTANT! (Score:5, Informative)
Google's language is called Go! (with an exclamation mark.) The preexisting language whose existence has been suddenly and rudely revealed is called Go without the exclamation mark.
Other way around. Google's language is "Go". McCabe's language is "Go!".
Re:People! Punctuation is IMPORTANT! (Score:5, Informative)
Don't get me started on the Japanese chess game Go.
I don't know if your post was supposed to be either sarcastic or funny, but Go [wikipedia.org] is neither Japanese nor chess.
It's Chinese, and it's older than chess.
The game commonly referred to as "Japanese chess" is Shogi [wikipedia.org].
Re: (Score:3, Informative)
Actually, "Go" is the Japanese name for the game. That's a Romanization, obviously, but is considered phonetically close to the Japanese pronunciation.
Not to sound cranky, but how hard would it be to check the relevant section [wikipedia.org] of the Wikipedia article? Quoting:
Re: (Score:2)
Google's is go, the other guy's is go!; those who cannot tell the difference between the two should not be programming.
Goop? (Score:2)
I think they should call it Goop. So much code produced by humans has looked like a blob from a bad sci-fi movie that it seems fitting.
Re: (Score:2)
I still vote for Goop. Have you ever heard the cliche, "you are what you eat"? I think a corollary might emerge: "you are what you code (in)". Some genius will use Goop to code the first artificially intelligent self-replicating nanobots, and they'll decide we're no more significant than any other raw material and turn us all into....
Biggest scandal since Lindows! (Score:2)
To be honest, i can see the confusion... (Score:2, Funny)
How would Google even know that a language called "Go" exists?
They would have to have some mechanism for searching the internet to do that.
Wikipedia proposes deletion of Go! page (Score:4, Interesting)
This template was added 2009-11-12 14:22
They should change it... (Score:3, Insightful)
Slashdot needs a voting mechanism for this (Score:2)
A poll would be interesting.
Personally, I think that "Go and "Go! are two different names, so there is no problem.
not an issue (Score:2, Interesting)
One has a bang (!) at the end, while the other doesn't.
Everybody knows the difference between C and C#
The claim has no basis.
Rename it (Score:2)
UUIDs (Score:5, Funny)
Re: (Score:3, Funny)
Bastard! A little research through a few obscure, un-archived computing journals published in the now defunct USSR would have shown you that I wrote the programming language Ed68c886-6390-4255-813f-48e61f6b0b05 over 25 years ago! The cheek of some people!
Re: (Score:3, Funny)
to call a stop. Or a stop!
while $STOP; HAMMERTIME; end
Re: (Score:2)
while $STOP {
// Credit where due [xkcd.com]
collaborate();
listen();
}
Re: (Score:2)
if (exists(town{"Der Kommissar"})) {
exit
} else {
@ARRAY = reverse @ARRAY;
}
Re:Perfect example (Score:4, Insightful)
There's no IP.
There is copyright, patents and trademarks. This sounds like a trademark thing, so no need to confuse the issue.
Re: (Score:2)
I think the issue here is more akin to trademarks rather than IP.
I think your post is more an issue of words than of text. Intellectual Property is an umbrella term combining trademarks, copyright, and patents. Even without a registered trademark, I think they'd have a good case that Google is trying to pass off their new language as the original Go.
Re: (Score:3, Informative)
Even without a registered trademark, I think they'd have a good case that Google is trying to pass off their new language as the original Go.
Actually, unregistered trademarks are valid, too. In North America, the trademark system is a "first to use" system, not a "first to file".
However, the original Go is not a commercial product, so there is no trademark issue. Google will likely consider changing the name just because it's stupid to create a new programming language and give it the same name as an existing one, but trademark won't enter into the discussion.
Re: (Score:2)
I always thought one had to register a trademark for it to be valid. I thought the the (tm) mark was for pending trademarks. It looks like I was wrong. [ehow.com]
I think the whole fighting over the "go" name is stupid. Seriously, what kind of idiot would think no one used such a commonly used word, especially since most people would equate a programming language with an action. (Yeah, and someone actually used the word "Action!" as the name of their programming language [wikipedia.org].)
Re: (Score:2)
where are you googling? the only thing on the first page is the "Go!" wikipedia article which was created yesterday, AFTER Google launched their language
Re:Non-issue (Score:5, Funny)
Hey it's not their fault. If only they had access to some sort of computer system that allowed one to quickly examine the internet, a "search engine" if you will, then they might have been able to catch this in time.
Re: (Score:3, Interesting)
Believe me, if there's at least one lawyer working for Google, they knew. Even most start-ups research a product name before announcing it. They probably just figured they could pay the guy off.
Re: (Score:2)
Maybe it's time for tougher IP laws where such things would be possible! At least I would if I were into politics...
Re:'GO' != 'GO!' (Score:4, Informative)
A+ != A#
C != C# (in fairness they are related)
There are several languages referred to as D
F != F#
L != L#
M != M4
If you can't tell the difference between to similarly named programming languages perhaps programming isn't for you!
But C# = Db, F = E#, and moreover B# = C
Re: (Score:3, Insightful)
Because Googling for "go" gets you 2,950,000,000 hits. Yes, that's billions. And yet they didn't see that choosing such a common word for a language name was a bad idea. Ah, how the mighty goof up.
I wanted to share some of my insights on setting up a development environment for ASP.NET Web API development in a cross-origin (CORS) situation.
Our company started to develop a new SaaS solution for project management, and this was the first time that the team worked outside of a closed enterprise environment.
We approached a designer to develop the user interface for the product, and when we discussed the final details of the API, he requested that we implement CORS support for it.
For him it was the first time developing a UI against an ASP.NET API, so he couldn't help us with implementing CORS. We researched the web for a solution and found the new preview version of ASP.NET that has built-in CORS support. We replaced our referenced libraries with the new ones and added the needed configuration (we later reverted this change and returned to the stable version). It seemed like we had done our part. We then told our designer to check the API, and he couldn't make it work. He insisted that our problem was with CORS support. So we started to investigate the subject. After 3 days of searching and reading every piece of information available on the Web, we came up with these 3 steps for solving the problem:
Don't use the default, self-signed certificate unless you don't have any other option!!!
The first step of our solution was to get a trusted certificate issued by a digital certificate provider like Comodo, GeoTrust, or VeriSign.
There is also the option to get the certificate from an SSL certificate reseller like namecheap.com or cheapssls.com that issues certificates at a lower price.
Because we needed to use this certificate for development purposes, we got a free Class 1 certificate from startssl.com that is valid for one year.
The other restriction for the certificate is that its key must be 2048 bits or more.
The second step we needed was to stop some browsers' habit of sending the server the user's client certificate for two-way authentication.
This task is done inside the Internet Information Services (IIS) Management Console. You need to do the following two steps on every path of your web directory, from your API directory to your root: go to IIS Manager and double-click on SSL Settings.
Select Ignore under Client certificate; make sure that the Require SSL checkbox is selected.
These steps take care of the select-client-certificate prompt that blocks the clients' AJAX calls in all browsers.
The final step in the solution is to handle the CORS HTTP request and the "pre-flight" OPTIONS request that is being sent from the browsers.
This is the delegating message handler that we eventually implemented in our API.
It's based mainly on Carlos Figueira’s article, as well as a few others found online.
This handler, together with the signed certificate and the IIS configuration, is what finally enabled us to get our API working in cross origin situations over SSL.
//
// Our Final Cors Message Handler Implementation:
//
public class CorsDelegatingHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        const string origin = "Origin";
        const string accessControlRequestMethod = "Access-Control-Request-Method";
        const string accessControlRequestHeaders = "Access-Control-Request-Headers";
        const string accessControlAllowOrigin = "Access-Control-Allow-Origin";
        const string accessControlAllowMethods = "Access-Control-Allow-Methods";
        const string accessControlAllowHeaders = "Access-Control-Allow-Headers";

        bool isCorsRequest = request.Headers.Contains(origin);
        bool isPreflightRequest = request.Method == HttpMethod.Options;
        if (isCorsRequest)
        {
            if (isPreflightRequest)
            {
                return Task.Factory.StartNew(() =>
                {
                    var response = new HttpResponseMessage(HttpStatusCode.OK);
                    response.Headers.Add(accessControlAllowOrigin,
                        request.Headers.GetValues(origin).First());

                    string controlRequestMethod =
                        request.Headers.GetValues(accessControlRequestMethod).FirstOrDefault();
                    if (controlRequestMethod != null)
                    {
                        response.Headers.Add(accessControlAllowMethods, controlRequestMethod);
                    }

                    // Use TryGetValues: GetValues throws when the header is absent.
                    IEnumerable<string> headerValues;
                    string requestedHeaders =
                        request.Headers.TryGetValues(accessControlRequestHeaders, out headerValues)
                            ? string.Join(", ", headerValues)
                            : null;
                    if (!string.IsNullOrEmpty(requestedHeaders))
                    {
                        response.Headers.Add(accessControlAllowHeaders, requestedHeaders);
                    }
                    return response;
                }, cancellationToken);
            }
            return base.SendAsync(request, cancellationToken).ContinueWith(t =>
            {
                HttpResponseMessage response = t.Result;
                response.Headers.Add(accessControlAllowOrigin,
                    request.Headers.GetValues(origin).First());
                return response;
            });
        }
        return base.SendAsync(request, cancellationToken);
    }
}
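From the client's point of view, the preflight branch above simply echoes the request back as Access-Control-Allow-* headers. A hypothetical pure-function view of that contract (the function and field names are illustrative, not part of the article's code):

```javascript
// Builds the Access-Control-Allow-* headers the handler above would
// return for a preflight, given the browser's preflight request fields.
function buildPreflightHeaders(req) {
  const out = { "Access-Control-Allow-Origin": req.origin };
  if (req.method) {
    out["Access-Control-Allow-Methods"] = req.method;
  }
  if (req.requestedHeaders && req.requestedHeaders.length) {
    out["Access-Control-Allow-Headers"] = req.requestedHeaders.join(", ");
  }
  return out;
}
```

This makes it easy to see why the browser accepts the response: every value it asked about (origin, method, headers) comes back allowed.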
Remember to add the new delegating message handler to the MessageHandlers list in your Web API configuration. Your Global.asax.cs should look similar to this:
public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        WebApiConfig.Register(GlobalConfiguration.Configuration);
        WebApiConfig.ConfigureApi(GlobalConfiguration.Configuration);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);

        GlobalConfiguration.Configuration.MessageHandlers.Add(new CorsDelegatingHandler());
    }
}
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Direct Store with paging grid in ExtJS 4 and DJN 2.0
It is mentioned in the DJN 2 user guide that it does not support "baseParams" and that we must adopt a form submit / API approach ...
However, I cannot get the idea of how to use api?submit with a direct store and directFn when using DJN ...
Has anybody out there already developed a paging grid or a buffered grid? What approach is used to pass parameters to the server's Java function?
Please share the JavaScript code of the store if possible.
Any guidance will be appreciated!
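I haven't verified this against DJN 2.0 itself, but in Ext JS 4 the usual pattern is that the store's proxy replaces the old baseParams with extraParams, and the paging parameters (page/start/limit) are added to each directFn call automatically. A hedged sketch, where the model name, the Direct method, and the parameter names are all assumptions:

```javascript
var store = Ext.create('Ext.data.Store', {
    model: 'Photo',                        // assumed model
    pageSize: 25,
    remoteSort: true,
    proxy: {
        type: 'direct',
        directFn: MyActions.getPhotos,     // assumed DJN-exported method
        extraParams: { keyword: 'food' },  // replaces Ext JS 3 baseParams
        reader: {
            type: 'json',
            root: 'records',
            totalProperty: 'total'
        }
    }
});
```

On the server side, the assumed getPhotos method would then receive the paging values and the extra parameters in the request data it is handed.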
Problem with multiple wars in JBoss
Hi,
thanks for your DirectJNgine lib.
I have a problem... when I deploy two wars in JBoss AS 7 (two parts of the same application) and I try to call, using Direct, the slave war from the main war, I get this error:
{"tid":1,"action":"MyTestAction","method":"get","message":"RequestException: No action registered as 'MyTestAction'","where":"","type":"exception"}
I've defined the action "MyTestAction" in the slave web.xml.
Please help me...
Paolo
Maybe you are not providing the compiled classes + configuration in both wars? If that's the case, DJN has no way to know what calls are available, nor what they look like.

Pedro Agulló, Barcelona (Spain) - pagullo.soft.dev at gmail.com
Agile team building, consulting, training & development
DirectJNgine: - Log4js-ext:
I see DJN 2.0 alpha was released in July 2011. Any idea about the DJN 2.0 final release schedule?
DirectJNgine 2.1 final is out!
DJN 2.1 final is out.
The 2.0 alpha 1 ended up being production ready, as I said in other posts in which I recommended to use it for production. But I think we all needed to see the "final" suffix, so here it is
I should have removed the 'alpha 1' suffix a long time ago, if only to provide reassurance, mea culpa. The released version just adds some additional tests and I made sure it passes all tests for ExtJs 4.1.0. Some new change in 4.1.0 broke a test that used to work in 4.0.x, and I had to change the test code accordingly.
The bump from 2.0 to 2.1 is to emphasize the fact that this one works with ExtJs 4.1.0, which is the version I feel we all should be using, due to the many fixes and improvements it sports.
As always, you can get DJN at
Regards!
Now that 2.1 is out and I have found some spare time for DJN (!), I've been thinking hard about new features.
In order to share my ruminations with the community, I have published some of them at. I will soon follow with more posts.
Be aware that these are just ruminations, though
As always, comments and suggestions will be welcome.
DirectJNgine Security Manager
Hi Pedro,
I'm exploring your DirectJNgine library and I have to say that I really like it! It really simplifies everyday development, and I wanted to thank you for all the time you have spent on it, providing a great library to the whole world! The docs are brilliant and I've configured everything really quickly. Now that I have my remote methods (marked with the convenient @DirectMethod) usable from the client side, I was wondering how to implement a convenient way to manage the authorization level of each method.
In the perfect world I'd like to accomplish something like this:
Code:
@DirectMethod
@AccessLevel( level = "Admin" )
public String something()
Looking forward to hearing from you,
Alex
As a *quick* solution, you might be able to solve your issue by letting some other element check permissions and raise a security exception. DJN will pass it to the client with a message in a standard format that includes the exception type and its message.
Note that the message refers to the "cause exception", not the last one in the exception stack, making it meaningful for that kind of processing.
Hi Pedro,
I have already developed my own security manager that handles method-level permissions, but I need to intercept the calls to my methods before they happen. By analyzing your source code, I've found that every server-side method is called by the code on line 144 of the DispatcherBase class, which does the following:
Code:
132 protected Object invokeMethod(RegisteredMethod method, Object actionInstance, Object[] parameters) {
    ...
136     Method javaMethod = method.getMethod();
137     Object result;
    ...
144     result = javaMethod.invoke( actionInstance, parameters );
    ...
160     return result;
161 }
My idea is to replace the line 144 with the following one:
Code:
result = securityClass.invoke(javaMethod, actionInstance, parameters);
Code:
public interface SecurityManager {
    public Object invoke(Method method, Object actionInstance, Object[] parameters);
}
Code:
public class DjnDefaultSecurityManager implements SecurityManager {
    @Override
    public Object invoke(Method method, Object actionInstance, Object[] parameters) {
        return method.invoke( actionInstance, parameters );
    }
}
The configuration parameter could be something like this:
Code:
<init-param>
    <param-name>DjnSecurityManager</param-name>
    <param-value>org.abusinessname.CustomSecurityManager</param-value>
</init-param>
In this way DJN delegates to the developer the task of implementing their own security manager, while providing a more convenient and elegant way to do it.
What do you think about this proposal? I think that it's a minimal tweak to your code that could provide a powerful feature to your library.
Alex
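The delegation pattern proposed above is language-agnostic. As a rough JavaScript sketch of the same idea (all names here are hypothetical illustrations, not DJN code): the dispatcher only ever talks to whatever `invoke` implementation is configured, so access rules stay entirely out of the dispatch code.

```javascript
// Default manager: no checks, just invoke the target method.
const defaultSecurityManager = {
  invoke(method, actionInstance, parameters) {
    return method.apply(actionInstance, parameters);
  }
};

// A custom manager enforcing a per-method access level, in the spirit
// of the @AccessLevel annotation idea.
function makeAccessManager(currentUserLevel) {
  return {
    invoke(method, actionInstance, parameters) {
      const required = method.accessLevel; // hypothetical metadata on the method
      if (required && required !== currentUserLevel) {
        throw new Error("SecurityException: " + required + " access required");
      }
      return method.apply(actionInstance, parameters);
    }
  };
}

// Usage: the same call succeeds or fails depending on the configured manager.
const action = {
  something() { return "secret"; }
};
action.something.accessLevel = "Admin";

const adminManager = makeAccessManager("Admin");
const guestManager = makeAccessManager("Guest");

adminManager.invoke(action.something, action, []); // returns "secret"
// guestManager.invoke(action.something, action, []); // throws SecurityException
```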
Thanks for your proposal!
I prefer not to add security management to DJN, as I consider it an orthogonal concern, all the more so because there are so many ways to handle security: think Spring Security, JBoss PicketLink or JAAS.
Regards,
Pedro
01-21-2013 12:13 PM
I'm having problems trying to adapt the Cascades "BluetoothSPPChat" example (on the BlackBerry GitHub) to scan for Bluetooth Low Energy devices. My current roadblock in trying to adopt instructions from the JAM 21 session code descriptions into the Cascades example is the following:
bt_le_callbacks_t le_callbacks = {
.advert = le_advertisement_cb
};
But the compiler is complaining about the syntax, so I split it up into the following:
bt_le_callbacks_t le_callbacks;
le_callbacks.advert = le_advertisement_cb;
But the compiler is complaining about mismatching return types.
I have the imports:
#include <btapi/btdevice.h>
#include <btapi/btle.h>
and a skeletal function definition for le_advertisement_cb:
void le_advertisement_cb(const char *bdaddr, const char *data, int len, void *userData){
}
Any help would be much appreciated.
01-23-2013 02:18 AM
Hi rbork,
I am also working with BT LE, and I think I got past this roadblock. Try this:
#include <QDebug>
#include <btapi/btdevice.h>
#include <btapi/btgatt.h>
#include <btapi/btle.h>
#include <errno.h>
void my_advertisement_cb (const char *bdaddr, int8_t rssi, const char *data, int len, void *userData)
{
    qDebug("Advertisement from BTAddr:%s. (Signal strength %d)", bdaddr, rssi);
}

/*
 * Note that this must be static. `bt_le_init()` does not "retain" the callbacks.
 */
static bt_le_callbacks_t le_callbacks = {
    my_advertisement_cb
};

void initAndSearch()
{
    bt_le_init(&le_callbacks);
    qDebug("BEGIN Scanning for device!");
    int retVal = bt_le_add_scan_device(BT_LE_BDADDR_ANY, this);
    if (retVal != EOK) {
        qDebug("Couldn't start scanning for devices. Error: %d", retVal);
    }
}
I am stuck at getting a service to connect. I'll start a new thread for that challenge
Good luck!
Value must be an XML name
May include letters, digits, underscores, hyphens, and periods
May not include whitespace
May or may not have the name "id" or "ID"
May contain colons only if used for namespaces
Value must be unique within ID type attributes in the document
Generally the default value is #REQUIRED
<!ELEMENT composer (name)>
<!ATTLIST composer id ID #REQUIRED>
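The naming and uniqueness rules above can be sanity-checked in script. This is a rough sketch only: the regex is an ASCII-only approximation of the XML Name production, which also permits many non-ASCII letters.

```javascript
// ASCII approximation of an XML Name: first char is a letter, '_' or ':';
// later chars may also be digits, '.' and '-'. No whitespace anywhere.
const NAME = /^[A-Za-z_:][A-Za-z0-9._:-]*$/;

// Check a list of ID attribute values for well-formedness and uniqueness.
function checkIds(ids) {
  const seen = new Set();
  for (const id of ids) {
    if (!NAME.test(id)) return `"${id}" is not an XML name`;
    if (seen.has(id)) return `duplicate ID "${id}"`;
    seen.add(id);
  }
  return "ok";
}

console.log(checkIds(["c1", "c2"]));  // → ok
console.log(checkIds(["c1", "c 2"])); // → "c 2" is not an XML name
console.log(checkIds(["c1", "c1"]));  // → duplicate ID "c1"
```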
A Thorough Introduction To Backbone.Marionette (Part 1)
- By Joseph Zimmerman
- February 11th, 2013
To help you tap the full potential of Marionette, we’ve prepared an entire eBook full of useful hands-on examples, which is also available in the Smashing Library. — Ed.
Backbone.js is quickly becoming the most popular framework for building modular client-side JavaScript applications. This is largely due to its low barrier to entry; getting started with it is super-simple. However, unlike Ember.js, Backbone, being so minimal, also leaves a lot up to the developer to figure out.
So, once you start getting into more advanced applications, it’s no longer so simple. Backbone.Marionette was created to alleviate a lot of the growing pains of Backbone development. Backbone.Marionette “make[s] your Backbone.js apps dance with a composite application architecture!,” according to its author.
This “composite” architecture refers mainly to the numerous view types that have been provided to help with subview management. We won’t be discussing those views today (although we will touch on regions, which are a small part of the subview management that Marionette offers), but you can find documentation for this project in the GitHub repository. It offers numerous components that extend Backbone and that enable you to write less boilerplate and do more stuff with little to no hassle, especially when it comes to views.
The Central Application Object
Most of the time, when someone creates a Backbone application, they make a central object that everything is attached to, which is often referenced as App or Application. Backbone doesn’t offer anything to make this object from, so most people just create a main router and make that the app object. While it’s great that people are attaching things to a central object so that the global namespace isn’t so convoluted, the router was not designed to handle this task.

Derick Bailey, the creator of Marionette, had a better idea. He created a “class” that you could instantiate an object from that is specifically designed to handle the responsibilities of being the go-to root object of the entire application. You create a new application with var App = new Backbone.Marionette.Application(), and then, when everything is set, you start the application with App.start(options). I’ll discuss the options argument soon. For now, just remember that it’s optional.
Initializers
One of the coolest things about Marionette’s Application is the initializers. When your code is modular, several pieces will need to be initialized when the application starts. Rather than filling a main.js file with a load of code to initialize all of these objects, you can just set the modules up for initialization within the code for the module. You do this using addInitializer. For example:
var SomeModule = function(o){
    // Constructor for SomeModule
};

App.addInitializer(function(options) {
    App.someModule = new SomeModule(options);
});
All of the initializers added this way will be run when App.start is called. Notice the options argument being passed into the initializer. This is the very same object that is passed in when you call App.start(options). This is great for allowing a configuration to be passed in so that every module can use it.
A few events are also fired when running through these initializers:
- initialize:before: Fires just before the initializers are run.
- initialize:after: Fires just after the initializers have all finished.
- start: Fires after initialize:after.
You can listen for these events and exert even more control. Listen for these events like this:
App.on('initialize:before', function(options) {
    options.anotherThing = true; // Add more data to your options
});

App.on('initialize:after', function(options) {
    console.log('Initialization Finished');
});

App.on('start', function(options) {
    Backbone.history.start(); // Great time to do this
});
Pretty simple, and it gives you a ton of flexibility in how you start up your applications.
Event Aggregator
The Application object brings even more possibilities for decoupling a Backbone application through the use of an event aggregator. A while back I wrote a post about scalable JavaScript applications, in which I mentioned that modules of a system should be completely ignorant of one another, and that the only way they should be able to communicate with each other is through application-wide events. This way, every module that cares can listen for the changes and events they need to so that they can react to them without anything else in the system even realizing it exists.

Marionette makes this kind of decoupling largely possible via the event aggregator that is automatically attached to the application object. While this is only one of the mechanisms that I wrote about in that article, it is a start and can be very useful in even smaller applications.

The event aggregator is available through a property in the application called vent. You can subscribe and unsubscribe to events simply via the on and off methods, respectively (or bind and unbind, if you prefer). These functions might sound familiar, and that’s because the event aggregator is simply an extension of Backbone’s Event object. Really, the only thing new here that you need to worry about is that we’re using the events on an object that should be accessible everywhere within your app, so that every piece of your application can communicate through it. The event aggregator is available as its own module too, so you can add it to any object you want, just like Backbone’s Event.
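To illustrate the pattern itself, here is a minimal stand-in for an event aggregator (this is not Marionette's implementation; App.vent actually extends Backbone.Events): two modules communicate without holding references to each other.

```javascript
// Tiny pub/sub object standing in for App.vent (illustration only).
function EventAggregator() {
    this.handlers = {};
}
EventAggregator.prototype.on = function(event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
};
EventAggregator.prototype.off = function(event) {
    delete this.handlers[event];
};
EventAggregator.prototype.trigger = function(event) {
    var args = Array.prototype.slice.call(arguments, 1);
    (this.handlers[event] || []).forEach(function(fn) { fn.apply(null, args); });
};

var vent = new EventAggregator();
var log = [];

// Module A only listens…
vent.on('user:loggedIn', function(name) { log.push('hello ' + name); });

// …module B only announces; neither knows the other exists.
vent.trigger('user:loggedIn', 'alice');

console.log(log); // → [ 'hello alice' ]
```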
Regions
Region is another module for Marionette that enables you to easily attach views to different regions of an HTML document. I won’t go into detail about how regions work here — that’s a topic for another day — but I’ll cover it briefly and explain how to use them with Application.

A region is an object — usually created with new Backbone.Marionette.Region({ el: 'selector'}) — that manages an area where you attach a view. You would add a view and automatically render it by using show. You can then close out that view (meaning it will remove it from the DOM and, if you’re using one of the Marionette views, undo any bindings made by the view) and render a different view simply by calling show again, or you can just close the view by calling close. Regions can do more than that, but the fact that they handle the rendering and closing for you with a single function call makes them extremely useful. Here’s a code sample for those who speak in code rather than English:
// Create a region. It will control what's in the #container element.
var region = new Backbone.Marionette.Region({
    el: "#container"
});

// Add a view to the region. It will automatically render immediately.
region.show(new MyView());

// Close out the view that's currently there and render a different view.
region.show(new MyOtherView());

// Close out the view and display nothing in #container.
region.close();
If you want a Region directly on your application object (e.g. App.someRegion), there’s a simple way to add one quickly: addRegions. There are three ways to use addRegions. In every case, you would pass in an object whose property names will be added to the application as regions, but the value of each of these may be different depending on which way you wish to accomplish this.
Selector
Simply supply a selector, and a standard region will be created that uses that selector as its el property.
App.addRegions({
    container: "#container",
    footer: "#footer"
});

// This is equivalent to
App.container = new Backbone.Marionette.Region({el: "#container"});
App.footer = new Backbone.Marionette.Region({el: "#footer"});
Custom Region Type
You can extend Region to create your own types of regions. If you want to use your own type of region, you can use the syntax below. Note that, with this syntax, el must already be defined within your region type.
var ContainerRegion = Backbone.Marionette.Region.extend({
    el: "#container", // Must be defined for this syntax
    // Whatever other custom stuff you want
});

var FooterRegion = Backbone.Marionette.Region.extend({
    el: "#footer", // Must be defined for this syntax
    // Whatever other custom stuff you want
});

// Use these new Region types on App.
App.addRegions({
    container: ContainerRegion,
    footer: FooterRegion
});

// This is equivalent to:
App.container = new ContainerRegion();
App.footer = new FooterRegion();
Custom Region Type with Selector
If you don’t define
el — or you want to override it — in your custom region type, then you can use this syntax:
var ContainerRegion = Backbone.Marionette.Region.extend({});
var FooterRegion = Backbone.Marionette.Region.extend({});

// Use these new Region types on App.
App.addRegions({
    container: {
        regionType: ContainerRegion,
        selector: "#container"
    },
    footer: {
        regionType: FooterRegion,
        selector: "#footer"
    }
});

// This is equivalent to:
App.container = new ContainerRegion({el: "#container"});
App.footer = new FooterRegion({el: "#footer"});
As you can see, adding application-wide regions is dead simple (especially if you’re using the normal Region type), and they add a lot of useful functionality.
Conclusion
As you can already see, Marionette adds a ton of great features to make Backbone development simpler, and we’ve covered only one of many modules that it provides (plus, we’ve touched on a couple of other modules that Application itself uses, but there’s plenty more to learn about those). I hope this will entice Backbone programmers a bit and make you eager to read the rest of this series, when I’ll cover more of the modules.
Credits of image on start page: Dmitry Baranovskiy.
(al)
KDevelop-PG-Qt is the parser generator for KDevplatform and is used by several of KDevelop's language-support plugins (for example: Ruby, PHP, Java...).
KDevelop-PG-Qt is based on Qt classes. There is also the original KDevelop-PG parser generator, which uses data types from the STL, but the Qt version has broader capabilities. Most features are similar, although the ...-Qt parser generator is more modern and feature-rich than the plain STL-style generator. To create parsers for KDevelop language-support plugins, you should use the ...-Qt version of the parser generator.
This document is not intended as a complete, in-depth description of all parts of KDevelop-PG. Instead, it is meant as a brief introduction and, more importantly, a reference for developers.
For more detailed information, read the excellent bachelor thesis by Jakob Petsovits. You will find it via the link in the web-links section at the bottom of this page (translator's note: unfortunately, that document has since been removed, so the link is dead; a search for the document is ongoing).
KDevelop-PG-Qt can be found in the git repository. The source code includes four example packages.
To download it, try:
git clone git://anongit.kde.org/kdevelop-pg-qt.git
or:
git clone kde:kdevelop-pg-qt (if you have git set up with the "kde" URL prefix)
When run, the program expects a .g file, called a grammar file:
./kdev-pg-qt --output=prefix syntax.g
The value of the --output parameter determines the prefix of the output files as well as the namespace for the generated code. Kate provides simple syntax highlighting for KDevelop-PG-Qt grammar files.
While evaluating the grammar and generating its parser files, the application will print information about so-called conflicts to STDOUT. As said above, the generated files will be prefixed accordingly.
AST stands for Abstract Syntax Tree. It defines the data structure in which the parse tree is saved. Each node is a struct with the postfix Ast, which contains members that point to any possible sub elements.
One important part of parser.h is the definition of the parser tokens, the TokenType enum. The TokenStream of your lexer should use this. You have to write your own lexer or have one generated by Flex. See also the part about Tokenizers/Lexers below.
Having the token stream available, you create your root item and call the parse method for the top-level AST item, e.g. DocumentAst* => parseDocument(&root). On success, root will contain the AST.
The parser will have one parse method for each possible node of the AST. This is nice for e.g. an expression parser or parsers that should only parse a sub-element of a full document.
The Visitor class provides an abstract interface to walk the AST. Most of the time you don't need to use this directly, the DefaultVisitor takes some work off your shoulders.
The DefaultVisitor is an implementation of the abstract Visitor interface and automatically visits each node in the AST. Hence, this is probably the best candidate for a base class for your personal visitors. Most language plugins use these in their Builder classes to create the DUChain.
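The visitor idea can be sketched outside the generated code. Here is a hand-written JavaScript illustration (node kinds, field names and the `visit_` convention are made up for this sketch; they are not the generated C++ API): a default visitor walks every child, and a concrete visitor overrides only the node kinds it cares about.

```javascript
// Dispatch to a kind-specific handler if present, otherwise to the default.
function visit(visitor, node) {
    var fn = visitor['visit_' + node.kind] || visitor.visitDefault;
    fn.call(visitor, node);
}

// "DefaultVisitor": recursively visits all children, does nothing else.
var defaultVisitor = {
    visitDefault: function(node) {
        (node.children || []).forEach(function(c) { visit(this, c); }, this);
    }
};

// A concrete visitor: collect every identifier, inherit the walking logic.
var names = [];
var nameCollector = Object.create(defaultVisitor);
nameCollector.visit_identifier = function(node) { names.push(node.name); };

var ast = { kind: 'document', children: [
    { kind: 'class', children: [ { kind: 'identifier', name: 'Foo' } ] },
    { kind: 'identifier', name: 'Bar' }
] };

visit(nameCollector, ast);
console.log(names); // → [ 'Foo', 'Bar' ]
```

This mirrors how DUChain builders typically subclass the generated DefaultVisitor and override only a handful of visit methods.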
As mentioned, KDevelop-PG-Qt requires a Tokenizer. You can either let KDevelop-PG-Qt generate one for you, write one per hand, as it has been done for C++ and PHP, or you can use external tools like Flex.
The tokenizer's job, in principle, boils down to splitting the input into lexemes, mapping each lexeme to a token type, and recording each token's position in the input.
The rest, e.g. actually building the tree and evaluating the semantics, is part of the parser and the AST visitors.
KDevelop-PG-Qt can generate lexers being well integrated into its architecture (you do not have to create a token-stream-class invoking lex or something like that). See examples/foolisp in the code for a simplistic example, there is also an incomplete PHP-Lexer for demonstration purposes.
Regular expressions are used to write rules using the KDevelop-PG-Qt, we use the following syntax (α and β are arbitrary regular expressions, a and b characters):
All regular expressions are case sensitive. Sorry, there is currently no way to make matching case-insensitive.
There are several escape sequences which can be used to encode special characters:
Some regexes are predefined and can be used using braces {⟨name⟩}. They get imported from the official Unicode data, some important regexes:
Rules can be written as:
⟨regular expression⟩ TOKEN;
Then the Lexer will generate the token TOKEN for lexemes matching the given regular expression. Which token will be chosen if there are multiple options? We use the first longest match rule: It will take the longest possible match (eating as many characters as possible), if there are multiple of those matches, it will take the first one.
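The first-longest-match rule can be sketched as follows (illustration only: the generated lexer compiles the rules into an automaton rather than scanning per-rule regexes like this):

```javascript
// Return the token for the lexeme starting at `pos`, using the
// first-longest-match rule over a list of { re, type } rules.
function nextToken(rules, input, pos) {
    var best = null;
    for (var i = 0; i < rules.length; i++) {
        var re = new RegExp(rules[i].re.source, 'y'); // sticky: match exactly at pos
        re.lastIndex = pos;
        var m = re.exec(input);
        // Strictly longer matches win; on a tie, the FIRST rule wins.
        if (m && (!best || m[0].length > best.lexeme.length)) {
            best = { type: rules[i].type, lexeme: m[0] };
        }
    }
    return best;
}

var rules = [
    { re: /if/,             type: 'IF' },
    { re: /[a-z][a-z0-9]*/, type: 'IDENTIFIER' }
];

console.log(nextToken(rules, 'if', 0));   // → { type: 'IF', lexeme: 'if' } (tie: first rule)
console.log(nextToken(rules, 'iffy', 0)); // → { type: 'IDENTIFIER', lexeme: 'iffy' } (longest)
```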
Rules can perform code actions and you can also omit tokens (then no token will be generated):
⟨regular expression⟩ [: ⟨code⟩ :] TOKEN; ⟨regular expression⟩ [: ⟨code⟩ :]; ⟨regular expression⟩ ;
There is rudimentary support for lookahead and so called (our invention) barriers:
⟨regular expression⟩ %la(⟨regular expression⟩); ⟨regular expression⟩ %ba(⟨regular expression⟩);
The first rule will only accept words if they match the first regular expression and are followed by anything matching the expression specified using %la. The second rule will accept words matched by the first regular expression but will never run into a character sequence matching the regex specified by %ba. However, currently only rules with fixed length are allowed in %la and %ba (for example foo|bar, but not qux|garply). When applying the “first longest match” rule the %la/%ba expressions count, too.
You can create your own named regexes using an arrow:
⟨regular expression⟩ -> ⟨identifier⟩;
The first character of the identifier should not be upper case.
Additionally there are two special actions:
⟨regular expression⟩ %fail; ⟨regular expression⟩ %continue;
%fail will stop tokenization. %continue will make the matched characters part of the next token.
A grammar file can contain multiple rulesets. A ruleset is a set of rules, as described in the previous section. It gets declared using:
%lexer "name" -> ⟨rules⟩ ;
For your main-ruleset you omit the name (the name will be “start”).
Usually the start-ruleset will be used. But you can change the ruleset in code actions using the macro lxSET_RULE_SET(⟨name⟩). You can specify code to be executed when entering or leaving a ruleset by using %enter [: ⟨code⟩ :]; or %leave [: ⟨code⟩ :]; respectively inside the definition of the ruleset.
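The effect of switching rulesets can be sketched with a hypothetical hand-written tokenizer (not generated code). Here the rule action for the quote character plays the role of lxSET_RULE_SET: which rules apply depends on the current state, so `"` means "enter string mode" outside a string and "leave string mode" inside one. (For brevity this sketch takes the first matching rule, not the longest.)

```javascript
// Two rulesets; each rule's action may change the lexer state and
// returns a token type, or null to skip the lexeme.
var ruleSets = {
    start: [
        { re: /^"/,      action: function(lx) { lx.state = 'string'; return 'QUOTE'; } },
        { re: /^[a-z]+/, action: function()   { return 'WORD'; } },
        { re: /^\s+/,    action: function()   { return null; } } // skip whitespace
    ],
    string: [
        { re: /^"/,      action: function(lx) { lx.state = 'start'; return 'QUOTE'; } },
        { re: /^[^"]+/,  action: function()   { return 'STRING_CONTENT'; } }
    ]
};

function tokenize(input) {
    var lx = { state: 'start' }, pos = 0, tokens = [];
    while (pos < input.length) {
        var rules = ruleSets[lx.state], matched = false;
        for (var i = 0; i < rules.length; i++) {
            var m = rules[i].re.exec(input.slice(pos));
            if (m) {
                var type = rules[i].action(lx);
                if (type) tokens.push(type);
                pos += m[0].length;
                matched = true;
                break;
            }
        }
        if (!matched) throw new Error('lexer error at ' + pos);
    }
    return tokens;
}

console.log(tokenize('abc "xy" de'));
// → [ 'WORD', 'QUOTE', 'STRING_CONTENT', 'QUOTE', 'WORD' ]
```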
The standard statements %lexer_declaration_header and %lexer_bits_header are available to include files in the generated lexer.h/lexer.cpp.
By using %lexer_base you can specify the baseclass for the lexer-class, by default it is the TokenStream class defined by KDevelop-PG-Qt.
After %lexerclass(bits) you can specify code to be inserted in lexer.cpp.
You have to specify an encoding the lexer should work with internally using %input_encoding "⟨encoding⟩", possible values:
With %input_stream you can specify which class the lexer should use to get the characters to process, there are some predefined classes:
Whether you choose UTF-8, UTF-16 or UTF-32 is irrelevant for functionality, but it may significantly affect compile-time and run-time performance (you may want to test your Lexer with ASCII if compilation takes too long). For example you want to work with a QByteArray containing UTF-8 data, and you want to get full Unicode support: You could either use the QByteArrayIterator and UTF-8 as internal encoding, or the QUtf8ToUtf16Iterator and UTF-16, or the QUtf8ToUcs4Iterator and UTF-32.
You can also choose between %table_lexer; and %sequence_lexer;. In the first case, transitions between states of the lexer are represented by big tables while generating the lexer (cases in the generated code). In the second case, sequences are stored in a compressed data structure and transitions are represented by nested if-statements. For UTF-32, %table_lexer is infeasible, so %sequence_lexer is the only option there.
Inside your actions of the lexer you can use some predefined macros:
lxCURR_POS // position in the input (some kind of iterator or pointer) lxCURR_IDX // index of the position in the input // (it is the index as presented in the input, for example: input is a QByteArray, index incrementation per byte, but the lexer may operate on 32-bit codepoints) lxCONTINUE // like %continue, add the current lexeme to the next token lxLENGTH // length of the current lexeme (as presented in the input) lxBEGIN_POS // position of the first character of the current lexeme lxBEGIN_ID // corresponding index lxNAMED_TOKEN(⟨name⟩, ⟨type⟩) // create a variable representing named ⟨name⟩ with the token type ⟨type⟩ lxTOKEN(⟨type⟩) // create such a variable named “token” lxDONE // return the token generated before lxRETURN(X) // create a token of type X and return it lxEOF // create the EOF-token lxFINISH // create the EOF-token and return it (will stop tokenization) lxFAIL // raise the tokenization error lxSKIP // continue with the next lexeme (do not return a token, you should not have created one before) lxNEXT_CHR(⟨chr⟩) // set the variable ⟨chr⟩ to the next char in the input yytoken // current token
With the existing examples, it shouldn't be too hard to write such a lexer. Between most languages, especially those "inheriting" C, there are many common syntactic elements. Especially comments and literals can be handled just the same way over and over again. Adding a simple token is trivial:
"special-command" return Parser::Token_SPECIAL_COMMAND;
That's pretty much it; take a look at e.g. java.ll for an excellent example. However, it is quite tricky and ugly to handle UTF-8 with Flex.
KDevelop-PG-Qt uses so-called Type-2 (context-free) grammars, which use a concept of non-terminals (nodes) and terminals (tokens). While writing the grammar for the basic structure of your language, you should try to mimic the semantics of the language. Let's take a look at an example:
A C++ document consists of lots of declarations and definitions; a class definition, for example, could be handled in the following way:
The member-declarations-list is of course not part of any C++ description; it is just a helper to explain the structure of a given semantic part of your language. The grammar could then define how exactly such a helper might look.
Now let us have a look at a basic example, a declaration in C++, as described in grammar syntax:
struct_declaration
This is called a rule definition. Every lower-case string in the grammar file references such a rule. Our case above defines what a declaration looks like. The | character stands for a logical or, and all rules have to end with two semicolons.
In the example we reference other rules which also have to be defined. Here's for example the class_declaration, note the tokens in all-upper-case:
CLASS IDENTIFIER LBRACE class_declaration* RBRACE SEMICOLON -> class_declaration ;;
There is a new char in there: The asterisk has the same meaning as in regular expressions, i.e. that the previous rule can occur arbitrarily often or not at all.
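To see what the generated parse method for a rule like this does, here is a hand-written JavaScript sketch of the equivalent recursive descent (an illustration, not actual KDevelop-PG-Qt output; the generated code is C++ and also records token positions):

```javascript
// Parse `CLASS IDENTIFIER LBRACE class_declaration* RBRACE SEMICOLON`
// from a flat token array, returning the AST node and the new position.
function parseClassDeclaration(tokens, pos) {
    function expect(type) {
        if (tokens[pos] !== type) throw new Error('expected ' + type + ', got ' + tokens[pos]);
        pos++;
    }
    expect('CLASS');
    expect('IDENTIFIER');
    expect('LBRACE');
    var members = [];
    // `class_declaration*`: repeat while the next token can start one.
    while (tokens[pos] === 'CLASS') {
        var sub = parseClassDeclaration(tokens, pos);
        members.push(sub.ast);
        pos = sub.pos;
    }
    expect('RBRACE');
    expect('SEMICOLON');
    return { ast: { node: 'class_declaration', members: members }, pos: pos };
}

// A class containing one nested class.
var tokens = ['CLASS', 'IDENTIFIER', 'LBRACE',
              'CLASS', 'IDENTIFIER', 'LBRACE', 'RBRACE', 'SEMICOLON',
              'RBRACE', 'SEMICOLON'];
var result = parseClassDeclaration(tokens, 0);
console.log(result.ast.members.length); // → 1
```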
In a grammar, 0 stands for an empty token. Using it together with parentheses and the logical or from above, you can express optional elements:
some_required_rule SOME_TOKEN ( some_optional_stuff | some_other_stuff | 0 ) -> my_rule ;;
All symbols never occuring on the left side of a rule are start-symbols. You can use one of them to start parsing.
The simple rule above could be used to parse the token stream, yet no elements would be saved in the parse tree. Storing them can easily be done, though:
class_declaration=class_declaration
The DeclarationAst struct now contains pointers to each of these elements. During the parse process the pointer for each found element gets set, all others become NULL. To store lists of elements, prepend the identifier with a hash # :
CLASS IDENTIFIER SEMICOLON
TODO: internal structure of the list, important for Visitors
Identifier and targets can be used in more than one place:
#one=one (#one=one)* -> one_or_more ;;
In the example above, all matches of the rule one will be stored in a single list named one.
Somewhere in the grammar (you should probably put it near the head) you'll have to define the list of available tokens. From this list, the TokenType enum in parser.h will be created. In addition to the enum value names, you should define an explanatory name which will e.g. be used in error messages. Note that the textual representation of a token inside the source code is not required by the grammar/parser, as it operates on a TokenStream; see the Lexer/Tokenizer section above.
%token T1 ("T1-Name"), T2 ("T2-Name"), COMMA (","), SEMICOLON (";") ;
It is possible to use %token multiple times to group tokens in the grammar. Though all tokens will still be put into the same TokenType enum.
TODO: explain process of using the parser Tokens
Alternatively to the asterisk (*) you can use a plus sign (+) to mark lists of one-or-more elements:
(#one=one)+ -> one_or_more ;;
Using the #rule @ TOKEN syntax you can mark a list of rule, separated by TOKEN:
#item=item @ COMMA -> comma_separated_list ;;
Alternatively to the above mentioned (item=item | 0) syntax you can use the following to mark optional items:
?item=item -> optional_item ;;
Using a colon : instead of the equal sign = you can store the sub-AST in a local variable that will only be available during parsing, and only in the current rule.
TODO: need example
Instead of a name you can also place a dot . before the equal sign = . Then the AST will inherit all class members from the sub-AST, and the parsing method for the sub-AST will be merged into the parent's parse method. An example:
op=PLUS
In SimpleArithmeticsAst there will be the fields val1, val2 (storing the operands) and op (storing the operator-token). parseSimpleArithmetics will not have to call parseOperator. Obviously recursive inlining is not allowed.
Sometimes it is required to integrate hand-written code into the generated parser. Instead of editing the end result (**never** do that!) you should put this code into the grammar at the correct places. Here are a few examples of when you'd need this:
[: // here be dragons^W code ;-) :]
The code will be put into the generated parser.cpp file. If you use it inside a grammar rule, it will be put into the correct position during the parse process. You can access the current node via the variable yynode, it will have the type 'XYZAst**'.
In KDevelop language plugins, you'll see that most grammars start with something like:
[:
#include <QtCore/QString>
#include <kdebug.h>
#include <tokenstream.h>
#include <language/interfaces/iproblem.h>
#include "phplexer.h"

namespace KDevelop
{
    class DUContext;
}
:]
This is a code section that will be put at the beginning of ast.h, i.e. into the global context.
But there are also some newer statements available to include header-files:
%ast_header "header.h" %parser_declaration_header "header.h" %parser_bits_header "header.h"
The include-statement will be inserted into ast.h, parser.h respectively parser.cpp.
Also it's very common to define a set of enumerations e.g. for operators, modifiers, etc. pp. Here's an stripped example from PHP, note that the code will again be put into the generated parser.h file:
%namespace
[:
    enum ModifierFlags {
        ModifierPrivate   = 1,
        ModifierPublic    = 1 << 1,
        ModifierProtected = 1 << 2,
        ModifierStatic    = 1 << 3,
        ModifierFinal     = 1 << 4,
        ModifierAbstract  = 1 << 5
    };
    ...
    enum OperationType {
        OperationPlus = 1,
        OperationMinus,
        OperationConcat,
        OperationMul,
        OperationDiv,
        OperationMod,
        OperationAnd,
        OperationOr,
        OperationXor,
        OperationSl,
        OperationSr
    };
:]
When you write code at the end of the grammar-file simply between [: and :], it will be added to parser.cpp.
To add additional members to _every_ AST variable, use the following syntax:
%ast_extra_members
[:
    KDevelop::DUContext* ducontext;
:]
You can also specify the base-class for the nodes:
%ast_base symbol_name "BaseClass"
BaseClass has to inherit from AstNode.
Instead of polluting the global context with state tracker variables, and hence destroying the whole advantages of OOP, you can add additional members to the parser class. It's also very convenient to define functions for error reporting etc. pp. Again a stripped example from PHP:
%parserclass (public declaration)
[:
    enum ProblemType {
        Error,
        Warning,
        Info
    };
    void reportProblem( Parser::ProblemType type, const QString& message );
    QList<KDevelop::ProblemPointer> problems() {
        return m_problems;
    }
    ...
    enum InitialLexerState {
        HtmlState = 0,
        DefaultState = 1
    };
:]
Note that we used %parserclass (public declaration); we could instead have used a private or protected declaration.
There is also a statement to specify a base-class:
%parser_base "ClassName"
When you add member variables to the class, you'll have to initialize and/or destroy them as well. Here's how (either use ctor or dtor, of course):
%parserclass ( [constructor | destructor] ) [: // Code :]
?[: // some bool expression :]
The following rule will only apply if the boolean expression evaluates to true. Here's an advanced example, which also shows that you can use the pipe symbol ('|') as logical or; essentially this is an if...else conditional:
?[: someCondition :] SOMETOKEN ifrule=myVar
This is especially convenient together with lookaheads (see below).
You can set up the grammar to define local variables whenever a rule gets applied:
... -> class_member [: enum { A_PUBLIC, A_PROTECTED, A_PRIVATE } access; :];
This variable is local to the rule class_member.
Similar to the syntax above, you can define members whenever a rule gets applied:
... -> rule_name [ [member | temporary] variable yourName: yourType; ];
For example:
... -> class_member [ member variable access : AccessType; ];
Of course AccessType has to be defined somewhere else, see e.g. the Additional parser class members section above.
Using temporary or member is equivalent.
The first time you write a grammar, you'll potentially fail: since KDevelop-PG-Qt is an LL(1) parser generator, conflicts can occur and have to be solved by hand. Here's an example for a FIRST/FIRST conflict:
class_declaration | class_definition -> class_expression ;
KDevelop-PG-Qt will output:
** WARNING found FIRST/FIRST conflict in "class_expression"
Sometimes these warnings can be ignored, but most of the time they will lead to problems in parsing. In our example the class_expression rule won't be evaluated properly: when you try to parse a class_definition, the parser will see that the first part (CLASS) of class_declaration matches, and thinks that this rule is to be applied. Since the next part of the rule does not match, an error will be reported. It does _not_ try to evaluate class_definition automatically, but there are different ways to solve this problem:
In theory such behavior might be unexpected: In BNF e.g. the above syntax would be enough and the parser would automatically jump back and retry with the next rule, here class_definition. But in practice such behaviour is - most of the time - not necessary and would slow down the parser. If however such behavior is explicitly what you want, you can use an explicit backtracking syntax in your grammar:
try/rollback(class_declaration) catch(class_definition) -> class_expression ;
In theory, every FIRST/FIRST could be solved this way, but keep in mind that the more you use this feature, the slower your parser gets. If you use it, sort the rules in order of likeliness. Also, there could be cases where the sort order could make the difference between correct and wrong parsing, especially with rules that "extend" others.
KDevelop-PG-Qt has an alternative to the - potentially slow - backtracking mechanism: look ahead. You can use the LA(qint64) function in embedded C++ code (see sections above). LA(1) will return the current token; you most probably never need to use that. Accordingly, LA(2) returns the next and LA(0) the previous token. (If you wonder where these somewhat uncommon indices come from: read the thesis, or make yourself acquainted with LA(k) parser theory.)
class_declaration -> class_expression
Note: The conflict will still be output, but it is manually resolved. You should always document this somewhere with a comment, for future contributors to know that this is a false-positive.
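Combining the conditional syntax with a lookahead, such a manual resolution might be sketched like this (a sketch only; the token name and the LA index are illustrative assumptions, not taken from a real grammar):

```
   ?[: LA(2).kind == Token_LBRACE :] class_declaration
 | class_definition
-> class_expression ;
```

Here the parser peeks one token ahead and only commits to class_declaration when the upcoming token makes that rule plausible, falling through to class_definition otherwise.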
Often you can solve the conflict altogether with an elegant solution. Like in our example:
LBRACE class_content RBRACE -> class_definition ;
CLASS IDENTIFIER ?class_definition SEMICOLON -> class_expression ;
No conflicts, fast: everybody is happy!
A FIRST/FOLLOW conflict says, that it is undefined where a symbol ends and the parent symbol continues. A pretty stupid example is:
item* -> item_list ;
item_list item* -> items ;
You probably see the glaring error. try/rollback helps with serious problems (e.g. the parser doesn't work), though often you can ignore these conflicts. But if you do so, be aware that the parser is greedy: in our example item_list will always contain all item elements, and items will never have an item member. If this is an issue that leads to later conflicts, only try/rollback, custom manual code to check which rule should be used, or a refactoring of your grammar structure will help.
Alternatively it is sometimes helpful/required to change the greedy behaviour. In lists you can use manual code to force a break at a given position. Here's an example that shows this on a declaration of an array with fixed size:
typed_identifier=typed_identifier LBRACKET UNSIGNED_INTEGER
    [: count = static_cast<MyTokenStream*>(token)->strForCurrentToken().toUInt(); :]
RBRACKET EQUAL LBRACE
    (#expression=expression [: if(--count == 0) break; :] @ COMMA)
    ?[: count == 0 :]
RBRACE SEMICOLON
-> initialized_fixed_array [
    temporary variable count: uint;
];
TODO: return true/false or break??
Via return false you can enforce a premature stop (== error) of the evaluation of the current rule. Using return true you also stop the evaluation prematurely, but signal that the parse process was successful.
try/recover(expression) -> symbol ;
This is approximately the same as:
[: ParserState *state = copyCurrentState(); :] try/rollback(expression) catch( [: restoreState(state); :] ) -> symbol ;
Hence you have to implement the member functions copyCurrentState and restoreState, and you have to define a type called ParserState. You do not have to write the declaration of those functions in the header file; it is generated automatically if you use try/recover. This concept seems to be useful if there are additional states used while parsing. The Java parser makes use of it very often. But I do not know a lot about this feature and it seems unimportant to me (I guess it is not). I would be happy if somebody could explain it to me.
TODO | https://techbase.kde.org/index.php?title=Development/KDevelop-PG-Qt_Introduction/ru&oldid=68377
JBoss.org Community Documentation
This tutorial shows you how to cache your entities using EJB 3.0 for JBoss.
To avoid roundtrips to the database, you can use a cache for your entities. JBoss EJB3 uses Hibernate as the JPA implementation, which has support for a second-level cache. The Hibernate setup used for our JBoss EJB 3.0 implementation uses JBossCache as its underlying cache implementation. With caching enabled:

- JBossCache allows you to specify timeouts for cached entities. Entities not accessed within a certain amount of time are dropped from the cache in order to save memory.
- JBossCache supports clustering. If running within a cluster and the cache is updated, changes to the entries in one node will be replicated to the corresponding entries in the other nodes in the cluster.
Take a look at META-INF/persistence.xml, which sets up the caching for this deployment:
Note the hibernate.show_sql property in the persistence.xml, set to true. When this property is set, Hibernate will print to STDOUT the SQL queries that are fired at the database when an entity is operated upon. This will, later on, help us verify whether the entity is being picked up from the cache or is being loaded from the database through a SQL query.
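For context, the region settings above normally sit next to the properties that switch the second-level cache on in the first place. A sketch of those standard Hibernate properties (from memory, not copied from this tutorial's persistence.xml; verify against your Hibernate version):

```xml
<property name="hibernate.cache.use_second_level_cache" value="true"/>
<property name="hibernate.cache.use_query_cache" value="true"/>
```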
You define your entities org.jboss.tutorial.cachedentity.bean.Customer and org.jboss.tutorial.cachedentity.bean.Contact the normal way.
The default behaviour is to not cache anything, even with the settings shown above in the persistence.xml.
A very simplified rule of thumb is that you will typically want to do caching for objects that rarely change and which are frequently read. We also annotate the classes with the @Cache annotation to indicate that the entities should be cached.
@Entity
@Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL)
public class Customer implements java.io.Serializable
{
   ...
@Entity
@Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL)
public class Contact implements Serializable
{
   ...
This defines that the Customer and the Contact entities need to be cached. Any attempt to look up Customer or Contact by their primary key will first attempt to read the entry from the cache. If it cannot be found, it will try to load it from the database.
You can also cache relationship collections. Any attempt to access the contacts collection of Customer will attempt to load the data from the cache before hitting the database:
@Entity
@Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL)
public class Customer implements java.io.Serializable
{
   ...
   @Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL)
   @OneToMany(mappedBy = "customer", fetch = FetchType.EAGER, cascade = CascadeType.ALL)
   public Set<Contact> getContacts()
   {
      return contacts;
   }
   ...
}
Open org.jboss.tutorial.cachedentity.client.CachedEntityRun. It takes two arguments: the server:jndiport of the two nodes to use. If you look at the 'run' target of META-INF/build.xml you will see that they both default to localhost:1099, so in this case node1 and node2 will be the same, i.e. no clustering takes place.
To build and run the example, make sure you have installed JBoss 5.x. See the Section 1.1, “JBoss Application Server 5.x” for details.
From the command prompt, move to the "cachedentity" folder under the Section 1.3, “Set the EJB3_TUTORIAL_HOME”
Make sure the "all" server configuration of JBossAS-5.x is running
$ ant
$ ant run

run:
     [java] Saving customer to node1 = localhost:1099
     [java] Looking for customer on node2 = localhost:1099 (should be available in cache)
     [java] Found customer on node2 (cache). Customer details follow:
     [java] Customer: id=1; name=JBoss
     [java] Contact: id=2; name=Kabir
     [java] Contact: id=1; name=Bill
$ mvn clean install -PRunSingleTutorial
On the server you will notice these logs:
02:14:04,528 INFO [STDOUT] Hibernate: insert into Customer (id, name) values (null, ?)
02:14:04,529 INFO [STDOUT] Hibernate: call identity()
02:14:04,542 INFO [STDOUT] Hibernate: insert into Contact (id, CUST_ID, name, tlf) values (null, ?, ?, ?)
02:14:04,543 INFO [STDOUT] Hibernate: call identity()
02:14:04,544 INFO [STDOUT] Hibernate: insert into Contact (id, CUST_ID, name, tlf) values (null, ?, ?, ?)
02:14:04,545 INFO [STDOUT] Hibernate: call identity()
02:14:04,545 INFO [EntityTestBean] Created customer named JBoss with 2 contacts
02:14:04,634 INFO [EntityTestBean] Find customer with id = 1
02:14:04,645 INFO [STDOUT] Hibernate: select customer0_.id as id0_1_, customer0_.name as name0_1_, contacts1_.CUST_ID as CUST4_3_, contacts1_.id as id3_, contacts1_.id as id1_0_, contacts1_.CUST_ID as CUST4_1_0_, contacts1_.name as name1_0_, contacts1_.tlf as tlf1_0_ from Customer customer0_ left outer join Contact contacts1_ on customer0_.id=contacts1_.CUST_ID where customer0_.id=?
02:14:04,682 INFO [EntityTestBean] Customer with id = 1 found
02:14:04,756 INFO [EntityTestBean] Find customer with id = 1
02:14:04,801 INFO [EntityTestBean] Customer with id = 1 found
As you can see, the first time the customer with id = 1 was requested through the entity manager, a database query was fired to fetch it, since it was not available in the cache. The Customer object was then loaded from the database and stored in the cache, as can be seen in the next request to fetch the customer with the same id: this request to the entity manager pulls the entity from the cache instead of firing a database query. | http://docs.jboss.org/ejb3/docs/tutorial/1.0.7/html/Caching_EJB3_Entities.html
Hey! I think I have a bug, or I implemented it wrongly. Any tips, anyone?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace NeumannRandNumGen
{
    class Program
    {
        static void Main(string[] args)
        {
            int seed = 45505986; // seed

            while (true)
            {
                int randomNumber = NeumannRandom0To100(ref seed);
                Console.WriteLine(randomNumber.ToString());
                Console.ReadLine();

                if (randomNumber == -1)
                    break;
            }
        }

        static SortedSet<int> alreadySeen = new SortedSet<int>();

        static int NeumannRandom0To100(ref int seed)
        {
            if (seed == 0 || seed == -1)
                return -1;

            // must have an even number of digits
            if (seed.ToString().Length % 2 != 0)
                seed /= 10;

            unchecked
            {
                seed *= seed;
            }

            string newSeed = seed.ToString();
            int length = newSeed.Length;
            int middleFirst = length / 2; // int truncate division
            int generatedNumber = int.Parse(newSeed[middleFirst].ToString() + newSeed[middleFirst + 1].ToString());

            if (alreadySeen.Contains(generatedNumber))
                return -1;

            alreadySeen.Add(generatedNumber);
            return generatedNumber;
        }
    }
} | http://programmingpraxis.com/2012/08/21/two-more-random-exercises/?like=1&source=post_flair&_wpnonce=8ef4095fcf
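For comparison, here is a minimal sketch of von Neumann's middle-square method in Python, whose integers never overflow. My guess is that the `unchecked` squaring (which can overflow and go negative) and the middle-digit indexing are the trouble spots in the C# above; the function and parameter names here are mine:

```python
def middle_square(seed, digits=4):
    """One step of von Neumann's middle-square method.

    Squares the seed, left-pads the result to 2*digits characters,
    and returns the middle `digits` digits as the next value.
    """
    squared = str(seed * seed).zfill(2 * digits)
    start = (len(squared) - digits) // 2
    return int(squared[start:start + digits])


def middle_square_run(seed, digits=4):
    """Collect values until the sequence would repeat (it always cycles)."""
    seen = set()
    out = []
    while seed not in seen:
        seen.add(seed)
        seed = middle_square(seed, digits)
        out.append(seed)
    return out
```

Starting from 1234 this yields 5227, 3215, ... and stops as soon as a value repeats, which the middle-square method always eventually does.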
This is the second article in a canonical series about Enterprise Application Architecture design. In the previous article, we discussed distributed application layers and how we are going to implement these layers in a sample application (Customer Order Management System (COMS)). So in this article, I am going to describe the database design, entity container, and library implementation. Project names will remain the same as mentioned at the right side of the architecture diagram explained in part 1.
The very first thing in any application design is creating the underlying database to support it. We should be very careful while designing the database, since we want to cover as much of the business logic as possible in the database itself instead of writing it in client-side code. To be frank, I am not an expert in database design. Please feel free to comment if you think something is less accurate. Let's start our database design.
I’ll take a few sample use cases for implementation in COMS (Customer Order Management System) with these four tables for a demonstration.
As you might be aware, for a relational database to work properly, you should have a field in each table that uniquely identifies each row (the primary key). Also, we should have links between the tables based on their relationships (foreign keys). The table below illustrates the relationships between the tables.
Table        Primary Key
Customer     CustomerID
Product      ProductID
Order        OrderID
OrderDetails OrderDetailsID

The Orders table references the Customers table, and the OrderDetails table references both the Orders and Products tables.
Have a look into the below mentioned database design image for more ideas.
Before creating the tables, we have to create our own database in SQL Server. Please note that I am using Microsoft SQL Server Management Studio Express for creating database and tables.
Open the Microsoft SQL Server Management Studio Express and you could see the window as given below:
Now, for creating the database, please follow the steps as mentioned here by Microsoft tech net guys. I assume that you are giving database name as Customer Database. After creating the database, you could see some predefined objects getting displayed under your database in object explorer tree such as Tables, Database Diagram, etc. Please see the below image.
Now we are all set to go ahead with table creation.
To create a new table with Table Designer:
Create all four tables that I have mentioned in the database design section, each with a proper primary key. After creating all the tables, we also have to create the foreign keys for the tables, as mentioned in the database design section.
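If you prefer a script to the Table Designer, the four tables with their primary keys can be created with T-SQL along these lines (a sketch; the non-key columns for Customers are taken from the fields used later in this article, the column types are placeholders you may want to adjust):

```sql
CREATE TABLE Customers (
    CustomerID INT NOT NULL PRIMARY KEY,
    FirstName  NVARCHAR(50),
    LastName   NVARCHAR(50),
    Email      NVARCHAR(100),
    Address1   NVARCHAR(200),
    Address2   NVARCHAR(200)
);

CREATE TABLE Products (
    ProductID INT NOT NULL PRIMARY KEY
    -- remaining product columns go here
);

CREATE TABLE Orders (
    OrderID    INT NOT NULL PRIMARY KEY,
    CustomerID INT NOT NULL
);

CREATE TABLE OrderDetails (
    OrderDetailsID INT NOT NULL PRIMARY KEY,
    OrderID        INT NOT NULL,
    ProductID      INT NOT NULL
);
```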
You can create the foreign keys using the SQL query editor. Click the New Query button in the SQL Express toolbar, paste the queries below into the query window, select all of them, and press F5. Once you're done with this, the foreign keys will be created automatically.
ALTER TABLE Orders WITH CHECK ADD CONSTRAINT [FK_CustomerID] FOREIGN KEY([CustomerID])
REFERENCES [Customers] ([CustomerID])
ALTER TABLE OrderDetails WITH CHECK ADD CONSTRAINT [FK_OrderID] FOREIGN KEY([OrderID])
REFERENCES [Orders] ([OrderID])
ALTER TABLE OrderDetails WITH CHECK ADD CONSTRAINT [FK_ProductID] FOREIGN KEY([ProductID])
REFERENCES [Products] ([ProductID])
I assume that you have done everything successfully up to this stage. Great! Now we're done with the Customer Order Management System database design. See the image below, which shows the four tables and some sample customer data.
I hope that you have got an idea about how to design the database.
As we discussed in the architecture diagram which is explained in part 1, we are going to discuss and implement the second level from the bottom (Data Access Layer). See the image given below:
You could see that I have mentioned clearly about what language and technology we are going to use for implementing this layer with projects name details. So before starting with implementation, we will discuss a bit about the purpose of this layer.
Typically, the Data Access Layer is for communicating with the database to store and retrieve data. Earlier, we used to write this layer with pure ADO.NET. But now, the .NET Framework provides ADO.NET Entity Data Model utilities by default. So using the Entity Data Model Wizard, we can create an edmx file which describes the target database schema and defines the mapping between the EDM and the database.
Why a class library for interacting with the edmx? Can't we use the edmx file directly in the business logic layer? We could, but it's better not to. Business logic should be an independent layer. If you use the edmx file directly in the business logic, you lose the extensibility of your platform. For instance, if you add some additional field in the database, you have to touch your business logic again. To avoid this dependency, we are exposing interfaces and methods in the library to interact with the edmx file.
We are here to learn how to create these two projects and what type of project templates needs to be selected.
Open Visual Studio 2010 and select the Class Library project template from Visual C# -> windows. Look at the solution and project name in the given snapshot.
I have given the same name (“ServiceLibaries.CustomerServiceLibrary”) as mentioned in the architecture diagram. I have added two folders and .cs files for the implementation in the solution. One is for CustomerServiceLibrary interface and another one is for the actual method implementation. But don’t worry; there is no code inside the files now. Here is the structure of my VS solution.
ServiceLibaries.CustomerServiceLibrary
CustomerServiceLibrary
Let’s start creating an edmx file and then we will get back to the actual library implementation. Here are the steps for generating edmx file from a specific database.
Add a new ADO.NET Entity Data Model item to the project; I named it CustomerOrderManagementSystemDEM and used CustomerOrderManagementSystemEntityDataModel as the model namespace.
Now you will be asked to proceed with Entity Data Model Wizard. Select Generate from Database option from the wizard and choose your data connection from the combo box. If you don’t find your database in combo box, click the New Connection button and choose your Server name, Data source, Authentication type (Leave it as Use Windows Authentication), Database name “Customer Database” and click ok.
Then you will get the page with connection string details as the given below. Click the next button and choose the tables that you want to generate entities. I selected all the four tables. Change the model namespace at bottom of the dialog window (optional), and click Finish button.
When you hit the Finish button, Visual Studio starts to generate the edmx file along with an app.config file for you. Once it's generated, you will see a connectionStrings element in the XML file, with attributes such as the connection string and the provider name.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<connectionStrings>
<add name="Customer_DatabasesEntities"
connectionString="metadata=res://*/CustomerOrderManagementSystemEntityDataModel.
csdl|res://*/CustomerOrderManagementSystemEntityDataModel.ssdl|res:
//*/CustomerOrderManagementSystemEntityDataModel.msl;
provider=System.Data.SqlClient;
provider connection string='Data Source=XXXXX-PC\SQLEXPRESS;
Initial Catalog="Customer Databases";
Integrated Security=True;MultipleActiveResultSets=True'"
providerName="System.Data.EntityClient" />
</connectionStrings>
</configuration>
Apart from the configuration file, the .edmx file (with code-behind) is also generated. If you open the .cs file, you will see all the tables generated as entity object classes, along with the default object context class ("Customer_DatabasesEntities"). These are the classes that we are going to use in our library to interact with the database.
That's it. We're done with the edmx file generation. Let's get back to the library implementation: as we discussed already, we have two .cs files in the library project, so we can go directly ahead with the implementation. Here is the ICustomerServiceLibrary interface, which contains the method declarations for providing customer details as well as saving data to the database.
public interface ICustomerServiceLibrary
{
/// <summary>
/// Get all the customers data
/// </summary>
/// <param name="context"></param>
/// <returns></returns>
List<Customer> GetCustomers(Customer_DatabasesEntities context);
/// <summary>
/// Get specific customer data with customer ID
/// </summary>
/// <param name="customerID"></param>
/// <param name="context"></param>
/// <returns></returns>
Customer GetCustomer(int customerID, Customer_DatabasesEntities context);
/// <summary>
/// Get the actual Entities object context from edmx.
/// </summary>
/// <returns></returns>
Customer_DatabasesEntities GetEntitiesObjectContext();
/// <summary>
/// Save the data in database with specific context.
/// </summary>
/// <param name="context"></param>
void Save(Customer_DatabasesEntities context);
}
The interface provides four functionalities:

- GetCustomers
- GetCustomer(int customerID)
- GetEntitiesObjectContext()
- Save(context)
Let’s go ahead with the actual implementation of the methods:
namespace ServiceLibraries
{
public class CustomerServiceLibrary : ICustomerServiceLibrary
{
// Database entity context.
Customer_DatabasesEntities context = null;
/// <summary>
/// Constructor which initialize including context.
/// </summary>
public CustomerServiceLibrary()
{
context = new Customer_DatabasesEntities();
}
#region ICustomerServiceLibrary Members
/// <summary>
/// Returns customers data as list.
/// </summary>
/// <param name="context"></param>
/// <returns></returns>
public List<Customer> GetCustomers(Customer_DatabasesEntities context)
{
return context.Customers.ToList() ;
}
/// <summary>
/// Returns specific customer detail.
/// </summary>
/// <param name="customerID"></param>
/// <param name="context"></param>
/// <returns></returns>
public Customer GetCustomer(int customerID, Customer_DatabasesEntities context)
{
var cust = from customer in context.Customers
where customer.CustomerID == customerID select customer;
return cust.FirstOrDefault();
}
/// <summary>
/// Saves the data in database and disposing the context.
/// </summary>
/// <param name="context"></param>
public void Save(Customer_DatabasesEntities context)
{
context.SaveChanges();
context.Dispose();
}
/// <summary>
/// Returns the context reference.
/// </summary>
/// <returns></returns>
public Customer_DatabasesEntities GetEntitiesObjectContext()
{
return context;
}
#endregion
}
}
The constructor of the CustomerServiceLibrary class creates the entity object context instance, which is shared across the class. In case the engine wants the entity context object, for instance for saving data into the database, it can get it through the GetEntitiesObjectContext method via the library instance.
The Save method is used for saving new data to the database. You can see the context.SaveChanges() method called inside Save; this call persists the changes to the database. To call it, you have to add a reference to the System.Data.Entity assembly in the library project. Don't forget to add this assembly to your library; otherwise, you will get a compilation error.
That’s all. Now the library is ready for the customer data service. Can we test this library through some test client? Yes. Let’s create a simple console application for testing this library.
static void Main(string[] args)
{
ICustomerServiceLibrary service = new CustomerServiceLibrary();
List<Customer> list = service.GetCustomers(service.GetEntitiesObjectContext());
Console.WriteLine("Customer ID" + "First Name".PadLeft(21) +
"Last Name".PadLeft(29));
Console.WriteLine("========================================================");
int lastCustId = 0;
foreach (Customer data in list)
{
Console.WriteLine(data.CustomerID + data.FirstName.PadLeft(48) +
data.LastName.PadLeft(10));
lastCustId = data.CustomerID;
}
Customer newCustomer = new Customer();
Console.WriteLine("Enter New Customer Details");
Console.WriteLine("First Name:");
newCustomer.FirstName = Console.ReadLine();
Console.WriteLine("Last Name");
newCustomer.LastName = Console.ReadLine();
Console.WriteLine("Email Id");
newCustomer.Email = Console.ReadLine();
Console.WriteLine("Address1");
newCustomer.Address1 = Console.ReadLine();
Console.WriteLine("Address2");
newCustomer.Address2 = Console.ReadLine();
newCustomer.CustomerID = ++lastCustId;
service.GetEntitiesObjectContext().AddToCustomers(newCustomer);
service.Save(service.GetEntitiesObjectContext());
Console.WriteLine("Customer Data Stored Successfully");
service = new CustomerServiceLibrary();
List<Customer> newlist = service.GetCustomers(service.GetEntitiesObjectContext());
Console.WriteLine("Customer ID" + "First Name".PadLeft(21) +
"Last Name".PadLeft(29));
Console.WriteLine("=========================================================");
foreach (Customer data in newlist)
{
Console.WriteLine(data.CustomerID + data.FirstName.PadLeft(48) +
data.LastName.PadLeft(10));
}
Console.ReadLine();
}
Make sure that the connection string is present in the client config file; otherwise the application will never connect to the database.
For running the attached sample application, you will have to create the database and tables in your SQL Server management studio express. But, for your convenience, I have attached the database backup file, so that you can just download and import into your SQL server. If you have any questions about how to import the database in SQL Server, please visit this site.
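Alternatively, the backup can be restored with a T-SQL query instead of the import wizard (a sketch; the file path is an assumption you will need to adjust to wherever you saved the .bak file):

```sql
RESTORE DATABASE [Customer Database]
FROM DISK = N'C:\Backups\CustomerDatabase.bak'
WITH REPLACE;
```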
Oops. At the last moment of completing this article, my colleague Boopalan, who is also strong on the tech side, raised this question: why don't you integrate your database into Visual Studio itself? Yes, he is absolutely correct; why would I want to waste your time when running the demo sample? So I decided to explain how to manage and integrate the database in Visual Studio itself, and posted an article here. If you decide to go with that approach, you don't need to restore the database backup file that is uploaded with the sample source. However, I still leave this traditional database design concept, which I have explained in this article, since it will be helpful for folks who aren't familiar with it.
I am really delighted with the logic and sample app, and I do think it's really easy to run from your end. As such, I sure would really appreciate some votes, and some comments if you feel this post will help you out with entity access through a library.
As I stated in the introduction, the remaining parts will be posted one by one. | http://www.codeproject.com/Articles/112556/Enterprise-Application-Architecture-Designing-Appl?fid=1589029&df=90&mpp=10&sort=Position&spc=None&tid=4398504
Hello Maurizio,
> I do some change on the S2 Junit4 plugin, now it should be simpler run a
> test with or without spring.
> Could you test the latest version of the aforementioned plugin [1]?
> Christian, is this [2] your use case?
> WDYT? is more intuitive?
not tested it yet, but yes, that is what I want to have, it looks
great! Thank you very much for the work!
One question arose in me when I saw this one and even the old Junit3
testcase. I was always wondering where
the struts.xml is drawn from. In most cases of course I want to test the
struts.xml as it is used in my webapp. But sometimes I might want to
test something else, a special error case or maybe a result type. For
these cases I might want to test with a variation of my struts.xml,
and therefore this might be useful:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"your-application-context.xml"})
@StrutsContextConfiguration(locations = {"struts-test.xml"})
public class YourActionIntegrationTest extends
StrutsSpringJUnit4TestCase<YourAction> {
not sure if my idea makes sense, but wanted to bring it to discussion.
Cheers!
Christian
>
> [1]
>
> [2]
>
> On 3 August 2011 20:16, Christian Grobmeier <grobmeier@gmail.com> wrote:
>
>> Hi,
>>
>> at the moment I found out to test without that class. Not really a
>> full test, but it works for my needs at the moment. Therefore I can
>> wait until you have pushed your code to google. Please ping this list
>> once it is done - guess some others ahve an interest in it too :-)
>>
>> Cheers
>> Christian
>>
>> On Wed, Aug 3, 2011 at 7:54 PM, Gabriel Belingueres
>> <belingueres@gmail.com> wrote:
>> > Hi,
>> >
>> > StrutsJUnit4TestCase is really tricky. I found few pointers in the web.
>> > I'm currently using it successfully for my modest testing
>> > requirements, but you don't need to provide a web.xml file.
>> >
>> > When I say integration testing utility, I mean testing a full blown
>> > interceptor stack with your actions and interceptors. If you want to
>> > test either your actions or interceptors in isolation, you may not
>> > need this.
>> >
>> > Funny enough, I'm currently in the process of open sourcing our Struts
>> > 2 integration testing utility, which is based on StrutsJUnit4TestCase
>> > (only tested it on Struts 2.2.3 I'm afraid), so if you can hold on a
>> > few minutes, you may download it from google code.
>> >
>> > Regards,
>> > Gabriel
>> >
>> > 2011/8/3 Christian Grobmeier <grobmeier@gmail.com>:
>> >> Hello all,
>> >>
>> >> today I tried to figure out how one can use StrutsJUnit4TestCase. I am
>> >> currently puzzled. I found docs for the older implementation for
>> >> Junit3 of course, but nothing on the StrutsJUnit4TestCase class. Any
>> >> pointers?
>> >>
>> >> With the old stuff i simply did: this.executeAction() and all was
>> >> well. Now it seems I have to give the class a web.xml - how?
>> >>
>> >> Thanks in advance
>> >>
>> >> Christian
>> >>
>> >> ---------------------------------------------------------------------
| http://mail-archives.apache.org/mod_mbox/struts-user/201108.mbox/%3CCAPFNckg_Qdxiy=ci95B2g7KGpFoz1=yhV0KsjPrWfT8x4pk-Mg@mail.gmail.com%3E
(Download updated on 29th July 2012)
When you write an application using the Windows Workflow Foundation (WF), you often want to have your own workflow designer integrated into your application. Microsoft provides a class called WorkflowDesigner to solve this problem, but there are still some tasks you have to do by yourself:
I realized that these tasks are mainly implemented the same way every time, so I started to think about a generic infrastructure. First of all, there are some guidelines I want to mention that the infrastructure should satisfy:
To show you some examples, here are two implementations that use the framework.
1. My standard workflow designer application:
2. Another application using a ribbon bar and an outlook bar (from Odyssey library):
As already said, we're using Prism modules. For those who don't know Prism: we have a root view with some content controls (regions) in it. At runtime, views (usually UserControls) are attached to them as content. Because I provide these views, you can easily change the layout later by just using a different root view. We basically need:
As the WorkflowDesigner already provides UIElements as properties, we can easily bind our views to them, using the ViewModel as DataContext. I decided to implement a UserControl for each view rather than just handing the UIElements from the WorkflowDesigner over to Prism, because there might be cases where we don't have a WorkflowDesigner but still want to display something, for instance when a load error occurs.
<UserControl ...
<ContentControl Content="{Binding CurrentSurface.Designer.View}"/>
</UserControl>
Now let's say something about the ViewModel. You may have noticed CurrentSurface.Designer in the binding path above, so what is the internal structure of the ViewModel?
The DesignerViewModel is defined by the following interface:
public interface IDesignerViewModel
{
void ReloadDesigner(object root);
object CurrentSurface { get; }
event Action SurfaceChanged;
}
As you can see, the actual state of the ViewModel is represented by a "surface". This could be just the normal WorkflowDesigner view, but also an error display. Because WPF bindings don't throw exceptions in case they don't work, our content control above is simply not visible when an error occurs. Knowing that, we can place an error TextBox behind the content control that is normally covered by the designer view.
The ReloadDesigner method tells the ViewModel to reload the designer given a designer root (either an Activity or an ActivityBuilder) by creating a StandardDesignerSurface, which hosts the WorkflowDesigner.
For the case of an invalid XAML definition, I created another interface that my DesignerViewModel implements:
public interface ILoadErrorDesignerViewModel
{
void ReloadError(string xaml);
}
Calling the ReloadError method tells the ViewModel to use a LoadErrorDesignerSurface as CurrentSurface, from which you can get the invalid XAML and display it or do whatever you want.
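To make the ViewModel contract concrete, here is a minimal sketch of how the two interfaces could be implemented together. This is illustrative, not the article's exact code: the surface constructors are assumed to simply take the designer root and the invalid XAML respectively, and a one-time call to new DesignerMetadata().Register() at application startup is assumed so the standard activity designers are available.

```csharp
// Illustrative sketch only: one ViewModel implementing both contracts.
// The surface types correspond to the ones described in the text;
// their constructor signatures are assumptions.
public class DesignerViewModel : IDesignerViewModel, ILoadErrorDesignerViewModel
{
    public object CurrentSurface { get; private set; }
    public event Action SurfaceChanged;

    // root is either an Activity or an ActivityBuilder
    public void ReloadDesigner(object root)
    {
        CurrentSurface = new StandardDesignerSurface(root);
        RaiseSurfaceChanged();
    }

    public void ReloadError(string xaml)
    {
        CurrentSurface = new LoadErrorDesignerSurface(xaml);
        RaiseSurfaceChanged();
    }

    private void RaiseSurfaceChanged()
    {
        var handler = SurfaceChanged;
        if (handler != null) handler();
    }
}

// Hosts the actual WorkflowDesigner; its View is what the
// UserControl shown earlier binds to.
public class StandardDesignerSurface
{
    public WorkflowDesigner Designer { get; private set; }

    public StandardDesignerSurface(object root)
    {
        Designer = new WorkflowDesigner();
        Designer.Load(root); // accepts an Activity as well as an ActivityBuilder
    }
}
```

With this shape, the binding path CurrentSurface.Designer.View resolves for the standard surface and silently fails for the error surface, which is exactly the behaviour the error TextBox relies on.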
After having discussed the basics of the framework, we can continue with some more advanced stuff. In connection with storage, we will have to talk about additional data stored for each workflow: maybe you want a description, or you want to enter the key for a database, or something else. Including this additional data in our design leads us to the definition of a workflow type. That means there are different kinds of workflows that differ in the way they're stored, the additional data attached to them, and even the toolbox items that have to be loaded when editing such a workflow.
public interface IWorkflowType
{
IToolboxCreatorService ToolboxCreatorService { get; }
IStorageAdapter StorageAdapter { get; }
string DisplayName { get; }
string Description { get; }
}
The ToolboxCreatorService is responsible for loading the correct toolbox items and the StorageAdapter handles all the other things we spoke about.
public interface IStorageAdapter
{
IEnumerable<IWorkflowDescription> GetAllWorkflowDescriptions();
bool CanCreateNewDesignerModel { get; }
IDesignerModel CreateNewDesignerModel();
IDesignerModel GetDesignerModel(object key);
bool SaveDesignerModel(IDesignerModel designerModel);
bool DeleteDesignerModel(object key);
}
For performance reasons we don't want to load the whole database (or all data files, depending on which storage mechanism we are using) into memory when the user wants to load only one of the stored workflows. All the user needs to identify the workflow he wants to load is a name and maybe a description, so we separate these things from the rest (XAML definition and so on) by defining an interface (IWorkflowDescription). The other interface (IDesignerModel) contains everything; it is also responsible for holding the additional data in a property called PropertyObject. The two interfaces are correlated by a key object, which should be the key used for storing the workflow. If you ask yourself how you could edit the additional data: with a property grid, of course. The storage module exchanges the standard property view for another one that consists of the normal property grid from the WorkflowDesigner and a custom PropertyGrid bound to the PropertyObject of the IDesignerModel (I got the custom property grid from here).
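As an illustration of this split (not the article's actual storage code), a simple file-based adapter could enumerate one lightweight description per .xaml file and load the full model only on demand. The storageFolder field is assumed, and the WorkflowDescription class used here is the one shown further below:

```csharp
// Hypothetical file-based fragment: one .xaml file per workflow,
// with the file name acting as the key. Only the cheap listing side
// is shown; GetDesignerModel would actually parse the file.
public IEnumerable<IWorkflowDescription> GetAllWorkflowDescriptions()
{
    foreach (var file in Directory.GetFiles(this.storageFolder, "*.xaml"))
    {
        var key = Path.GetFileNameWithoutExtension(file);
        yield return new WorkflowDescription(key) { WorkflowName = key };
    }
}
```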
It makes sense to show a storage dialog that could, for instance, look like this:
You can select the workflow type on the left and the activity in the middle. Note the possibility of having multiple workflow types in one application.
In some cases it is much more efficient to edit the XAML directly than to edit the workflow in the designer. Especially when you have an invalid XAML definition, you have no choice but to edit the XAML, because the workflow can't be displayed.
The XAML view is again implemented as a Prism module. I created a ViewModel that reacts to the SurfaceChanged event of the IDesignerViewModel implementation. In the event handler, we first check whether the new surface is an IDesignerSurface or an ILoadErrorDesignerSurface.
Every time the user edits the XAML, the ViewModel tries to update the designer view and displays any errors that occur. For better performance I have built in a timer that delays the update of the model. As this timer is restarted every time a key is pressed, the model only changes when the user stops typing, not on every single key press.
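This debounce can be sketched with a DispatcherTimer; the 500 ms interval and the method names here are my choices, not necessarily those of the actual implementation:

```csharp
// Sketch of the debounce: every keystroke restarts the timer, so the
// designer model is only rebuilt once the user pauses typing.
private readonly DispatcherTimer updateTimer =
    new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(500) };

public XamlViewModel()
{
    updateTimer.Tick += (sender, args) =>
    {
        updateTimer.Stop();
        TryUpdateModelFromXaml(); // parse the XAML; on failure show the error
    };
}

// Wired to the text editor's TextChanged event
private void OnXamlTextChanged()
{
    updateTimer.Stop();  // restart the delay on every key press
    updateTimer.Start();
}
```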
I am using the AvalonEdit text editor to get some nice colors:
The basic idea was to integrate the modules into another application, but if you just want a workflow designer, you can use the sample implementation I've taken the screenshots from. Basically, all the application does is run a very straightforward PrismBootstrapper that shows the MainWindow (where the Prism regions are defined). You can extend the application by registering your own IWorkflowType in the UnityContainer (using the App.config):
<types>
<!--...-->
<!--this is the example workflow type-->
<type name="Selen.ActivityWorkflowType"
type="Selen.WorkflowDesigner.Contracts.IWorkflowType, Selen.WorkflowDesigner.Contracts"
mapTo="Selen.ActivityWorkflowType.VoidActivityType, Selen.ActivityWorkflowType">
<lifetime type="singleton"/>
</type>
<!--Put your own workflow type here-->
</types>
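On the bootstrapper side, all registered workflow types can then be picked up from the container. Roughly like this, assuming the Unity configuration section is named "unity" (API names as in Unity 1.x/2.x — treat this as a sketch, not a drop-in):

```csharp
// Load the type mappings from App.config and resolve every registered
// IWorkflowType. ResolveAll returns the named registrations, such as
// "Selen.ActivityWorkflowType" in the config above.
var container = new UnityContainer();
var section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
section.Configure(container);

IEnumerable<IWorkflowType> workflowTypes = container.ResolveAll<IWorkflowType>();
```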
Now let's take the last step and see how you can implement such a workflow type. For several reasons, I take the ActivityBuilder as an example:
For those who may not know the difference between ActivityBuilder and DynamicActivity: the ActivityBuilder class can be used as the root in the workflow designer. When we generate the XAML from it, we get it in the form "<Activity …" (just like with Visual Studio). But when we want to execute the activity or use it in a workflow (or in another activity), we have to convert it to a DynamicActivity (see MSDN for more details). This all works fine until the moment you try to save the workflow, as that is not possible with a DynamicActivity in it. There must be a solution, but I wasn't able to compile the "<Activity …" XAML to a real type, so I had to deal with the DynamicActivity. To solve the saving problem, I created another class that behaves exactly like a DynamicActivity, except that it can be saved.
My so-called PlaceholderActivity exposes exactly the same properties as the DynamicActivity; I just added the following attribute to each of them:
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
By doing that, I tell the WF runtime not to serialize these properties to XAML, because this would result in an exception. The only way to save a DynamicActivity to XAML is to convert it to an ActivityBuilder, and that is exactly what I do. I implemented a string property that is serialized to XAML, and in the getter of that property I create a new ActivityBuilder and copy all properties of the PlaceholderActivity to it (the ActivityBuilder has the same properties as the DynamicActivity, and thus the same as the PlaceholderActivity). After that I can serialize the ActivityBuilder and return the resulting XAML, which contains all information about my PlaceholderActivity. While deserializing the workflow, the WF runtime sets the XAML property, and I can load a DynamicActivity from it and again copy the properties.
public PlaceholderActivity(DynamicActivity dynamicActivity)
: this()
{
this.ApplyDynamicActivity(dynamicActivity);
}
public PlaceholderActivity()
: base()
{
this.typeDescriptor = new PlaceholderActivityTypeDescriptor(this);
}
private void ApplyDynamicActivity(DynamicActivity dynamicActivity)
{
this.DisplayName = dynamicActivity.Name;
foreach (var item in dynamicActivity.Attributes)
{
this.Attributes.Add(item);
}
foreach (var item in dynamicActivity.Constraints)
{
this.Constraints.Add(item);
}
this.Implementation = dynamicActivity.Implementation;
this.Name = dynamicActivity.Name;
foreach (var item in dynamicActivity.Properties)
{
this.Properties.Add(item);
}
}
[Browsable(false)]
public string XAML
{
get
{
var activityBuilder = new ActivityBuilder();
foreach (var item in this.Attributes)
{
activityBuilder.Attributes.Add(item);
}
foreach (var item in this.Constraints)
{
activityBuilder.Constraints.Add(item);
}
activityBuilder.Implementation = this.Implementation != null ? this.Implementation() : null;
activityBuilder.Name = this.Name;
foreach (var item in this.Properties)
{
activityBuilder.Properties.Add(item);
}
var sb = new StringBuilder();
var xamlWriter = ActivityXamlServices.CreateBuilderWriter(
new XamlXmlWriter(new StringWriter(sb), new XamlSchemaContext()));
XamlServices.Save(xamlWriter, activityBuilder);
return sb.ToString();
}
set
{
this.ApplyDynamicActivity(ActivityXamlServices.Load(new StringReader(value)) as DynamicActivity);
}
}
The implementation of the two data contracts (IDesignerModel and IWorkflowDescription) is very straightforward:
public class WorkflowDescription : IWorkflowDescription
{
public WorkflowDescription(object key)
{
if (key == null) throw new ArgumentNullException("key");
this.Key = key;
}
public string WorkflowName { get; set; }
public string Description { get; set; }
public object Key { get; private set; }
}
public class ActivityEntity
{
[Category("General")]
[Description("Activity description")]
public string Description { get; set; }
}
public class DesignerModel : IDesignerModel
{
private string originalKey;
public DesignerModel(object rootActivity, ActivityEntity activityEntity)
{
if (activityEntity == null) throw new ArgumentNullException("activityEntity");
this.RootActivity = rootActivity;
this.ActivityEntity = activityEntity;
this.ApplyKey();
}
public object Key { get { return this.RootActivity != null ? (this.RootActivity as ActivityBuilder).Name : null; } }
public bool HasKeyChanged
{
get
{
return (string)this.Key != originalKey;
}
}
public object RootActivity { get; set; }
public object PropertyObject { get { return this.ActivityEntity; } }
public ActivityEntity ActivityEntity { get; private set; }
public void ApplyKey()
{
this.originalKey = (string)this.Key;
}
}
Note that we can use the name of the ActivityBuilder as the key for our workflow, but we could also define the key in the additional data (the ActivityEntity class in this example). I have used the additional data to provide a way to edit the description of the workflow. We also have to implement a property indicating whether the key has changed since the last save of the workflow (we know this because ApplyKey is called every time the workflow is saved). My infrastructure needs this information to keep the user from accidentally overwriting workflows with the same key.
Because we want to show our custom activities in the toolbox later, we need a type we can pass to the ToolboxItemWrapper. As there is no way to pass parameters, we must create a type for each of the activities. I decided to do this via an IActivityTemplateFactory, because we don't have real compiled activity types (just the DynamicActivity). So I compile an assembly for each activity with only one type in it, which implements IActivityTemplateFactory and defines the following private fields:
private readonly string xaml = @"{0}";
private readonly string description = @"{1}";
private readonly string activityName = @"{2}";
The placeholders stand for the hard-coded values of the workflow data. The private fields are exposed by public properties to make them readable for the StorageAdapter. As we can implement the Create method of the IActivityTemplateFactory, we can have a PlaceholderActivity inserted into the workflow when the user drags the toolbox item onto the designer surface.
public Activity Create(DependencyObject target)
{
DynamicActivity dynamicActivity = ActivityXamlServices.Load(new StringReader(this.xaml)) as DynamicActivity;
return new PlaceholderActivity(dynamicActivity);
}
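The per-activity assembly the article mentions can be produced with CodeDOM. The following is a sketch, not the article's code: 'source' stands for the factory class source with the three placeholders already substituted, and the reference list is my guess at the minimum needed.

```csharp
// Sketch: compile the generated factory source in memory with CodeDOM.
public Assembly CompileFactory(string source)
{
    var parameters = new CompilerParameters
    {
        GenerateInMemory = true
    };
    parameters.ReferencedAssemblies.Add("System.dll");
    parameters.ReferencedAssemblies.Add("System.Activities.dll");
    parameters.ReferencedAssemblies.Add("System.Activities.Presentation.dll");
    parameters.ReferencedAssemblies.Add("WindowsBase.dll"); // DependencyObject

    using (var provider = new CSharpCodeProvider())
    {
        CompilerResults results =
            provider.CompileAssemblyFromSource(parameters, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Factory compilation failed");
        return results.CompiledAssembly;
    }
}
```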
All that's left for the StorageAdapter to do is to compile the assemblies containing the IActivityTemplateFactories and load them. If the StorageAdapter encounters a load error caused by invalid XAML, a LoadWorkflowException has to be thrown, including the invalid XAML and the additional data:
if (workflow == null)
{
throw new LoadWorkflowException() { XAMLDefinition = instance.XAML, DesignerModel = new DesignerModel(null, new ActivityEntity() { Description = instance.Description }) };
}
After catching this exception, my infrastructure calls the ReloadError method of the ILoadErrorDesignerViewModel and thus displays the invalid XAML in the XAML view along with an error text.
If you want to test the custom activities that you created with my Rehosted Workflow Designer, you can use my Test Host console application (only included in the source download), which automatically loads the activity DLLs from the designer exe. To test an activity, follow these steps:
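Stripped to its core, such a test host just loads the saved workflow XAML and executes it; something along these lines ("MyActivity.xaml" is an example path, and input arguments and error handling are omitted):

```csharp
// Minimal test-host core: load a workflow XAML file and run it,
// then print any output arguments it produced.
Activity activity = ActivityXamlServices.Load("MyActivity.xaml");
IDictionary<string, object> outputs =
    WorkflowInvoker.Invoke(activity, new Dictionary<string, object>());

foreach (var pair in outputs)
    Console.WriteLine("{0} = {1}", pair.Key, pair.Value);
```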
Here is a screenshot of how the TestHost could look afterwards.
15 September 2008 13:44 [Source: ICIS news]
LONDON (ICIS news)--Ciba Specialty Chemicals has agreed to buy the photoinitiator business of specialty chemicals producer Lamberti Group for an undisclosed sum, the company said on Monday.
“Lamberti’s photoinitiator business is attractive because of its excellent fit with our current offer and its valuable intellectual property and expertise,” said Ciba CEO Brendan Cummins.
“This move will strengthen ultraviolet (UV) curing as a growth driver in our Coating Effects business.”
Photoinitiators are used in the UV curing of surfaces in industrial coatings, inks, adhesives and electronics for end-products such as food packaging.
On absorption of light, they catalyse chemical reactions that result in significant changes in the solubility and physical properties of the surface and help speed polymerisation of the final cross-linked product.
Regulatory approval was pending, but the two companies intend to finalise the deal as soon as possible. The agreement comes as Ciba's board was urging its shareholders to accept a Swfr6.1bn ($5.5bn/€3.9bn) takeover bid from German chemicals group BASF.
($1 = Swfr1.11, €0.70/€1 = Swfr1
IRC log of html-wg on 2007-11-16
Timestamps are in UTC.
00:03:50 [mjs]
mjs has joined #html-wg
00:07:13 [shepazu]
shepazu has joined #html-wg
00:08:53 [mjs]
mjs has joined #html-wg
00:10:56 [MikeSmith]
MikeSmith has joined #html-wg
00:11:29 [mjs]
mjs has joined #html-wg
00:37:24 [andreas]
andreas has joined #html-wg
00:37:48 [Marcos_]
Marcos_ has joined #html-wg
00:46:09 [Marcos__]
Marcos__ has joined #html-wg
00:48:03 [MikeSmith]
Marcos__ - buenos días
00:48:28 [Marcos__]
heya mikesmith :)
00:49:20 [shepazu]
shepazu has joined #html-wg
00:52:57 [MikeSmith]
Marcos__ - andreas works for Opera here in Tokyo
00:54:23 [Marcos__]
mikesmith, how's the workshop going? what are you discussing?
00:55:43 [MikeSmith]
Marcos__ - discussing something bout modalities
00:55:46 [MikeSmith]
multiple ones
00:55:52 [MikeSmith]
that's about as much as I know
00:56:08 [Marcos__]
hehe
00:56:12 [MikeSmith]
only 2.5 hours of sleep last night
00:56:22 [MikeSmith]
so I'm a little slow on the uptake at this point
00:56:54 [Marcos__]
fair enough
00:57:20 [Marcos__]
at least you didnt show up drunk :)
00:57:30 [Marcos__]
...or did you? :D
00:57:59 [MikeSmith]
not drunk -- just slightly buzzed
00:58:45 [gavin]
gavin has joined #html-wg
01:05:58 [timbl]
timbl has joined #html-wg
01:06:52 [mjs]
mjs has joined #html-wg
01:18:05 [aaronlev]
aaronlev has joined #html-wg
02:13:27 [sbuluf]
sbuluf has joined #html-wg
02:26:50 [JanC]
JanC has joined #html-wg
03:05:53 [gavin]
gavin has joined #html-wg
04:25:44 [mjs]
mjs has joined #html-wg
05:13:21 [gavin]
gavin has joined #html-wg
05:17:30 [gavin_]
gavin_ has joined #html-wg
05:42:02 [Lionheart]
Lionheart has joined #html-wg
06:15:18 [Thezilch]
Thezilch has joined #html-wg
06:24:47 [andreas]
andreas has joined #html-wg
06:28:47 [MikeSmith]
MikeSmith has joined #html-wg
06:59:21 [xover]
xover has joined #html-wg
07:02:43 [Lionheart]
Lionheart has joined #html-wg
07:03:57 [shepazu]
shepazu has joined #html-wg
07:20:30 [gavin]
gavin has joined #html-wg
07:30:41 [Lionheart]
Lionheart has joined #html-wg
08:33:27 [tH_]
tH_ has joined #html-wg
08:45:22 [Lachy]
Lachy has joined #html-wg
09:01:20 [Julian]
Julian has joined #html-wg
09:15:36 [Lionheart]
Lionheart has joined #html-wg
09:16:50 [tH_]
tH_ has joined #html-wg
09:28:09 [ROBOd]
ROBOd has joined #html-wg
09:28:16 [gavin]
gavin has joined #html-wg
09:43:08 [zcorpan]
zcorpan has joined #html-wg
09:46:35 [Lachy]
Lachy has joined #html-wg
09:48:58 [Lachy]
Lachy has joined #html-wg
09:51:59 [jgraham]
jgraham has joined #html-wg
09:53:24 [Lionheart]
Lionheart has joined #html-wg
09:53:44 [Lionheart]
Lionheart has left #html-wg
09:54:22 [Lachy]
Lachy has joined #html-wg
10:13:54 [Lionheart]
Lionheart has joined #html-wg
10:14:07 [Lionheart]
Lionheart has left #html-wg
10:15:30 [marcospod]
marcospod has joined #html-wg
10:34:01 [Sander]
Sander has joined #html-wg
10:41:05 [marcospod]
marcospod has joined #html-wg
11:00:48 [marcospod]
marcospod has joined #html-wg
11:14:00 [myakura]
myakura has joined #html-wg
11:29:47 [aaronlev]
aaronlev has joined #html-wg
11:35:59 [gavin]
gavin has joined #html-wg
11:45:34 [timbl]
timbl has joined #html-wg
13:07:02 [smedero]
smedero has joined #html-wg
13:10:05 [marcospod]
marcospod has joined #html-wg
13:25:52 [aaronlev]
DanC: i can make the html wg meeting today but will miss the first part
13:43:31 [gavin]
gavin has joined #html-wg
13:45:19 [DanC]
ok; when do you think you can join?
14:08:17 [Lachy]
Lachy has joined #html-wg
14:08:55 [DanC]
hmm... where's mikesmith? I'd like 2 more topic anchors before the aria thing in
14:09:24 [Lachy]
Yay! HDP is being published :-)
14:10:16 [Lachy]
DanC, any idea when the spec will be published?
14:10:44 [anne]
I made the specification ready for publication yesterday
14:13:48 [shepazu]
shepazu has joined #html-wg
14:23:10 [anne]
DanC, any updates on the other drafts, which had pretty much the same feedback through the survey?
14:25:46 [DanC]
working on it
14:34:35 [torus]
torus has joined #html-wg
14:35:03 [torus]
torus has left #html-wg
14:39:09 [Lachy]
Lachy has joined #html-wg
14:50:27 [timbl]
timbl has joined #html-wg
15:08:08 [Julian]
Julian has joined #html-wg
15:13:46 [smedero]
smedero has joined #html-wg
15:26:52 [Lachy]
Lachy has joined #html-wg
15:39:44 [billmason]
billmason has joined #html-wg
15:43:13 [aaronlev]
DanC: can you delay the aria discussion until 15-20 minutes after the hour?
15:44:19 [Sander]
Sander has joined #html-wg
15:45:20 [DanC]
I think so; actually, I forgot to put aria on the agenda.
15:45:37 [DanC]
is Hixie around to eyeball this canvas question?
15:51:18 [gavin]
gavin has joined #html-wg
15:54:36 [DanC]
can anybody else eyeball it? anne? mjs?
15:59:45 [Philip]
Is it relevant that canvas specification is mostly about documenting existing practice, since it's already implemented in most major browsers, and is not about developing a new feature?
16:14:04 [DanC]
I thought I captured a sense of that
16:14:51 [DanC]
"Do use cases such as games, shared whiteboards, and yahoo pipes and others in the ESW wiki motivate a requirement that HTML 5 provide an immediate mode graphics API and canvas element?"
16:15:04 [DanC]
oh... I could cite the relevant design principle.
16:15:25 [DanC]
I'm inclined to leave it up to wiki-elves to do that. I think the WBS question is clear enough.
16:26:11 [Zakim]
Zakim has joined #html-wg
16:26:15 [DanC]
Zakim, this will be html
16:26:15 [Zakim]
ok, DanC; I see HTML_WG()12:00PM scheduled to start in 34 minutes
16:26:42 [DanC]
agenda + Convene HTML WG meeting of 2007-11-16T17:00:00Z
16:27:37 [DanC]
HTML WG teleconference 2007-11-16T17:00:00Z
(logs:
)
16:27:44 [DanC]
DanC has changed the topic to: HTML WG teleconference 2007-11-16T17:00:00Z
(logs:
)
16:27:58 [DanC]
agenda + ISSUE-18 html-design-principles HTML Design Principles
16:28:07 [DanC]
agenda + ISSUE-19 html5-spec HTML 5 specification release(s)
16:28:16 [DanC]
agenda + ISSUE-15 immediate-mode-graphics requirement for Immediate Mode Graphics and canvas element
16:28:23 [DanC]
agenda + # ISSUE-14 aria-role Integration of WAI-ARIA roles into HTML5
16:28:31 [DanC]
agenda 5 = ISSUE-14 aria-role Integration of WAI-ARIA roles into HTML5
16:28:43 [DanC]
agenda + ISSUE-16 offline-applications-sql offline applications and data synchronization
16:28:56 [DanC]
agenda + face-to-face meeting 8-10 November, review
16:36:04 [gsnedders]
DanC: it looks fine to me, but I'd make it explicit that it is just defining current behaviour, and being compat with current content
16:38:46 [oedipus]
oedipus has joined #html-wg
16:40:08 [DanC]
this question is not a judgement on the details of the design; just the requirement
16:40:24 [DanC]
thanks for the quick feedback, in any case
16:40:44 [DanC]
yes, I announced it when it was clear that you understood the question
16:42:39 [DanC]
er... where's MikeSmith?
16:54:38 [ChrisWilson]
ChrisWilson has joined #html-wg
16:59:36 [Zakim]
HTML_WG()12:00PM has now started
16:59:37 [Zakim]
+JulianR
17:00:04 [Lachy]
Lachy has joined #html-wg
17:00:52 [Zakim]
+Gregory_Rosmaita
17:00:54 [Zakim]
-Gregory_Rosmaita
17:00:55 [Zakim]
+Gregory_Rosmaita
17:01:18 [DanC]
RRSAgent, pointer?
17:01:18 [RRSAgent]
See
17:01:25 [DanC]
Zakim, take up item 1
17:01:25 [Zakim]
agendum 1. "Convene HTML WG meeting of 2007-11-16T17:00:00Z" taken up [from DanC]
17:01:35 [Zakim]
+DanC
17:01:35 [Zakim]
+??P9
17:02:46 [rubys]
rubys has joined #html-wg
17:03:06 [oedipus]
scribe: Gregory_Rosmaita
17:03:11 [oedipus]
scribenick: oedipus
17:03:18 [DanC]
Meeting: HTML WG Weekly
17:03:43 [DanC]
Zakim, agenda?
17:03:43 [Zakim]
I see 7 items remaining on the agenda:
17:03:44 [Zakim]
1. Convene HTML WG meeting of 2007-11-16T17:00:00Z [from DanC]
17:03:47 [Zakim]
2. ISSUE-18 html-design-principles HTML Design Principles [from DanC]
17:03:49 [Zakim]
3. ISSUE-19 html5-spec HTML 5 specification release(s) [from DanC]
17:03:51 [Zakim]
4. ISSUE-15 immediate-mode-graphics requirement for Immediate Mode Graphics and canvas element [from DanC]
17:03:53 [Zakim]
5. ISSUE-14 aria-role Integration of WAI-ARIA roles into HTML5
17:03:55 [Zakim]
6. ISSUE-16 offline-applications-sql offline applications and data synchronization [from DanC]
17:03:58 [Zakim]
7. face-to-face meeting 8-10 November, review [from DanC]
17:04:30 [Zakim]
+Sam
17:05:24 [DanC]
Zakim, passcode?
17:05:24 [Zakim]
the conference code is 4865 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), DanC
17:06:16 [Zakim]
+[Microsoft]
17:06:25 [ChrisWilson]
Zakim, Microsoft is me
17:06:25 [Zakim]
+ChrisWilson; got it
17:06:59 [aaronlev]
aaronlev has joined #html-wg
17:07:12 [anne]
anne has joined #html-wg
17:07:27 [oedipus]
GJR: to coordinate some IRC time to discuss HTML5 stylesheet issues with the editors/interested parties -- a limited color pallatte using named colors needs some negotiation (and some eyeballs) and i'm still testing actual support for CSS generated text using :before and :after
17:07:32 [DanC]
22 Nov telcon cancelled
17:07:45 [oedipus]
CW: skip next week's meeting -- next meeting 29 November 2007 at 1700z
17:08:02 [DanC]
Zakim, next item
17:08:02 [Zakim]
agendum 2. "ISSUE-18 html-design-principles HTML Design Principles" taken up [from DanC]
17:08:13 [anne]
Zakim, who is here?
17:08:13 [Zakim]
On the phone I see JulianR, Gregory_Rosmaita, hsivonen, DanC, Sam, ChrisWilson
17:08:13 [oedipus]
TOPIC: HTML Design Principles
17:08:15 [Zakim]
On IRC I see anne, aaronlev, rubys, Lachy, ChrisWilson, oedipus, Zakim, gavin, Sander, billmason, smedero, Julian, timbl, shepazu, marcospod, myakura, zcorpan, tH, xover, Thezilch,
17:08:20 [Zakim]
... gavin_, mjs, JanC, marcos, jmb, heycam, gsnedders, paullewis, DanC, Shunsuke, Hixie, Dashiva, Philip, drry, Bert, laplink, bogi, jane, krijnh, deltab, beowulf, hsivonen,
17:08:22 [anne]
Zakim, passcode?
17:08:23 [Zakim]
... trackbot-ng, Bob_le_Pointu, RRSAgent
17:08:24 [Zakim]
the conference code is 4865 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), anne
17:08:34 [DanC]
HTML Design Principles
17:08:39 [oedipus]
DanC: migrated issues to issue tracker -- issue 18
17:08:52 [Zakim]
+??P13
17:08:55 [oedipus]
DanC: completed action to email negative responders
17:09:08 [Zakim]
+anne
17:09:19 [DanC]
Zakim, ??P13 is aaronlev
17:09:19 [Zakim]
+aaronlev; got it
17:09:24 [oedipus]
DanC: Mike(tm)Smith still needs to compile minutes from saturday's HTML f2f session
17:09:33 [oedipus]
DanC: mjs Action 20 completed
17:09:45 [oedipus]
DanC: explores for a "comments" mailing list
17:10:13 [DanC]
17:10:32 [oedipus]
DanC: feedback on HDP should be sent to public-html-comments@w3.org
17:10:54 [oedipus]
s/feedback/outside feedback/
17:11:01 [DanC]
Zakim, who's on the phone?
17:11:01 [Zakim]
On the phone I see JulianR, Gregory_Rosmaita, hsivonen, DanC, Sam, ChrisWilson, aaronlev, anne
17:11:13 [DanC]
Zakim, next item
17:11:13 [Zakim]
agendum 3. "ISSUE-19 html5-spec HTML 5 specification release(s)" taken up [from DanC]
17:11:28 [DanC]
17:11:30 [oedipus]
TOPIC: HTML5 Specification Draft Release
17:11:51 [oedipus]
DanC: had conversation with PTaylor about formal objection - action done
17:12:09 [oedipus]
DanC: completed action to email negative and non-responders - done
17:12:44 [oedipus]
Chairs have said the question does not carry -- WG will keep working on spec
17:12:58 [DanC]
DanC found out non-responders are not ok to publish
17:13:08 [oedipus]
Anne: graphics API a problem?
17:13:34 [oedipus]
DanC: publication starts the clock on W3C process
17:14:02 [oedipus]
Anne?/Henri?: deadline? make something available?
17:14:49 [hsivonen]
s/Anne\?\/Henri\?/Anne/
17:14:50 [oedipus]
DanC: like those who responded no to releasing draft to explain comments on questions; question may need to be refined
17:15:03 [DanC]
Zakim, next item
17:15:03 [Zakim]
agendum 4. "ISSUE-15 immediate-mode-graphics requirement for Immediate Mode Graphics and canvas element" taken up [from DanC]
17:15:13 [oedipus]
TOPIC: ISSUE 15 Immediate Mode Graphics
17:15:45 [oedipus]
DanC: ChrisW get info from MS (10 december deadline); DanC put question to WG
17:15:51 [DanC]
Zakim, who's on the phone?
17:15:52 [Zakim]
On the phone I see JulianR, Gregory_Rosmaita, hsivonen, DanC, Sam, ChrisWilson, aaronlev, anne
17:16:22 [jgraham]
jgraham has joined #html-wg
17:16:35 [oedipus]
Anne: how does modification of activity affect charter? results clear -- if say "yes" then might be changed
17:17:16 [oedipus]
DanC: question is "who is everyone" -- question to HTML WG and question to W3C; feedback at TPAC was that this issue is in scope; not critical path for denying discussions
17:17:29 [oedipus]
Anne: membership ok with it, can we carry on as usual?
17:17:40 [oedipus]
DanC: presuming all goes well, continue on in parallell
17:18:37 [oedipus]
JulianR: can answer in week
17:19:07 [oedipus]
Henri: can answer in week; question posed isn't what i want answered -- 3 of top 4 already implementing
17:19:15 [oedipus]
ChrisW: questions 3 out of 4
17:19:29 [ROBOd]
ROBOd has joined #html-wg
17:19:32 [oedipus]
DanC: a lot of people have made up their mind, but the question still has to be fielded
17:20:03 [DanC]
q+ to suggest a survey with some options in parallel
17:20:26 [oedipus]
ChrisW: in scope of WebAPI or not? 3 of 4 implemented shouldn't make issue one for HTML WG -- question whether covered by charter or patent policy; some implementors don't believe charter needs to be implemented, but that is my gut feeling
17:21:11 [oedipus]
DanC: considering doing an informal survey in parallel with formal survey; CANVAS tag in HTML WG and CANVAS tag in HTML WG or other WG? if formal question doesn't carry, still gaining info
17:21:30 [ChrisWilson]
My point is that 3 out of 4 implementers implementing means this IS in scope of "the platform"; the question, to my mind, is whether this is covered by our charter and therefore covered by the patent policy.
17:22:00 [oedipus]
DanC: been suggested that html5 spec should have CANVAS in it and cite document with graphics API -- question of whether HTML WG develops document or another WG develops document
17:22:11 [ChrisWilson]
The goal in creating a W3C WG with a patent policy is to explicitly lay out what that WG is going to do, so companies getting involved in the WG know what IP they may be offering up.
17:22:15 [oedipus]
Anne: rather keep it in HTML WG; willing to answer survey
17:22:16 [ChrisWilson]
Charters cannot be open-ended.
17:22:28 [Lachy]
isn't everything in the spec covered by the patent policy, regardless of whether it's explicitly in the charter?
17:22:58 [oedipus]
JulianR: spec already too complex -- need to seriously discuss way to take things out and harmonize with existing specs
17:23:26 [DanC]
agenda + outcome of HTML for authors session
17:23:56 [oedipus]
GJR: spec too complex, but can answer any survey
17:24:03 [ChrisWilson]
Lachy, everything in the spec IS covered by the patent policy. Joining a working group cannot be opening a company's entire patent portfolio in a free-for-all, or those with large patent portfolios would be foolish to participate at all - weakening the point of having a patent policy.
17:24:44 [oedipus]
Henri: formal survey first, then consider steps to separate API portions of spec; question of whether anything should be taken out of spec dependent upon who is going to edit that portion of spec -- do we have expertise?
17:25:04 [oedipus]
SamR: can't answer within week; support informal survey; do have charter concerns
17:25:13 [Lachy]
so the real question is, does Microsoft have patents that they do not want to give up, but which they would be forced to if canvas were included?
17:25:15 [oedipus]
ChrisW: yes, can answer question
17:25:22 [oedipus]
AaronL: not likely to have opinion now
17:25:29 [rubys]
oedipus: I said I CAN answer within a week
17:25:29 [DanC]
trackbot-ng,
17:25:32 [DanC]
trackbot-ng, status
17:25:45 [ChrisWilson]
Lachy, without having a charter that scopes the WG's specifications, I can't know the answer to that question.
17:25:47 [DanC]
ACTION: Dan consider informal survey on canvas tactics
17:25:47 [trackbot-ng]
Created ACTION-21 - Consider informal survey on canvas tactics [on Dan Connolly - due 2007-11-23].
17:25:52 [oedipus]
SCRIBE'S NOTE: Sam Ruby CAN answer within a week
17:26:09 [DanC]
Zakim, next item
17:26:09 [Zakim]
I see a speaker queue remaining and respectfully decline to close this agendum, DanC
17:26:14 [DanC]
ack danc
17:26:14 [Zakim]
DanC, you wanted to suggest a survey with some options in parallel
17:26:17 [DanC]
Zakim, next item
17:26:17 [Zakim]
agendum 5. "ISSUE-14 aria-role Integration of WAI-ARIA roles into HTML5" taken up
17:26:18 [ChrisWilson]
With the charter we have now, our legal staff did not investigate our graphics patents.
17:26:29 [oedipus]
TOPIC: ISSUE 14 ARIA Role Integration
17:26:33 [DanC]
17:26:58 [oedipus]
DanC: action on URI extensibility -
17:27:11 [DanC]
some progress:
17:27:11 [oedipus]
AaronL: hard time figuring out what was being proposed
17:27:22 [oedipus]
DanC: specific questions?
17:27:24 [Lachy]
ChrisWilson, didn't the legal department look at the existing whatwg spec, so that they would have a better idea of what to look for, rather than relying on the vague charter?
17:27:42 [DanC]
17:27:50 [oedipus]
AaronL: page to other links, couldn't ascertain what was DanC's contribution
17:27:56 [ChrisWilson]
Lachy, the WHATWG spec is not our charter.
17:27:59 [hsivonen]
q+ to talk about Norm Walsh's blog post
17:28:03 [aroben]
aroben has joined #html-wg
17:28:12 [ChrisWilson]
Nor has the WHATWG spec been stable in that time frame.
17:28:14 [rubys]
concrete charters tend to trump draft specs
17:28:18 [oedipus]
AaronL: summary, please?
17:28:18 [ChrisWilson]
(i.e. not added features)
17:28:35 [oedipus]
DanC: let WG members read at leisure; may do more work on page to make clearer
17:28:37 [DanC]
ack hsivonen
17:28:37 [Zakim]
hsivonen, you wanted to talk about Norm Walsh's blog post
17:29:06 [ArneJ]
ArneJ has joined #html-wg
17:29:23 [oedipus]
Henri: NormW's post suggests implicit namespaces in parser; considering constraints of aria- proposal don't think what NormW wrote satisfies requirements; can't do something to make DOM APIs act differently
17:29:27 [oedipus]
DanC: can if want to
17:30:14 [oedipus]
Henri: then introduce discrepancy in DOM scripting; changes way XML is parsed to infer namespaces from content-type deeper change than previously proposed; want not to affect DOM API scripting -- examine RDF
17:30:26 [oedipus]
DanC: couple of steps ahead of me -- good feedback
17:30:39 [oedipus]
AaronL: will speak with Henri offline
17:30:57 [oedipus]
DanC: cost to changing APIs -- still thinking through
17:31:46 [oedipus]
scribe's note: DanC and MichaelC's actions continued
17:31:58 [oedipus]
DanC: next telecon not until 2 weeks
17:32:39 [oedipus]
AaronL: clarity always welcome; asked Doug Schepers and Bill for date by which they will decide aria- ; told me tied to other issues and gave no date
17:32:46 [DanC]
q+ to ask for a test pointer
17:32:59 [hsivonen]
s/examine RDF/could define a URI mapping for apps that need it for GRDDL to RDF mapping without affecting the DOM/
17:33:06 [oedipus]
GJR: PF yesterday discussed what next steps can take to further discussion
17:33:39 [DanC]
good tests?
17:34:16 [DanC]
<div aria="something">
17:34:51 [DanC]
17:34:54 [anne]
s/something/checkbox/
17:35:04 [oedipus]
AaronL: SVG does not want to change "role" to "aria"
17:35:14 [aaronlev]
17:35:24 [anne]
17:35:34 [oedipus]
17:35:36 [hsivonen]
DanC, it is about <div role='checkbox'> or <div aria='checkbox'>
17:35:37 [aaronlev]
s/SVG does/I do
17:36:05 [oedipus]
GJR: comparative tests needed?
17:36:07 [oedipus]
DanC: yes
17:36:21 [hober]
hober has joined #html-wg
17:36:26 [oedipus]
GJR: will communicate back to PF
17:36:52 [oedipus]
AaronL: like tests with role="checkbox"
17:37:11 [DanC]
DanC: thanks; I'll study
and
17:37:18 [oedipus]
UIUC ARIA Tests:
17:37:42 [DanC]
s/SVG does not want/I do not want/
17:37:45 [oedipus]
AaronL: clarifies -- not SVG WG, but my impression of what SVG is saying
17:37:58 [DanC]
aaronlev: I don't recommend the UIUC tests
17:38:17 [hsivonen]
DanC, did you mean test cases or proposed syntax examples?
17:38:55 [oedipus]
GJR: need comparative tests of single concept using diff markup proposals
17:38:57 [DanC]
I tend to call them tests; sorry if that's confusing
17:39:20 [oedipus]
AaronL: don't think there is controversy save for attribute name "role" and "aria"
17:39:26 [oedipus]
DanC: would like comparative tests
17:39:50 [Hixie]
DanC: i can't answer the canvas question. I strongly feel that a canvas API is already in scope, and I strongly object to reopening the charter rathole. But the question asks whether I think it is in scope and says that a "yes" answer reopens the rathole.
17:39:57 [oedipus]
ACTION GJR: coordinate comparative tests using competing ARIA proposals
17:41:02 [DanC]
ack danc
17:41:02 [Zakim]
DanC, you wanted to ask for a test pointer
17:41:10 [oedipus]
DanC: AlG promised that test materials used at HTML f2f would be given stable URIs
17:41:10 [DanC]
Zakim, next item
17:41:10 [Zakim]
agendum 6. "ISSUE-16 offline-applications-sql offline applications and data synchronization" taken up [from DanC]
17:41:24 [DanC]
17:41:30 [oedipus]
GJR: will follow up with PF test suite builders/maintainers
17:41:39 [DanC]
Editor's Draft 11 November 2007
17:41:45 [oedipus]
Anne action completed with editor's draft of 11 november
17:41:54 [oedipus]
ChrisW: not seen yet
17:42:18 [oedipus]
DanC: good to have the document ready; like a few more keywords in abstract: caching, SQL
17:42:26 [oedipus]
Anne: can add -- pretty clear, i think
17:42:50 [oedipus]
DanC: suggests using ToC to populate abstract
17:43:30 [oedipus]
SamR: plan to review
17:43:38 [oedipus]
DanC: page and a half
17:43:46 [oedipus]
SamR: will review this weekend
17:44:34 [oedipus]
ACTION: SamRuby review offline-webapps by monday, 19 november 2007
17:44:34 [trackbot-ng]
Sorry, couldn't find user - SamRuby
17:45:04 [DanC]
trackbot-ng, status
17:45:24 [oedipus]
ACTION: Julian review offline-webapps by monday, 19 november 2007
17:45:24 [trackbot-ng]
Created ACTION-22 - Review offline-webapps by monday, 19 november 2007 [on Julian Reschke - due 2007-11-23].
17:45:50 [oedipus]
ChrisW: don't have an opinion; from another perspective, offline and SQL not in charter
17:46:12 [oedipus]
DanC: publication a natural way to start conversation; Anne, thinking of note or working draft?
17:46:16 [oedipus]
Anne: note
17:46:31 [oedipus]
DanC: inclined to publish in a few weeks
17:46:36 [oedipus]
Anne: reasonable
17:46:49 [DanC]
Zakim, next item
17:46:49 [Zakim]
agendum 7. "face-to-face meeting 8-10 November, review" taken up [from DanC]
17:46:59 [DanC]
Zakim, take up item 8
17:46:59 [Zakim]
agendum 8. "outcome of HTML for authors session" taken up [from DanC]
17:47:11 [oedipus]
TOPIC: Outcome of HTML for Authors' Session
17:47:15 [oedipus]
DanC: record of session?
17:47:53 [hsivonen]
17:47:56 [mjs]
mjs has left #html-wg
17:48:14 [mjs]
mjs has joined #html-wg
17:48:22 [mjs]
our charter does in fact contain "Data storage APIs"
17:48:26 [oedipus]
DanC: is it an accurate/reasonable catch of what transpired?
17:48:56 [ChrisWilson]
..."if the WebAPI WG fails to deliver."
17:49:05 [oedipus]
DanC: 2 actions noted in minutes
17:49:21 [DanC]
1 is a dup
17:49:38 [DanC]
ah... it's ACTION-5 by tracker
17:49:42 [Hixie]
the webapi wg has failed to deliver their own deliverables, let alone ours
17:49:52 [oedipus]
Henri: not sure if reached some kind of agreement; no consensus on best practices --
17:50:12 [ChrisWilson]
Have they stated that to the W3C staff?
17:50:14 [oedipus]
DanC: read not a call to create task force, but a proposal via email from KarlD
17:50:20 [Hixie]
ChrisWilson: yes
17:50:30 [ChrisWilson]
Can you send a pointer?
17:50:49 [oedipus]
DanC: moves to adjourn
17:50:56 [oedipus]
scribe's note: NO dissent
17:51:10 [oedipus]
Henri: plan on staying around to check records
17:51:19 [oedipus]
ChrisW: seconds motion to adjourn
17:51:19 [DanC]
ADJOURN.
17:51:20 [hsivonen]
not to stay around
17:51:25 [Zakim]
-JulianR
17:51:31 [Zakim]
-hsivonen
17:51:33 [Zakim]
-aaronlev
17:51:40 [oedipus]
s/staying around/not staying around
17:51:52 [Zakim]
-Sam
17:51:54 [oedipus]
SamR: please don't add to issue tracking just yet -- shortly
17:52:23 [Zakim]
-ChrisWilson
17:52:29 [Hixie]
ChrisWilson, look at any status e-mail in hcg
17:53:03 [anne]
Yeah, it's pretty clear that the Web API WG has not enough volunteers to edit
17:53:10 [ChrisWilson]
oedipus, yes and yes.
17:53:22 [oedipus]
thanks ChrisW -- anne, i am joining WebAPI
17:53:28 [oedipus]
zakim, please part
17:53:28 [Zakim]
leaving. As of this point the attendees were JulianR, Gregory_Rosmaita, DanC, hsivonen, Sam, ChrisWilson, anne, aaronlev
17:53:28 [Zakim]
Zakim has left #html-wg
17:53:29 [ChrisWilson]
It's pretty clear we suffer from the same problem.
17:53:39 [anne]
it seems that Hixie is doing just fine
17:53:42 [anne]
to me, anyway
17:53:53 [oedipus]
rrsagent, set logs world-visible
17:53:59 [oedipus]
rrsagent, create minutes
17:53:59 [RRSAgent]
I have made the request to generate
oedipus
17:54:05 [oedipus]
rrsagent, format minutes
17:54:05 [RRSAgent]
I have made the request to generate
oedipus
17:54:18 [anne]
ChrisWilson, could you perhaps e-mail the list with what you consider to be out of scope?
17:54:40 [ChrisWilson]
? Anything not captured in the charter?
17:54:48 [oedipus]
chair: Dan_Connolly
17:54:51 [oedipus]
rrsagent, create minutes
17:54:51 [RRSAgent]
I have made the request to generate
oedipus
17:54:55 [oedipus]
rrsagent, format minutes
17:54:55 [RRSAgent]
I have made the request to generate
oedipus
17:55:00 [anne]
ChrisWilson, basically, yeah
17:55:12 [oedipus]
chair+ Dan_Connolly
17:55:16 [oedipus]
rrsagent, create minutes
17:55:16 [RRSAgent]
I have made the request to generate
oedipus
17:55:19 [oedipus]
rrsagent, format minutes
17:55:19 [RRSAgent]
I have made the request to generate
oedipus
17:55:34 [DanC]
chair: DanC
17:55:56 [oedipus]
any regrets received?
17:56:14 [DanC]
Regrets+ mikko
17:56:18 [DanC]
(I think)
17:56:27 [oedipus]
rrsagent, create minutes
17:56:27 [RRSAgent]
I have made the request to generate
oedipus
17:56:28 [anne]
anne has left #html-wg
17:56:31 [oedipus]
rrsagent, format minutes
17:56:31 [RRSAgent]
I have made the request to generate
oedipus
17:56:45 [oedipus]
regrets+ mikko
17:56:47 [oedipus]
rrsagent, create minutes
17:56:47 [RRSAgent]
I have made the request to generate
oedipus
17:57:02 [Hixie]
ChrisWilson, as far as i am aware, everything in the spec is well covered by our charter.
17:57:20 [Hixie]
ChrisWilson, after all, the charter was written mostly after the spec, and with the spec in mind.
17:57:21 [DanC]
oedipus, there's bunch of irrelevant stuff at the top, but you don't have write access to /2007/11 ... perhaps you could (a) take a copy of 16-html-wg-minutes.html , edit it manually, and mail it to me and www-archive ?
17:57:45 [oedipus]
yes, i can do that
17:57:49 [DanC]
thanks
17:58:04 [Hixie]
ChrisWilson, also, if you think hyatt and i aren't editing fast enough, it would be helpful to know what you think should be being edited faster
17:58:16 [oedipus]
will get it to you and www-archive asap
17:58:21 [DanC]
Hixie, please consider the charter from the perspective of someone wholly unfamiliar with HTML 5. e.g. a patent lawyer at VendorCo
17:58:47 [Hixie]
DanC, first, we do, and second, the only person complaining is microsoft, and they aren't "someone wholly unfamiliar with HTML 5"
17:59:05 [Hixie]
DanC, they are in fact intimately aware that html5 exists and is why this group was created.
17:59:15 [gavin]
gavin has joined #html-wg
17:59:29 [DanC]
no, microsoft is not the only person complaining; they're the only one nice enough to do it on the public record
17:59:32 [ChrisWilson]
Hixie, from a quick glance through the ToC, canvas and offline; session history and navigation; client-side storage (both types) unless the WebAPI WG fails to deliver; server-sent DOM events; and the connection interface are not in our charter.
17:59:41 [ChrisWilson]
s/nice/foolish
17:59:55 [oedipus]
danC, how far down do you want me to snip - to "convene meeting"?
18:00:04 [DanC]
yes, down to convene
18:00:06 [oedipus]
ok
18:00:18 [DanC]
and fix the duplicate items in the TOC, if you would
18:00:27 [oedipus]
have done
18:00:32 [DanC]
good
18:00:46 [Hixie]
ChrisWilson, wow, i didn't realise how desperate you were to try and slow down the group.
18:00:57 [DanC]
Hixie, cut it out
18:01:01 [mjs]
ChrisWilson, you have a unique way of reading the charter
18:01:01 [ChrisWilson]
I'll try not to take the comment literally or personally.
18:01:01 [Hixie]
oh please
18:01:08 [Hixie]
it's blatantly obvious what chris is doing
18:01:20 [Hixie]
no-one in their right mind would claim session history wasn't under HTML5's purview
18:01:22 [DanC]
no, it's not, and it's rude of you to presume
18:01:54 [ChrisWilson]
Hixie, why should it matter? You will continue to create your HTML 5 standard in the WHATWG; and it will continue to ignore patents and IPR.
18:02:14 [ChrisWilson]
s/should it matter/should it matter to you/
18:02:32 [Hixie]
ChrisWilson, we specifically came here to w3c to allow the spec to be covered by the patent policy for you
18:02:47 [Hixie]
ChrisWilson, and now you're claiming you don't think the spec is covered by the charter.
18:03:50 [Hixie]
ChrisWilson, what can we do going forward to make sure the spec isn't pared down, is published soon, and is published with your participation?
18:04:04 [ChrisWilson]
One moment.
18:04:11 [oedipus]
danC, should i trim the "diagnostics" section?
18:05:05 [oedipus]
the question of whether the spec should be pared down is a decision for the WG to make, not a unilateral decision by the editors
18:05:30 [ChrisWilson]
Hixie: in a company with a large patent portfolio, getting approval to allow RF licensing of IP requires knowing what you're signing up to.
18:06:14 [Hixie]
ChrisWilson: sure, that's why when we originally proposed the scope we made it explicit. the w3c staff cut it down saying that it was being redundant.
18:06:20 [ChrisWilson]
That means the charter has to cover exactly what areas are going to be in the spec, because comparing a 500-page specification against [large company]'s entire patent portfolio is not an easy thing to do.
18:06:23 [oedipus]
hixie, isn't the point of editing to make things as clear as possible? that entails clarifications and such that may lead to "paring" in one place and "growth" in another...
18:06:53 [ChrisWilson]
Then let
18:06:59 [ChrisWilson]
erk
18:07:13 [Hixie]
oedipus: (i just meant removing entire sections, i agree that editing work includes making things clearer.)
18:07:41 [jmb]
jmb has joined #html-wg
18:08:59 [ChrisWilson]
Then let's scope out what the charter SHOULD be, and get the charter changed to reflect that. More to the point, I think there should be separate groups handling some of these items; I agree, fwiw, that session history and navigation might belong here, but I don't honestly think connections do. I believe they belong in the webapi group.
18:09:32 [ChrisWilson]
At any rate, it's irresponsible of me to agree to a spec that I don't think our IP reviewers were covering.
18:10:29 [ChrisWilson]
I understand, for example, (because at least you and Maciej have repeatedly said) that the Canvas API is a fairly stable bit of the WHAT WG HTML5 spec.
18:11:40 [ChrisWilson]
I understand anyone's ability to get you to change that API is basically zero at this point (aside from the one or two minor points you mentioned as being in flux). That doesn't mean that I can blithely say "I'm sure we wouldn't mind giving up IP in that area" without what IP we have there being reviewed.
18:11:45 [Hixie]
ChrisWilson: what can we do to publish _soon_, though? rechartering takes easily 6 months which is simply not an option for us.
18:12:06 [Hixie]
ChrisWilson: i'd like to know what we can to publish the current spec soon, with your participation
18:12:54 [ChrisWilson]
Publish all of the current HTML5 spec, with Microsoft's participation? I don't know. I'm not sure it will be possible; it will depend on the patent review I'm kicking off right now with our legal team to look at the areas I mentioned above that I don't think are in the spec.
18:13:29 [ChrisWilson]
s/spec/charter
18:14:24 [ChrisWilson]
If that review is taking more than 90 days, or if it turns up areas of concern to the IP owners, then I would have to depart the WG, because that's the only option left. That's one of the reasons why RF WGs are best off not trying to bite off the entire world.
18:15:09 [Hixie]
ChrisWilson: wow, so there is the chance that microsoft would rather leave the group than license patents?
18:15:15 [ChrisWilson]
I understand you all think this is me being obstructionist, and that's unfortunate. I have to work within the system of a corporation with a large patent portfolio, and that means being responsible with their IP.
18:15:22 [ChrisWilson]
No, that's not it.
18:15:34 [smedero]
I'm a little confused as common-man involved in this process. The <canvas> element is about three years old... though I don't know the exact date it made it into the WHATWG HTML5 spec.
18:15:55 [Hixie]
smedero: i'm pretty confused myself :-)
18:16:00 [ChrisWilson]
It's not "license patents". It's that what you are asking for is a open-ended "whatever patents might cover technology we think is handy to shove into the HTML5 spec."
18:16:00 [oedipus]
DanC and ChrisW: cleaned minutes attached to
18:16:06 [smedero]
It seems clear that the patent issues were going to be a problem... that's completely reasonable.
18:16:15 [oedipus]
cleaned minutes:
18:16:21 [smedero]
It just feels like this issue should have been reviewed much sooner in this process.
18:16:34 [Hixie]
ChrisWilson: the html5 spec at this point is past feature freeze, so there won't be any new things that can be covered by patents.
18:16:40 [ChrisWilson]
Indeed. My apologies. I have a couple of day jobs too.
18:17:27 [ChrisWilson]
Really? Do you believe that every area that is going in to HTML5 from the WHATWG side is already there, and if we capture everything that's in there today into our charter to my satisfaction, that's not going to change?
18:17:30 [Hixie]
ChrisWilson: so anyway you are saying there is no way to publish the current spec soon with your participation? that it's either publish later, publish without you, or publish something smaller?
18:17:33 [ChrisWilson]
(That's a serious question)
18:17:57 [Hixie]
ChrisWilson: yes, as far as i'm concerned we're in feature freeze, i don't expect any new features to be added before CR.
18:18:05 [Hixie]
ChrisWilson: (there's no "whatwg side" to this, btw)
18:18:09 [oedipus]
ChrisW and DanC: just found another regret notification: Marcin Hanclik's regrets:
18:18:46 [ChrisWilson]
The best possible case is that I take the current spec, categorize the areas, pass it back to legal for review, and the owners of the patents they turn up are all okay with RF-licensing that IP to the W3C for HTML5.
18:18:52 [Hixie]
ChrisWilson: obviously if microsoft has anything they'd like added they would be considered, since that would presumably sidestep the patent problem, and we want to make microsoft happy with the spec.
18:19:13 [ChrisWilson]
So, to echo one of my ultimate bosses' most unfortunate statements, that depends on your definition of soon.
18:19:31 [ChrisWilson]
I don't think we have anything that we've been holding on to, no.
18:19:34 [Hixie]
ChrisWilson: by "soon" i meant this week
18:20:01 [Hixie]
well, next week i guess
18:20:04 [Hixie]
what with it being friday
18:20:56 [oedipus]
ChrisW: for what it is worth, i don't think you're being obstructionist -- you're being realistic and practical, something that most of us in spec writing don't necessarily need be...
18:20:58 [ChrisWilson]
Then that's possible - I told Dan last week I am explicitly removing myself from any decision-making around this - but that is running the risk I listed above, that the expanded patent review won't finish and Microsoft would have to depart prior to the 90-day countdown.
18:21:53 [Hixie]
ChrisWilson: aah, interesting.
18:22:12 [Hixie]
ChrisWilson: well that makes sense
18:22:27 [Hixie]
ChrisWilson: that's the same risk google would take too
18:22:52 [Hixie]
ChrisWilson: seems like that's the best course then
18:23:04 [Hixie]
it sidesteps the whole charter can of worms
18:23:31 [ChrisWilson]
What's the same risk Google would take?
18:24:20 [Hixie]
that our patent review wouldn't be complete in 90 days
18:25:37 [ChrisWilson]
The part that is frustrating is that ideally, if you create a clear enough charter, then you don't need to do a patent review every time a new document is issued by the WG; you do a review before joining the group, and then you know what is at stake.
18:25:46 [Hixie]
i agree
18:26:03 [Hixie]
like i said, the original scope list that i and others proposed for html5 was very detailed
18:26:06 [ChrisWilson]
If I have to go through a whole legal review every time there is a new document in a WG, I'm going to have to quit so I don't slit my wrists.
18:26:09 [Hixie]
and explicitly covered all these things
18:26:30 [Hixie]
w3c staff said that the list had too much redundancy and made it smaller, as i recall
18:26:33 [Hixie]
not sure why
18:26:52 [ChrisWilson]
Hmm. Nor am I; I was not involved in developing the charter at all, actually.
18:27:05 [ChrisWilson]
(other than the voting at the AC level, where I advised)
18:27:28 [Hixie]
yeah they didn't even contact me until someone pointed out to them that maybe they should at least consult the guy who'd edited the html5 spec for the past few years
18:27:41 [Hixie]
and even then they only unofficially asked for my advice
18:27:59 [mjs]
Apple's legal review may be hard to complete in 90 days as well, but I would still prefer to just publish and start the clock
18:28:26 [Hixie]
is where i sent the feedback i had
18:28:53 [Hixie]
look in particular at
18:30:12 [Hixie]
afk, bbiab
18:30:36 [mjs]
I'm not sure it's possible to predict all applicable patents short of an actual draft anyway
18:30:48 [mjs]
that's why FPWD and LC are what starts the review clock
18:30:50 [anne]
anne has joined #html-wg
18:32:08 [ChrisWilson]
That may be true - but we should at least know what areas are going to be covered. And I disagree with the scope of the charter Ian pointed to (0045) for this group.
18:32:30 [ChrisWilson]
I don't understand, btw, why the presumption that WebAPI has failed/is failing.
18:33:25 [mjs]
I don't think they have failed at everything, but they (we) certainly haven't delivered all their original charter deliverables
18:33:46 [ChrisWilson]
why not?
18:34:01 [anne]
no dedicated editors
18:34:41 [anne]
editing specs these days is much harder than it was before (if you look at the amount of detail of HTML 5 versus HTML 4 for instance)
18:34:42 [mjs]
nor have they even started on any kind of data storage API, nor does that seem likely to happen any time soon
18:35:13 [anne]
it's like writing an implementation in English
18:35:31 [ChrisWilson]
See, I don't get that. There's one in current HTML5; you guys are on that group too; why do you not just take that spec, move it into that group, get buyoff, stamp it and move on?
18:35:38 [ChrisWilson]
anne: ?
18:36:40 [mjs]
splitting specs is not easy
18:36:47 [ChrisWilson]
It seems like it's useful outside the context of HTML, and moving it into a group like that would make it quicker, not slower.
18:36:49 [mjs]
so far XMLHttpRequest has semi-succeeded
18:37:01 [anne]
i'm already editing cross-site requests, xhr 1 and 2, and several drafts for the HTML WG, besides QA work I do for Opera and trying to keep up with everything relevant
18:37:02 [mjs]
(though Microsoft's rep still objects to the remaining HTML dependencies)
18:37:12 [mjs]
and Window kind of failed
18:37:18 [ChrisWilson]
I understand breaking up HTML5 into, say, separate "tabular data" and "media elements" specs would be hard.
18:37:20 [mjs]
(due to lack of my time)
18:37:20 [anne]
i think i'm one of the few in the Web API WG who actually manages to produce stuff
18:38:08 [mjs]
I think lots of stuff would be better if split off in principle but I don't want to let the perfect be the enemy of the good
18:38:18 [kingryan]
kingryan has joined #html-wg
18:38:57 [ChrisWilson]
But it seems like taking the two client-side storage sections and making them a separate spec would make it easier to focus on. Not to mention use them outside of HTML.
18:39:41 [mjs]
in theory, yes
18:39:49 [mjs]
in practice, I'm not aware of a qualified and available editor
18:39:59 [ChrisWilson]
For what? Client-side storage?
18:40:54 [ChrisWilson]
afk
18:42:57 [Philip]
DanC: You said "There aren't any votes yet" 12 minutes ago, but I currently see 16 votes
18:43:59 [hober]
As one of those 16 voters, I'm all for withdrawing & rewording the question to take into account the feedback on it
18:46:32 [anne]
unless someone can point out volunteers this is really a theoretical question imo
18:48:05 [gsnedders]
DanC: <
> claims it isn't open yet for me
18:49:04 [anne]
i guess it will be rephrased
18:49:47 [gsnedders]
maybe I didn't go all the way back to where I left, then
18:52:07 [dbaron]
dbaron has joined #html-wg
19:06:36 [Hixie]
ChrisWilson: there are a number of sections (setTimeout, Window, XHR, alt stylesheets OM) that have been taken out of HTML5. Only one of them (XHR) has so far managed to get any real traction.
19:06:59 [Hixie]
ChrisWilson: so much so that i had to pull window back into HTML5 because I had dependencies that were falling by the wayside because of the issue
19:08:20 [Hixie]
ChrisWilson: setTimeout and the alt stylesheets OM are tiny bits that wouldn't even take much editing time -- if we could find someone to edit those, then we could consider taking out the much bigger and more important bits out
19:09:20 [Hixie]
ChrisWilson: but if we can't even find competent editors with enough time to edit those tiny bits, i would consider it irresponsible of us to take out the other bits and just throw them over the wall and hope for an editor, especially considering that the sections in question are amongst those sections that browser vendors have indicated are the most critical to html5's success
19:10:32 [Hixie]
bbiab, going to work
19:13:53 [DanC]
mjs, 20 minutes turned out to take longer... could you pick a time later this afternoon?
19:14:10 [DanC]
16 votes? hmm...
19:14:30 [DanC]
"No answer has been received." --
19:15:12 [Philip]
"16 answers have been received."
19:15:15 [Philip]
Cached?
19:15:26 [hober]
I see "3 answers have been received." must be cache
19:15:35 [hober]
shift-reload: 16
19:16:12 [DanC]
ah. shift-reload
19:16:36 [DanC]
"Future questions should avoid conflating distinct issues." indeed. this one should too
19:26:39 [Julian]
Julian has joined #html-wg
19:44:02 [timbl]
timbl has left #html-wg
19:45:08 [Lachy]
Lachy has joined #html-wg
19:58:56 [kingryan]
kingryan has joined #html-wg
20:07:08 [gavin]
gavin has joined #html-wg
20:26:16 [kingryan]
is
closed for editing?
20:28:04 [gsnedders]
yeah
20:34:29 [edas]
edas has joined #html-wg
20:36:31 [jgraham]
jgraham has joined #html-wg
20:40:24 [jgraham_]
jgraham_ has joined #html-wg
20:49:17 [DanC]
yes, closed for editing... I'm getting back to that now...
20:49:46 [jgraham_]
jgraham_ has joined #html-wg
20:50:20 [mjs]
DanC: I'll be around this afternoon some
20:53:44 [DanC]
ok... do you have a sense of how many questions yet?
20:55:12 [rubys]
rubys has left #html-wg
21:04:46 [mjs]
how many questions for what?
21:11:06 [edas]
edas has joined #html-wg
21:32:06 [Lachy]
Lachy has joined #html-wg
21:36:40 [DanC]
oops; hi mjs. can you see
? prolly not
21:39:02 [DanC]
ok... I moved the charter stuff from
to the tactics survey
21:39:11 [DanC]
mjs? kingryan ? Hixie ? anybody around to take a look?
21:42:49 [DanC]
any opinions?
21:42:53 [DanC]
i.e. is it coherent?
21:43:07 [DanC]
rather: are the 2 surveys coherent?
21:43:26 [hober]
It's not clear what question 2 in the tactics survey implies re: HTML 5 spec
21:43:40 [gsnedders]
hober: agreed
21:43:55 [kingryan]
concur
21:44:18 [kingryan]
DanC: is it implying that canvas be extracted into a separate document?
21:44:29 [gsnedders]
DanC: for question five can we just use Yes/No?
21:44:51 [DanC]
this one is unclear? "2. Canvas and immediate mode graphics API introductory/tutorial note"
21:44:56 [gsnedders]
DanC: yeah
21:45:05 [oedipus]
DanC: Should CANVAS and immediate mode graphics be spun off into a note, similar to Offline Web Applications? That is: a sort of extended abstract that might grow into a tutorial.
21:45:20 [hober]
For instance, I'd like to answer: "keep <canvas> in the html5 spec, don't recharter. additional documents (tutorials, etc.) are fine if someone wants to work on them."
21:45:22 [gsnedders]
DanC: also, regarding "charter a new W3C working group for the 2d graphics API" — Opera has experimental 3D support now
21:45:31 [DanC]
spun off? no; the design would stay where it is
21:45:43 [Philip]
(Mozilla has more advanced experiemental 3D support too)
21:45:50 [Philip]
s/e//
21:46:11 [oedipus]
DanC: Should CANVAS and immediate mode graphics be released first in the form of a note, similar to Offline Web Applications? That is: a sort of extended abstract that might grow into a tutorial.
21:46:37 [DanC]
reload; I changed it to: "How about a note to supplement the detailed specification, similar to ..."
21:47:42 [DanC]
what would yes and no mean for question 5? I want information on preferences as well as what people find acceptable
21:47:42 [hober]
I'd like a "no opinion" option on 2, although I suppose simply not answering conveys that...
21:48:11 [DanC]
right; you can just not click any of the options...
21:48:18 [DanC]
... though once you click one of them, you're kinda stuck
21:48:58 [oedipus]
that sounds like reason enough to add "no opinion" as an option
21:49:39 [jgraham]
Q2. on the second one is a bit brief
21:49:52 [jgraham]
s/second/tactics/
21:50:33 [jgraham]
Maybe s/How about/Should the Working Group produce/
21:50:42 [Lachy]
3d canvas could probably be done in webapi
21:51:47 [DanC]
yes, "How about" is overly colloquial; fixed
21:55:13 [DanC]
I'm pretty happy with it now
21:56:56 [Philip]
s/XMLHTTPRequest/XMLHttpRequest/
22:02:23 [hasather]
hasather has joined #html-wg
22:03:03 [DanC]
ok, I announced both of them, subject to change for a day
22:05:11 [DanC]
hmm... the requirement formal question doesn't have separate "no" and "formally object" options.
22:10:32 [hober]
which is the 'requirement formal question'?
22:13:06 [DanC]
22:13:30 [DanC]
anybody know where Hixie and/or mjs went?
22:15:16 [hober]
[11:10] <Hixie> bbiab, going to work
22:15:41 [mjs]
mjs has joined #html-wg
22:18:09 [DanC]
ah. thanks, hober.
22:20:45 [mjs]
mjs has joined #html-wg
22:21:58 [mjs]
mjs has joined #html-wg
22:22:30 [Philip]
DanC: Is it intentional that req-gapi-canvas/results shows 32 non-responders, while tactics- shows 489?
22:23:32 [mjs]
mjs has joined #html-wg
22:23:34 [Philip]
Ah, looks like the difference between a response-represents-organisation and response-is-just-personal survey
22:24:13 [DanC]
yes
22:24:51 [DanC]
though the 32 is low due to a bug; it should count public invited experts, I think
22:27:13 [timbl]
timbl has joined #html-wg
22:35:06 [Lachy]
Lachy has joined #html-wg
22:40:19 [jgraham_]
jgraham_ has joined #html-wg
22:42:35 [Philip]
jgraham_: By "a highly-inoperable mechanism", did you mean "highly-interoperable"?
22:46:56 [timbl]
timbl has joined #html-wg
22:48:45 [jgraham_]
Philip: Yeah, that would b a typo ;)
22:49:13 [gavin]
gavin has joined #html-wg
23:04:13 [gsnedders]
DanC: "Canvas and immediate mode graphics API introductory/tutorial note": An introduction to why it exists, or a tutorial about how to use it? They're very different.
23:06:57 [timbl]
timbl has joined #html-wg
23:09:03 [Philip]
gsnedders: A tutorial should teach readers when it is a suitable technology to use instead of the alternatives, so that would also serve as an introduction to why it exists
23:11:17 [sbuluf]
sbuluf has joined #html-wg
23:12:21 [mjs]
mjs has joined #html-wg
23:36:44 [ChrisWilson]
ChrisWilson has joined #html-wg
23:37:18 [marcos]
marcos has joined #html-wg
23:38:33 [jgraham__]
jgraham__ has joined #html-wg
23:41:18 [jgraham]
jgraham has joined #html-wg
23:51:08 [shepazu]
shepazu has joined #html-wg
Understanding snoop(1M) NFSv3 file handles
By PeteH on Feb 01, 2007
Introduction
Reading the snoop(1M) trace of NFS traffic you'll see references to file handles, but interpreting these is not straightforward. As any NFS engineer should tell you, file handles are opaque to the NFS client and are meaningful only to the NFS server that issued them.
For Solaris the file handle is derived from the underlying file system so some parts of the file handle are meaningful only to the underlying file system.
Here's an example snoop(1M) output:
  3   0.00000 v4u-450f-gmp03 -> v4u-80a-gmp03  NFS C GETATTR3 FH=FD0D
  4   0.00035 v4u-80a-gmp03  -> v4u-450f-gmp03 NFS R GETATTR3 OK
So what does FH=FD0D mean? File handles are surely longer than that?
Analysis
To make things easier to read, snoop(1M) hashes the file handle to 16-bits. Check the sum_filehandle() code in OpenSolaris:
sum_filehandle(len)
    int len;
{
    int i, l;
    int fh = 0;

    for (i = 0; i < len; i += 4) {
        l = getxdr_long();
        fh ^= (l >> 16) ^ l;
    }
    return (fh);
}
To see the complete file handle we need the verbose output:
# snoop -p3,4 -v -i /tmp/snoop2.out | grep NFS:
NFS: ----- Sun NFS -----
NFS:
NFS: Proc = 1 (Get file attributes)
NFS: File handle = [FD0D]
NFS:  0080001000000002000A0000000091EF23696D2E000A000000008FDF5F48F2A0
NFS:
NFS: ----- Sun NFS -----
NFS:
NFS: Proc = 1 (Get file attributes)
NFS: Status = 0 (OK)
NFS: File type = 1 (Regular File)
NFS: Mode = 0644
NFS:  Setuid = 0, Setgid = 0, Sticky = 0
NFS:  Owner's permissions = rw-
NFS:  Group's permissions = r--
NFS:  Other's permissions = r--
NFS: Link count = 1, User ID = 0, Group ID = 0
NFS: File size = 301, Used = 1024
NFS: Special: Major = 0, Minor = 0
NFS: File system id = 137438953488, File id = 37359
NFS: Last access time      = 01-Feb-07 15:12:16.398735000 GMT
NFS: Modification time     = 01-Feb-07 15:12:16.410570000 GMT
NFS: Attribute change time = 01-Feb-07 15:12:16.410570000 GMT
NFS:
#
Note the file system ID and file ID values for later.
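As a cross-check, the XOR-folding that sum_filehandle() performs can be reproduced over the handle shown above; here is a small Python sketch (the masking to the low 16 bits is my assumption about how snoop prints the result):

```python
import struct

# Full NFSv3 file handle from the verbose snoop output above
FH_HEX = ("0080001000000002000A0000000091EF"
          "23696D2E000A000000008FDF5F48F2A0")

def sum_filehandle(data):
    """Port of snoop's sum_filehandle(): XOR-fold 32-bit words down to 16 bits."""
    fh = 0
    for (l,) in struct.iter_unpack(">I", data):
        fh ^= (l >> 16) ^ l
    return fh & 0xFFFF  # assumption: snoop displays the low 16 bits

print("FH=%04X" % sum_filehandle(bytes.fromhex(FH_HEX)))
```

This prints FH=FD0D, matching the summary line in the first snoop output.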
To break down that file handle you need to understand the NFS server's implementation. For NFSv3 on OpenSolaris the handle is a file system ID followed by a length-prefixed file ID and a length-prefixed export ID.
Which means ...
fh3_fsid   0080001000000002
fh3_len    000A
fh3_data   0000000091EF23696D2E
fh3_xlen   000A
fh3_xdata  000000008FDF5F48F2A0
The fh3_fsid is itself a compressed version of the dev_t for the device and the file system type, check cmpldev(). Essentially:
- the first 32 bits (0x00800010) are the major number shifted right 14 bits plus the minor number
- the second 32 bits (0x00000002) are the file system type, see struct vfssw
That compressed fsid is what you see in mnttab, for this case:
# grep 800010 /etc/mnttab
/dev/dsk/c0t0d0s0  /  ufs  rw,intr,largefiles,logging,xattr,onerror=panic,dev=800010  1170093759
#
Reassuringly, file system type 2 is ufs.
The fh3_data is derived from the underlying file system (ufs) which is a ufid structure:
struct ufid {
    ushort_t ufid_len;
    ushort_t ufid_flags;
    int32_t  ufid_ino;
    int32_t  ufid_gen;
};
So 0000000091EF23696D2E breaks down as:
ufid_flags  0000
ufid_ino    000091EF
ufid_gen    23696D2E
Reassuringly again, ufid_ino (the inode) makes sense:
# mdb
> 000091EF=U
                37359
> !ls -li /export
total 922
     37359 -rw-r--r--   1 root     root         301 Feb  1 15:12 motd
That's the file I was checking from the NFS client and it matches the file ID from the snoop output.
The fh3_xdata represents the export data, ie the exported file system. The inode number in this case is 0x00008FDF. Checking:
> !share
-               /export   rw   ""
> 00008FDF=U
                36831
> !ls -lid /export
     36831 drwxr-xr-x   2 root     sys          512 Feb  1 15:12 /export
>
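Assuming the layout just walked through (big-endian fields, offsets taken from the breakdown above), the whole handle also unpacks mechanically, e.g. in Python:

```python
import struct

FH = bytes.fromhex("0080001000000002000A0000000091EF"
                   "23696D2E000A000000008FDF5F48F2A0")

# fh3_fsid (dev + fstype), then the data length
fsid_dev, fsid_fstype, fh_len = struct.unpack_from(">IIH", FH, 0)
# fh3_data: the file's ufid (flags, inode, generation)
flags, ino, gen = struct.unpack_from(">HII", FH, 10)
# fh3_xlen and fh3_xdata: the exported directory's ufid
xlen, = struct.unpack_from(">H", FH, 20)
xflags, xino, xgen = struct.unpack_from(">HII", FH, 22)

print(hex(fsid_dev), fsid_fstype)  # 0x800010 2  -> ufs on dev 800010
print(ino, xino)                   # 37359 36831 -> /export/motd and /export
```

The inode numbers fall straight out, matching the mdb session above.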
If you've been paying attention you might be wondering what happened to the file system ID (137438953488). This is the uncompressed dev_t value. We can check it by compressing it (14 bit shift of the major, add the minor):
> 0t137438953488=J
                2000000010
> (0t137438953488>>0t14) + 0t137438953488&0xffffffff=J
                800010
>
Yes, that looks familiar.
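The same arithmetic can be sketched in Python, assuming the usual Solaris split of the 64-bit dev_t (major in the high 32 bits) and of the 32-bit compressed dev_t (14 major bits over 18 minor bits):

```python
DEV64 = 137438953488  # "File system id" from the snoop output

major = DEV64 >> 32                 # 64-bit dev_t keeps the major in the high word
minor = DEV64 & 0xFFFFFFFF
compressed = (major << 18) | minor  # 32-bit dev_t: 14 major bits, 18 minor bits

print(major, minor, "%x" % compressed)  # 32 16 800010
```

Shifting the 64-bit value right by 14 moves the major from bit 32 to bit 18, which is why the mdb one-liner above works.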
Conclusion
As already noted, NFS file handles are only meaningful to the NFS server and this example is just for the Solaris NFSv3 implementation. However, I hope it's given some insight into how that works and with this knowledge it's relatively easy to match snoop(1M) file handles to files on the NFS server.
Details
Description
The following tests fail when running ant test on trunk 2.0
[junit] Running org.apache.nutch.api.TestAPI
[junit] Tests run: 4, Failures: 1, Errors: 0, Time elapsed: 11.028 sec
[junit] Test org.apache.nutch.api.TestAPI FAILED
[junit] Running org.apache.nutch.crawl.TestGenerator
[junit] Tests run: 4, Failures: 0, Errors: 4, Time elapsed: 0.478 sec
[junit] Test org.apache.nutch.crawl.TestGenerator FAILED
[junit] Running org.apache.nutch.crawl.TestInjector
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0.474 sec
[junit] Test org.apache.nutch.crawl.TestInjector FAILED
[junit] Running org.apache.nutch.fetcher.TestFetcher
[junit] Tests run: 2, Failures: 0, Errors: 2, Time elapsed: 0.526 sec
[junit] Test org.apache.nutch.fetcher.TestFetcher FAILED
[junit] Running org.apache.nutch.storage.TestGoraStorage
[junit] Tests run: 2, Failures: 0, Errors: 2, Time elapsed: 0.468 sec
[junit] Test org.apache.nutch.storage.TestGoraStorage FAILED
Activity
Yes this one should be closed.
The tests for nutchgora seem to work fine now. Close?
Set and classify
Everything works fine except for org.apache.nutch.api.TestAPI. That one still fails, sometimes. When I just ran these tests with "ant test" they all worked perfectly fine, but the runs after that simply fail. Cleaning the project with "ant clean" doesn't help. See corresponding mailing list discussion in link above.
I have not yet looked into this test thoroughly, because it is part of Nutch that I am completely unfamiliar with (the NutchServer API). I think it is best we close this issue, and start a new one that will deal with this API and the test.
Hi Ferdy. Have you noticed anything dodgy with this?
Hi Ferdy. There have been almost no problems within the CI testing environment for a number of weeks/months. Any failures seem to have been down to the project building on Ubuntu slaves as opposed to Solaris slaves; the failures are a result of incorrect envars being specified. I've added some more functionality to the nutchgora build characteristics, e.g. publish JUnit test result report and publish Javadoc. So as agreed we will keep an eye on this.
Reopening this issue as per our concerns.
For the record, the Jenkins build area has been cleaned up and we now only maintain 3 builds: trunk, Nutchgora and a maven trunk.
TestAPI is troublesome:
As of the recent NUTCH-1135 commit, this summary is being closed out.
My bad it was a local issue indeed.
Hi Ferdy
copy-generated-lib:

test:
 [echo] Testing plugin: parse-tika
[junit] Running org.apache.nutch.parse.tika.TestMSWordParser
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 4.349 sec
[junit] Running org.apache.nutch.parse.tika.TestOOParser
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.409 sec
[junit] Running org.apache.nutch.parse.tika.TestPdfParser
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.674 sec
[junit] Running org.apache.nutch.parse.tika.TestRSSParser
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.698 sec
[junit] Running org.apache.nutch.parse.tika.TestRTFParser
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.013 sec

init:
init-plugin:
deps-jar:
...
BUILD SUCCESSFUL
Total time: 2 minutes 20 seconds
To make sure, I checked out our most recent nutchgora and applied your interim testGoraStorage patch. Is there maybe some config that you have changed at your end?
Are you aware of the fact that TestMSWordParser currently fails too? Or am I missing something?
[echo] Testing plugin: parse-tika
[junit] Running org.apache.nutch.parse.tika.TestMSWordParser
[junit] Tests run: 2, Failures: 2, Errors: 0, Time elapsed: 6.6 sec
[junit] Test org.apache.nutch.parse.tika.TestMSWordParser FAILED
If it is broken, could you make a subtask?
Thanks Ferdy. It was also my initial thoughts that this was maybe too simplistic a fix! If however you look here [1] you will see that Dogacan changed the import namespaces for the Gora imports. It would seem that over time we forgot to do this with hard-coded imports in Injector, Generator and Fetcher tests.
Are there any objections to committing this as an interim fix before concentrating on NUTCH-896?
[1]
It seems like your patch is fine, at least as a temporary solution. Tests run fine. (Please see my notes for NUTCH-1135 in the corresponding issue.)
I see NUTCH-896 as a rather separate issue. Sure it would be nice to have a configurable backend in tests, but as sql is currently the default backend (also for building) I see it as no problem to have it hard-coded in the tests for now.
To summarize: +1
To update on this issue: all tests as above, e.g. all that extend AbstractNutchTest, now pass successfully. The daemon TestGoraStorage is still giving us bother, and please bear in mind that none of this takes into consideration NUTCH-896. Is the patch I submitted for these tests deemed appropriate as a temporary fix? Or should my efforts be concentrated towards NUTCH-896? Thanks
I have marked this as critical now as it is the 'only' thing preventing us from finally achieving a stable nightly build for the nutchgora branch. In an attempt to get this moving, I'm going to create subtasks for each test, this way we will be able to track reasonable progress on each potential blocker.
Over a number of issues this seems to have been phased out/addressed as testing has been stable for some time. Thanks guys.
#include <avr/sleep.h>

void setup()
{
  Serial.begin(9600);
}

void loop()
{
  int reading;
  reading = analogNoiseReducedRead(0);
  Serial.println(reading);
  delay(500);
}

int analogNoiseReducedRead(int pinNumber)
{
  int reading;
  ADCSRA |= _BV(ADIE);                    // Set ADC interrupt
  set_sleep_mode(SLEEP_MODE_ADC);         // Set sleep mode
  reading = analogRead(pinNumber);        // Start reading
  sleep_enable();                         // Enable sleep
  do {                                    // Loop until reading is completed
    sei();                                // Enable interrupts
    sleep_mode();                         // Go to sleep
    cli();                                // Disable interrupts
  } while ((ADCSRA & (1 << ADSC)) != 0);  // Loop if the interrupt that woke the cpu was
                                          // something other than the ADC finishing the reading
  sleep_disable();                        // Disable sleep
  ADCSRA &= ~_BV(ADIE);                   // Clear ADC interrupt
  sei();                                  // Enable interrupts
  return (reading);
}
Where is the analog pin to be read defined?
You must set ADMUX before calling this function.
Where is the call made to read it with the ADC?
This performs an A/D conversion using the current ADMUX settings.
Would this work?
Well, I'm sure a lot of people would find it useful to have a function that reads an analog input using ADC noise reduction
Have you noticed a problem with analogRead?
2. General digital noise. Cured by adding a capacitor from AREF to GND and using the appropriate analog reference setting
just differences in board layout
LensPanel Class Reference
Define the second the Lens panel. More...
#include <LensPanel.h>
Inheritance diagram for LensPanel:
Detailed DescriptionDefine the second the Lens panel.
- the second for lens selection to images
- Note:
- currently only one lens per panorama is supported.
Constructor & Destructor Documentation
Member Function Documentation
The documentation for this class was generated from the following files:
- hugin1/hugin/LensPanel.h
- hugin1/hugin/LensPanel.cpp
NAME
sys/types.h - data types
SYNOPSIS
#include <sys/types.h>
DESCRIPTION
The <sys/types.h> header shall include definitions for at least the following types:
gid_t Used for group IDs.
ino_t Used for file serial numbers.
key_t Used for XSI interprocess communication.
suseconds_t Used for time in microseconds.
trace_attr_t Used to identify a trace stream attributes object.
The type useconds_t shall be an unsigned integer type capable of storing values at least in the range [0, 1000000].
The following sections are informative.
APPLICATION USAGE
None.
RATIONALE
None.
FUTURE DIRECTIONS
None.
SEE ALSO
<time.h>
I have a directory with ~10,000 image files from an external source.
Many of the filenames contain spaces and punctuation marks that are not DB friendly or Web friendly. I also want to append a SKU number to the end of every filename (for accounting purposes). Many, if not most of the filenames also contain extended latin characters which I want to keep for SEO purposes (specifically so the filenames accurately represent the file contents in Google Images)
I have made a bash script which renames (copies) all the files to my desired result. The bash script is saved in UTF-8. After running, it omits approx 500 of the files (unable to stat file...). How can I tell what encoding these filenames use?
The only way I've been able to figure out myself is by setting my terminal encoding to UTF-8, then iterating through all the likely candidate encodings with convmv until it displays a converted name that 'looks right'. I have no way to be certain that these 500 files all use the same encoding, so I would need to repeat this process 500 times. I would like a more automated method than 'looks right' !!!
There's no 100% accurate way really, but there's a way to give a good guess.
There is a python library chardet which is available here:
e.g.
See what the current LANG variable is set to:
$ echo $LANG
en_IE.UTF-8
Create a filename that'll need to be encoded with UTF-8
$ touch mÉ.txt
Change our encoding and see what happens when we try and list it
$ ls m*
mÉ.txt
$ export LANG=C
$ ls m*
m??.txt
OK, so now we have a filename encoded in UTF-8 and our current locale is C (standard Unix codepage).
So start up python, import chardet and get it to read the filename. I'm using some shell globbing (i.e. expansion through the * wildcard character) to get my file. Change "ls m*" to whatever will match one of your example files.
>>> import chardet
>>> import os
>>> chardet.detect(os.popen("ls m*").read())
{'confidence': 0.505, 'encoding': 'utf-8'}
As you can see, it's only a guess. How good a guess is shown by the "confidence" variable.
You may find this useful, to test the current working directory (python 2.7):
import chardet
import os
for n in os.listdir('.'):
    print '%s => %s (%s)' % (n, chardet.detect(n)['encoding'], chardet.detect(n)['confidence'])
Result looks like:
Vorlagen => ascii (1.0)
examples.desktop => ascii (1.0)
Öffentlich => ISO-8859-2 (0.755682154041)
Videos => ascii (1.0)
.bash_history => ascii (1.0)
Arbeitsfläche => EUC-KR (0.99)
To recurse trough path from current directory, cut-and-paste this into a little python script:
#!/usr/bin/python
import chardet
import os
for root, dirs, names in os.walk('.'):
    print root
    for n in names:
        print '%s => %s (%s)' % (n, chardet.detect(n)['encoding'], chardet.detect(n)['confidence'])
Strategy design pattern allows you to use multiple algorithms interchangeably.
One reason you might use a Strategy Pattern is to simplify an overly complex algorithm. Sometimes, as an algorithm evolves to handle more and more situations, it can become very complex and difficult to maintain. Breaking these complex algorithms down into smaller more manageable algorithms might make your code more readable and more easily maintained. As a simple example of how to implement the Strategy Pattern consider the following scenario.
A software system uses a single authorization checking class to determine access rights within the system. Access rights will be different for Anonymous users, Logged-In users and System Administrators.
We could certainly handle these 3 different access levels within a single class, but what if a fourth type of access level were to be added, and perhaps a fifth? Let's break out each of the access checks into its own Strategy object that we can use interchangeably based on what type of user is logged into our system.
First, we need to define an Interface that all of our Strategy classes will implement. It is by implementing this Interface that our Strategy classes will be able to be used interchangeably.
Public Interface IAuthorityCheck
Function IsAuthorized(ByVal resource As Object) As Boolean
End Interface
With a simple interface defined, we can now create any number of different classes that will implement that interface and perform our security checks for us. Here are 3 examples that represent our Strategy classes.
First, for our Anonymous users, we create a security Strategy class that always returns False.
Public Class AnonymousSecurityCheckStrategy
Implements IAuthorityCheck
Public Function IsAuthorized(ByVal resource As Object) As Boolean Implements IAuthorityCheck.IsAuthorized
'when checking access rights, anonymous users will
'always be denied access to all resources
Return False
End Function
End Class
Next we have our Logged-In users. They will need a Strategy too to determine their access rights.
Public Class StandardSecurityCheckStrategy
Implements IAuthorityCheck
Public Function IsAuthorized(ByVal resource As Object) As Boolean Implements IAuthorityCheck.IsAuthorized
'based on the current users credentials
'examine the object and determine if the user is allowed to access it
Dim result As Boolean = PerformSomeApplicationSpecificSecurityChecks(resource)
Return result
End Function
End Class
And finally, we have our System Administrators. They can access everything so they get a Strategy that always returns True.
Public Class SysAdminSecurityCheckStrategy
Implements IAuthorityCheck
Public Function IsAuthorized(ByVal resource As Object) As Boolean Implements IAuthorityCheck.IsAuthorized
'The System Administrator will be granted access to all secure resources
Return True
End Function
End Class
Now, all we need to do is select one of these Strategy objects at runtime and call the IsAuthorized method. In order to use these classes interchangeably, we will code against the Interface IAuthorithyCheck and not against that actual concrete class types.
One method for selecting a particular strategy could be to use a Factory. The Factory would encapsulate all the necessary logic for determining which of our various Strategy classes is the correct one to use. For example:
Public Class AuthorizationStrategyFactory
'To avoid the need to repeat the logic required to select the correct Strategy object
'we will encapsulate that logic in a small Factory class
Public Shared Function GetAuthorizationStrategy() As IAuthorityCheck
'This method returns 1 of 3 different algorithms for performing authorizations
'sysadmins gets the Strategy object that always allows access
If currentUserObject.IsInRole("sysadmin") Then Return New SysAdminSecurityCheckStrategy
If currentUserObject.LoggedIn = True Then
'Logged in users (non-sysadmin) get a Strategy object that performs various security checks
Return New StandardSecurityCheckStrategy
Else
'All other users are considered Anonymous
'These users are given a Strategy object that always disallows access
Return New AnonymousSecurityCheckStrategy
End If
End Function
End Class
With the Strategy objects in place and a Factory to create the correct Strategy object for us, it becomes simple to perform the authorization check
'coding against the Interface we created
Dim authorityCheck As IAuthorityCheck
'ask the Factory for the Strategy object
authorityCheck = AuthorizationStrategyFactory.GetAuthorizationStrategy()
'at this point we have been returned 1 of the 3 strategy objects
'we do not know which one, and we do not need to know which one
'perform authorization check
If authorityCheck.IsAuthorized(someResourceObject) = False Then
'abort - user not authorized
Else
'proceed - user was authorized
End If
Now if the time comes for a fourth type of access to be needed (JuniorSysAdminEverythingButPayrollAccess), we can create a new Strategy class that implements our Interface and contains the specific algorithm appropriate for the new security check. Then we would modify the Factory class to return that new Strategy object when it was appropriate to do so. The last little bit of code that is calling the IsAuthorized method does not need to change at all.
The example here was perhaps a bit contrived, but I hope that it illustrates the mechanics behind implementing the Strategy Pattern.
Also, note that the Provider Model introduced in ASP.NET 2.0 is based on strategy design pattern.
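For comparison, in a language with first-class functions the same strategy-plus-factory shape collapses to a few lines; the following Python sketch uses hypothetical names, not anything from the article:

```python
def anonymous_check(resource):
    return False          # anonymous users are always denied

def sysadmin_check(resource):
    return True           # sysadmins are always allowed

def standard_check(resource):
    # stand-in for the application-specific security checks
    return resource.get('public', False)

def get_authorization_strategy(user):
    """Factory: encapsulates the logic for picking the right strategy."""
    if user.get('role') == 'sysadmin':
        return sysadmin_check
    return standard_check if user.get('logged_in') else anonymous_check

check = get_authorization_strategy({'logged_in': True})
print(check({'public': True}))  # True
```

The calling code stays identical no matter which strategy the factory hands back, which is the whole point of the pattern.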
Example: Strategy Design Patterns in C#.
1. GoAlgorithm (Interface)
namespace DesignPatterns.StrategyDP {
interface GoAlgorithm {
void go();
}
}
class GoByFlyingAlgorithm: GoAlgorithm {
public void go() {
Console.WriteLine("Now I'm Flying");
}
}
class GoByDrivingAlgorithm: GoAlgorithm {
public void go() {
Console.WriteLine("Now I'm Driving.");
}
}
You can define more algorithms, like "GoByMotorBikeAlgoritm", etc.
class GoByMotorBikeAlgoritm: GoAlgorithm {
public void go() {
Console.WriteLine("Now I'm Riding");
}
}
4. Vehicle (Abstract Class)
abstract class Vehicle {
private GoAlgorithm goAlgorithml;
public void setAlgorithm(GoAlgorithm Algorithm_Name) {
goAlgorithml = Algorithm_Name;
}
public void go() {
goAlgorithml.go();
}
}
5. Type of Vehicle Like 'Car', 'Vehicle' and 'Bike'
class Car: Vehicle {
public Car() {
setAlgorithm(new GoByDrivingAlgorithm());
}
}
class Helicopter:Vehicle {
public Helicopter() {
setAlgorithm(new GoByFlyingAlgorithm());
}
}
class Bike:Vehicle {
public Bike() {
setAlgorithm(new GoByMotorBikeAlgoritm());
}
}
6. Main
class Program {
static void Main(string[] args) {
/* Strategy Design Patterns*/
Car F1car = new Car();
Helicopter Choper = new Helicopter();
Bike HeroHonda = new Bike();
F1car.go();
Choper.go();
HeroHonda.go();
}
}
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Preface
This article mainly studies the FailureDetector of Apache Gossip.
FailureDetector
incubator-retired-gossip/gossip-base/src/main/java/org/apache/gossip/accrual/FailureDetector.java
public class FailureDetector {

  public static final Logger LOGGER = Logger.getLogger(FailureDetector.class);
  private final DescriptiveStatistics descriptiveStatistics;
  private final long minimumSamples;
  private volatile long latestHeartbeatMs = -1;
  private final String distribution;

  public FailureDetector(long minimumSamples, int windowSize, String distribution) {
    descriptiveStatistics = new DescriptiveStatistics(windowSize);
    this.minimumSamples = minimumSamples;
    this.distribution = distribution;
  }

  /**
   * Updates the statistics based on the delta between the last
   * heartbeat and supplied time
   *
   * @param now the time of the heartbeat in milliseconds
   */
  public synchronized void recordHeartbeat(long now) {
    if (now <= latestHeartbeatMs) {
      return;
    }
    if (latestHeartbeatMs != -1) {
      descriptiveStatistics.addValue(now - latestHeartbeatMs);
    }
    latestHeartbeatMs = now;
  }

  public synchronized Double computePhiMeasure(long now) {
    if (latestHeartbeatMs == -1 || descriptiveStatistics.getN() < minimumSamples) {
      return null;
    }
    long delta = now - latestHeartbeatMs;
    try {
      double probability;
      if (distribution.equals("normal")) {
        double standardDeviation = descriptiveStatistics.getStandardDeviation();
        standardDeviation = standardDeviation < 0.1 ? 0.1 : standardDeviation;
        probability = new NormalDistributionImpl(descriptiveStatistics.getMean(),
            standardDeviation).cumulativeProbability(delta);
      } else {
        probability = new ExponentialDistributionImpl(descriptiveStatistics.getMean())
            .cumulativeProbability(delta);
      }
      final double eps = 1e-12;
      if (1 - probability < eps) {
        probability = 1.0;
      }
      return -1.0d * Math.log10(1.0d - probability);
    } catch (MathException | IllegalArgumentException e) {
      LOGGER.debug(e);
      return null;
    }
  }
}
- The constructor of FailureDetector receives three parameters, namely minimumSamples, windowSize, and distribution.
- Here minimumSamples is the minimum number of samples required before a phi value is actually calculated, windowSize is the size of the statistics window, and distribution selects which distribution to use: "normal" means NormalDistribution, anything else means ExponentialDistribution.
- FailureDetector uses Apache Commons Math's DescriptiveStatistics to keep a sliding window of heartbeat intervals; NormalDistributionImpl and ExponentialDistributionImpl supply the cumulative distribution probability of the current delta; finally, the phi value is calculated with the formula -1.0d * Math.log10(1.0d - probability).
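A rough Python sketch of the "normal" branch (population standard deviation, the same 0.1 floor and epsilon clamp; illustrative only, not the project's API):

```python
import math
import statistics

def phi(delta_ms, intervals):
    """Phi for a silence of delta_ms given past heartbeat intervals (ms)."""
    mean = statistics.mean(intervals)
    std = max(statistics.pstdev(intervals), 0.1)  # the 0.1 floor from above
    z = (delta_ms - mean) / (std * math.sqrt(2.0))
    p = 0.5 * (1.0 + math.erf(z))                 # normal CDF at delta_ms
    p = min(p, 1.0 - 1e-12)                       # same epsilon clamp
    return -math.log10(1.0 - p)

beats = [1000, 1010, 990, 1005, 995]              # ms between heartbeats
print(round(phi(1000, beats), 2))                 # ~0.3: delta near the mean
print(phi(3000, beats) > 8)                       # True: long silence, suspect
```

Small phi means the current silence is normal; phi grows without bound as the silence stretches far past the observed intervals, which is what makes the detector "accrual".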
Summary
- The Phi Accrual Failure Detector by Hayashibara et al. proposes an accurate failure detection method based on a phi value.
- There are roughly two implementations of failure detectors in industry: one is based on the NormalDistribution, represented by Akka; the other is based on the ExponentialDistribution, represented by Cassandra.
- Apache Gossip's FailureDetector fully supports both NormalDistribution and ExponentialDistribution.
configureStructureViewItemProperties
How to get configureStructureViewItemProperties
import configureStructureViewItemProperties from 'fontoxml-families/src/configureStructureViewItemProperties.js'
Type: Function
Configure properties for elements in the structure view.
This can be used alongside configureAsStructureViewItem much like configureProperties can be used alongside the CVK families.
In addition to overriding specific properties from the base configuration, this function may also be used to configure properties for the source nodes in the documents hierarchy. Such properties may be used when the target of the corresponding hierarchy node is not available. Only the titleQuery (shared with the CVK) and the icon can currently be configured this way.
Except for the use case above, make sure that all structure view items configured using this function also have a matching configureAsStructureViewItem configuration.
Wipy 2.0 Cannot read internal flash without a SD card inserted
I'm trying to read a file from the internal flash, but to do so I must have a sd card inserted even though I'm not using it.
If I try without a card I get this:
>>> from machine import SD
>>> sd = SD()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: the requested operation failed
It works fine when a SD card is inserted.
Is what I'm trying to do possible?
Ah derp. yeah that works.
I'm new to (micro)python & coding for hardware, so learning lots at the moment.
Thanks.
- Jurassic Pork last edited by Jurassic Pork
hello,
what is your code to read your file ?
you don't need to use SD to play with the internal flash file system.
Example to list files and read the content of the boot.py file in the internal flash file system :
import os
os.listdir()
f = open('boot.py')
f.read()
f.close()
Friendly, J.P
annikaheflin577 Points
What's wrong with line 4? Also do you see anything else that needs work?
Can you please help me?
import random
def even_odd(num):
    start = 5
    while 5 = True:
        random.randint(, 99)
        start -= 1
        if return not num % 2:
            print("{} is even".format(num))
        else:
            print("{} is odd".format(num))
1 Answer
Steven Parker
203,440 Points
There are a number of issues on that line:
- you can't assign something to a number
- the assignment("=") was probably intended to be a comparison ("==")
- a number will never equal the boolean value
True
- the loop should be checking the value of "start" instead of a fixed number
- when checking a value for "truthiness" you just name it and not compare it to anything
Some other hints for this challenge:
- don't modify the provided "even_odd" function
- but you can call the function to test your numbers
- do all the new work outside of the function
- be sure to give "randint" two arguments
- everything that is part of a loop body must be indented more than the loop itself
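Put together, a corrected attempt that applies those hints might look like this (a sketch; the exact challenge wording may differ):

```python
import random

def even_odd(num):
    # Provided by the challenge, left unmodified:
    # True for even numbers, False for odd
    return not num % 2

start = 5
while start:                      # "truthiness": loop until start reaches 0
    num = random.randint(1, 99)   # randint needs two arguments
    if even_odd(num):
        print("{} is even".format(num))
    else:
        print("{} is odd".format(num))
    start -= 1
```

All the new work happens outside the function, and the loop body is indented under the `while`.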
annikaheflin577 Points
I don't get what I am supposed to do. Please explain in more detail.
The Fedora Documentation Project holds weekly IRC meetings. All members of the Fedora Project are welcome to participate.
Information about past meetings can be found on the Meetings Minutes page.
Meeting Details
- Date: Wednesday, 30 July 2008
- Time: 1900 Hrs UTC (refer to World clock for your local time)
Agenda for 30 July 2008
- Wiki
- Wiki namespaces -- Decide what namespaces we need to organize the wiki at this moment - Archives, Meeting, More?
- Wiki page naming -- RFC on f-docs-l
- PackagingGuidelines needs our love, top priority to make it easier to navigate
- manager: we need to design an admin system for a veterinary centre
dev: ok, this is it, remember your training
class Dog extends Animal {
- Long rant ahead. Should take about 2-3 minutes to read. So feel free to refill your cup of coffee and take a seat :)
It turns out that the battery in my new Nexus 6P is almost dead. Well not that I didn't expect that, the seller even explicitly put that in the product page. But it got me thinking.. why? Lithium batteries are often good for some 10k charges, meaning that they could last almost 30 years when charged every day! They'd outlive an entire generation of people!
Then I took a look at the USB-C wall charger that Huawei delivered with this thing. A 5V 3A brick. When I saw that, I immediately realized.. aah, that's why this battery crapped out after a mere 2 years.
See, while batteries are often advertised as capable of several amps (like 7A with my LiitoKala 18650 batteries that I often use in projects), that's only the current that they can safely take or deliver without blowing up. The manufacturer doesn't make this current rating with longevity in mind. It's the absolute maximum in current that a given battery can safely handle.
The longevity on the other hand directly depends on the demand that's placed on the battery. 500mA which is standard USB 2.0 rating or 1A which is standard USB 3.0 rating, no sweat. The battery will live for at least a decade of daily charges and discharges like that no problem.
But when you start shoving 3A continuous into a battery, that's when it will suffer. Imagine that your current workload is 500mA and suddenly you get shoved 6 times that work upon you. How long would you last?
Oh and not only the current is a problem, I suspect that it also overvolts the battery to maintain a constant current all the way till the end. When I charged my lithium cells with my lab bench power supply, the battery would only take a few milliamps when it got close to the supply voltage. Quick bit of knowledge: lithium cells are charged at constant current first, then when the current drops below that, it continues at constant voltage - usually 4.2 or 4.35V depending on the battery. So you'd set your lab bench power supply at 4.2V 500mA. But in that constant voltage mode, as the battery's voltage and the supply's voltage equalize, the current drops because the voltage difference becomes lower. Remember, voltage is what causes current to flow. Overvolting at the supply to stay in constant current mode all the way till the end speeds this process up but can be dangerous and requires constant monitoring of the battery voltage.
So, why does Huawei and a bunch of other manufacturers make these 3A power chargers? Well first it's because consumer demands ever more, regardless of the fact that they can just charge at 500mA for the night (8h of sleep) and charge a 4000mAh battery from 0 to 100% no problem. Secondly it's because sometimes you need that little bit of extra juice fast, like when you forgot to plug the damn thing in and you've got only 30 minutes in the morning to pour some charge into it.
But people use those damn fucking things even when they go to bed, making that 3A torture a fucking standard process!! And then they complain that their batteries go to shit?!
Hopefully this now made you realize that the fast charger shouldn't be used as a regular charger ^^
- This happens nearly every sprint.
TEAM: So, are you happy with how we are going to make this feature?
Business: Yeah, we really need it! It's exactly like that! Quick, build! 🏗
TEAM: You're sure.... remember what happened last time...
Business: yeah, yeah, yeah
TEAM: ☕️💻
one week later....
Business: Oh yeah, that thing, we changed our mind we don't want it can you do something else?
TEAM: ...
Business: Agile!!!!!!!!!
TEAM: 🤦♂️
Found out they all went on a 2 day course to learn SCRUM..
- - If I buy x amount of ram | hard drive space | cpu power I will never need more.
- No need for version control | Tests. This is a small project
- git commit -m "changes" (it's a small change. I will remember why I did it)
- It is too obvious that I put a lot of personal time in this so my boss will definitely notice!
- Why comment this simple method? Anyone should know what it does. Especially me!
- "this should never happen"
- This call can't be from work. Everyone knows I am on vacation.
- I will back up next week. It's not like it is going to crash today.
- It’s 11:30 already? I will work on this for only a half hour more and then I will definitely go to bed!
- This project will take x amount of time!
- A Monday morning poem
I enter the bureau, feeling all relaxed and well,
my colleague looks up:
"Abandon all hope, welcome to hell."
This indeed, he doesn't say,
his face only twists a little in dismay:
"I need that schematic, did you finish it yet?
And there also some tests I'd like to get -
how was your week-end by the way?"
I start my computer, don't remember what I say ...
I grab some coffee, half a day is gone,
the PM pressures: "I want that asap done!"
I am cluttered in tasks and bullshit, too:
"Go fuck you right now - yes, I meant you!"
I don't say what I like to, I mentally punch a wall,
I crank some more code out and git-commit it all.
Some devRant on the lunch-break, some shallow talk,
I leave the building and take a short walk.
My mind rotates, I cannot enjoy the scenery now,
I return to my desk, and figure out what to handle and how.
But my plans are crashed by a colleague dashing in:
"I need you to do a test setup! I need to begin -"
I do the setup, I do some other stuff,
At the end of the day I feel totally rough,
Work is piling up even more -
"Tomorrow", I think and close the door.
At home, I just flop on my bed -
I should be learning instead ... -
with some pizza and chill.
I think about sleeping, I hope that I will.
...
It is now Friday,
my brain is fried, too.
I am finished with this poem - how about you?
- When you start work on Monday and need half a day to remember what you actually did on Friday.
Someone knows this too
- Me : I found this code issue, I think we need to fix it
PO: does it affect the user?
Me: not really but we can make it better
PO: do you have a defect for it in *insert issue tracker here*
Me: no, I just noticed it
PO: is there an IM ticket for it?
Me: I don't think so
PO: is this issue already in production?
Me: possibly. Yes. That's why I was wondering if we should fix it.
PO: okay then we will fix it in the 3rd release from now if you still remember it by then.
//I already wrote this, but this is the short version
When I was 8 I wrote my first "Hello, world!"
In C, and I'm still continuing. I even learned Java, and that's all the languages I need. But a lot of people ask me: what was the first moment when I said yes to programming? It was my dad who did it with the sentence "Do you want an Arduino? It's a programming board." Now I'm 16 (8 years of experience in programming) and I'm proud of myself, and of my family too. I chose an IT school, but I already know a lot from real life, so I'm feeling bored in class.
That's all. I hope my programming journey will never end. And remember: kill all those fucking bugs!
- Manager: we need to design an admin system for a veterinary centre
Dev: ok, this is it, remember your training
namespace Vetcentre{
class Dog : Animal {
}
}
- New episode on my clients being morons.
Got a call this morning:
Client: hello, we've got a problem here...
Me: tell me about it
C: well... Do you remember the 1200 accounts we loaded last week?
Me: yes? What's wrong, we tested them, everything was alright.
C: yeah... But we just noticed we loaded them in the wrong status... Fix that!
Me: easy, we clear the database and load the correct data back.
C: NO WAY! We already worked on 3 accounts. Don't want to lose any of that. Just change the status, it's easy
Me: well not really, there's a lot more going on when you go from one status to another.
C: Don't care, just do it
So... now I need to delete the bad data, checking nothing else gets impacted in the application. And then reload that same data with the proper status this time.
As weird as this sounds like, this is the reason why I love my job. You get challenges like that every single day
- Ahh if only this were true... (The Google part)
Nice day tho... Next time I need to remember to bring my selfie stick...
- I need to quit my job, my boss is an as***le... I want to kick my boss's as**, but then I suddenly remember that I need this job, I need this money to help my mom and my sister raise her children, my nephews...
- Ok, time to eat some humble pie. I seem to remember ranting about the fact we were going to use an offshore dev house a while back, and I'd convinced myself they were going to be absolutely useless.
Far from it. It's certainly meant I've had something else to do in managing them, and I can't say everything has been completely rosy - but overall, they're a bunch of hard working, decent devs who write good, well-tested code, are receptive to feedback in code reviews and take the initiative and ask questions when they need to. Shame on me for initially thinking otherwise - I'll miss working with them when I leave this place
- Stubborn fucking cunt. You could have just sent a request to DevOps and this would all be done. But nooo, I just have to do it with all my other tasks and limited access. Now all you do is complain about it not following the structure. Bitch, I don't even have access to the goddamn schema. I can only give you the values or whatever I can get. How the fuck am I supposed to know what I can't even fucking see? I asked what you use to retrieve it so I can at least validate from my side. Nah, makes you think your dick is bigger when you tell me what I did was wrong. If I'm wrong, help me the fuck out and give me the shit I need to do my tasks.
The egos of these hardcore developers and their sense of importance. You develop websites. Jesus Christ, look into the fucking mirror. Someday you're gonna die and no one's gonna remember your fucking website.
This takes one fucking click if someone who has access did it but fuck you, you have to make everything complicated. Fuck you
- Why is it that if you are not able to remember every single detail about something you've heard, that you as a person are deemed worthless?
Everything you do and need in life is bound to your memory.
Can't remember a name?
That's rude.
Can't remember what a technology does?
You won't get that job.
Can't remember the topics you learned about in school?
You won't get that education.
I can't remember things, my mind is constantly drifting. This, together with my inability to articulate myself clearly, makes me a complete nobody.
I hope that someday I'm just able to do something creative and not have these issues. Until then I'll just try not to jump in front of a train
- Try to finish some of the projects I've started in 2018. Right now I have a todo list text file, along with multiple written lists (the written ones are more focused on a single project normally).
-Finish the startpage I've been doing off and on for at least a month now. I ended up making a lot of it command based (just need to write the scripts for the commands..). I had a little config menu but I just got tired of it and the text box is autofocus anyways, so I figured I'd make it command focused.
-Nice little root safety script as I call it. I've made very stupid mistakes as root before. I once made a typo and ran "chmod --recursive 644 /" while half asleep. I believe I was trying to run that on the current directory I was in, but as you know, the . and / are right next to each other. Basically the script would see what you're doing and echo "you're about to do x, are you sure that's what you want to do?". Something I know I could knock out in a day, but I've been putting it off for at least a year now.
-Compiling notification. I saw something similar once a few years ago, and it was so fucking cool. I remember it being a Mac, and it had a notification that would basically tell you how many files and shit you had left to compile if you were building something. Kinda want to build something for polybar.
-FUCKING RUBBER DUCK DEBUGGING TO THE EXTREME! This one was inspired by a comment someone made once months ago. Might have been here, or reddit, or in real life, not sure. Basically a big ass fucking rubber duck with LEDs in it that will like glow red if your code wouldn't compile (I think Visual Studio has like an automatic error detecting thing in there?? Maybe something similar if I can figure that out). Honestly not sure how the fuck I'd do this one, but I love the idea and I really want to fucking do it
There's more shit. These are just the main ones I want to attempt sometime in the near future.
- Couldn't remember my password that I have been using several times a day for the past year. I even used it today, but could not consciously remember the code.
So I am going to post it here so I can look it up if I need to: 12345
Notice: This is a private message only to be shared with the intended recipient. You must disavow any knowledge. When asked about this message you will spontaneously squawk like a duck. You have been thoroughly disclaimed
- So this happened some years ago:
The phone rings and as soon as I pick it up some fast talking sales rep begins his spiel.
"Good afternoon my name is [don't remember, calling him 'jigglybum'] and we have a device that you plug into your phone line and it will allow you to make free international calls over the internet. It's real easy to set up and you can have it on us for the first three months absolutely free, if you could just confirm your address..."
"Don't want it."
"I'm sorry sir but I think you're throwing away a massive opportunity here we're offering you free international calls."
"No you're not. You're offering me a free trial of some sort of VoIP hardware."
"We yes, but it's free for the first three months and..."
"We also don't make international calls."
"That maybe true sir but with this box you could."
"I'm really not interested in your product."
"I don't think you fully understand all the benefits..."
*there's a clicking noise followed by a dial tone for a second and a new voice*
"Hi, I'm the supervisor for 'jigglybum' and I think perhaps he is having difficulty explaining what it is that we are trying to give you here..."
"Listen to me, from what I have understood you are offering to send us a VOIP hardware device that directly connects to our broadband and facilitates international calls, and presumably any calls for that matter on a three month trial which after will presumably have a subscription fee, have I had any difficulty understanding the nature of the device and terms of use?"
"Well, no sir, that's a very accurate description, so if you could just confirm your address for me..."
"NO! As you have just admitted there was no misunderstanding about what your product is or what it does. There seems to be a real misunderstanding on your part on the concept of 'no'. We don't want this product, we don't need the product and if we want to make VOIP calls, we have Skype!"
"Ok sir, goodbye."
This is, to my knowledge the only and only time that a supervisor in a call centre has wanted to talk to ME
- Remember how I made a script to change brightness with keyboard shortcuts?...
Well, after 2 days of complaining why the fuck this shit does not work anymore, I figured out I deleted it...also from trash.... OH, FUCK!!
- Spent all morning debugging legacy code that I need to migrate.
Most of the time is just waiting for it to load --pieces of data-- entire tables from the database and then filter out the records it doesn't want using some app logic.
WHAT SORT OF MONKEY WRITES CODE LIKE THIS? HOW WAS THIS EVEN ALLOWED INTO PRODUCTION...
I have to open Notepad to write down my chain of thoughts, steps, and things to check once the next breakpoints are hit so that I don't forget them.
So in theory I'm being paid all morning to sit around and do nothing.
That sounds great but I'm falling asleep... Shoulda worked from home...
What was I saying again...yea...
DON'T HIRE MONKEYS!!! THEY WRITE SHIT CODE THAT WASTES EVERYONE'S TIME EVEN AFTER THEY LEAVE...
I'm going to lunch now... Hopefully Notepad has enough info for me to remember what I was doing...
- Me: Ok I'm not going to bother upgrading my PC, just going to use it and buy a whole new system later on...
PC: Yo' bro, remember when you had no clue at what you were doing and you didn't buy an 80 Plus certified PSU and used stock intel cooler? Yeah man you're going to need to replace both of these things, that'll be $280!
Of course this fucking happens, knew I should have waited till I actually knew what I was doing with PC hardware!
- I really think I prefer to be judged in an interview based on a walk through of a project you would want me to work on or a project to work on at a specified time or even my public projects in my repo, not some theoretical basics of programming.
Frankly, most times I barely remember these basic shits, but I obviously use them as the need arises in projects.
- If a team uses multiple languages and stacks (Java, JS, Python), do you think it's better to have everyone use/constantly switch between them, or have dedicated developers for each language (i.e. 80% main, 20% others)?
--END QUESTION, ANSWER NOW BEFOREHAND CONTINUING---
---BEGIN RANT---
My boss likes keeping the team "well rounded" so everyone does everything. One month I'm working in Java, the next with Node web apps. When I switch to Node, it takes like a week of "wtf, why doesn't it work.... what changed, is it a bug?" And it usually ends with "oh right, I remember, I need to ..."
And also always... "How the fuck do I write tests in {some testing framework} again?"
So feels like everyone is just a generalist and no one is a master/has time to develop mastery. I don't know if it's just me (1/3 Senior developers on the team that has to do everything) or if I'm the only one that complains... Not that it makes a difference... (Only option to really be heard is to resign but I need to somewhere else to work and finding one is hard for personal reasons)
And well this is the biggest reason I would leave the team. No time for mastery, no standardization/shared knowledge (everyone does their own thing but probably not well and no time for testing or documentation; how the fuck does whatever you wrote work, how do we use it, what the fuck did you put in prod that does ... And where the fuck did you put it cuz it's not in ANY of our repos).
I always feel one day soon it will come crashing down and I can say "I told you so" but will then it's too late and I'll be there one cleaning it up... Again6
- Installing a third OS in my PC (Manjaro, alongside Win10 and OSX Mojave). I do not remember how I configured dual boot for Win10 and OSX and now I have no clue of what I'm doing. Nothing will boot and I'm having to boot OSX through clover installed on my USB stick. Good thing I'm on quarantine and have a lot of time to play with this. Oh wait not really since I have some college work to do and I need Linux for that. Yay.8
- Okay, my first serious rant.
An acquaintance of mine, whenever he needs my help, always explains his problem equivocally. Like, he would laboriously explain the method to achieve what he needed when the only thing he needed was a simple API call. I'm not saying I'm an expert in this area, but his explanation doesn't help me understand his problem. If I don't understand his problem, how can I help him? At least if I knew what his problem was and couldn't help, I could seek help from others.
And he's not even working in the same company as me. And he wants it solved ASAP. I don't know your problem, yet you want me to solve it? I don't even know if I'm capable of solving it! And I have my own job to do..
He always tries hard to explain it. He tries to sound professional. And he always asks for my help first, because I know he doesn't want others to know that he doesn't know how to code. Why do you apply for the position if you know you can't handle it?! Every time. He's been fired before. And he did it again. I can't. We are fresh graduates. Apply for a fresh grad position. If you don't know anything, just say you don't know, unless you're very quick to learn..
I remember once we needed to submit a Linux commands homework or something. We needed to code it during the class and submit it by the end of the class. He asked me to code it for him while mine was still half done. "Quicker please!" he remarked. There were still plenty of our classmates doing it, and some hadn't even started yet. What the f are you rushing? I felt like slapping him in the face with the keyboard at that time, but because I am a mature adult I did not do it.
He's not even a bully, he just always panics without reason. He wants things done early so he can post on social media. "Oh so tired, this program is so complicated" or like "Oh damn, they want me to lead the group again (roll eyes emoticon)"...
Please somebody run over him.
He's making me bald every day and I think this is unhealthy. If he wants to go bald, go bald alone. I was just starting to work, but my hair has been falling out every day.
- I remember when I was at vocational school, my teacher sat us down and had us start web development with HTML (HTML wasn't my first programming experience, but that's a rant for another day), and after I printed Hello World and changed its color, I was even more hooked than ever.
And the final thing that sealed the deal was I could do this and make money and not be stuck in a field I would be miserable in. Which was a very important factor for me.4
- ..
- | https://devrant.com/search?term=need+to+remember+this | CC-MAIN-2020-45 | en | refinedweb |
Appearance
import { Appearance } from 'react-native';
The
Appearance module exposes information about the user's appearance preferences, such as their preferred color scheme (light or dark).
The
Appearance API is inspired by the Media Queries draft from the W3C. The color scheme preference is modeled after the
prefers-color-scheme CSS media feature.
The color scheme preference will map to the user's Light or Dark theme preference on Android 10 (API level 29) devices and higher.
The color scheme preference will map to the user's Light or Dark Mode preference on iOS 13 devices and higher.
Example
You can use the
Appearance module to determine if the user prefers a dark color scheme:
const colorScheme = Appearance.getColorScheme();
if (colorScheme === 'dark') {
  // Use dark color scheme
}
Although the color scheme is available immediately, this may change (e.g. scheduled color scheme change at sunrise or sunset). Any rendering logic or styles that depend on the user preferred color scheme should try to call this function on every render, rather than caching the value. For example, you may use the
useColorScheme React hook as it provides and subscribes to color scheme updates, or you may use inline styles rather than setting a value in a
StyleSheet.
Reference
Methods
getColorScheme()
static getColorScheme()
See also:
useColorScheme hook.
Note:
getColorScheme() will always return
light when debugging with Chrome.
addChangeListener()
static addChangeListener(listener)
Add an event handler that is fired when appearance preferences change.
removeChangeListener()
static removeChangeListener(listener)
Remove an event handler. | https://reactnative.dev/docs/appearance | CC-MAIN-2020-45 | en | refinedweb |
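The subscribe/notify mechanics behind these two methods can be modeled in a few lines of plain JavaScript. This is only an illustrative stand-in (`FakeAppearance` is an invented name, not React Native's implementation), but it shows why holding on to the listener reference matters for removal:

```javascript
// Minimal stand-in for the Appearance module's listener mechanics.
// Real code would use the actual `Appearance` from 'react-native'.
const FakeAppearance = {
  _scheme: 'light',
  _listeners: [],
  getColorScheme() {
    return this._scheme;
  },
  addChangeListener(listener) {
    this._listeners.push(listener);
  },
  removeChangeListener(listener) {
    this._listeners = this._listeners.filter((l) => l !== listener);
  },
  // Simulates the OS changing the user's preference.
  _setScheme(scheme) {
    this._scheme = scheme;
    this._listeners.forEach((l) => l({ colorScheme: scheme }));
  },
};

let observed = null;
const listener = ({ colorScheme }) => {
  observed = colorScheme; // e.g. restyle or re-render here
};

FakeAppearance.addChangeListener(listener);
FakeAppearance._setScheme('dark');   // listener fires with 'dark'
FakeAppearance.removeChangeListener(listener);
FakeAppearance._setScheme('light');  // listener no longer fires
```

After removal the listener stops receiving updates, which is why cleanup (typically when a component unmounts) matters: you must pass the same function reference to `removeChangeListener` that you passed to `addChangeListener`.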
Understanding Render()
In the last part of this series, the following code was added at the end of the “index.js” file:
ReactDOM.render(
  <h1>Hello, world!</h1>,
  document.getElementById('root')
);
The render() function, as part of the ReactDOM object, is central to how React works when used to create HTML content. It "renders" all of the code passed to it.
Renders?
The two arguments passed to the ReactDOM.render() function in this example are (1) the HTML content and (2) where it should put this content. (By default, the root will be an element with the id of ‘root’.)
The HTML code passed to the render() function may seem strange, as it was not enclosed in strings or otherwise escaped somehow. This is one of the most powerful parts of how React works: HTML can be used directly in the JavaScript code!
By passing it to the render() function, it is “rendered” into code that the underlining system understands and passed off to be processed. In the example, the simple use of a single tag is not overly complex, but it demonstrates the basis of how React works: everything is “rendered” into HTML by its code.
All Components Render()
When working with React, individual parts are called components. Think of them as sections of the interface.
Every component has the same function as the ReactDOM object: render().
With this in mind, it becomes easier to see how to use components. Because all components also render content, they can also use HTML in their code. Different parts of an interface could be broken up into sections (components) and then combined together. And each would have access to the same ability to render its contents!
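The idea that every section renders its own content, and that bigger pieces are assembled from smaller ones, can be sketched in plain JavaScript, outside React entirely. This is only a toy model (React's real rendering is far more involved), but it illustrates the pattern of every component supplying a render() that returns markup:

```javascript
// Toy model: every "component" knows how to render itself to an HTML string.
class Header {
  render() {
    return '<h1>Hello, world!</h1>';
  }
}

class Footer {
  render() {
    return '<p>Goodbye!</p>';
  }
}

// A larger section composes smaller components by rendering each of them.
class Page {
  render() {
    return new Header().render() + new Footer().render();
  }
}

const html = new Page().render();
// html is '<h1>Hello, world!</h1><p>Goodbye!</p>'
```

The real React version of this pattern follows below.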
Inheriting from React.Component
To use the React.Component class, the keyword extends, part of JavaScript ES6, can be used to "extend" a base class into a new one through inheritance.
class Example extends React.Component {
  render() {
    return (
      <h1>Hello, world!</h1>
    );
  }
}
Writing a new class Example, then, means using the extends keyword and having its parent be the React.Component class. This makes a new object a React component!
As mentioned before, all components render().
Thus, adding a render() function to the class allows it to render its own content. Adding a return statement helps it to pass HTML code. Like with the ReactDOM.render() function, it returns code that is ultimately rendered.
Using React.Component Classes with ReactDOM
React understands JavaScript classes a little differently than normal. In order to have them rendered as part of the normal ReactDOM.render() function, they must be included as if they too were HTML.
An existing Example class that extends the React.Component object, then, becomes the following:
<Example />
Notice that it has an ending slash, /. This signals, like with HTML, that the tag is self-closing.
Putting It All Together
The following code makes a new class called Example that inherits from React.Component and has its own render() function. This new class is included as part of the ReactDOM.render() process by using it as if it was a HTML tag.
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';

class Example extends React.Component {
  render() {
    return (
      <h1>Hello, world!</h1>
    );
  }
}

ReactDOM.render(
  <Example />,
  document.getElementById('root')
);
While the final result did not change, the code that produced it did. Now, a class based on a React Component is passing off its rendering to the ReactDOM.render() function. | https://videlais.com/2019/05/25/learning-react-part-2-understanding-render/ | CC-MAIN-2020-45 | en | refinedweb |
Code generated by IDEA:
package obj {
public class Person {
private var firstName:String;
private var lastName:String;
public function Person() {
}
public function get firstName():String {
return firstName;
}
public function set firstName(value:String):void {
firstName = value;
}
public function get lastName():String {
return lastName;
}
public function set lastName(value:String):void {
lastName = value;
}
}
}
Generate Getter/Setter action in ActionScript is designed according to Flex coding conventions.
"Give the storage variable for the getter/setter foo the name _foo"
So please rename your variables to _firstName and _lastName and you'll get expected behavior.
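Following that convention, the generated accessors compile cleanly because the backing field no longer shadows the property. A sketch of what the class looks like with renamed fields (abbreviated to one property; this is illustrative, not verbatim IDE output):

```actionscript
package obj {
public class Person {
    private var _firstName:String;

    public function Person() {
    }

    public function get firstName():String {
        return _firstName;  // reads the backing field, not the getter itself
    }

    public function set firstName(value:String):void {
        _firstName = value;
    }
}
}
```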
Thank you for the quick reply and the workaround!
But a convention is only a convention. What if I write code as I prefer (I have experience in Java)?
Can I expect to see compilable code generated by IDEA in the next releases?
A getter/setter in Java is a method named getXXX/setXXX, whereas in Flex there are the special keywords get/set. So, generally speaking, Java-style code in Flex is not what is called getters/setters, and this is not what the Alt+Insert action is about.
If you are talking about Java-style code, I'm not sure it is a widely used approach in Flex. Maybe a different approach looks better: automatically renaming the property field to _property before generating the getter/setter.
I think it would be a good thing. Thanks.
Alexander,
That would be a great feature enhancement. I often find I want to encapsulate a property in a setter & getter. Currently I have to first do a rename of property to _property and then do the setter/getter generation. It'd be great if this was done in one step. Additionally, if property is currently public, having IDEA change it to private or protected (selected via an option in the getter/setter dialog) would be icing on the cake. Let me know if you'd like a YouTrack feature request submitted on these.
Thanks,
Mark
Yes, please create an issue. (Update: an issue has been created.)
Thanks Alexander. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206955475-ActionScript-invalid-object-getter-setter-generation-with-alt-insert?page=1 | CC-MAIN-2020-45 | en | refinedweb |
This example creates a View that wraps two colored boxes and a text component in a row with padding:
import React, { Component } from "react";
import { View, Text } from "react-native";

class App extends Component {
  render() {
    return (
      <View style={{ flexDirection: "row", height: 100, padding: 20 }}>
        <View style={{ backgroundColor: "blue", flex: 0.3 }} />
        <View style={{ backgroundColor: "red", flex: 0.5 }} />
        <Text>Hello World!</Text>
      </View>
    );
  }
}

export default App;
Views are designed to be used with
StyleSheet for clarity and performance, although inline styles are also supported.
Synthetic Touch Events
For
View responder props (e.g.,
onResponderMove), the synthetic touch event passed to them is in the form of a PressEvent.
Reference
Props
onStartShouldSetResponder
Does this view want to become responder on the start of a touch?
View.props.onStartShouldSetResponder: (event) => [true | false], where event is a PressEvent.

...

onResponderReject

Another responder is already active and will not release it to that View asking to be the responder.

View.props.onResponderReject: (event) => {}, where event is a PressEvent.
pointerEvents
Controls whether the
View can be the target of touch events.
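As an illustrative fragment (the component names and styles here are assumptions, not from the original docs): a purely decorative overlay commonly sets pointerEvents to "none" so touches pass through it to the content underneath.

```javascript
// Decorative overlay: with pointerEvents="none" it is never the
// target of touch events, so taps reach the content below it.
<View pointerEvents="none" style={StyleSheet.absoluteFill}>
  <Text>Watermark</Text>
</View>
```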
focusable
Whether this
View should be focusable with a non-touch input device, e.g. receive focus with a hardware keyboard.
Summary
I’ve done a lot of .Net Web APIs. APIs are the future of web programming. APIs allow you to break your system into smaller systems to give you flexibility and most importantly scalability. It can also be used to break an application into front-end and back-end systems giving you the flexibility to write multiple front-ends for one back-end. Most commonly this is used in a situation where your web application supports browsers and mobile device applications.
Web API
I’m going to create a very simple API to support one GET Method type of controller. My purpose is to show how to add Cross Origin Resource Sharing CORS support and how to connect all the pieces together. I’ll be using a straight HTML web page with a JQuery page to perform the AJAX command. I’ll also use JSON for the protocol. I will not be covering JSONP in this article. My final purpose in writing this article is to demonstrate how to troubleshoot problems with APIs and what tools you can use.
I’m using Visual Studio 2015 Community edition. The free version. This should all work on version 2012 and beyond, though I’ve had difficulty with 2012 and CORS in the past (specifically with conflicts with Newtonsoft JSON).
You’ll need to create a new Web API application. Create an empty application and select “Web API” in the check box.
Then add a new controller and select “Web API 2 Controller – Empty”.
Now you’ll need two NuGet packages and you can copy these two lines and paste them into your “Package Manager Console” window and execute them directly:
Install-Package Newtonsoft.Json
Install-Package Microsoft.AspNet.WebApi.Cors
For my API Controller, I named it “HomeController” which means that the path will be:
myweburl/api/Home/methodname
How do I know that? It’s in the WebApiConfig.cs file, which can be found inside the App_Start directory. Here’s the default:
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);
The word "api" is in all path names to your Web API applications, but you can change it to any word you want. If you had two different sets of APIs, you could use two routes with different patterns. I’m not going to get any deeper here; I just wanted to mention that the "routeTemplate" controls the URL pattern that you will need in order to connect to your API.
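To make the mapping concrete, here is a small illustrative matcher in plain JavaScript (a toy model only; ASP.NET's actual routing engine is far richer) showing how a template like `api/{controller}/{id}` pairs URL segments with placeholder names:

```javascript
// Toy route matcher: pairs each URL segment with the template's
// placeholder names. A trailing placeholder may be omitted, which
// models RouteParameter.Optional for {id}.
function matchRoute(template, path) {
  const tParts = template.split('/');
  const pParts = path.split('/');
  if (pParts.length > tParts.length) return null;
  const values = {};
  for (let i = 0; i < tParts.length; i++) {
    const t = tParts[i];
    const p = pParts[i];
    if (t.startsWith('{') && t.endsWith('}')) {
      if (p !== undefined) values[t.slice(1, -1)] = p;
    } else if (t !== p) {
      return null; // literal segment (e.g. "api") must match exactly
    }
  }
  return values;
}

const m = matchRoute('api/{controller}/{id}', 'api/Home/5');
// m is { controller: 'Home', id: '5' }
const noMatch = matchRoute('api/{controller}/{id}', 'shop/Home/5');
// noMatch is null: the literal "api" segment didn't match
```

With the default template, the literal first segment is what makes "api" appear in every URL, and renaming it in the template is all it takes to change that.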
If you create an HTML web page and drop it inside the same URL as your API, it’ll work. However, what I’m going to do is run my HTML file from my desktop and I’m going to make up a URL for my API. This will require CORS support, otherwise the API will not respond to any requests.
At this point, the CORS support is installed from the above NuGet package. All we need is to add the following using to the WebApiConfig.cs file:
using System.Web.Http.Cors;
Then add the following code to the top of the “Register” method:
var cors = new EnableCorsAttribute("*", "*", "*");
config.EnableCors(cors);
I'm demonstrating support for all origins, headers and methods. However, you should narrow these down after you have completed your APIs and are going to deploy your application to a production system; a wildcard policy lets any site call your APIs from a browser.
Next, is the code for the controller that you created earlier:
using System.Net;
using System.Net.Http;
using System.Web.Http;
using WebApiCorsDemo.Models;
using Newtonsoft.Json;
using System.Text;

namespace WebApiCorsDemo.Controllers
{
    public class HomeController : ApiController
    {
        [HttpGet]
        public HttpResponseMessage MyMessage()
        {
            var result = new MessageResults
            {
                Message = "It worked!"
            };
            var jsonData = JsonConvert.SerializeObject(result);
            var resp = new HttpResponseMessage(HttpStatusCode.OK);
            resp.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
            return resp;
        }
    }
}
You can see that I serialized the MessageResults object into a JSON message and returned it in the response content with a type of application/json. I always use a serializer to create my JSON if possible. You can generate the same output using a string and just building the JSON manually. It works and it’s really easy on something this tiny. However, I would discourage this practice because it becomes a programming nightmare when a program grows in size and complexity. Once you become familiar with APIs and start to build a full-scale application, you’ll be returning large complex data types and it is so easy to miss a “{” bracket and spend hours trying to fix something that you should not be wasting time on.
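The point is language-neutral. A minimal Python illustration of why a serializer beats hand-built JSON:

```python
import json

result = {"Message": 'He said "hi" {ok}'}

# The serializer escapes the embedded quotes and balances the braces for you.
serialized = json.dumps(result)
assert json.loads(serialized) == result

# Building the same string by hand means tracking every quote, backslash
# and brace yourself; one slip produces invalid JSON that fails only at parse time.
```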
The code for the MessageResults class is in the Models folder called MessageResults.cs:
public class MessageResults
{
    public string Message { get; set; }
}
Now we’ll need a JQuery file that will call this API, and then we’ll need to setup IIS.
For the HTML file, I created a Home.html file and populated it with this:
<!DOCTYPE html>
<html>
<head>
    <title></title>
    <meta charset="utf-8" />
    <script src="jquery-2.1.4.min.js"></script>
    <script src="Home.js"></script>
</head>
<body>
</body>
</html>
You’ll need to download JQuery, I used version 2.1.4 in this example, but I would recommend going to the JQuery website and download the latest version and just change the script url above to reflect the version of JQuery that you’re using. You can also see that I named my js file “Home.js” to match my “Home.html” file. Inside my js file is this:
$(document).ready(function () {
    GetMessage();
});

function GetMessage() {
    var url = "";
    $.ajax({
        crossDomain: true,
        type: "GET",
        url: url,
        dataType: 'json',
        contentType: 'application/json',
        success: function (data, textStatus, jqXHR) {
            alert(data.Message);
        },
        error: function (jqXHR, textStatus, errorThrown) {
            alert(formatErrorMessage(jqXHR, textStatus));
        }
    });
}
There is an additional “formatErrorMessage()” function that is not shown above, you can copy that from the full code I posted on GitHub, or just remove it from your error return. I use this function for troubleshooting AJAX calls. At this point, if you typed in all the code from above, you won’t get any results. Primarily because you don’t have a URL named “” and it doesn’t exist on the internet (unless someone goes out and claims it). You have to setup your IIS with a dummy URL for testing purposes.
So open the IIS control panel, right-click on “Sites” and “Add Website”:
For test sites, I always name my website the exact same URL that I’m going to bind to it. That makes it easy to find the correct website. Especially if I have 50 test sites setup. You’ll need to point the physical path to the root path of your project, not solution. This will be the subdirectory that contains the web.config file.
Next, you’ll need to make sure that your web project directory has permissions for IIS to access. Once you create the website you can click on the website node and on the right side are a bunch of links to do “stuff”. You’ll see one link named “Edit Permissions”, click on it. Then click on the “Security” tab of the small window that popped up. Make sure the following users have full permissions:
IUSR
IIS_IUSRS (yourpcname\IIS_IUSRS)
If both do not exist, then add them and give them full rights. Close your IIS window.
One more step before your application will work. You’ll need to redirect the URL name to your localhost so that IIS will listen for HTTP requests.
Open your hosts file, located in C:\Windows\System32\drivers\etc\hosts. This is a text file and you can add as many entries to this file as you would like. At the bottom of the hosts file, I added this line:
127.0.0.1
You can use the same name, or make up your own URL. Try not to use a URL that exists on the web or you will find that you cannot get to the real address anymore. The hosts file will override DNS and reroute your request to 127.0.0.1 which is your own PC.
Now, let’s do some incremental testing to make sure each piece of the puzzle is working. First, let’s make sure the hosts table is working correctly. Open up a command window. You might have to run as administrator if you are using Windows 10. You can type “CMD” in the run box and start the window up. Then execute the following command:
ping
You should get the following:
If you don’t get a response back, then you might need to reboot your PC, or clear your DNS cache. Start with the DNS cache by typing in this command:
ipconfig /flushdns
Try to ping again. If it doesn’t work, reboot and then try again. After that, you’ll need to select a different URL name to get it to work. Beyond that, it’s time to google. Don’t go any further until you get this problem fixed.
This is a GET method, so let’s open a browser and go directly to the path that we think our API is located. Before we do that, Rebuild the API application and make sure it builds without errors. Then open the js file and copy the URL that we’ll call and paste it into the browser URL. You should see this:
If you get an error of any type, you can use a tool called Fiddler to analyze what is happening. Download and install Fiddler. You might need to change Firefox’s configuration for handling proxies (Firefox will block Fiddler, as if we needed another problem to troubleshoot). For the version of Firefox as of this writing (42.0), go to the Options, Advanced, Network, then click the “Settings” button to the right of the Connection section. Select “Use system proxy settings”.
OK, now you should be able to refresh the browser with your test URL in it and see something pop up in your Fiddler screen. Obviously, if you have a 404 error, you’ll see it long before you notice it on Fiddler (it should report 404 on the web page). This just means your URL is wrong.
If you get a “No HTTP resource was found that matches the request URI” message in your browser, you might have your controller named wrong in the URL. This is a 404 sent back from the program that it couldn’t route correctly. This error will also return something like “No type was found that matches the controller named [Home2]” where “Home2” was in the URL, but your controller is named “HomeController” (which means your URL should use “Home”).
Time to test CORS. In your test browser setup, CORS will not refuse the connection. That’s because you are requesting your API from the website that the API is hosted on. However, we want to run this from an HTML page that might be hosted someplace else. In our test we will run it from the desktop. So navigate to where you created “Home.html” and double-click on that page. If CORS is not working you’ll get an error. You’ll need Fiddler to figure this out. In Fiddler you’ll see a 405 error. If you go to the bottom right window (this represents the response), you can switch to “raw” and see a message like this:
HTTP/1.1 405 Method Not Allowed
Cache-Control: no-cache
Pragma: no-cache
Allow: GET
Content-Type: application/xml; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 00:53:34 GMT
Content-Length: 96
<Error><Message>The requested resource does not support http method 'OPTIONS'.</Message></Error>
The first request from a cross origin request is the OPTIONS request. This occurs before the GET. The purpose of the OPTIONS is to determine if the end point will accept a request from your browser. For the example code, if the CORS section is working inside the WebApiConfig.cs file, then you’ll see two requests in Fiddler, one OPTIONS request followed by a GET request. Here’s the OPTIONS response:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: content-type
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 00:58:23 GMT
Content-Length: 0
And the raw GET response:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 24
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/10.0
Access-Control-Allow-Origin: *
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 15 Nov 2015 01:10:59 GMT
{"Message":"It worked!"}
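Putting the two responses together, the server-side decision can be sketched as a tiny function (hypothetical names; in the real application the EnableCorsAttribute does this work):

```python
def respond(method, cors_enabled):
    # A browser sends OPTIONS first for a cross-origin call; only if that
    # preflight succeeds does it go on to send the real GET.
    if method == "OPTIONS":
        if cors_enabled:
            return {"status": 200,
                    "Access-Control-Allow-Origin": "*",
                    "Access-Control-Allow-Headers": "content-type"}
        return {"status": 405}  # "does not support http method 'OPTIONS'"
    if method == "GET":
        resp = {"status": 200, "body": '{"Message":"It worked!"}'}
        if cors_enabled:
            resp["Access-Control-Allow-Origin"] = "*"
        return resp
    return {"status": 405}
```

With CORS disabled the preflight gets the 405 shown earlier and the browser never issues the GET; with it enabled, both requests succeed.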
If you switch your response to JSON for the GET response, you should see something like this:
One more thing to notice. If you open a browser and paste the URL into it and then change the name of MyMessage action, you’ll notice that it still performs a GET operation from the controller, returning the “It worked!” message. If you create two or more GET methods in the same controller one action will become the default action for all GET operations, no matter which action you specify. Modify your route inside your WebApiConfig.cs file. Add an “{action}” to the route like this:
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { id = RouteParameter.Optional }
);
Now you should see an error in your browser if the action name in your URL does not exist in your controller:
Finally, you can create two or more GET actions and they will be distinguished by the name of the action in the URL. Add the following action to your controller inside “HomeController.cs”:
[HttpGet]
public HttpResponseMessage MyMessageTest()
{
    string result = "This is the second controller";
    var jsonData = JsonConvert.SerializeObject(result);
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new StringContent(jsonData, Encoding.UTF8, "application/json");
    return resp;
}
Rebuild, and test from your browser directly. First use the URL containing “MyMessage”:
Then try MyMessageTest:
Notice how the MyMessageTest action returns a JSON string and the MyMessage returns a JSON message object.
Where to Find the Source Code
You can download the full Visual Studio source code at my GitHub account by clicking here.

Source: http://blog.frankdecaire.com/2015/11/14/web-apis-with-cors/
Learn to create the game “Spin The Bottle” on Android
What is better than creating some funny games to learn the basics of Android development? Today, you're going to discover more of the Android Animation API by creating the game "Spin The Bottle".
Note that you can discover this tutorial in video on YouTube too.
With our application, it is no longer necessary to have a real bottle; you will just need a smartphone. And that's great, because nowadays everyone always has a smartphone on them.
The first step is to find two specific images: one for the floor, which will be used as the background of the game, and another for the bottle, which will be rotated during the game.
We have chosen to use these both images :
- Floor image for the background
- Bottle image
Now, we need to define the layout of our game "Spin The Bottle". We have a RelativeLayout with the floor image as background, and we add a bottle in the center of this layout:

<?xml version="1.0" encoding="utf-8"?>
<!-- The ids below match the MainActivity code; layout sizes and drawable names are illustrative. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/root"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@drawable/floor">

    <ImageView
        android:id="@+id/bottle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:src="@drawable/bottle" />

</RelativeLayout>
The next step is to write the Java code of the Main Activity. First, we get references for the views. Then, we install a click listener on the main layout of the Main Activity. Thus, the user will just have to touch the screen to spin the bottle and to start the game.
The core of the game is in the spinTheBottle() method. We are going to define a RotateAnimation from start angle to end angle centered on the center of the bottle. The end angle will be defined randomly to simulate really the game “Spin The Bottle”. We should store the last end angle value to restart the bottle in the same position for the next turn of the game.
The code of the Main Activity will have the following form:

package com.ssaurel.spinthebottle;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.view.animation.Animation;
import android.view.animation.RotateAnimation;
import android.widget.ImageView;
import android.widget.Toast;

import java.util.Random;

public class MainActivity extends AppCompatActivity {

    public static final Random RANDOM = new Random();
    private View main;
    private ImageView bottle;
    private int lastAngle = -1;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        main = findViewById(R.id.root);
        bottle = (ImageView) findViewById(R.id.bottle);
        main.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                spinTheBottle();
            }
        });
        Toast.makeText(this, R.string.touch_to_spin, Toast.LENGTH_SHORT).show();
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        switch (item.getItemId()) {
            case R.id.action_spin:
                spinTheBottle();
                break;
            case R.id.action_zero:
                resetTheBottle();
                break;
        }
        return super.onOptionsItemSelected(item);
    }

    private void spinTheBottle() {
        int angle = RANDOM.nextInt(3600 - 360) + 360;
        float pivotX = bottle.getWidth() / 2;
        float pivotY = bottle.getHeight() / 2;
        final Animation animRotate = new RotateAnimation(lastAngle == -1 ? 0 : lastAngle, angle, pivotX, pivotY);
        lastAngle = angle;
        animRotate.setDuration(2500);
        animRotate.setFillAfter(true);
        bottle.startAnimation(animRotate);
    }

    private void resetTheBottle() {
        float pivotX = bottle.getWidth() / 2;
        float pivotY = bottle.getHeight() / 2;
        final Animation animRotate = new RotateAnimation(lastAngle == -1 ? 0 : lastAngle, 0, pivotX, pivotY);
        lastAngle = -1;
        animRotate.setDuration(2000);
        animRotate.setFillAfter(true);
        bottle.startAnimation(animRotate);
    }
}
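The angle bookkeeping inside spinTheBottle() can be checked on its own. Here is an illustrative Python sketch of the same logic; random.randrange(360, 3600) draws from the same range as RANDOM.nextInt(3600 - 360) + 360:

```python
import random

def next_spin(last_angle):
    # Start from where the previous spin ended (or 0 on the very first spin)
    # and pick a random end angle of one to ten full turns.
    start = 0 if last_angle == -1 else last_angle
    end = random.randrange(360, 3600)  # same range as nextInt(3600 - 360) + 360
    return start, end

start, end = next_spin(-1)   # first spin starts at 0
start2, _ = next_spin(end)   # the next spin resumes at the last end angle
```

Remembering the last end angle is what keeps the bottle from visibly jumping back to its starting position between turns.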
You can run the application and play your game.
To go further, you can download the game "Spin The Bottle" created in this tutorial on the Google Play Store.
Hi, thanks for this useful tutorial for newbies like me 🙂
I have this 3 errors:
Can you help me please?
Thanks!
Hi,
You need to create a main.xml menu in res/menu with both items: action_spin and action_zero.
Follow the video to learn how to make that.
Sylvain
Hi again, thank you for reply. The video and the code of this post are a little different. In the video there is not resetTheBottle() method and this lines of code for example -> (I’ve problems here xD)

Source: http://www.ssaurel.com/blog/learn-to-create-the-game-spin-the-bottle-on-android/
The first step is the replacement of all strings of the form:
"This needs to be translated"
by the following call (interpreted to be a C macro in Gnu's gettext)
_("This needs to be translated")
which is very simple.
The standard way then requires the use of gettext and results in the creation of ".pot" files (Portable Object Templates) to be copied and translated into ".po" (Portable Object) files by a human translator; these are then converted into ".mo" (Machine Object) files by a compiler. Yet a few more steps, mostly dealing with directory structures and setting up locales, are needed before one can conclude the process.
I present here a simpler way to proceed which, at a later time, can easily be converted to the standard gettext method as it basically uses the same "_()" notation required by gettext. This was inspired by a comp.lang.python post by Martin v. Löwis to a question I asked almost a year ago.
Consider the following simple (English) program:

def name():
    print "My name is Andre"

if __name__ == '__main__':
    name()

The French translation of this program would be

# -*- coding: latin-1 -*-
def name():
    print u"Je m'appelle André"

if __name__ == '__main__':
    name()

Without further ado, here's the internationalized version using a poor man's i18n method and demonstrating how one can easily switch between languages:

from translate import _, select

def name():
    print _("My name is Andre")

if __name__ == '__main__':
    name()
    select('fr')
    name()
    select('en')
    name()

The key here is the creation of the simple translate.py module:

__language = 'en' # leading double underscore to avoid namespace collisions

def select(lang):
    global __language
    __language = lang
    if lang == 'fr':
        global fr
        import fr

def _(message):
    if __language == 'en':
        return message
    elif __language == 'fr':
        return fr.translation[message]

together with the language-specific fr.py module containing a single dictionary whose keys are the original English strings:

# -*- coding: latin-1 -*-
translation = {
    "My name is Andre" : u"Je m'appelle André"
}

That's it! Try it out!
In conclusion, if you want to make your programs translation friendly, all you have to do is:
- replace all "strings" by _("strings")
- include the statement "from translate import _, select" at the beginning of your program.
- create a file named "translate.py" containing the following:

def select(lang):
    pass

def _(message):
    return message

and leave the rest to the international users/programmers; you will have done them a huge favour!

Source: https://aroberge.blogspot.com/2005/07/
Introduction: Pi Day | Python Program
As you know already, pi is an endless number often used in math. It is celebrated on 3/14 (March 14th) because the standard approximation of pi is 3.14.
Please Note: I am doing this from a Mac, but it also works with extra software for windows. It also works in the Python Shell on the Raspberry Pi.
Step 1: Step 1: Download and Install Textwrangler
Download and install TextWrangler. To do this, go to the official TextWrangler site and select your computer version. Then download. I recommend that you do it from this website because it is the official one, and that prevents the likelihood of virus and malware infections.
Once it is done downloading, open it.
Step 2: Step 2: Create New Python Program
Whether you are into computer programming (for this-python), or not, you just need to copy and paste the following code into a new document.
To Open a New Document: Go to File>New>Text Document
import time

today = raw_input("What is today??")
if today == "pi day":
    print "Yea!!!"
    time.sleep(2)
    print "3.14159265358979323846"
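The listing above is Python 2 (raw_input and the print statement). For readers on Python 3, here is an equivalent with the printed lines pulled into a function, so the logic can be exercised without typing at a prompt:

```python
import time

def pi_day_message(today, delay=0):
    # Returns the lines the program prints for a given answer.
    lines = []
    if today == "pi day":
        lines.append("Yea!!!")
        time.sleep(delay)  # the original pauses 2 seconds for effect
        lines.append("3.14159265358979323846")
    return lines

# Interactive use, mirroring the original:
#   today = input("What is today??")  # input() replaces raw_input() in Python 3
#   for line in pi_day_message(today, delay=2):
#       print(line)
```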
This document will not be treated as Python until you tell it to. So, save your document. But before you click save, change the extension from .txt to .py.
This will make it a Python program that you can run.
Step 3: Step 3: Run!
Run your program by going to #!>Run in Terminal at the top of the screen. It may take a minute, but it will run your program in Terminal.
First, it will say: What is today?
Respond by typing:
pi day
followed by the return key.
It will then say: Yea!!
Then, 3.14159265358979323846
After that, the program is done
If there is something specific you would like to add, but you do not program, let me know and I can try to elaborate on it for you and send you the code.
If you do program with Python and you can make a really cool update to this code, let me know down below and I will definitely respond.
Any Questions?
Ask them below and I will respond.
Have a nice day!!
Recommendations
We have a be nice policy.
Please be positive and constructive.

Source: http://www.instructables.com/id/Pi-Day-Python-Program/
Introduction: Simple Auto Fish Feeder in Arduino
I intended to take a holiday trip with my family, but no one could take care of my fishes. I checked out what other Instructables members had done and decided to build my own auto fish feeder. My only principles are: simple to make, save water and save power. I found the roller method the easiest to build.
Step 1: Simple Parts
I found an old toothpick tube suitable for the roller; I need the servo and the disk to mount the roller. Of course any parts can be used; what matters most is to save water and save power. I drilled 5 holes, 5 mm each, in the tube and mounted the servo disk on the bottom (it does not matter whether it faces up or down). The number of holes and their size determine how much food is given.
Step 2: Simple Assembly
Then I built a plywood bracket to mount the servo and hung the bracket underneath the lamp position. No screws and no glue used; save water, save power. Fish food can be scattered in from the top.
Step 3: Simple Code
The servo motor is controlled by Arduino code; major parts of the code refer to the Servo "Sweep" example. I intended to feed the fishes once or twice a day, probably at the same time, so I need a timer. I found the millis() function very useful for counting seconds. To feed once daily I set the counter up to 86400 seconds, so no RTC is needed.
The roller is turned by the servo through 180 degrees. I tested that three rolls dispense enough food, so the shake() function will be called three times every 86400 seconds. Save water, save power!
#include <Servo.h>

Servo myservo;  // create servo object to control a servo
                // twelve servo objects can be created on most boards

unsigned long second = 0;  // was "int"; a 16-bit int overflows at 32767 on AVR, too small for 86400
int pos = 0;               // variable to store the servo position

void setup() {
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
  myservo.write(pos);
  Serial.begin(9600);
}

void loop() {
  static unsigned long lastTick = 0;
  if (millis() - lastTick >= 1000) {
    lastTick = millis();
    second++;
    Serial.println(second);
  }
  if (second >= 86400) { // or 43200 for twice a day
    shake();
    delay(100);
    shake();
    delay(100);
    shake();
    second = 0; // reset counter
  }
}
void shake() { // roll would be much better, at first I consider to shake it; the body follows the Servo "Sweep" example in the Arduino IDE
  for (pos = 0; pos <= 180; pos += 1) {  // sweep out to 180 degrees...
    myservo.write(pos);
    delay(15);
  }
  for (pos = 180; pos >= 0; pos -= 1) {  // ...and back, shaking food out through the holes
    myservo.write(pos);
    delay(15);
  }
}

Step 4: Simple Operation
The servo can be plugged directly into the Arduino board's power, because it only moves once per day. With the holes facing downward when the servo is turned to 180 degrees, simply press reset and wait; it carries on until tomorrow. Save water, save power!
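Before leaving the feeder unattended for a holiday, the counting logic can be sanity-checked off-board. The following Python simulation of loop() is illustrative: millis() is replaced by a value we pass in, and the one-second and one-day thresholds are parameters so a test can scale them down:

```python
def make_feeder(second_len_ms=1000, feed_every_s=86400):
    state = {"last_tick": 0, "seconds": 0}

    def tick(now_ms):
        # Mirrors loop(): count a second when enough milliseconds have
        # passed, and report a feed when the day counter rolls over.
        fed = False
        if now_ms - state["last_tick"] >= second_len_ms:
            state["last_tick"] = now_ms
            state["seconds"] += 1
        if state["seconds"] >= feed_every_s:
            fed = True              # shake() would run three times here
            state["seconds"] = 0
        return fed

    return tick

tick = make_feeder(second_len_ms=1, feed_every_s=3)
feeds = sum(tick(ms) for ms in range(1, 10))
# feeds == 3: one feed per three simulated "seconds"
```

Note that the subtraction now_ms - last_tick also keeps the sketch correct when millis() eventually wraps around, because unsigned arithmetic behaves the same way on the Arduino.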
4 Comments
My servo just got really hot. Has anyone else had this problem??
yes, you should check the voltage. If you use voltage above 5+ your servo will get hot (usually). OR if you use it for a very very very very long time
Great idea! I love the simplicity of your plan and the ability to time between dispensing food without the use of a RTC.
Thanks for your watch, but I made a minor mistake: the data type of second must be corrected to static unsigned long second = 0; because the int type cannot accommodate 86400.

Source: http://www.instructables.com/id/Simple-Auto-Fish-Feeder-in-Ardinuo/
Write some Software -- 9
Budget R$30-150 BRL
The project is a data generator that uses an XSD to describe the XML structure that has to be generated. The XSD is built using some namespaces, optional element compositions, various enumerations and string fields.
The code project uses Java 1.8 with Maven and JAXB. It already has the Java classes generated by XJC and a validated XML file as an example.
The main goal is to generate some random compositions that include the optional and mandatory XSD blocks. It is mandatory that some elements consume a domain included in a text field; this will be described after the start of the project.
The second goal is to implement a command-line menu of options to execute the project.
We use Git and Git Flow. It is important that all progress is versioned using this structure. The code has to be reviewed.
In the attachments there is an example of the XSD to be generated.

Source: https://www.fr.freelancer.com/projects/software-architecture/write-some-software-14846855/
Start a PCM playback channel running
#include <sys/asoundlib.h>

int snd_pcm_playback_go( snd_pcm_t *handle );
Library: libasound.so

Source: http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.audio/topic/libs/snd_pcm_playback_go.html
It seemed a simple bug report. "When we close the editing screen the framework asks the user to save the temporary portfolio we use internally as storage. Make sure it gets removed cleanly instead". "Should be easy," I thought. "We can't be leaking the objects, as the whole form and its helpers are written in C#. I just need to destroy the portfolio object at the right time."
The resulting solution opened my eyes to what Garbage Collection does and, more importantly, what it doesn't do. To understand this, we'll go back to the very basics of memory and object management, and see what various techniques are available. I'll concentrate on the C family of languages: C, C++, C# and the upcoming C++/CLI.
The diagram below shows the stages in the life of memory and objects.
Raw memory is very simple: you acquire some from a pool of available memory, use it, and release it back to the pool to be reused. Failing to release it causes it to be considered "leaked".
Objects are slightly more complex because as well as obtaining the raw memory for their storage, they need to be initialised to a usable state and establish their class invariant, and have that state destroyed before releasing the memory. If an object is not destroyed then its state is considered leaked, which is important if that state is a scarce non-memory resource such as system file handles.
Let's look at some pseudo-code for creating and destroying an object, and then see how the C family languages map onto each part:
Memory_location memory_for_T = Acquire_Memory(size_of_T);
if(succeeded)
{
    T_location T_object = Initialise_T(memory_for_T);
    if(not succeeded)
        Release_Memory(memory_for_T);
    else
    {
        // use T_object....
        Destroy_T(T_object);
        Release_Memory(memory_for_T);
    }
}
There are four situations I'll look at: creating an object on the stack; as a base class of some other object; as a member of an object; and on the heap. We'll illustrate these by considering a class described loosely as:
class B : A { C c; D* d; }
We'll initialise each instance of A, B, C, and D with the numbers 1, 2, 3, and 4.
C++ provides a very simple and clean solution
class B : public A
{
public:
    B(int b_param) : A(1), c(3), d(new D(4)) {}
private:
    C c;
    std::auto_ptr<D> d;
};

int main()
{
    B b(2);
    // use b
}
In C++, and all the other languages, the size of an object is worked out by the system, and takes into account all the space for the sub-objects.
If objects are constructed on the stack, then the memory is acquired and released automatically, often by just adjusting the stack pointer. Constructors play the part of the initialise function, and the programmer usually writes them, although there are cases where the compiler will generate them. When objects go out of scope the destructor is automatically called and the memory released.
Destructors are the destroy functions and the compiler automatically calls the destructors for base classes and members. It can even write them too - if there is nothing else needed other than the members' destructors to be called, then the required destructor is trivial, and the compiler will generate it for us if we leave it out altogether. Members of other objects are very similar to stack variables. In particular, when the outer object is destroyed, all its member objects are destroyed too.
Allocation on the heap is done using a "new-expression" which allocates memory and calls the required constructor. A "delete-expression" calls the destructor and frees the memory.
Experts at Kipling's game of "Kim" may have spotted something missing - error checking. Fortunately the compiler generates it all for you using exceptions. I'd have to be more careful if I had raw pointers in my class, but wrapping them up in auto_ptr makes that problem go away and I can be lazy and correct, which is every programmer's ideal.
C is more verbose - after all you have to do a lot more yourself. The only things the compiler will do for you is allocate and deallocate space on the stack and for struct members, and tell you the size needed for objects using the sizeof operator.
typedef struct B_tag {
    A a;
    C c;
    D* d;
} B;

D* D_new(int d_param)
{
    void* memory = malloc(sizeof(D));
    if(memory == NULL) goto malloc_failed;
    D* d = D_init(memory, d_param);
    if(d == NULL) goto init_failed;
    return d;

init_failed:
    free(memory);
malloc_failed:
    return NULL;
}

B* B_init(void* memory, int b_param)
{
    A* a = A_init(memory, 1);
    if(a == NULL) goto A_failed;
    /* arithmetic on void* is not standard C, so cast to char* first */
    C* c = C_init((char*)memory + offsetof(B, c), 3);
    if(c == NULL) goto c_failed;
    D* d = D_new(4);            /* declare d, and... */
    if(d == NULL) goto d_failed;
    ((B*)memory)->d = d;        /* ...remember it in the struct */
    return (B*)memory;

d_failed:
    C_destroy(c);
c_failed:
    A_destroy(a);
A_failed:
    return NULL;
}

void* B_destroy(B* b)
{
    /* assume destruction can't fail */
    free(D_destroy(b->d));
    C_destroy(&b->c);
    A_destroy((A*)b);
    return b;   /* hand the raw memory back, like the other *_destroy functions */
}

int main()
{
    B b;
    B_init(&b, 2);
    /* use b */
    B_destroy(&b);
}
This is directly analogous to the C++ solution, and illustrates the sort of tricks the C++ compiler is doing behind the scenes, in particular the error checking and clearing up of partially constructed objects. But it is a lot of work. I'll leave it as an exercise for the reader to come up with a better solution in terms of writing and maintaining this sort of code.
Garbage Collection replaces manual releasing of memory such that 'leaked' memory is automatically reclaimed by the system and is then available for use.
It does this by finding objects that are no longer needed (technically, objects that are unreachable from "root" objects such as global variables and the stack by following member references) and reclaims their memory for future use. It can be compared to treating the program as having an infinite amount of memory - if you can never run out, then you don't need to bother to delete anything, and all objects can live forever and can be thought of as immortal [Griffiths].
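Reachability is worth seeing concretely. Here is a toy mark phase in Python (illustrative object graph): anything not reachable from a root is garbage, even if other garbage still points at it:

```python
def reachable(roots, refs):
    # refs maps each object to the objects it holds references to;
    # this is the "mark" traversal a collector performs from its roots.
    seen = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in seen:
            seen.add(obj)
            stack.extend(refs.get(obj, ()))
    return seen

refs = {"stack": ["a"], "a": ["b"], "c": ["a"]}  # "c" points at "a", but nothing points at "c"
live = reachable(["stack"], refs)                # {"stack", "a", "b"}
garbage = (set(refs) | {"b"}) - live             # {"c"}
```

Note that "c" is collected despite holding a live-looking reference: what matters is whether the roots can reach you, not whether you can reach anything.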
It is very tempting, when starting to use garbage collection, to think that it means you don't have to worry any more about the tedious work of keeping track of object ownership and lifetimes, and the programmer can concentrate on more interesting and more productive work.
"... Garbage collection relieves you from the burden of freeing allocated memory ... First, it can make you more productive..." [gc]
"A second benefit of garbage collection ... is that relying on garbage collection to manage memory simplifies the interfaces between components ... that no longer need expose memory management details ("who is responsible for recycling this memory")." [GC-faq]
Unfortunately, this misses a subtle point in the relationship between ownership, object lifetimes, and memory management - they aren't the same thing. Garbage Collection frees you from having to clean up the memory, true. But Ownership and Lifetime still have to be carefully considered as part of design.
For example, holding on to object references for too long, or giving them to global objects, will keep them locked into memory. This is often referred to as a memory leak, although it is achieved by incorrectly holding onto things for too long, and not by forgetting to clean up as in C++.
In C# construction is very much like C++ in that the new keyword combines allocating the memory and calling the constructor.
There is no explicit memory deallocation stage - that's done automatically by the Garbage Collector - but is there something that can destroy an object? Not for releasing memory for its members - again that's done by the Collector - but something for cleaning up non-memory resources at a specific time?
There is a special function called when an object is being reclaimed - the finalizer. At first sight this looks very much like a destructor (the C# syntax is the same, and the designers of Managed C++ in VC7 thought they were the same - MC++ destructors are actually the finalizer in disguise), but it has since become clear that the finalizer can't be used to destroy an object, for three reasons:
You don't know when it gets called. Things get finalized when the garbage collector runs, but all you know is that it may run at some unspecified point in the future, so you can't rely on it being called at a specific time.
You don't know how many times it gets called, if at all. It's possible that the program finishes before the collector runs, in which case the finalizer is never called. Also, an object that has been finalized can be kept alive using the ReRegisterForFinalize method, and then finalized again. And again. And again.
You can't do much in it. When your finalizer is running, you don't know which of the other managed objects you hold references to have already been finalized, so unless your design guarantees they're still living - in other words, you've carefully thought out their lifetimes - you can't touch any other objects. The only sensible thing you can do is to log some information somewhere to say it's been finalized.
It is sometimes recommended to use the finalizer to clean up important unmanaged resources that need to be released, such as handles from the operating system that the Garbage Collector doesn't know about. Unfortunately you may have run out of these resources before the collector runs and the finalizers get called, so you can't rely on that[1].
Recall that, in my original problem, I needed to tidy up a particular object at a particular time. A common solution is to write a teardown method, and the .Net designers have provided a standard interface: IDisposable, which has a single method Dispose() to be called when you want the object to clean up and "die". However, as there can be other references to the object, Dispose may be called on an object multiple times, and it is also allowed that a disposed object may be reused - for example, a disposed File object could be reopened, and become "alive" again - but I suggest that this would get too confusing to recommend. Keep it simple: Dispose destroys the object, and nothing else can use it afterwards.
Used like this, Dispose is a candidate for the equivalent of a destructor. If an object has resources that must be released at a specific time, implement Dispose and remember to call it. C# has even added help to the language to do this - using - which will automatically call Dispose on its argument at the end of a statement block, in a similar way to auto_ptr, or Boost's scoped_ptr.
So finally, here's the example in C#:
We'll have our base class A inherit from a helper class Disposable - it's based on a pattern for writing disposable objects where both the finalizer and the Dispose method are dispatched to a single virtual helper [msdn]. Some classes in the .Net framework, such as UserControl, use this technique.
public class Disposable : IDisposable
{
    private bool isDisposed = false;

    public Disposable() {}

    ~Disposable()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        if (!isDisposed)
        {
            isDisposed = true;
            Dispose(true);
            GC.SuppressFinalize(this);
        }
    }

    protected virtual void Dispose(bool isDisposing) {}

    protected void TryDispose(object obj)
    {
        TryDispose(obj as IDisposable);
    }

    protected void TryDispose(IDisposable disposable)
    {
        if (disposable != null)
            disposable.Dispose();
    }
}

// A derives from Disposable, as described above
public class B : A
{
    private C c;
    private D d;

    public B(int b_param) : base(1)
    {
        try
        {
            c = new C(3);
            d = new D(4);
        }
        catch (Exception)
        {
            Dispose();
            throw;
        }
    }

    protected override void Dispose(bool isDisposing)
    {
        if (isDisposing)
        {
            // dispose of managed resources here
            TryDispose(d); d = null;
            TryDispose(c); c = null;
        }
        // dispose of unmanaged resources here,
        // and call the base class
        base.Dispose(isDisposing);
    }

    public static void Main()
    {
        using (B b = new B(1))
        {
            // use b
        } // b.Dispose called automatically
    }
}
Unfortunately using only works for objects whose lifetime is a local scope, but not for members, and they have to be cleaned up by hand.
C# doesn't allow objects embedded in other objects, only simple types and references to objects on the heap, so c has to be created on the heap, and this makes writing the constructor to cope with an exception being thrown more difficult.
Dispose has to be written by hand every time, and if you forget to dispose of something - or it didn't use to be disposable but now ought to be - the resources aren't disposed of at the right time.
Microsoft is about to release their new attempt at getting C++ to work with CLI (the common language part of .Net). Its previous Managed C++ suffered from many problems, and is not widely used.
In this language, the solution can use many familiar C++ idioms:
ref class B : public A
{
public:
    B(int b_param)
        : A(1), c(3), d(gcnew D(4)) {}
private:
    int b;
    C c;
    auto_ptr<D> d; // write one for CLI references
};

int main()
{
    B b(1);
    // use b
}
The destructors here are Dispose(), and the compiler is generating the implementation and the calls, just like C++.
I've assumed that there is an auto_ptr analogue that works with CLI references and the rest is just the slightly different syntax for creating an object on the managed heap.
In the original system, storage for financial instruments was managed by a simple Portfolio object, which had a Close method to tidy it up. An instance of this was shared between several Processor objects used to manipulate the portfolio, instances of which were in turn shared between several User Interface components.
The obvious first step was to make the Portfolio implement Dispose, and have that close the storage.
But it was not obvious who should be disposing of this object or when - there was no clear ownership and no notion of how long the object would remain usable for - one Processor could dispose of the Portfolio and the others could then try to use it again. My solution was to push the issue of ownership and destruction up a level, by making all the Processors that used the Portfolio themselves disposable, and documented that they could use the Portfolio given to them until they themselves were disposed of.
The User Interface objects were already disposable, so it was a simple matter to pass in the Processor they needed, and again define that they could use it throughout their own lifetime.
The top-level form created the Portfolio and Processors, hooked them up to the User Interface and set everything going. Finally, in response to the form needing to close, it was then a simple matter to dispose of all the User Interface objects, dispose of the Processors, and then dispose of the Portfolio.
So here we have an interesting consequence: if a resource must be cleaned up promptly, then every object that uses it needs to think about when it is no longer allowed to use it. In this case I did it by imposing a lifetime on the Processors and User Interface objects and guaranteeing that the Portfolio would outlive them.
The consequence of having a lifetime managed by calling Dispose has just spread from a low level helper tucked away in some other objects, all the way up to a top level object. It is very pervasive.
In this case, the solution resulted in virtually all non-trivial classes needing to implement Dispose, and involved a non-trivial amount of design rework to make the ownership relations and lifetime issues clear. The only classes that were not affected were very simple "value" types used to group together data items. The language and compiler provided no help as I had to write all the Dispose methods by hand, call Dispose for every non-trivial member, and hope that if a new member is added in the future or a class becomes disposable, then the writer remembers to update the Dispose method.
Far from Garbage Collection relieving the programmer of having to think about ownership and lifetime, these issues still exist in just the same way as in C++. Only relatively simple types have no need of the Dispose idiom and can be left to the collector - any type that uses, directly or indirectly, resources that need to be released in a timely fashion, needs to have their relative lifetimes thought about.
Current languages such as C# don't help the programmer in writing the mechanics of these things, but the forthcoming C++/CLI will bring many of the tools that C++ provides to improving this area.
[1] As Java's Garbage Collection is very like .Net's, this has led some implementations of the Java library to try to get around this for file handles by triggering the garbage collector if an attempt to get a file handle fails, then trying again. This helps that particular program avoid running out, but may still be starving the system of the handles in the meantime.
snd_pcm_plugin_info() - Get information about a PCM channel's capabilities (plugin-aware)
Synopsis:
#include <sys/asoundlib.h>

int snd_pcm_plugin_info( snd_pcm_t *handle,
                         snd_pcm_channel_info_t *info );
Before calling this function, set the info structure's channel member to specify the direction. This function sets all the other members.
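As a sketch of that calling pattern (not compilable as-is; it assumes a handle already opened with the plugin-aware API, and the usual SND_PCM_CHANNEL_PLAYBACK constant for the direction):

```
snd_pcm_channel_info_t info;

memset(&info, 0, sizeof(info));
info.channel = SND_PCM_CHANNEL_PLAYBACK;   /* set the direction first */

if (snd_pcm_plugin_info(handle, &info) < 0) {
    /* handle the error */
}
/* info now describes the channel's capabilities */
```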
Library:
libasound.so
Description:
The snd_pcm_plugin_info() function fills the info structure with data about the PCM channel selected by handle.
Returns:
Zero on success, or a negative error code (errno is set).
Examples:
See the wave.c example in the appendix.
Classification:
QNX Neutrino
Caveats:
This function is the plugin-aware version of snd_pcm_channel_info(). It functions exactly the same way. However, make sure that you don't mix and match plugin- and nonplugin-aware functions in your application, or you may get undefined behavior and misleading results.
See also:
snd_pcm_channel_info(), snd_pcm_channel_info_t
NAME
getcontext, setcontext - get and set current user context
SYNOPSIS
[OB XSI] #include <ucontext.h>

int getcontext(ucontext_t *ucp);
int setcontext(const ucontext_t *ucp);
RETURN VALUE
Upon successful completion, setcontext() shall not return and getcontext() shall return 0; otherwise, a value of -1 shall be returned.
ERRORS
No errors are defined.
EXAMPLES
Refer to makecontext().
APPLICATION USAGE
The obsolescent functions getcontext(), makecontext(), and swapcontext() can be replaced using POSIX threads functions.
RATIONALE
None.
FUTURE DIRECTIONS
None.
SEE ALSO
bsd_signal(), makecontext(), setcontext(), setjmp(), sigaction(), sigaltstack(), siglongjmp(), sigprocmask(), sigsetjmp(), the Base Definitions volume of IEEE Std 1003.1-2001, <ucontext.h>
CHANGE HISTORY
First released in Issue 4, Version 2.
Moved from X/OPEN UNIX extension to BASE.
The following sentence was removed from the DESCRIPTION: "If the ucp argument was passed to a signal handler, program execution continues with the program instruction following the instruction interrupted by the signal."
IEEE Std 1003.1-2001/Cor 2-2004, item XSH/TC2/D6/45 is applied, updating the SYNOPSIS and APPLICATION USAGE sections to note that the getcontext() and setcontext() functions are obsolescent.
Abstract:
Welcome to the 69th edition of The Java(tm) Specialists' Newsletter. After my last newsletter, the listserver told me that quite a few of you did not receive the newsletter. If you are reading this on the internet, and you did not receive this newsletter, be warned - you have probably been deleted from the list! We now have approximately 5593 working email addresses.
Why can we not treat everyone equally? A few weeks ago, my four year old son Maximilian wanted to know the answer to that age-old question. Maxi's 3 year old cousin was visiting and was allowed to stay up and watch cartoons on TV, whilst he had to go to bed to sleep. "Sorry, Maxi, but as you will soon find out: Life is not fair! There is nothing I can do to change that, that is just the way it is."
A well known person (so well known that I have forgotten who it was) once said: "Everyone that I meet is my superior in some way"
This newsletter is about some operations who treat types equally (when perhaps they should not), and others, who due to their own feelings of inadequacy, do not. It all gets rather confusing, as you will soon find out. Remember, if you don't understand this newsletter: Life is not fair! Read the Java VM Spec. Write to the editor of this fine newsletter (editor@dev/null). But don't moan.
I want to thank all those who make the effort to write to me to correct errors in my newsletters. Please continue your good work. It makes the newsletter more useful when we remove the Gremlins.
Learning.JavaSpecialists.EU: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
What is the purpose of a byte? Let me ask that question a bit differently: Why would you use a byte, as opposed to an int? To save memory? Only since JDK 1.4.x does a byte field take less memory than an int field. To increase computational speed? Maybe the opcodes for adding bytes are faster than for ints?

Perhaps byte should have been left out of the Java Programming Language? Not just byte, but also short and char? The original Oak specification, on which Java is based, had provision for unsigned integral values, but they did not make it, so why do we have those other types?

Ahh, but a byte[] will take up less space than a char[]. That is true, so they do have a reason to exist! But what about computation?
"Most of the instructions in the Java virtual machine instruction set encode type information about the operations they perform. For instance, the iload instruction loads the contents of a local variable, which must be an int, onto the operand stack. The fload instruction does the same with a float value. The two instructions may have identical implementations, but have distinct opcodes." - VM Spec, The Structure of the Java Virtual Machine.
This brings along a slight problem: there are too many types in Java for the instruction set. Java has only one byte per opcode, so there are a maximum of 256 opcodes. There is therefore great pressure to have as few opcodes as possible.
The 18 opcodes that are defined for int, and not for short, char and byte, are: iconst, iload, istore, iinc, iadd, isub, imul, idiv, irem, ineg, ishl, ishr, iushr, iand, ior, ixor, if_icmpOP, ireturn. If these were also defined for the other three primitive types, we would require at least an additional 54 opcodes.
There is only one opcode that is marked as "unused". It is opcode 0xBA. Go figure. Probably the BSc Computer Science nerds having a dig at all the BA's that are unused ;-) Fries with that?
What does that mean for you and me? Let's look at a code snippet, sent to me by Jeremy Meyer. Jeremy helped me get my first job at a company called DataFusion Systems in South Africa, now called DataVoice, and part of a bigger company called Spescom. DataVoice are probably not hiring anyone at the moment, but they are one fine company to work for, so if ever you are offered a job there, take it at any price! Even working there for free would be a bargain, considering what you will learn there. Thanks Jeremy!
public class ByteFoolish {
  public static void main(String[] args) {
    int i = 128;
    byte b = 0;
    b |= i;
    System.out.println("Byte is " + b);
    i = 0;
    i |= b;
    System.out.println("Int is " + i);
  }
}
When we run this, we see the following:
Byte is -128
Int is -128
Here we start with a value bigger than 127 (the maximum positive byte value). We store it in an int, and then OR the bits into the byte. The first System.out.println statement naturally reports the value of the byte as -128, which one would expect. The byte is, after all, signed and has range -128 to +127. Then we reset the int to 0, and OR the value of the byte back into the int. The int now has the value of -128!
How could we OR the int with just the bits that belong to the last byte? We could first bitwise AND it with a mask that only shows the last byte and OR the result with the int.
public class ByteFoolish2 {
  public static void main(String[] args) {
    int i = 128;
    byte b = 0;
    b |= i;
    System.out.println("Byte is " + b);
    i = 0;
    i |= (b & 0x000000FF);
    System.out.println("Int is " + i);
  }
}
Now when we run the program, we see:
Byte is -128
Int is 128
Bitwise arithmetic with byte, short and char is challenging. Inside the JVM, these are first translated to ints, worked on, and then converted back to bytes. Let's disassemble the class to make sure that this is what is happening:
public class ByteFoolish3 {
  public ByteFoolish3() {
    int i = 128;
    byte b = 0;
    b |= i;
    i = 0;
    i |= b;
  }
}
We disassemble with javap -c ByteFoolish3 (you know how to do that by now):
 0 aload_0
 1 invokespecial #9 <Method java.lang.Object()>
 4 sipush 128   // push 128 onto stack
 7 istore_1     // store in int register 1
 8 iconst_0     // push constant "0" onto stack
 9 istore_2     // store in int register 2
10 iload_2      // load register 2
11 iload_1      // load register 1
12 ior          // OR them together as ints
13 i2b          // convert the int on the stack to a byte
14 istore_2     // store the value in register 2
15 iconst_0     // push constant "0" onto stack
16 istore_1     // store this in int register 1
17 iload_1      // load register 1
18 iload_2      // load register 2
19 ior          // OR them together
20 istore_1     // store them in register 1
21 return
I was sitting in a seminar on Refactoring by Martin Fowler a few years ago. The things Martin was saying sounded like music to my ears. I had refactored my code for many years, but had never heard such a thorough approach on the subject. The one thing that stuck in my mind was the difference between i += n and i = i + n.
Would the following compile?
public class Test1 {
  public static void main(String[] args) {
    int i = 128;
    double d = 3.3234123;
    i = i + d;
    System.out.println("i is " + i);
  }
}
The answer is that it would not compile. A double is 64 bits with a very big range. There is no way that it would fit into an int without losing precision. It is not safe to run such code, so when we attempt to compile it, we get the following message:
Test1.java:5: possible loss of precision
found   : double
required: int
        i = i + d;
              ^
1 error
This is good. We pick up errors before we run the program. Ya'll know how to cast a double to an int if you definitely want to do that. You can either cast the values individually before you add, or you can cast the result:
public class Test2 {
  public static void main(String[] args) {
    int i = 128;
    double d = Integer.MAX_VALUE + 12345.33;
    i = i + (int)d;
    System.out.println("i1 is " + i);
    i = 128;
    i = (int)(i + d);
    System.out.println("i2 is " + i);
  }
}
In a way, I would expect both i's to have the same value, but due to the precision loss of doubles, they are not equal:
i1 is -2147483521
i2 is 2147483647
Let's have a look at the next class, Test3:
public class Test3 {
  public static void main(String[] args) {
    int i = 128;
    double d = Integer.MAX_VALUE + 12345.33;
    i += d; // oops, forgot to cast!
    System.out.println("i is " + i);
  }
}
Does this compile? Ooops, we forgot to cast! But, does it compile? Yes it does, and when we run it, we get:
i is 2147483647
Therefore, we can say that i += n is the same as i = (type_of_i)(i + n).
Kind regards, and thanks for the great feedback after the last...
What's different about the 3 versions of Rx? Part 1: Silverlight 3
Rx is a .NET Library that allows programmers to write succinct declarative code to orchestrate and coordinate asynchronous and event-based programs based on familiar .NET idioms and patterns.
Just wow!
One question, how well does this play with WPF triggers?
Thanks for making this video. Can you please post the code for this sample app? My vision is too poor to be able to read any of the code in the video.
Very cool! Really shows the expressive, declarative nature of Rx. Big improvement over the way we normally do things.
How about a video showing asynchronous programming via Rx?
did you check out the wmv high version? text is much clearer there
triggers are based on dependency properties, and you could write something that makes a dependency property into an observable
im working on something like that in fact
also @wes, Observable.Context is static; what if i want to have one observable use one context and another observable use another?
-edit-
this discussion continued on the rx forums where a neater solution was found
-end edit-
Hello again, i wrote a little helper class for converting dependency properties to Observable. im not sure its the best way to do it though, what do you guys think? am i missing something? There are some anonymous Observable/disposable classes in rx that i'd have liked to use but they are internal... i suspect there is something smarter i can do with the Create methods but i couldn't get around making a helper class...
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        Observable.Context = SynchronizationContext.Current;
        this.ActualHeightAsObservable().Subscribe(d => label1.Content = d);
    }
}

public class DependencyObservable<T> : IDisposable
{
    EventHandler eh;
    DependencyPropertyDescriptor des;
    object instance;

    public DependencyObservable(IObserver<T> observer, DependencyProperty dp, Type ownertype, object ins)
    {
        des = DependencyPropertyDescriptor.FromProperty(dp, ownertype);
        instance = ins;
        eh = new EventHandler((o, e) => observer.OnNext((T)des.GetValue(instance)));
        des.AddValueChanged(instance, eh);
    }

    public void Dispose()
    {
        des.RemoveValueChanged(instance, eh);
    }
}

public static class WindowExtensions
{
    public static IObservable<double> ActualHeightAsObservable(this MainWindow w)
    {
        return Observable.CreateWithDisposable<double>(obs =>
            new DependencyObservable<double>(obs, MainWindow.HeightProperty, w.GetType(), w));
    }
}
The nice thing is that the returned observable respects all the rules of Dependency properties, if the dp is changed by a style, trigger or binding, the Observable should reflect that (ive only tried with bindings though)
This video makes it very clear what Rx is capable of. Thanx a lot
@aL_, use .SubscribeOn and .ObserveOn; expect more work in the future on this area...
does RX work with asp.net applications too ?
Rx is a general purpose library, should work fine in asp.NET, just make sure you don't set the Observable.Context to a UI context (the default context in .NET 3.5 SP1 & .NET 4 Beta 2 is the Reactive EventLoop which should work fine for ASP.NET).
i tried them but i must have made some mistake because i got the cross thread exceptions anyway... i tried having them both before and after the assignment to the label but no luck :/
can you give an example perhaps?
Amazing! Erik Meijer is a programming God!
What am I doing wrong? This doesn't do anything.
Public Class Form1
    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Observable.Context = System.Threading.SynchronizationContext.Current
        Dim xs = Observable.Interval(TimeSpan.FromSeconds(1))
        xs.Subscribe(Function(y) Me.Text = y.ToString)
    End Sub
End Class
try setting Observable.Context = System.Threading.SynchronizationContexts.WindowsForms
If both your observables update the UI, they should both run in the UI context, can you send out a sample of what is going wrong?
the code i posted before does work
however this:
Observable.Return( 10 ).SubscribeOn( SynchronizationContext.Current ).Subscribe( d => label1.Content = d );
or this:
Observable.Return( 10 ).SubscribeOnWindowsForms( ).Subscribe( d => label1.Content = d );
or this:
Observable.Return( 10 ).SubscribeOnDispatcher(this.Dispatcher ).Subscribe( d => label1.Content = d );
doesn't work for some reason. they all fail with a cross thread exception in the label1.Content setter, the rest of the program is just an empty wpf app with a label
I tried exactly the same thing in VB and in C#, and guess what? The VB version doesn't work, but the C# version does!
The only difference is the way the Form_Load handler is hooked up.
Ok, I was wrong before, but the real problem is that Me.Text = y.ToString is being treated as a boolean comparison:
Reflector:
Twas the night before Christmas, when all through the house
Not a creature was stirring, not even an observable.
The context was set by the chimney with care,
In hopes that a subscription would soon would be there.
Sorry, Jeff. Any other guesses?
My man Richard! Nice find. This leave me with another mystery. When I put the same code inside a button_click event, it also does nothing. I guess the head in the box does not use vb.net.
I tried it out and it seems that the anonymous functions in VB are getting in the way. If you write it without anonymous functions then it works fine. The reason is that assignment is a statement and so cannot be an expression and therefore not in an anonymous function. Instead, the = becomes the equality operator so each time the interval fires it checks to see if the form's text is equal to the current textual representation of the interval's value: obviously wrong.
Here is the fix:
Imports System.Threading

Public Class Form1
    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Observable.Context = SynchronizationContexts.WindowsForms
        Dim xs = Observable.Interval(TimeSpan.FromSeconds(1))
        xs.Subscribe(AddressOf Foo)
    End Sub

    Private Sub Foo(ByVal X As Long)
        Me.Text = X.ToString
    End Sub
End Class
This does work.
Private Function SetText(ByVal text As Object)
Me.Text = text.ToString()
Return Me.Text
End Function
Private Sub Form1_Load_1(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Me.Load
Observable.Context = System.Threading.SynchronizationContext.Current
Dim xs = Observable.Interval(TimeSpan.FromSeconds(1))
xs.Subscribe(Function(y) SetText(y))
End Sub
Yes, I can definitely verify that's the case. Bug!
It gets worse, that is an illegal cross thread call. So, I have to either disable checking, or have a private field and invoke some update method. Or, maybe this is a side effect of running the app in x64 in the debugger. Either way, this is not good. I thought of overloading the = operator, but that is even more confusing.
Did you set the Observable.Context correctly?
What does the code look like?
Public Class Form1
    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        'uncomment this to disable ictc checking
        'Control.CheckForIllegalCrossThreadCalls = False
        Observable.Context = Threading.SynchronizationContexts.WindowsForms
        Dim theAnswer = Observable.Return(42)
        theAnswer.Subscribe(Function(monkeyDo) MonkeySee(monkeyDo))
    End Sub

    Private Function MonkeySee(ByVal monkeyDo As Long) As Boolean
        Me.Text = monkeyDo.ToString
        Return Me.Text = monkeyDo.ToString
    End Function
End Class
EDIT: It looks like I'll have to eat my hat. That function should be a sub. Or did I just stumble across something completely different? That's why I love tomorrow, because tomorrow is another day! I'm going to bed.
EDIT: I just ran it on 2010 and noticed that the thread that hits foo is a worker thread.
Public Class Form1
    Private meText As String

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Observable.Context = Threading.SynchronizationContexts.WindowsForms
        Dim o = Observable.Return(42)
        o.Subscribe(AddressOf foo)
    End Sub

    Public Sub foo(ByVal bar As Long)
        meText = bar.ToString
        Invoke(New MethodInvoker(AddressOf UpdateMeText))
    End Sub

    Public Sub UpdateMeText()
        Me.Text = meText
    End Sub
End Class
EDIT: If I use Observable.Context = Threading.SynchronizationContexts.CurrentDispatcher, my threads are in order. Bug #2.
Is this a VB bug or an Rx bug then? Am I going to get my lunch with Eric?
Unfortunately, it has nothing to do with Rx. And if you reported it as a VB bug then the VB team would correctly say that it is by design. Which probably means that it is a bug in the user code.
Now, understandably this is a rather confusing bug, and I tend to think that overloading "=" to be either relational equality or assignment leads to some rather interesting cases and so I personally think that VB is rather confusing here.
LINQ plus Rx...Cool!!!
Hi guys!
I've problems with the sample from this video. Current Rx version has no Observable.Context property. Could you please tell me how this code should be rewritten? I played with SubscribeOn....
var xs = Observable.Interval(TimeSpan.FromSeconds(1))
                   .SubscribeOn(SynchronizationContext.Current);
xs.Subscribe(value => TextBlock.Text = value.ToString());
, but still have exception 'The calling thread cannot access this object because a different thread owns it'.
Thanks
Manual helped. ObserveOn has to be used instead.
var xs = Observable.Interval(TimeSpan.FromSeconds(1));
xs.ObserveOn(SynchronizationContext.Current)
  .Subscribe(value => TextBlock.Text = value.ToString());
How can the same sample be done in Silverlight?
1. I discovered I do not need the Observable.Context.
2. the MouseEventArgs do not contain a definition of StartWith.
Is there any site that has samples like:
The WPF way:
The Silverlight way:
The Winform way:
????
The code as shown in the screen cast does not work as others have commented because Observable.Context has been removed. Would it not make sense to repost the screen cast reflecting the changes to the API to avoid causing pain to all who look at this screen cast and find that their code is broken?
For the March 2010 release of Rx, the code needs to be adjusted as follows:
public Window1()
{
InitializeComponent();
var xs = Observable.Interval(TimeSpan.FromSeconds(1)).StartWith(-1);
var mouseDown = from evt in Observable.FromEvent<MouseButtonEventArgs>(this, "MouseDown")
select evt.EventArgs.GetPosition(this);
var mouseUp = from evt in Observable.FromEvent<MouseButtonEventArgs>(this, "MouseUp")
select evt.EventArgs.GetPosition(this);
var mouseMove = from evt in Observable.FromEvent<MouseEventArgs>(this, "MouseMove")
select evt.EventArgs.GetPosition(this);
var q = from start in mouseDown
from delta in mouseMove.StartWith(start).TakeUntil(mouseUp)
.Let(mm => mm.Zip(mm.Skip(1), (prev, cur) =>
new { X = cur.X - prev.X, Y = cur.Y - prev.Y }))
select delta;
q.ObserveOnDispatcher().Subscribe(value =>
{
Canvas.SetLeft(image, Canvas.GetLeft(image) + value.X);
Canvas.SetTop(image, Canvas.GetTop(image) + value
There is no longer an Observable.Context. I got this working with the following code:

var xs = Observable.Interval(TimeSpan.FromSeconds(1));
xs = xs.ObserveOnDispatcher<long>();
xs.Subscribe(value => textBlock.Text = value.ToString());
NAME
dirent.h - format of directory entries
SYNOPSIS
#include <dirent.h>
DESCRIPTION
The internal format of directories is unspecified. The <dirent.h> header shall define the following type:

DIR
    A type representing a directory stream.

It shall also define the structure dirent, which shall include the following members:

ino_t  d_ino     File serial number. [XSI]
char   d_name[]  Name of entry.
FUTURE DIRECTIONS
None.
SEE ALSO
<sys/types.h>, the System Interfaces volume of IEEE Std 1003.1-2001, closedir(), opendir(), readdir(), readdir_r(), rewinddir(), seekdir(), telldir().
Adobe Flash Professional version MX and higher
Adobe Flex
This technique relates to:
See User Agent Support for Flash for general information on user agent support.
The intent of this technique is to prevent sounds from playing when the Flash movie loads. This is useful for those who utilize assistive technologies (such as screen readers, screen magnifiers, switch mechanisms, etc.) and those who may not (such as those with cognitive, learning and language disabilities). By default, the sound will be played automatically. When a screen reader such as JAWS is detected however, the sound will have to be started manually.
To perform screen reader detection, Flash provides the flash.accessibility.Accessibility.active property. If this property is set to true, it means that the Flash player has detected running assistive technology. Based on this flag, the Flash developer can choose to run different functionality.
Note 1: The Flash player requires some time to detect active assistive technology and set the Accessibility.active property. To get accurate results, do not check for this property immediately on the first frame of the movie. Instead, perform the check 5 frames in or based on a timed event.
Note 2: Not every screen reader will be detected using this mechanism. In general, the property will be set to true when any MSAA client is running.
Note 3: Other assistive technology tools, including screen magnifiers, or tools not used as assistive technologies, may also utilize MSAA in ways that result in Accessibility.active being set to true.
A class called SoundHandler is created which automatically starts playing an mp3 file only when Accessibility.active is set to false. Note that this example also checks the flash.system.Capabilities.hasAccessibility property. This property does not check whether a screen reader is running, but instead indicates whether the Flash player is running in an environment that supports MSAA (which basically means the Windows operating system).
Example Code:
package wcagSamples {
    import flash.accessibility.Accessibility;
    import flash.display.Sprite;
    import flash.net.URLRequest;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.system.Capabilities;
    import fl.controls.Button;
    import fl.accessibility.ButtonAccImpl;
    import fl.controls.Label;
    import flash.events.MouseEvent;

    public class SoundHandler extends Sprite {
        private var snd: Sound = new Sound();
        private var button: Button = new Button();
        private var req: URLRequest = new URLRequest("");
        private var channel: SoundChannel = new SoundChannel();
        private var statusLbl: Label = new Label();

        public function SoundHandler() {
            snd.load(req);
            ButtonAccImpl.enableAccessibility();
            button.x = 10;
            button.y = 10;
            statusLbl.autoSize = "left";
            statusLbl.x = 10;
            statusLbl.y = 40;
            addChild(statusLbl);
            button.addEventListener(MouseEvent.CLICK, clickHandler);
            this.addChild(button);
            if (!Capabilities.hasAccessibility || !Accessibility.active) {
                channel = snd.play();
                button.label = "Stop Sound";
                statusLbl.text = "No assistive technology detected. Sound will play automatically";
            } else {
                button.label = "Start Sound";
                statusLbl.text = "Assistive technology detected. Sound will not play automatically";
            }
        }

        private function clickHandler(e: MouseEvent): void {
            if (button.label == "Stop Sound") {
                button.label = "Start Sound";
                channel.stop();
            } else {
                channel = snd.play();
                button.label = "Stop Sound";
            }
        }
    }
}
This technique can be viewed in the working version of A SoundHandler class. The source of A SoundHandler class is available.
Resources are for information purposes only, no endorsement implied.
ActionScript 3.0 Language and Components Reference: Accessibility.active property
Developer Beware: Using Flash to Detect Screen Readers
1. Start a screen reader that supports MSAA.
2. Open a page containing a Flash movie that starts playing audio automatically when a screen reader is not running.
3. Confirm that the audio is stopped.
Expected results: #3 is true.
querying pattr system
For some externals I’m planning, I would like to be able to query the values of named [pattr] objects and named UI objects that have been exposed to the pattr system by [autopattr]. I’m kind of lost on where to look for info on this (and with searching specific forums still broken on the new site, that hasn’t been much help).
In the "Preset Support" section of "Enhancements to Objects" in the Max5 API, there is mention of "more powerful and general state-saveing, use the pattr system described below" but no more pattr mentions after that. After browsing around the modules I came across places that suggested reading the PattrSDK from Max-4.5.5-SDK for more details, but as far as I can tell, that only has information on registering your objects with the pattr system (?). Is what I’m looking for documented anywhere? Or if not, does anyone have an example?
To clarify, I am specifically asking how my external can receive a notification when the value of an object bound to the pattr system changes. I’m guessing this could be done with object_attach(), but I’m not sure of the name_space involved.
Although it would also be useful to know how to query the value at a time of my choosing (which it sounded like I was asking in my first post).
To query an object’s attribute value use object_attr_getvalueof() and related functions.
To listen for notifications use object_attach() and then define a "notify" message for your object as described in the SDK.
I’ll correct the errant text you mention in the SDK documentation.
Thanks
Is there documented pattern to the name_space of named [pattr] or named UI objects? Or is the best method to find them for attaching to search the linked list of jboxes in the appropriate jpatcher for the desired [pattr] (Or named UI object)?
P.S. If you are making edits to that page, you may also want to take care of the last thing on the page "buffer support~". Looks like some sort of heading for nothing.
Every object that you can create and use in a patcher is in the "box" namespace. There is also a "nobox" namespace which includes objects you can’t use in a patcher, like atomarray and linklist.
HTH
Thanks for the help with these newbie questions. I meant to edit my last post but ran out of time before I needed to leave for class.
I guess I am asking if there an easy way to find an object already found by an [autopattr] to attach to, or do I need to search the appropriate list of jboxes to find it by varname?
The easy way is to do as you suggest, or use one of the ‘iterator’ objects in the SDK as a model.
Cheers
Forums > Dev | https://cycling74.com/forums/topic/querying-pattr-system/ | CC-MAIN-2015-48 | en | refinedweb |
I need some help with the prime_factor fxn. I think I'm just missing something easy that I'm not seeing right now. Anyways, it's supposed to output the prime factors of an input number.
When I input 12, it outputs the desired results; however, if I enter in another nonprime number, it doesn't output the desired results. For example, if I input 6, it only says that 2 is a factor, but does not say 3 is a factor.
Ex. Output:
Enter an integer < 1000 -> 12
The prime factorization of the number 12 is:
2 is a factor
2 is a factor
3 is a factor
______________________
Enter an integer < 1000 -> 6
The prime factorization of the number 6 is:
2 is a factor
// It doesn't output that 3 is also a factor for the number 6.
Here's an example main to go with the program, followed by the function I need help with:
Code:
#include <iostream>
#include <string>
#include <cmath>
using namespace std;
const string SPACE_STR = " ";
void prime_factor(int number);
bool is_prime(int number);
int main() {
int number;
cout << "Enter a number < 1000 -> ";
cin >> number;
prime_factor(number);
return 0;
}
// Function I need help with.
void prime_factor(int number)
{
bool isPrime = is_prime(number);
int prime = number;
int i = 2, j;
double squareRoot = sqrt(static_cast<double>(number));
int count = 0;
cout << "The prime factorization of the number " << number << " is:" << endl;
if(isPrime)
cout << SPACE_STR << number << " is a prime number." << endl;
else {
while((prime > 0) && (i <= squareRoot)) {
if((prime % i) == 0) {
count++;
for(j = 0; j < count; j++)
cout << SPACE_STR;
cout << i << " is a factor" << endl;
prime /= i;
} else
i++;
}
}
}
bool is_prime(int number)
{
int i;
for(i = 2; i < number; i++) {
if((number % i) == 0)
return false;
}
return true;
} | http://cboard.cprogramming.com/cplusplus-programming/46147-prime-factor-fxn-printable-thread.html | CC-MAIN-2015-48 | en | refinedweb |
Hamming Numbers
August 30, 2011
The sequence of Hamming numbers 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, … (A051037) consists of all numbers of the form 2i·3j·5k where i, j and k are non-negative integers. Edsger Dijkstra introduced this sequence to computer science in his book A Discipline of Programming, and it has been a staple of beginning programming courses ever since. Dijkstra wrote a program based on three axioms:
Axiom 1: The value 1 is in the sequence.
Axiom 2: If x is in the sequence, so are 2 * x, 3 * x, and 5 * x.
Axiom 3: The sequence contains no other values other than those that belong to it on account of Axioms 1 and 2.
Your task is to write a program to compute the first thousand terms of the Hamming sequence. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
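One classical realization of the axioms, used by several of the solutions below, keeps one candidate index per multiplier into the sequence built so far. A compact Python illustration of the idea (not Dijkstra's original code; the name hamming is ours):

```python
def hamming(n):
    """First n Hamming numbers, via one candidate index per multiplier."""
    h = [1]
    i2 = i3 = i5 = 0
    while len(h) < n:
        x = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(x)
        # advance every pointer that produced x, so duplicates are skipped
        if x == 2 * h[i2]: i2 += 1
        if x == 3 * h[i3]: i3 += 1
        if x == 5 * h[i5]: i5 += 1
    return h

print(hamming(20))
# -> [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36]
```

Each iteration does a minimum of three products and at most three pointer advances, so the first thousand terms are produced in linear time.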
I wrote a Haskell version of Python’s test of using lazy evaluation to generate them; as it turns out, it’s the same as the SRFI-41 version.
Since I’m new to Haskell, I probably didn’t find the most elegant solution (and suggestions are welcome), but lazy evaluation is kind of amazing.
Looks like I had some redundant parentheses in my definition of merge, according to hlint. Apologies.
Re the imperative version, I think it is nicer to omit the 0 that does not belong to the sequence and simply produce a 0-based sequence of n elements:
(Cut-and-paste from my editor window to be safer against silly typos. I called my translation of Dijkstra’s code aq.)
Jussi: I tend to add a useless element to the beginning of arrays when porting code from languages that base arrays at 1 instead of 0. It’s mostly harmless, and saves a lot of x+1 and x-1 statements mapping between the two formats. I agree it’s ugly, but only a little bit. Where it actually matters, I do it right.
Here’s a shortish but quite efficient lazy Clojure version:
(defn hammings [initial-set]
(let [v (first initial-set)
others (rest initial-set)]
(lazy-seq
(cons
v
(hammings (into (sorted-set (* v 2) (* v 3) (* v 5)) others))))))
(take 1000 (hammings #{1}))
Runs in about 0.07 ms for the first 1000 hamming numbers on my machine.
Here’s my clojure solution. More details at my blog.
hemming = [1] ++ concatMap (\ x -> [x*2, x*3, x*5]) hemming
first1k = sort . take 1000 $ hemming
My Haskell variant
Dijkstra advocated 0-based indexing, so I think his algorithm in the chapter may actually be intended to be 0-based. It is hard to tell for sure. However, his initialization of aq to (1 1) is funny either way when only one 1 is meant to remain in the result, and the assignments to ik, xk seem to need to be either parallel or swapped when I implement it. Below is a 0-based version with parallel assignments. For example, (set*! (x y) y x) swaps x and y. (Originally I had tail-recursion instead of assignments, and I did not even notice that the second assignments in Dijkstra depend on the first ones.)
Cheers.
@Philipp: I like the brevity of your Haskell answer, but it seems to include a lot of repeats; as a Haskell newbie, I couldn’t figure out a way to remove them easily. Any ideas?
nub
Continuing to attempt to become more familiar with Haskell. Here is what I came up with.
In case you’re curious, my solution is in Lazy Racket :-)
Ended up playing with my answer some more while waiting for some other code to compile. I like how this one better.
HammingNumbers.java
hamm.go
@programmingpraxis: I’ve tried throwing
nubinto the solution, but it either (1) comes after
take 1000, in which case there are no longer 1000 entries, or (2) comes before it, and the program just hangs. I am not very adept in Haskell :-(
Hamming number generator in Python 3.
Follows directly from the definition. Yield 1, then yield 2*, 3* and 5* a number in the list. It’s lazy too.
from heapq import merge
from itertools import islice, tee

def hamming_numbers():
    last = 1
    yield last
    a, b, c = tee(hamming_numbers(), 3)
    for n in merge((2*i for i in a), (3*i for i in b), (5*i for i in c)):
        if n != last:
            yield n
            last = n

x = list(islice(hamming_numbers(), 1000))
print(x[:10], x[-5:])
# output -> [1, 2, 3, 4, 5, 6, 8, 9, 10, 12] [50331648, 50388480, 50625000, 51018336, 51200000]
Nothing magical here…
Inelegant but fast to write in Python:
my C++ solution. I’m disappointed that it takes so long for my implementation to calculate the sequence. (especially since someone got it in less thana tenth of a second.)Can someone tell me where I can improve the code?
The suggested solution makes n trips through a single loop, at each iteration calculating a minimum of three integers, performing three if tests comparing two integers at each iteration, and making a single addition and a single multiplication.

Your code has 5 for loops, two of them with if tests inside them, and 1 while loop containing an if which has a nested if inside it. And the inner loop of your code performs a modulo operation (a very expensive operation) and a compare for divisorSize × sequenceIndex iterations.

And you forgot the 1 that starts the sequence.
By the way, the suggested solution runs in a thousandth of a second, not a tenth of a second:
> (define x #f)
> (time (set! x (hamming 1000)))
(time (set! x ...))
no collections
1 ms elapsed cpu time
1 ms elapsed real time
12048 bytes allocated
> (vector-ref x 1000)
51200000
To improve your code, you should follow the suggested solution.
I’m sorry if that’s a little bit harsh. I’ll be happy to look at your improved solution.
Phil
Here is my C implementation!
Thanks Phil for a review of my code. I now see how inefficient it is and was working on a revision but I don’t think I can make my algorithm as efficient as Supriya Koduri’s, to whom I want to ask “what train of thought allowed him to create such an elegant solution?”.
ruby solution :
Another solution based upon Dijkstra’s paper:
cosmin’s solution in Go:
The results of the cosmin’s solution in Go posted by glnbrwn 12Dec2012 in Go do not match.
Python version posted October 30, 2012 by cosmin results are:
[1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, …
Go when executed at the link given above results in:
[1 2 3 4 6 8 9 12 16 18 24 27 32 36 48 54 64
Go is missing 10, 15, 20, 25, 30 when executing it. Multiples of 5 are not getting added to the result. | http://programmingpraxis.com/2011/08/30/hamming-numbers/?like=1&_wpnonce=76a2d9cd23 | CC-MAIN-2015-48 | en | refinedweb |
In HTML, it is actually possible to embed raw image data directly into HTML so that separate image files are not needed. This can speed up HTTP transfers in many cases, but there are compatibility issues especially with Internet Explorer up to and including version 8. This short article will show an easy way to extract these images and convert the HTML to use external images.
Normally images are included using this syntax:
<img src="Image1.png">
The data URI syntax however allows images to be directly embedded, reducing the number of HTTP requests and also allowing for saving on disk as a single file. While this article deals with only the img tag, this method can be applied to other tags as well. Here is an example of data URI usage (base64 payload abbreviated here):

<img src="data:image/png;base64,iVBORw0KGgo...">
More information on data URIs is available in RFC 2397, which defines the data URL scheme.
Most editors do not use the data URI syntax. However, starting with SeaMonkey 2.1 Composer (Mozilla HTML Editor), images which are dragged and dropped are imported using this syntax. This is quite a bad change in my opinion, especially since it is not obvious and because it is a change in behavior from 2.0. In my case, I made a large HTML file with over 50 images before I discovered it was not linking them, but instead embedding them.
Amazingly, there are quite a few online utilities to convert images to the data URI format, but none that I could find that could do the reverse. Because I did not want to hand-edit my document, I wrote a quick utility to extract the images to disk and change the HTML to use external images. This allows the document to be loaded by any standard browser including Internet Explorer 8.
The source code is quite targeted to my specific need. It has a lot of limitations. I have published it however so that it is available as a foundation for you to expand should you have the same need.
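For readers outside .NET, the core of the extraction — find each base64 img payload and decode it — can be sketched in a few lines of Python (a hypothetical illustration, not the published utility; extract_data_uris is a made-up name):

```python
import base64
import re

def extract_data_uris(html):
    """Yield (mime, bytes) for each base64 data URI found in an img tag."""
    pattern = r'<img[^>]*src="data:([^;"]+);base64,([^"]+)"'
    for mime, payload in re.findall(pattern, html):
        yield mime, base64.b64decode(payload)

html = '<img src="data:text/plain;base64,aGVsbG8=">'
for mime, data in extract_data_uris(html):
    print(mime, data)  # -> text/plain b'hello'
```

A real tool would also rewrite the src attribute and pick collision-free file names, as the C# class below does.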
ImageExtract is a console application and accepts one parameter. The parameter is the HTML file for input. The images will be output in the same directory, and the new HTML file will have a -new suffix. So if the input is index.html, the output HTML will be index-new.html.
I have made the project available for download, but it is quite simple. It is a C# .NET Console application. For easy viewing, here is the class:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
namespace ImageExtract {
class Program {
// NOTE - This program is rough and dirty - I designed
// it to accomplish and urgent task. I have not built in
// normal error handling etc.
//
// It also has not been optimized at all
// and certainly is not very efficient.
//
// It also assumes all images are png files.
static void Main(string[] aArgs) {
string xSrcPathname = aArgs[0];
string xPath = Path.GetDirectoryName(xSrcPathname);
string xDestPathname = Path.Combine(xPath,
Path.GetFileNameWithoutExtension(xSrcPathname) + "-New.html");
int xImgIdx = 0;
Console.WriteLine("Processing " + Path.GetFileName(xSrcPathname));
string xSrc = File.ReadAllText(xSrcPathname);
var xDest = new StringBuilder();
string xStart = @";
string xB64;
int x = 0;
int y = 0;
int z = 0;
do {
x = xSrc.IndexOf(xStart, z);
if (x == -1) {
break;
}
// Write out preceding HTML
xDest.Append(xSrc.Substring(z, x - z));
// Get the Base64 string
y = xSrc.IndexOf('"', x + 1);
xB64 = xSrc.Substring(x + xStart.Length, y - x - xStart.Length);
// Convert the Base64 string to binary data
byte[] xImgData = System.Convert.FromBase64String(xB64);
string xImgName;
// Get Image name and replace it in the HTML
// We don't want to overwrite images that might already exist on disk,
// so cycle till we find a non used name
do {
xImgIdx++;
xImgName = "Image" + xImgIdx.ToString("0000") + ".png";
} while (File.Exists(Path.Combine(xPath, xImgName)));
Console.WriteLine("Extracting " + xImgName);
// Write image name into HTML
xDest.Append(xImgName);
// Write the binary data to disk
File.WriteAllBytes(Path.Combine(xPath, xImgName), xImgData);
z = y;
} while (true);
// Write out remaining HTML
xDest.Append(xSrc.Substring(z));
// Write out result
File.WriteAllText(xDestPathname, xDest.ToString());
Console.WriteLine("Output to " + Path.GetFileName(xDestPathname));
}
}
}
This article, along with any associated source code and files, is licensed under The BSD License
using openmp on a 64 threads system
By blue on Apr 14, 2009
What do you do when you get a 64 threads machine? I mean other than trying to find the hidden messages in Pi?
Our group recently acquired a T5120 behemoth for builds, and I wanted to see what it was capable of.
|uname -a
SunOS hypernova 5.10 Generic_127127-11 sun4v sparc SUNW,SPARC-Enterprise-T5120
|psrinfo | wc -l
64
In my case I settled on a slightly less ambitious endeavor. I recently had to implement Gaussian elimination as part of university course work, so I converted it to use OpenMP and compiled it with Sun Studio.
|cat Makefile
gauss: gauss.omp.c
/opt/SUNWspro/bin/cc -xopenmp=parallel gauss.omp.c -o gauss
|diff -u gauss.single.c gauss.omp.c
--- gauss.single.c Tue Apr 14 14:32:57 2009
+++ gauss.omp.c Tue Apr 14 14:44:48 2009
@@ -7,6 +7,7 @@
#include <sys/times.h>
#include <sys/time.h>
#include <limits.h>
+#include <omp.h>
#define MAXN 10000 /* Max value of N */
int N; /* Matrix size */
@@ -35,7 +36,7 @@
char uid[L_cuserid + 2]; /* User name */
seed = time_seed();
- procs = 1;
+ procs = omp_get_num_threads();
/* Read command-line arguments */
switch(argc) {
@@ -63,7 +64,7 @@
exit(0);
}
}
-
+ omp_set_num_threads(procs);
srand(seed); /* Randomize */
/* Print parameters */
printf("Matrix dimension N = %i.\n", N);
@@ -170,6 +171,7 @@
}
+#define CHUNKSIZE 5
void gauss() {
int row, col; /* Normalization row, and zeroing
* element row and col */
@@ -178,7 +180,9 @@
/* Gaussian elimination */
for (norm = 0; norm < N - 1; norm++) {
+ #pragma omp parallel shared(A,B) private(multiplier,col, row)
{
+ #pragma omp for schedule(dynamic, CHUNKSIZE)
for (row = norm + 1; row < N; row++) {
multiplier = A[row][norm] / A[norm][norm];
for (col = norm; col < N; col++) {
As you can see, the changes are very simple and require very little modification to the code. Below are my results, first running with a single thread and then using all 64 threads.
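To make the loop structure concrete, here is a plain single-threaded Python rendering of the forward-elimination kernel; this is my sketch, not the course code. (CPython threads would not speed this loop up because of the GIL, which is exactly why the C/OpenMP version is the one worth parallelizing.)

```python
import random

def gauss_eliminate(A, B):
    """Forward elimination with the same loop nesting as the C code.

    The `row` loop is the one the `#pragma omp for` distributes
    across threads in the OpenMP version.
    """
    n = len(A)
    for norm in range(n - 1):
        for row in range(norm + 1, n):  # parallelized loop in the OpenMP build
            multiplier = A[row][norm] / A[norm][norm]
            for col in range(norm, n):
                A[row][col] -= multiplier * A[norm][col]
            B[row] -= multiplier * B[norm]

# small diagonally dominant system so pivots are never zero
n = 4
A = [[random.random() + (10 if i == j else 0) for j in range(n)] for i in range(n)]
B = [random.random() for _ in range(n)]
gauss_eliminate(A, B)
```

After the call, every entry below the diagonal of `A` is (numerically) zero, which is the invariant each `norm` iteration establishes for its column.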
First the single threaded version.
|time ./gauss 10000 1 4
Random seed = 4
Matrix dimension N = 10000.
Number of processors = 1.
Initializing...
Starting clock.
Stopped clock.
Elapsed time = 1.11523e+07 ms.
(CPU times are accurate to the nearest 10 ms)
My total CPU time for parent = 1.11523e+07 ms.
My system CPU time for parent = 1080 ms.
My total CPU time for child processes = 0 ms.
--------------------------------------------
./gauss 10000 1 4 11163.06s user 1.64s system 99% cpu 3:06:04.96 total
And now using all threads.
|time ./gauss 10000 64 4
Random seed = 4
Matrix dimension N = 10000.
Number of processors = 64.
Initializing...
Starting clock.
Stopped clock.
Elapsed time = 254993 ms.
(CPU times are accurate to the nearest 10 ms)
My total CPU time for parent = 1.53976e+07 ms.
My system CPU time for parent = 37960 ms.
My total CPU time for child processes = 0 ms.
--------------------------------------------
./gauss 10000 64 4 15371.53s user 38.51s system 5757% cpu 4:27.65 total
Now I am all set to look for my name in Pi. :)
*the gaussian elimination source is here.
hello, I hope that one of you could help me out a bit: is there a way, right from the start of the program, to set all inputted character values to be converted to only CAPS or only lower case?
This is what I've found out:
#include <cctype>
char c;
cout << "input a letter ";
cin >> c;
c = toupper(c);
this works and is not too bad, as my program is a constant loop that finishes when the game is over, but I am looking for an easier way out, thanks in advance,
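For what it's worth, there is no global switch that makes an input stream upper-case everything; the usual trick in any language is to normalize at the single point where input enters the program, so the game loop never repeats the conversion. A tiny Python illustration of that idea (names are mine, not the poster's code):

```python
def normalize(raw):
    # fold whatever the user typed to one upper-case character,
    # so the rest of the game logic never calls upper()/toupper() again
    return raw.strip()[:1].upper()

for raw in ["a", "A", " b "]:
    print(normalize(raw))
```

In the C++ version the same effect comes from wrapping `cin >> c; c = toupper(c);` in one small input function and calling that everywhere.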
axon
A ContextManager for keeping threaded output associated with a cell, even after moving on.
import sys
import threading
import time
from contextlib import contextmanager

# we need a lock, so that other threads don't snatch control
# while we have set a temporary parent
stdout_lock = threading.Lock()

@contextmanager
def set_stdout_parent(parent):
    """a context manager for setting a particular parent for sys.stdout

    the parent determines the destination cell of output
    """
    save_parent = sys.stdout.parent_header
    with stdout_lock:
        sys.stdout.parent_header = parent
        try:
            yield
        finally:
            # the flush is important, because that's when the parent_header actually has its effect
            sys.stdout.flush()
            sys.stdout.parent_header = save_parent
Just use this tic as a marker, to show that we really are printing to two cells simultaneously
tic = time.time()
class counterThread(threading.Thread):
    def run(self):
        # record the parent when the thread starts
        thread_parent = sys.stdout.parent_header
        for i in range(3):
            time.sleep(2)
            # then ensure that the parent is the same as when the thread started
            # every time we print
            with set_stdout_parent(thread_parent):
                print i, "%.2f" % (time.time() - tic)
for i in range(3): counterThread().start()
0 2.05
0 2.05
0 2.05
1 4.05
1 4.05
1 4.05
2 6.06
2 6.06
2 6.06
for i in range(3): counterThread().start()
0 2.07
0 2.07
0 2.07
1 4.07
1 4.07
1 4.08
2 6.08
2 6.08
2 6.08
Libraries Software
- dlib C++ Library: a C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems.
- SOCI - The C++ Database Access Library: the database access library for C++ programmers that provides the illusion of embedding SQL in regular C++ code, staying entirely within the C++ standard.
- POCO C++ Libraries: cross-platform C++ libraries with a network/internet focus.
- Mirror C++ reflection library: provides both compile-time and run-time meta-data describing common C++ constructs like namespaces, types, classes, their base classes and member variables, constructors, etc., and provides generic interfaces for their introspection.
- Utf8StringBuf: a small wrapper around standard C string functions.
- Boost.Application
- Compile-Time Intervals Library: a C++ library to manage compile-time intervals, provided as an extension of Boost.MPL.
- Generic parallel partition: implementation of parallel partitioning algorithms achieving an optimal number of comparisons. See L. Frias and J. Petit, "Parallel partition revisited," for further reference.
- safe numerics: error-trapping replacements for C++ integer data types.
- C++ PDF Lexer: simple header-only library written in C++, for lexical analysis of files in PDF format.
- HSM [Now Moved to GitHub]: C++ Hierarchical State Machine framework library.
- Interval Container Library (ICL): an STL based generic library for computations on intervals, interval containers and cubes.
- Limbus: a cross-platform multimedia library aimed at supporting a wide range of programming languages. It exposes a set of C APIs used to generate high-level object-oriented bindings for supported languages at compile-time.
This is the first in a series of articles that looks at how programming in Java is taught to new programmers. In part, this series proposes to introduce object-oriented programming using the programming-centric practices of Extreme Programming (XP). This is not an argument of whether XP is the right methodology for you to use when developing your applications; instead, we'll rethink what new programmers should learn in an introductory course in Java. Please join the discussion at the end of each article and feel free to suggest future topics in the forum or by emailing me at DSteinberg@core.com.
When the Java programming language was new, introductory books, courses, and online tutorials stressed that the C-like syntax would make it easy for experienced programmers to move to Java. Sure, there were books that focused on the advantages of object-oriented programming and promised that inheritance would save the world. But, after a brief history that included some references to Oak, set-top boxes, and the Web, most introductions were ports of existing books on C and C++.
We didn't have to throw out our copies of
Kernighan and Ritchie, we could just learn this extra "object stuff." Somehow, Java was supposed to be better. We were told that Java didn't have any pointers, and yet many of us, within hours of
downloading the JDK, immediately threw a
NullPointerException.
As we learned more and more about programming in Java, we found that C
was not the right way to approach Java. That doesn't mean the early books were
wrong; that's what we needed to hear at the time. Those resources feel dated now
because many of the newcomers to Java are newcomers to programming. We don't
need to explain to them how to get to Java from C -- we get to rethink the best
way to approach programming, if Java is the first language.
This is a particularly interesting time to rethink our approach. Starting in September 2003, the High School Advanced Placement exam in Computer Science will be available in Java. Introductory courses at colleges are being taught in Java, and educators have put a lot of effort into rethinking the curriculum. One project led by Lynn Andrea Stein has resulted in the Rethinking CS101 Project and its upcoming Web site. Stein has moved from the AI lab at MIT to the newly-created Olin College.
If you were allowed to begin a course on object-oriented programming using Java, where would you begin? An obvious choice is:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world.");
    }
}
As Jason Hunter notes in his
Java Servlet Programming book (O'Reilly, 2nd edition 2002),
HelloWorld
has been used as a first program since the B language (the predecessor to the
C language). Introductions to everything from Java to JSPs and Servlets to Ruby to JDBC start with some sort of
HelloWorld example program. It's a short, simple-to-write program with an
immediately observable result. A novice user can be led through the
editing tools to enter and save the half a dozen lines of source code. The
beginner can then be told how to compile and run the program. Even a newbie
knows that the tasks have been successfully completed when the words "Hello
World" appear in the console window.
On the other hand, there are many reasons to view
HelloWorld as the worst
possible way to begin a course in object-oriented programming. At its core, an
object-oriented program should consist of highly-cohesive objects sending
and receiving messages to other objects. Maybe when a user clicks on a
JButton
that is part of the application's GUI, all of the objects interested in knowing
when this particular
JButton is clicked will be notified. These objects then
do what needs to be done -- perhaps by sending
messages to other objects.
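The JButton scenario can be sketched outside Java. Here is a minimal Python version of that listener idea; the class and method names are invented for illustration, they are not from the article:

```python
class Button:
    """A stripped-down stand-in for JButton: objects register interest,
    and a click sends a message to every registered listener."""

    def __init__(self):
        self.listeners = []

    def add_listener(self, listener):
        self.listeners.append(listener)

    def click(self):
        for listener in self.listeners:
            listener.on_click(self)

class Logger:
    """One interested object; it reacts to the message it receives."""

    def __init__(self):
        self.events = []

    def on_click(self, source):
        self.events.append(source)  # could in turn message other objects

button, logger = Button(), Logger()
button.add_listener(logger)
button.click()
```

The point of the sketch is the shape of the interaction: objects holding references to other objects and sending them messages, which is exactly what HelloWorld never does.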
Now look back at
HelloWorld. No objects are created. The
HelloWorld class
doesn't even contain any method, other than
main() -- that's a rant I'll save
for the next section.
HelloWorld is a class without any reason to exist,
and yet this is where we begin our discussion of object-oriented
programming. If you were highly charitable, you might give
HelloWorld OO points
because the
println() method of the
out class in
the
System package is being invoked. Blech.
So an OO program is all about objects interacting with each other by
sending messages back and forth. A pedagogical dilemma is that all of this interacting
has to start somewhere, but this isn't where the explanation of object-oriented programming should begin.
In other words, you need to have a
main() method to start a program, but you don't want to
start the course by explaining all of the components of
main().
Similarly, in a Physics course, we begin by explaining how physical objects interact
with each other. We don't begin by explaining the Big Bang and the creation of the universe.
It's much easier to demonstrate and explain Newton's laws of motion. In
a Java program, the "big bang" that creates the first object is the
main()
method:
public static void main( String [] args );
You could start the class by explaining what each keyword means. This means you'll have to begin the class by explaining:
Accessors: Before students even understand the difference between objects and
classes, before they have even seen a program with more than one object or class,
you have to explain levels of access. The
HelloWorld class isn't even
in a package, and yet you'll be forced to discuss packaging and hierarchies in order to
distinguish among
public,
protected,
private,
and that access level that doesn't even have a name.
Class methods: The use of the
static keyword depends on students having
an understanding of which methods and attributes should belong to a class and which should belong to an object. As mentioned before, at this point, students don't see the difference between classes and objects. (If you are an experienced Java developer, you may not remember how difficult it is to grasp this distinction.) Note that the
HelloWorld class has a single method. Whether or not
main() is
static is not based so much on the conceptual differences between a class and an
object as it is on whether the
main() method needs to be called
before an object of type
HelloWorld is created.
Return type: Imagine it is your task to explain the return type of a method.
You may choose something like
int add(int i, int j) to illustrate that the
add()
method will return the sum of
i and
j as an
int.
You would not choose a method that doesn't return anything as your first example. You wouldn't introduce
void as your first return type any more than you would try to explain multiplication starting with examples of numbers multiplied by zero.
Command line arguments: The parameters of the
main() method
are command-line arguments. Often the example that follows
HelloWorld is a
refinement that allows the user to type in
java HelloWorld myName
and get the feedback
Hello myName. Do you really want to cover
these right away? For this and the other items in this list, the question isn't
whether these are important topics, but rather whether they are the topics that
would best be presented first in a course designed to teach students object-oriented
programming in Java.
Arrays of Strings: The
args variable is a
String array.
I don't want to beat this example to death, but you don't want to have to introduce arrays in order to adequately explain the signature of this method. You also don't want to have to explain that if you want to read in an
int, then you read it in as a
String and convert it
to an
int by using another
static method in a wrapper class.
Sigh. The
main() method might be the right place to start the
execution of a Java program, but it's not the right place to start your
explanation of a Java program.
So what do you think? Are you ready to give up on
HelloWorld as a first
program, or do you see benefits that you would like to argue in favor of? What
about
main()? You need
main() or your program
won't run. How do you approach it? Do you tell your students to "just type it in
verbatim, you learn what all of the terms mean later"? Do you explain each of
the terms and then continue the semester with the three people that haven't dropped
your class?
How would you begin a Java-based introduction to object-oriented programming?
Real World Applications/Event Driven Applications
From HaskellWiki
1 Introduction
An event driven application/architecture is an application/architecture that is driven by internal/external events. Most non-trivial applications/architectures (from Operating Systems to Web Servers and Enterprise applications) are event driven.
Examples of events are:
- A Loan Application has been accepted/rejected (commercial business).
- A new Rostering Schedule is ready for distribution to all crew (Airline Management System).
- An Illegal Trade Pattern has been detected (Trading Fraud Detection System).
- A simulated car has hits another simulated car (Commercial Racing Game).
- A robot has reached its destination (Real Time Warehouse Management System).
- A HTML message has been received (Web Server).
- A key has been pressed (Text Editor).
The examples demonstrate that events can be anything from high level business events ("A loan application accepted/rejected") to low level events ("User pressed key").
In the following, I will show a way to architecture a Haskell system so that it can scale from small "toy" applications (dealing with low level IO events) to large scale, high volume, distributed, fault tolerant "Enterprise" scale applications.
Please note that the following is not the only way to attack the problem. So please contribute your (clearly superior of course) alternative way to do it here: Real World Applications
2 Events in Haskell
In the following, I define an "Event" to be a value describing something that has happened in the past. And yes this should really be called an "Event Notification" but life is too short :-)
Here is a straightforward way to define Events in Haskell:
data Event = EventUserExit         -- User wants to exit
           | EventUserSave         -- User wants to save
           | EventUserSaveAs String
           | EventUserUndo         -- User wants to undo
           | EventUserRedo         -- User wants to redo
           deriving (Eq, Show)
Events can be high level or low level depending on how "low level" in the system you are operating. Within a UI sub-system the events are typically low level (key pressed, window closed). In a large scale distributed system the events are typically high level business events (EventCustomerCreated <details>).
3 A Tiny Event Driven Haskell Application
Let's warm up with a tiny toy event driven Haskell application. In this case, an event driven calculator that can add numbers and exit. Not very exciting but it is useful for introducing a number of key concepts.
I will in the following use the term "Domain" to stand for the application state. Fell free to use "Shipping" instead of Domain if your application area is shipping or "Game" if your domain is a game application. More info here: Domain Model.
Here is the Domain for the calculator application:
module Main where

data Domain = Domain Int deriving (Eq, Show)
The calculator state simply keeps track of the current value.
The next step is to define the events that the user can generate:
data Event = EventAdd Int
           | EventExit
           deriving (Eq, Show)
In this case all the user can do is to add and exit.
With those two definition, we can now write the "domain update" function. It simply takes the current state of the domain and an event as input and outputs the updated domain:
dmUpdate :: Domain -> Event -> Domain
dmUpdate (Domain v) (EventAdd a) = Domain (v + a)
dmUpdate dm _ = dm
For this tiny toy example all we can do is to react to the EventAdd event and calculate the new value.
The next step is to write the UI. For now it doesn't really matter how the UI is implemented. All that matters is that it can somehow present the Domain to a user and get user events back (More on this later):
uiUpdate :: Domain -> IO [Event]
uiUpdate (Domain v) = do
    putStrLn $ "Value is now: " ++ show v
    if v < 3
        then return [EventAdd 1]
        else return [EventExit]
For this tiny toy example the UI will not even read from stdin. It instead return EventAdd events until the Domain state value is >= 3. After which it generates the EventExit event. Yes it is just a tiny toy example :-)
Given dmUpdate and uiUpdate we can now write what is traditionally called the "Event Loop". The event loop is the beating heart of an event driven application:
run :: Domain -> [Event] -> IO ()
run dm [] = do
    events <- uiUpdate dm
    run dm events
run _ (EventExit:_) = return ()
run dm (e:es) = run (dmUpdate dm e) es
For this tiny toy example, the event loop simply asks the UI for new events. And updates the domain when events are available. It exits when EventExit is received from the UI.
And here is the main function to bootstrap the event loop:
main :: IO ()
main = run (Domain 0) []
main simply creates a new domain (Domain 0) and calls run with an empty event list to get started.
The result of running the application is:
Value is now: 0
Value is now: 1
Value is now: 2
Value is now: 3
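For readers who would rather trace the loop in Python, here is a loose port of the same calculator; this is my sketch for comparison, not part of the wiki page:

```python
# Pure update function plus an event loop that asks the "UI" for more events.
# Events are (kind, argument) tuples standing in for the Haskell constructors.

def dm_update(value, event):
    kind, arg = event
    return value + arg if kind == "add" else value

def ui_update(value, log):
    log.append("Value is now: %d" % value)
    return [("add", 1)] if value < 3 else [("exit", None)]

def run(value, log):
    while True:
        for event in ui_update(value, log):
            if event[0] == "exit":
                return value
            value = dm_update(value, event)

log = []
final = run(0, log)
print(final)
```

Note the same dependency direction as the Haskell version: `dm_update` knows nothing about the UI, and the "UI" could be swapped for one that reads stdin without touching the domain code.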
4 Consequences
The tiny application above is simple. However a number of key choices have been made that will profoundly shape how the application scales from this tiny application to "Enterprise" level:
- UI depends on Domain; Domain does not depend on UI. This is the complete opposite of how most applications are (wrongly) developed.
- UI can be swapped without changing Domain. Try that with your average application.
- dmUpdate is not generating any events. However, in a more realistic application, updating the domain will typically result in additional events being generated.
- Domain is pure. All the goodness of functional programming. The idea is to keep as much of the code as possible UI independent.
- Domain can be tested without the UI. No need to set up a database to test.
- The Domain API is value based, not API based. Events can be queued, recorded, distributed and replayed.
- Events can (should) be asynchronous.
- And by the way, this is not MVC/MVP, as will be shown later :-)
5 Growing the Application
So how do we grow this tiny application to "Enterprise" scale? Read on:
- Simple File Storage
- Logging
- Testing
- Crash Recovery
- Undo/Redo
- Time
- UI
- Performance Monitoring (Dashboard)
- Automatic Server Scaling
- Complex Event Processing
- (Multiple) Databases (Separating Online and Reporting)
- Client/Server (Haste etc.)
- Humble "File in Directory" Events
- Reporting
- Business Workflow
- Remove Control
- Event Sourcing
- Event Bus
- Security (Event Pattern Based)
- Interop with non-Haskell applications
6 Questions and feedback
If you have any questions or suggestions, feel free to mail me.
Solution
UrlDisp provides (Fast)CGI programs a minimalistic domain-specific parser for URLs.
Hierarchical part of the URL (e.g., /foo/bar/ or /bar/baz/quix) is tokenized (turned into a list of "URL fragments", e.g. ["foo","bar"]) and matched against rules defined using UrlDisp combinators. Every rule consists of, basically, a predicate and a CGI action. Once a predicate is satisfied, an action is performed; otherwise, alternatives are tried in the order given (<|> associates to the left). The matching algorithm is backtracking.
For example:
<|> endPath |? ("cmd", "bar") *> output "goodbye"
Will behave as follows:
- all GET requests to /foo/bar (and anything that follows) and parameter "cmd" set to "foo" will output "hello"
- requests with empty path and parameter cmd set to "bar" will output "goodbye"
- other requests will trigger a 404 page
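The same tokenize-and-match idea can be illustrated in Python. This is a toy dispatcher with invented names; it ignores the HTTP method and the combinator syntax, and only shows the "rules tried in order, first match wins, fallback to 404" behavior:

```python
def tokenize(path):
    """Turn '/foo/bar/' into ['foo', 'bar'], like UrlDisp's URL fragments."""
    return [t for t in path.split("/") if t]

def dispatch(path, params, rules):
    """Try (predicate, action) pairs in order; first satisfied predicate wins."""
    tokens = tokenize(path)
    for predicate, action in rules:
        if predicate(tokens, params):
            return action
    return "404"

rules = [
    # path starts with /foo/bar (anything may follow) and cmd == "foo"
    (lambda t, p: t[:2] == ["foo", "bar"] and p.get("cmd") == "foo", "hello"),
    # empty path and cmd == "bar"
    (lambda t, p: t == [] and p.get("cmd") == "bar", "goodbye"),
]

print(dispatch("/foo/bar/baz", {"cmd": "foo"}, rules))
print(dispatch("/", {"cmd": "bar"}, rules))
```

UrlDisp expresses the same thing declaratively with combinators and gets backtracking for free, but the runtime behavior matches this sketch.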
As you can see, the |/ combinator matches the current token against its argument.
import Network.UrlDisp
output $ show v
-- you will have to provide this one
queryText = "select * from ..."
This section describes the WwdEmbedded.java program, highlighting details specific to accessing a Derby database from a JDBC program.
INITIALIZATION SECTION: The initial lines of code identify the Java packages used in the program, then set up the Java class WwdEmbedded and the main method signature. Refer to a Java programming guide for information on these program constructs.
import java.sql.*;

public class WwdEmbedded {
    public static void main(String[] args) {
DEFINE VARIABLES SECTION: The initial lines of the main method define the variables and objects used in the program. This example uses variables to store the information needed to connect to the Derby database. The use of variables for this information makes it easy to adapt the program to other configurations and other databases.
String driver = "org.apache.derby.jdbc.EmbeddedDriver";
String dbName = "jdbcDemoDB";
String connectionURL = "jdbc:derby:" + dbName + ";create=true";
...
String createString = "CREATE TABLE WISH_LIST "
    + "(WISH_ID INT NOT NULL GENERATED ALWAYS AS IDENTITY "
    ...
    + " WISH_ITEM VARCHAR(32) NOT NULL) ";
LOAD DRIVER SECTION: Loading the Derby embedded JDBC driver starts the Derby database engine. The try and catch block (the Java error-handling construct) catches the exceptions that may occur. A problem here is usually due to an incorrect classpath setting.
String driver = "org.apache.derby.jdbc.EmbeddedDriver"; ... try { Class.forName(driver); } catch(java.lang.ClassNotFoundException e) { ... }
BOOT DATABASE SECTION: The DriverManager class loads the database using the Derby connection URL stored in the variable connectionURL. This URL includes the parameter ;create=true so that the database will be created if it does not already exist. The primary try and catch block begins here. This construct handles errors for the database access code.
String connectionURL = "jdbc:derby:" + dbName + ";create=true";
...
try {
    conn = DriverManager.getConnection(connectionURL);
    ...
    <most of the program code is contained here>
} catch (Throwable e) {
    ...
}
INITIAL SQL SECTION: The program initializes the objects needed to perform subsequent SQL operations and checks to see if the required data table exists.
The statement object s is initialized. If the utility method WwdUtils.wwdChk4Table does not find the WISH_LIST table, the statement object's execute method creates the table by executing the SQL stored in the variable createString.
s = conn.createStatement(); if (! WwdUtils.wwdChk4Table(conn)) { System.out.println (" . . . . creating table WISH_LIST"); s.execute(createString); }
The INSERT statement used to add data to the table is bound to the prepared statement object psInsert. The prepared statement uses the question mark parameter ? to represent the data that will be inserted by the user. The program sets the actual value to be inserted later on, before executing the SQL. This is the most efficient way to execute SQL statements that will be used multiple times.
psInsert = conn.prepareStatement ("insert into WISH_LIST(WISH_ITEM) values (?)");
ADD / DISPLAY RECORD SECTION: This section uses the utility method WwdUtils.getWishItem to gather information from the user. It then uses the objects set up previously to insert the data into the WISH_LIST table and then display all records. A standard do loop causes the program to repeat this series of steps until the user types exit. The data-related activities performed in this section are as follows:
psInsert.setString(1, answer);
psInsert.executeUpdate();
myWishes = s.executeQuery("select ENTRY_DATE, WISH_ITEM from WISH_LIST order by ENTRY_DATE");
The while loop reads each record in turn by calling the next method. The getTimestamp and getString methods return specific fields in the record in the proper format. The fields are displayed using rudimentary formatting.
while (myWishes.next()) {
    System.out.println("On " + myWishes.getTimestamp(1)
        + " I wished for " + myWishes.getString(2));
}
Close the ResultSet to release the memory being used.
myWishes.close();
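The same connect, create-if-missing, parameterized-insert, query cycle can be sketched with Python's sqlite3 module; this is an analogy added for comparison, not part of the Derby tutorial:

```python
import sqlite3

# in-memory database stands in for the jdbcDemoDB directory Derby creates
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE WISH_LIST ("
    " WISH_ID INTEGER PRIMARY KEY AUTOINCREMENT,"
    " ENTRY_DATE TIMESTAMP DEFAULT CURRENT_TIMESTAMP,"
    " WISH_ITEM VARCHAR(32) NOT NULL)")

# the '?' marker plays the same role as the PreparedStatement parameter
conn.execute("INSERT INTO WISH_LIST(WISH_ITEM) VALUES (?)", ("a pony",))

rows = conn.execute(
    "SELECT ENTRY_DATE, WISH_ITEM FROM WISH_LIST ORDER BY ENTRY_DATE").fetchall()
for entry_date, wish_item in rows:
    print("On", entry_date, "I wished for", wish_item)
conn.close()
```

As in the JDBC version, the parameter marker keeps the statement reusable and avoids building SQL by string concatenation from user input.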
DATABASE SHUTDOWN SECTION: If an application starts the Derby engine, the application should shut down all databases before exiting. The attribute ;shutdown=true in the Derby connection URL performs the shutdown. When the Derby engine is shutdown, all booted databases will automatically shut down. The shutdown process cleans up records in the transaction log to ensure a faster startup the next time the database is booted.
This section verifies that the embedded driver is being used, then issues the shutdown command and catches the shutdown exception to confirm that the Derby engine shut down cleanly. The shutdown status is displayed before the program exits.
if (driver.equals("org.apache.derby.jdbc.EmbeddedDriver")) {
    boolean gotSQLExc = false;
    try {
        DriverManager.getConnection("jdbc:derby:;shutdown=true");
    } catch (SQLException se) {
        if ( se.getSQLState().equals("XJ015") ) {
            gotSQLExc = true;
        }
    }
    if (!gotSQLExc) {
        System.out.println("Database did not shut down normally");
    } else {
        System.out.println("Database shut down normally");
    }
}
DERBY EXCEPTION REPORTING CLASSES: The two methods at the end of the file, errorPrint and SQLExceptionPrint, are generic exception-reporting methods that can be used with any JDBC program. This type of exception handling is required because often multiple exceptions (SQLException) are chained together and then thrown. A while loop is used to report on each error in the chain. The program starts this process by calling the errorPrint method from the catch block of the code that accesses the database.
// Beginning of the primary catch block: uses errorPrint method
} catch (Throwable e) {
    /* Catch all exceptions and pass them to
    ** the exception reporting method */
    System.out.println(" . . . exception thrown:");
    errorPrint(e);
}
The errorPrint method prints a stack trace for all exceptions except a SQLException. Each SQLException is passed to the SQLExceptionPrint method.
static void errorPrint(Throwable e) {
    if (e instanceof SQLException)
        SQLExceptionPrint((SQLException)e);
    else {
        System.out.println("A non SQL error occured.");
        e.printStackTrace();
    }
} // END errorPrint
The SQLExceptionPrint method iterates through each of the exceptions on the stack. For each error, the method displays the codes, message, and stacktrace.
// Iterates through a stack of SQLExceptions
static void SQLExceptionPrint(SQLException sqle) {
    while (sqle != null) {
        System.out.println("\n---SQLException Caught---\n");
        System.out.println("SQLState: " + (sqle).getSQLState());
        System.out.println("Severity: " + (sqle).getErrorCode());
        System.out.println("Message: " + (sqle).getMessage());
        sqle.printStackTrace();
        sqle = sqle.getNextException();
    }
} // END SQLExceptionPrint
To see the output produced by this method, type a wish-list item with more than 32 characters, such as I wish to see a Java program fail.
Using the numbers 1, 3, 4 and 6, create an algebraic expression that equals 24. All four numbers must be used and each number may only be used once. Those four numbers are the only numbers permitted in the expression. The expression is restricted to using addition, subtraction, multiplication and division. A particular mathematical operator maybe applied more than once. The expression may contain parentheses to define the order of operations.
Here are some valid expressions that do not equal 24:
((6 * 4) - 3) - 1
(3 + 4) * (1 + 6)
4 * (1 + (6 / 3))
Here are some invalid expressions:
((6 * 4) + (-3)) + 1
34 * (1 + 6)
3.4 * (1 + 6)
4 * (1 + (6 ^ 3))
6 * (3 + 1)
1 + 4 + (6 * 4)
The first expression used the negation unary operator. The symbol for negation is the same symbol used in subtraction, but it's an entirely different mathematical function. The second expression attempted to form 34 by putting the numbers 3 and 4 adjacent to each other. The third expression attempted to form 3.4 by introducing a decimal point between 3 and 4. Numbers can only be combined using the four operators stated in the problem. The fourth expression used 6^3. Exponents are not permitted even though it is possible to express them without an explicit operator like ^. The fifth expression does not include 4. The sixth expression includes 4 twice.
1, 3, 4 and 6 are base-10, real numbers. The expression must evaluate to 24 in base-10. During evaluation, each sub-expression evaluates to a real number. There is no implicit conversion to integers via the floor, ceiling or round functions at any stage during evaluation.
A simple solution exists. The answer equals exactly 24 and it doesn’t apply some sort of trick or loophole that breaks the rules explained above. But, I bet you can’t find it.
As far as I can determine, the only way to find the solution is via a brute force search. Is it feasible to do that by hand? How large is the problem space? How many expressions exist with the constraints specified in the problem?
The number of possible expressions can be determined by multiplying together the following 3 values:
The number of permutations of 1, 3, 4 and 6 is 4! = 24. That follows from the fact that we have four choices for the first number in the expression. Once that number is in place, we can select among one of the three remaining numbers for the second number in the expression. And so on: 4 * 3 * 2 * 1 = 24.
Next, we need to compute the number of operator permutations where these permutations may contain repeats. But, how many operators are in the expression? Does it vary? Any expression containing parentheses can be expressed in Reverse Polish notation (RPN, a.k.a. Postfix notation) using no parentheses. An RPN expression is easy to evaluate using a stack. The expression is evaluated from left-to-right. When a number is encountered, it is pushed onto the stack and the stack size increases by 1. When one of the binary operators above is encountered, 2 values are popped from the stack, the operator is applied and the result is pushed back onto the stack. In that case, the stack size decreases by 1. The final result is left on the stack. Since the expression contains 4 numbers and after evaluation the stack size is 1, the expression must always contain exactly 3 operators, each of which reduces the stack size by 1.
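The stack evaluation just described is only a few lines of code. Here is an illustrative sketch in Python (the article's own code is C#; the function name `eval_rpn` is mine, not the author's):

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish notation expression left-to-right.

    Numbers are pushed onto the stack; each binary operator pops two
    values, applies itself, and pushes the result back on."""
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # top of stack is the right operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()               # the final result is the lone stack entry

# (3 + 4) * (1 + 6) in RPN:
print(eval_rpn(['3', '4', '+', '1', '6', '+', '*']))  # 49.0
```

Note the pop order: the first value popped is the right operand, which matters for the non-commutative operators - and /.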
We have four choices for each of the three operators in the expression: 4 * 4 * 4 = 4^3 = 64.
Finally, how many expression forms are there? Meaning, how many RPN expressions exist containing 4 numbers and 3 binary operators? Let N denote a number and B a binary operator. Consider this RPN expression:
N N N N
We need to insert 3 B’s. Let’s number the insertion points:
1 N 2 N 3 N 4 N 5
A binary operator consumes the top 2 values on the stack. Hence, we can’t put a B at point 1 or 2 because it needs to follow at least 2 N’s. If we insert a B at point 3, we can’t put a second B there because the first reduced the stack size to 1. As described above, the stack size at any point is the number of N’s minus the number of B’s. Analyzing the remaining possibilities yields these 5 expressions:
N N B N B N B
N N B N N B B
N N N B B N B
N N N B N B B
N N N N B B B
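That case analysis can be cross-checked mechanically: enumerate every way to place the three B's among the seven token slots and keep only the placements where the running stack size never drops below 1. An illustrative Python sketch (the function name is mine; the article's code is C#):

```python
from itertools import combinations

def valid_skeletons():
    """Count arrangements of 4 N's and 3 B's that form a legal RPN
    expression: after every token the stack size must be >= 1,
    i.e. each prefix must contain more N's than B's."""
    legal = []
    # Choose which 3 of the 7 token slots hold a B.
    for b_slots in combinations(range(7), 3):
        tokens = ['B' if i in b_slots else 'N' for i in range(7)]
        size = 0
        ok = True
        for t in tokens:
            size += 1 if t == 'N' else -1  # N pushes one; B pops two, pushes one
            if size < 1:
                ok = False
                break
        if ok and size == 1:
            legal.append(' '.join(tokens))
    return legal

print(len(valid_skeletons()))  # 5
```

The count, 5, is the third Catalan number, which is exactly what one would expect for binary expression trees over four leaves.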
The infix version of those expressions is:
((N B N) B N) B N
(N B N) B (N B N)
(N B (N B N)) B N
N B ((N B N) B N)
N B (N B (N B N))
If we plug in the permutations of the four numbers and the permutations (with repeats) of the operators, we get 24 * 64 * 5 = 7680 expressions. That’s way too many to do by hand. Instead, I used a simple C# program.
First, I manually listed out the permutations of the 4 numbers:
1: private double[,] permutations = {
2: { 1, 3, 4, 6, },
3: { 1, 3, 6, 4, },
4: { 1, 4, 3, 6, },
5: { 1, 4, 6, 3, },
6: { 1, 6, 3, 4, },
7: { 1, 6, 4, 3, },
8: { 3, 1, 4, 6, },
9: { 3, 1, 6, 4, },
10: { 3, 4, 1, 6, },
11: { 3, 4, 6, 1, },
12: { 3, 6, 1, 4, },
13: { 3, 6, 4, 1, },
14: { 4, 1, 3, 6, },
15: { 4, 1, 6, 3, },
16: { 4, 3, 1, 6, },
17: { 4, 3, 6, 1, },
18: { 4, 6, 1, 3, },
19: { 4, 6, 3, 1, },
20: { 6, 1, 3, 4, },
21: { 6, 1, 4, 3, },
22: { 6, 3, 1, 4, },
23: { 6, 3, 4, 1, },
24: { 6, 4, 1, 3, },
25: { 6, 4, 3, 1, },
26: };
Next, I created an array of the 4 operators and I defined a function that can apply them:
1: private char[] operators = { '+', '-', '*', '/' };
2:
3: private double F(double a, char op, double b) {
4: switch (op) {
5: case '+':
6: return a + b;
7: case '-':
8: return a - b;
9: case '*':
10: return a * b;
11: default:
12: return a / b;
13: }
14: }
Finally, I created a function that contains 4 nested loops. The outer 3 loops iterate over the operator permutations. The inner-most loop iterates over the number permutation array. The body of that loop evaluates the 5 expression forms and tests for equality against 24.
1: public void Solve() {
2: foreach (char p in operators) {
3: foreach (char q in operators) {
4: foreach (char r in operators) {
5: for (int i = 0; i < 24; i++) {
6: double a = permutations[i, 0];
7: double b = permutations[i, 1];
8: double c = permutations[i, 2];
9: double d = permutations[i, 3];
10:
11: if (F(F(a, p, b), r, F(c, q, d)) == 24) {
12: Console.WriteLine("({0} {1} {2}) {3} ({4} {5} {6}) = 24",
13: a, p, b, r, c, q, d);
14: return;
15: }
16: if (F(F(F(a, p, b), q, c), r, d) == 24) {
17: Console.WriteLine("(({0} {1} {2}) {3} {4}) {5} {6} == 24",
18: a, p, b, r, c, q, d);
19: return;
20: }
21: if (F(F(a, p, F(b, q, c)), r, d) == 24) {
22: Console.WriteLine("({0} {1} ({2} {3} {4})) {5} {6} == 24",
23: a, p, b, r, c, q, d);
24: return;
25: }
26: if (F(a, p, F(b, q, F(c, r, d))) == 24) {
27: Console.WriteLine("{0} {1} ({2} {3} ({4} {5} {6})) == 24",
28: a, p, b, r, c, q, d);
29: return;
30: }
31: if (F(a, p, F(F(b, q, c), r, d)) == 24) {
32: Console.WriteLine("{0} {1} (({2} {3} {4}) {5} {6}) == 24",
33: a, p, b, r, c, q, d);
34: return;
35: }
36: }
37: }
38: }
39: }
40: }
One of those if-statements discovers and prints the answer. I omit the answer from this article just in case you’re crazy enough to attempt to solve this by hand.
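For the same reason I won't reveal it here either, but the claim that a solution exists among those 7,680 expressions is easy to verify mechanically. This is an illustrative Python re-implementation of the same search (the article's program is the C# above); it only reports whether solutions exist, so the answer stays hidden, and it guards division by zero, which C#'s doubles handle implicitly:

```python
from itertools import permutations, product

def count_24_solutions():
    """Brute-force the 24 Puzzle over all 5 expression forms,
    all orderings of the numbers and all operator choices with repeats."""
    def f(a, op, b):
        if op == '+': return a + b
        if op == '-': return a - b
        if op == '*': return a * b
        return a / b if b != 0 else float('nan')

    hits = 0
    for a, b, c, d in permutations([1.0, 3.0, 4.0, 6.0]):
        for p, q, r in product('+-*/', repeat=3):
            forms = [
                f(f(f(a, p, b), q, c), r, d),   # ((a p b) q c) r d
                f(f(a, p, f(b, q, c)), r, d),   # (a p (b q c)) r d
                f(f(a, p, b), r, f(c, q, d)),   # (a p b) r (c q d)
                f(a, p, f(f(b, q, c), r, d)),   # a p ((b q c) r d)
                f(a, p, f(b, q, f(c, r, d))),   # a p (b q (c r d))
            ]
            hits += sum(1 for v in forms if abs(v - 24) < 1e-9)
    return hits

print(count_24_solutions() > 0)  # True
```

The tolerance comparison (abs(v - 24) < 1e-9) is deliberate: intermediate values are non-integer fractions, so exact floating-point equality could miss a legitimate solution.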
Why stop there? What about a computational solution to a generalized version of the problem? Consider expressions containing N numbers, N-1 binary operators and any number of unary operators. The goal is to discover an expression that equals some specified target value.
Unary operators, such as square root and negation, are kind of a problem. In an RPN expression, a unary operator pops a single value off the stack, applies its associated unary function and pushes the result back onto the stack. Unary operators don’t alter the stack size, so they could be applied indefinitely. To limit the search space, we’re forced to specify the number of unary operators that can appear in the expression.
First, let’s consider ways of generating permutations. Generating permutations of the operators is easier than generating permutations of the numbers because repeats are allowed. It can be done with a simple recursive function:
1: public void PrintPermutations(char[] operators) {
2: PrintPermutations(operators, 0, new char[operators.Length]);
3: }
4:
5: private void PrintPermutations(char[] operators, int index,
char[] permutation) {
6: if (index == permutation.Length) {
7: PrintArray(permutation);
8: } else {
9: for (int i = 0; i < operators.Length; i++) {
10: permutation[index] = operators[i];
11: PrintPermutations(operators, index + 1, permutation);
12: }
13: }
14: }
The bootstrap function on line 1 accepts the array of operators. It creates a second array on line 2 to store an individual permutation and it makes a call to the recursive function on line 5. The recursive function acts like the set of nested loops in the prior example. The index variable is an index into the permutation array, but it's essentially the nesting level. The loop on line 9 plugs every possible operator into that position of the array and then it recursively calls the function again to do the same to the rest of the array. Note that index + 1 is passed in the call. If the index equals the length of the array, then it’s full and it’s printed. The output looks like this:
+ + + +
+ + + -
+ + + *
+ + + /
+ + - +
+ + - -
+ + - *
+ + - /
+ + * +
+ + * -
+ + * *
+ + * /
+ + / +
+ + / -
+ + / *
+ + / /
+ - + +
+ - + -
+ - + *
+ - + /
...
/ / * *
/ / * /
/ / / +
/ / / -
/ / / *
/ / / /
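Permutations with repeats allowed are just a Cartesian power, so most standard libraries can generate them directly. An illustrative Python equivalent of the listing above (not part of the article's C# program):

```python
from itertools import product

ops = '+-*/'
# Every length-4 arrangement of the 4 operators, repeats allowed.
perms = [' '.join(p) for p in product(ops, repeat=4)]
print(len(perms))                 # 4**4 = 256
print(perms[0], '|', perms[-1])   # + + + + | / / / /
```

Because product iterates its rightmost position fastest, the output order matches the nested-loop order of the recursive C# version.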
Permuting the numbers can be done using a similar technique; however, additional logic is required to prevent repeats:
1: public void PrintPermutations(double[] numbers) {
2: Array.Sort(numbers);
3: bool[] available = new bool[numbers.Length];
4: for (int i = 0; i < available.Length; i++) {
5: available[i] = true;
6: }
7: PrintPermutations(numbers, 0, new double[numbers.Length], available);
8: }
9:
10: private void PrintPermutations(double[] numbers, int index,
11: double[] permutation, bool[] available) {
12: if (index == numbers.Length) {
13: PrintArray(permutation);
14: } else {
15: double lastNumber = Double.NaN;
16: for (int i = 0; i < available.Length; i++) {
17: if (available[i] && numbers[i] != lastNumber) {
18: available[i] = false;
19: permutation[index] = lastNumber = numbers[i];
20: PrintPermutations(numbers, index + 1, permutation, available);
21: available[i] = true;
22: }
23: }
24: }
25: }
The bootstrap function on line 1 accepts the numbers to permute. That array may contain duplicate numbers. To make it easier to discover duplicates, the array is sorted on line 2. On line 3, an array of boolean flags is allocated to the same length as the numbers array. Lines 4–6 initialize the elements to true. As each number is placed into the permutation, the code will note that it can’t be used again by setting the corresponding index of the available array to false. The arrays are passed to the recursive function on line 10. The loop on lines 16–23 scans the available array for the next unused number. Since the sorted numbers array may contain duplicates, before the loop plugs an available number into the permutation array, it checks that it doesn’t match the last number it plugged into the same position. After the recursive call on line 20, it renders the number available again.
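The sort-then-skip-equal-neighbors trick is what later makes the four 4's feasible: with duplicate inputs, only distinct orderings are generated. Here is an illustrative Python sketch of the identical logic (the generator name is mine; the article's version is C#):

```python
def distinct_permutations(numbers):
    """Yield permutations without repeats, even when the input
    contains duplicate values (e.g. the four 4's)."""
    numbers = sorted(numbers)
    used = [False] * len(numbers)
    perm = []

    def recurse():
        if len(perm) == len(numbers):
            yield tuple(perm)
            return
        last = None  # last value tried at this position
        for i, n in enumerate(numbers):
            if not used[i] and n != last:
                used[i] = True
                perm.append(n)
                last = n
                yield from recurse()
                perm.pop()
                used[i] = False

    yield from recurse()

print(len(list(distinct_permutations([1, 3, 4, 6]))))  # 24
print(len(list(distinct_permutations([4, 4, 4, 4]))))  # 1
```

For four distinct numbers the skip never fires and all 4! = 24 orderings appear; for four equal numbers it collapses the search to a single ordering.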
My generalized solution uses those recursive techniques of producing permutations as part of building RPN expressions. An RPN expression is a list of numbers and operators. I abstracted the elements of the list into this interface:
1: public interface IElement {
2: void Evaluate(Stack<double> stack);
3: void Print(Stack<string> stack);
4: }
Each element is able to evaluate itself and to print itself. For evaluation, it pops 0, 1 or 2 values off the stack, applies a function to those values and pushes a new value back onto the stack. Printing works the same way, producing infix notation: it pops strings off a stack and pushes a new string back on.
Here’s the implementation of the number element:
1: public class Number : IElement {
2:
3: public double value;
4:
5: public Number(double value) {
6: this.value = value;
7: }
8:
9: public void Evaluate(Stack<double> stack) {
10: stack.Push(value);
11: }
12:
13: public void Print(Stack<string> stack) {
14: stack.Push(value.ToString());
15: }
16: }
Number is evaluated and printed by simply pushing its contained value onto the stack.
The binary operators are modeled as follows:
1: public delegate double BinaryFunction(double a, double b);
2:
3: public class BinaryOperator : IElement {
4:
5: private BinaryFunction binaryFunction;
6: private string name;
7:
8: public BinaryOperator(BinaryFunction binaryFunction, string name) {
9: this.binaryFunction = binaryFunction;
10: this.name = name;
11: }
12:
13: public void Evaluate(Stack<double> stack) {
14: double b = stack.Pop();
15: double a = stack.Pop();
16: stack.Push(binaryFunction(a, b));
17: }
18:
19: public void Print(Stack<string> stack) {
20: string b = stack.Pop();
21: string a = stack.Pop();
22: stack.Push(string.Format("({0} {1} {2})", a, name, b));
23: }
24: }
The constructor accepts a binary function, defined by the delegate, and a name. The Evaluate() and Print() functions are straightforward. They pop 2 values off the stack, they process them and they push a new value back onto the stack.
The same idea is used for unary operators:
1: public class UnaryOperator : IElement {
2:
3: private UnaryFunction unaryFunction;
4: private string name;
5: private bool before;
6:
7: public UnaryOperator(
8: UnaryFunction unaryFunction, string name, bool before) {
9: this.unaryFunction = unaryFunction;
10: this.name = name;
11: this.before = before;
12: }
13:
14: public void Evaluate(Stack<double> stack) {
15: stack.Push(unaryFunction(stack.Pop()));
16: }
17:
18: public void Print(Stack<string> stack) {
19: stack.Push(string.Format(
20: before ? "{0}({1})" : "({1}){0}", name, stack.Pop()));
21: }
22: }
Unary operators may be printed prefix or postfix. For instance, negation is denoted by a minus-sign to the left of the value. Factorial, on the other hand, is denoted by an exclamation mark to the right of the value. The before parameter passed into the constructor enables Print() to show the operator before or after the value.
A list of IElement objects represents an RPN expression in deferred execution form. To evaluate and to print the expression, the following methods are used:
1: private double EvalutateExpression(List<IElement> elements,
2: Stack<double> stack) {
3: foreach (IElement element in elements) {
4: element.Evaluate(stack);
5: }
6: return stack.Pop();
7: }
8:
9: private void PrintExpression(List<IElement> elements, double target,
10: Stack<string> stack) {
11: foreach (IElement element in elements) {
12: element.Print(stack);
13: }
14: string expression = stack.Pop();
15: Console.WriteLine("{0} = {1}", expression, target);
16: }
The stack passed into the methods starts out empty. The methods iterate over the list, using the stack to hold intermediate values. When the iteration completes, the final value is popped off the stack; EvalutateExpression returns it, while PrintExpression prints the finished expression alongside the target.
Finally, to find an expression that equals a specified target value, a large recursive method is used. I’ll break it down for you:
1: private void Solve(
2: Number[] numbers,
3: UnaryOperator[] unaryOperators,
4: BinaryOperator[] binaryOperators,
5: double target,
6: List<IElement> elements,
7: bool[] numbersAvailable,
8: bool[] unaryOperatorsAvailable,
9: int stackSize,
10: Stack<double> numberStack,
11: Stack<string> stringStack) {
The numbers, unaryOperators, binaryOperators and target are the parameters that specify the problem. The elements list is the RPN expression under construction. The numbersAvailable boolean array is used as shown in a prior example for computing the permutations of the numbers. Since unary operators can be applied indefinitely, this recursive function prevents duplicates using the unaryOperatorsAvailable array. If you want the expression to contain a specific unary operator a certain number of times, list the operator that number of times within the unaryOperators array. The stackSize represents the size of the stack if the partially formed RPN expression in elements were to be evaluated. Finally, numberStack and stringStack are used for evaluating and printing the RPN expression once it is fully formed.
The method expands the partial RPN expression by plugging in every possible number:
1: bool remainingNumbers = false;
2: double lastNumber = Double.NaN;
3: for (int i = 0; i < numbersAvailable.Length; i++) {
4: if (numbersAvailable[i] && numbers[i].value != lastNumber) {
5: remainingNumbers = true;
6: elements.Add(numbers[i]);
7: numbersAvailable[i] = false;
8: Solve(numbers, unaryOperators, binaryOperators, target, elements,
9: numbersAvailable, unaryOperatorsAvailable, stackSize + 1,
10: numberStack, stringStack);
11: numbersAvailable[i] = true;
12: elements.RemoveAt(elements.Count - 1);
13: }
14: }
The loop is analogous to the loop used for computing the number permutations seen in a prior example. Each iteration checks whether the number is available and whether the same value was already plugged in during a previous iteration (the numbers array is pre-sorted). If a number is available, it is appended to the RPN expression, the recursive method is called again and, finally, the number is removed from the expression. The remainingNumbers flag is set if an available number was discovered. Note that appending a number to the expression increases the stack size by 1; see the value passed into the recursive call.
Next, the method does virtually the same thing for the unary functions. The only difference being that it also has to check if the stack size is at least 1. When the recursive call is made, the stackSize variable is left the same.
1: if (stackSize >= 1) {
2: for (int i = 0; i < unaryOperatorsAvailable.Length; i++) {
3: if (unaryOperatorsAvailable[i]) {
4: elements.Add(unaryOperators[i]);
5: unaryOperatorsAvailable[i] = false;
6: Solve(numbers, unaryOperators, binaryOperators, target, elements,
7: numbersAvailable, unaryOperatorsAvailable, stackSize,
8: numberStack, stringStack);
9: unaryOperatorsAvailable[i] = true;
10: elements.RemoveAt(elements.Count - 1);
11: }
12: }
13: }
Then, the method plugs in every possible binary operator. This code is simpler because it doesn’t have to check for duplicates:
1: if (stackSize >= 2) {
2: for (int i = 0; i < binaryOperators.Length; i++) {
3: elements.Add(binaryOperators[i]);
4: Solve(numbers, unaryOperators, binaryOperators, target, elements,
5: numbersAvailable, unaryOperatorsAvailable, stackSize - 1,
6: numberStack, stringStack);
7: elements.RemoveAt(elements.Count - 1);
8: }
9: }
Finally, here’s the remainder of the recursive method:
1: if (stackSize == 1 && !remainingNumbers) {
2: if (target == EvalutateExpression(elements, numberStack)) {
3: PrintExpression(elements, target, stringStack);
4: }
5: }
6: }
It checks that there is exactly 1 element on the stack and that no numbers remain to append to the RPN expression. If so, it evaluates the expression and compares the result to the target. If it’s the expression we’re looking for, it gets printed.
Here’s the bootstrap method that calls the recursive function:
1: public void Solve(double[] numbers, UnaryOperator[] unaryOperators,
2: BinaryOperator[] binaryOperators, double target) {
3: Array.Sort(numbers);
4: Number[] nums = new Number[numbers.Length];
5: for (int i = 0; i < numbers.Length; i++) {
6: nums[i] = new Number(numbers[i]);
7: }
8: List<IElement> elements = new List<IElement>();
9: bool[] numbersAvailable = new bool[numbers.Length];
10: for (int i = 0; i < numbersAvailable.Length; i++) {
11: numbersAvailable[i] = true;
12: }
13: bool[] unaryOperatorsAvailable = new bool[unaryOperators.Length];
14: for (int i = 0; i < unaryOperatorsAvailable.Length; i++) {
15: unaryOperatorsAvailable[i] = true;
16: }
17: Stack<double> numberStack = new Stack<double>();
18: Stack<string> stringStack = new Stack<string>();
19: Solve(nums, unaryOperators, binaryOperators, target, elements,
20: numbersAvailable, unaryOperatorsAvailable, 0, numberStack,
21: stringStack);
22: }
It’s pretty straightforward. Note that numberStack and stringStack are allocated here and they are only used for evaluating and printing fully formed RPN expressions.
To use the bootstrap method for solving The 24 Puzzle, the following call is made:
1: public static void Solve24Puzzle() {
2: Example4 example4 = new Example4();
3: example4.Solve(
4: new double[] { 1, 3, 4, 6 },
5: new UnaryOperator[0],
6: new BinaryOperator[] {
7: new BinaryOperator((a, b) => a + b, "+"),
8: new BinaryOperator((a, b) => a - b, "-"),
9: new BinaryOperator((a, b) => a * b, "*"),
10: new BinaryOperator((a, b) => a / b, "/")
11: },
12: 24);
13: }
How many integers can you generate using only four 4’s and any set of functions? The recursive method used in The 24 Puzzle solution can be slightly modified to solve The Four 4’s puzzle. Instead of hunting for a target value, all expressions that evaluate to an integer are stored in a SortedDictionary<int, List<IElement>>. To make it more interesting, only the shortest expressions are kept. Here’s the modified segment of code:
1: if (stackSize == 1 && !remainingNumbers) {
2: double value = EvalutateExpression(elements, numberStack);
3: int floored = (int)Math.Floor(value);
4: if (value == floored) {
5: if (expressions.ContainsKey(floored)) {
6: if (elements.Count < expressions[floored].Count) {
7: expressions[floored] = new List<IElement>(elements);
8: }
9: } else {
10: expressions[floored] = new List<IElement>(elements);
11: }
12: }
13: }
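The keep-the-shortest bookkeeping is just a dictionary update guarded by a length comparison. An illustrative Python sketch of the same idea, with token lists standing in for the C# element lists (the names `record` and `expressions` are mine):

```python
import math

def record(expressions, value, elements):
    """Store elements under its integer value, keeping only the
    shortest expression seen so far for that integer."""
    floored = math.floor(value)
    if value != floored:
        return                      # not an integer result; discard
    if floored not in expressions or len(elements) < len(expressions[floored]):
        expressions[floored] = list(elements)

expressions = {}
record(expressions, 16.0, ['4', '4', '*'])
record(expressions, 16.0, ['4', '4', '4', '4', '+', '+', '+'])  # longer; ignored
record(expressions, 16.5, ['ignored'])                          # non-integer; dropped
print(expressions)  # {16: ['4', '4', '*']}
```

Copying the list (list(elements)) matters: the search mutates one working expression in place, so storing a reference instead of a snapshot would corrupt every saved entry.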
For binary operators, I’ll use addition, subtraction, multiplication, division and exponentiation. For unary operators I’ll use negation, square-root and factorial. However, to reduce the computation time, I’ll request that each unary operator be used at most once. If you want to use a particular unary operator more than once, simply add it to the UnaryOperator array as many times as desired.
1: public static void SolveFour4sPuzzle() {
2: Example5 example5 = new Example5();
3: UnaryOperator negate = new UnaryOperator(a => -a, "-", true);
4: UnaryOperator squareRoot
5: = new UnaryOperator(a => Math.Sqrt(a), "sqrt", true);
6: UnaryOperator factorial = new UnaryOperator(a => {
7: if (a == Math.Floor(a) && a >= 0 && a <= 12) {
8: return factorials[(int)a];
9: } else {
10: return Double.NaN;
11: }
12: }, "!", false);
13: example5.FindAll(
14: new double[] { 4, 4, 4, 4 },
15: new UnaryOperator[] { negate, squareRoot, factorial },
16: new BinaryOperator[] {
17: new BinaryOperator((a, b) => a + b, "+"),
18: new BinaryOperator((a, b) => a - b, "-"),
19: new BinaryOperator((a, b) => a * b, "*"),
20: new BinaryOperator((a, b) => a / b, "/"),
21: new BinaryOperator((a, b) => Math.Pow(a, b), "^")
22: });
23: }
For factorial, I defined an unsigned integer array holding 0! through 12!. It can only be applied to integer intermediate values. The if-statement returns NaN if it can’t evaluate the input value. Consequently, the entire RPN expression will equal NaN in that case.
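The NaN trick works because NaN propagates through every arithmetic operation and fails every equality comparison, so an expression containing an illegal factorial can never match a target. A quick Python illustration of the guard (mirroring the idea, not the article's exact table):

```python
import math

FACTORIALS = [math.factorial(n) for n in range(13)]  # 0! .. 12!

def safe_factorial(a):
    """Factorial restricted to integers 0..12; NaN otherwise."""
    if a == math.floor(a) and 0 <= a <= 12:
        return float(FACTORIALS[int(a)])
    return float('nan')

print(safe_factorial(4.0))           # 24.0
v = safe_factorial(2.5) * 10 + 1     # NaN poisons the whole expression
print(v == 24)                       # False: NaN never equals anything
```

The cap at 12! keeps the table in comfortable integer range; anything larger would overflow the unsigned 32-bit entries the article mentions anyway.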
Here’s a sample of the output:
10 = (4 + (4 + (4 - sqrt(4))))
11 = (4 + ((4 + (4)!) / 4))
12 = (4 * (4 - (4 / 4)))
13 = ((4 + (sqrt(4) * (4)!)) / 4)
14 = (4 + (4 + (4 + sqrt(4))))
15 = ((4 * 4) - (4 / 4))
16 = (4 + (4 + (4 + 4)))
17 = ((4 * 4) + (4 / 4))
18 = (4 * (4 + (sqrt(4) / 4)))
19 = ((4)! - (4 + (4 / 4)))
20 = (4 * (4 + (4 / 4)))
21 = ((4 / 4) - (4 - (4)!))
22 = (4 + ((4 * 4) + sqrt(4)))
23 = (((4 * (4)!) - 4) / 4)
24 = (4 + (4 + (4 * 4)))
25 = ((4 + (4 * (4)!)) / 4)
26 = (((4 + 4) / 4) + (4)!)
27 = (4 - ((4 / 4) - (4)!))
28 = ((4 * (4 + 4)) - 4)
29 = (4 + ((4 / 4) + (4)!))
30 = ((4 * (4 + 4)) - sqrt(4))
This article, along with any associated source code and files, is licensed under The GNU Lesser General Public License (LGPLv3)
Michael Birken wrote: Can you present a non-brute-force approach that you can do by hand?
2010 Butler Football Guide
2010 Butler Football Contents
All-Americans, Butler .... 28
Butler Bowl .... 6
Butler University .... 40
Coaching Records, Butler .... 32
Coaching Staff .... 4-6
  Head Coach Jeff Voris .... 4
  Joe Cheshire .... 5
  Tim Cooper .... 5
  Nick Tabacca .... 5
Lettermen, Butler All-Time .... 36-39
Opponents .... 24-26
Pioneer Football League .... 27
Player Profiles .... 7-18
Records, Annual Team Won-Lost .... 32
Records, Butler Individual .... 28-31
Records, Butler Team .... 29
Results, 2009 Game-by-Game .... 23
Results, All-Time .... 33-35
Roster, 2010 .... 20-21
Schedule, 2010 .... 1
Statistics, 2009 .... 22

Butler Quick Facts
Location: Indianapolis, Ind. 46208
Founded: 1855
Enrollment: 4,200
Nickname: Bulldogs
Colors: Blue and White
Conference: Pioneer Football League
National Affiliation: NCAA Football Championship Subdivision (FCS)
Head Coach: Jeff Voris
Record at Butler: 24-21 (4 yrs.)
Career Record: 39-55 (9 yrs.)
Office Phone: (317) 940-6803
Best Time to Call: 9 a.m.-12 noon
Assistant Coaches: Joe Cheshire (DePauw ’99), Tim Cooper (DePauw ’97), Nick Tabacca (Ball State ’04)
Athletic Trainer: Chris Tinkey
Stadium: Butler Bowl
Capacity: 5,647
Surface: Pro Grass (Field Turf)
Press Box Phone: (317) 940-9817
2009 Record: 11-1-0
2009 League Record: 7-1 (1st-tie)
Lettermen Returning: 40 (20 Off., 18 Def., 2 Spec.)
Lettermen Lost: 13
Starters Returning: 17 (9 Off., 8 Def.)
All-Time Football Record: 539-417-35/120 years

Safety Mark Giacomantonio celebrated Butler’s come-from-behind Homecoming victory over San Diego. The dramatic win helped the Bulldogs compile a perfect 7-0 record in the Butler Bowl in 2009.

2010 BUTLER FOOTBALL SCHEDULE
Date | Opponent | Game Site | City | Game Time at Site
Sept. 4 | At Albion | Sprankle-Sprandel Stadium | Albion, Mich. | 1 p.m.
Sept. 11 | At Youngstown State | Stambaugh Stadium | Youngstown, Ohio | 6 p.m.
Sept. 18 | TAYLOR | Butler Bowl | Indianapolis, Ind. | 1 p.m.
Sept. 25 | At San Diego* | Torero Stadium | San Diego, Calif. | 6 p.m.
Oct. 2 | CAMPBELL* | Butler Bowl | Indianapolis, Ind. | noon
Oct. 9 | At Davidson* | Richardson Stadium | Davidson, N.C. | noon
Oct. 16 | DAYTON* | Butler Bowl | Indianapolis, Ind. | 1 p.m.
Oct. 23 | MOREHEAD STATE* (HC) | Butler Bowl | Indianapolis, Ind. | 1 p.m.
Oct. 30 | At Valparaiso* | Brown Field | Valparaiso, Ind. | 1 p.m.
Nov. 6 | JACKSONVILLE* | Butler Bowl | Indianapolis, Ind. | noon
Nov. 13 | At Drake* | Drake Stadium | Des Moines, Iowa | 1 p.m.
*--Pioneer Football League game. (HC)--Homecoming.

CREDITS: Butler’s 2010 Football Guide was designed and edited by Jim McGrath, Butler sports information director. Photography by Butler photographer Brent Smith. Cover design by the Butler Publications Office. For the latest in Butler sports information, visit:

2010 Butler Football
Wide receiver Zach Watkins led the Pioneer Football League and set a Butler record with 78 receptions in 2009.

Bulldogs Seek Repeat Performance
field goal on the final play of the game!
The Bulldogs, who were picked fifth in the league preseason poll, went on to capture a PFL co-championship and advance to the Gridiron Classic, where they defeated Central Connecticut State for their first postseason victory. Butler’s final 11-1 record was the best in school history!

This season, fueled by last year’s success, 17 returning starters and a brand new 5,647-seat stadium, the Bulldogs are eager to make a repeat run. But head coach Jeff Voris, entering his fifth first finished field at different times and at different spots, while still training him as a quarterback.” “We definitelyleading receiver last year with 38 catches for 493 yards, while Koopman had 24 receptions for 258 yards. The backfield is well-stocked with Gray, who finished backfield since I’ve been here,” said Voris. “Again, we have to find ways to get them all on the field.”

Defensive end Grant Hunter finished 11th in the NCAA Division I FCS in quarterback sacks, despite missing three games with injury.
Junior Jack McKenna, named Butler’s Most Improved Defensive Player last season, returns at cornerback, where he started all 12 games in 2009 and tied Dombart with three pass thefts. He also finished field position. Junior David Lang took over as the team’s placekicker, and he finished as the Butler’s third-leading scorer with 54 points. He booted seven field goals, including a pair of game-winners, and wound up third in the PFL in kick scoring. Koopman returns as the squad’s top kick returner, averaging 22.7 yards per return last season, while Dombart led the Bulldogs in punt returns. Tailback Scott Gray led the Bulldogs and ranked third in the PFL with a career-best 868 rushing yards in 2009. 3 Head Coach Jeff Voris In just four seasons, Jeff Voris has taken Butler from an 0-11 record to the most successful season in school history! Voris, who’s improved the Bulldogs’ win total in each of his four seasons, took a Butler team picked fifth in the Pioneer Football League preseason poll, and guided it to an 11-1 campaign, a tie for the PFL championship and Butler’s first-ever post-season victory. The 11 wins in 2009 set a Butler single season record and tied the PFL single season mark, and the league title was Butler’s first since 1994. Butler also tied the PFL record for league wins (7), while setting a record for home wins (7) and completing an unbeaten Butler Bowl slate. The Buldogs. To no one’s surprise, Voris was named the PFL Coach of the Year. The historic 2009 Butler campaign came one year after Voris guided the Bulldogs to a 6-5 record, the program’s first winning season in more than a decade! The six wins were Butler’s most since 1997 and the second-most since the Bulldogs moved to Division I in 1993. 
And Butler’s ffour 4 The Jeff Voris File Name: Jeff Voris Age: 43 Birthdate: 8/27/67 Wife: Julie Children: Jenna (14), Josie (12), Jessy 2009 Career School Carroll Carroll Carroll Carroll Carroll Butler Butler Butler Butler Collegiate Head Coaching Record Won Lost Pct. 2 7 .222 1 9 .100 3 7 .300 3 7 .300 6 4 .600 3 8 .273 4 7 .364 6 5 .545 11 1 .917 39 55 .415 (14), Josie (12) and Jessy (9). Assistant Coaches Joe Cheshire Co-Defensive Coordi Coordinator Joe Cheshire is in his sixth season as a full-time assistant coach, following one season with the Bulldogs as a parttime assistant. He handles the team’s linebackers, while also serving as Butler’s recruiting coordinator, and this season, he’s taking on the additional duties of Co-Defensive. Nick Tabacca Offensive Line Nick Tabacca, who served the past three seasons on the football staff at Defiance College, was named an assistant football coach at Butler in July, 2010. He took over as offensive line coach for the Bulldogs, filling the spot of departed coach and offensive coordinator Frank Smith. Tabacca was the offensive line coach for all three of his seasons at Defiance, and he spent the past two campaigns as offensive coordinator for the Yellow Jackets. He directed an offensive unit that led the Heartland Collegiate Athletic Conference in fewest turnovers for two straight years. During his three seasons at the Ohio school, Defiance had seven AllConference Defiance in 2007. He received a B.S. degree from Ball State in 2004 and earned his masters degree in business administration from Ball State in 2006. Tim Cooper Co-Defensive Coordi Coordinator Former Miami (Ohio) and Carroll College football assistant Tim Cooper was named an assistant football coach at Butler in March, 2010. He’s working with Butler’s defensive secondary and serves as a Co-Defensive Coordinator. Cooper was. 
David Kenney Defensive Tackles David Kenney is in his fifth season on Butler’s coaching staff, and he has responsibility for the Bulldogs’ defensive line. He spent a portion of the 2010 preseason working with the Indianapolis Colts’ defensive unit under the Bill Walsh Minority Internship Program. Kenney (17), David (15), Khirra (10) and Dakari (9). He’s a member of Kappa Alpha Psi Fraternity, Inc. Nick Anderson Cornerback Cornerbacks Nick Anderson is in his fourth season with the Bulldogs, taking over this year as cornerbacks’ coach. He’s also coached wide receivers for the Bulldogs. Anderson Receiver Receivers Danny Sears, a former standout wide receiver at Franklin College, is in his fourth season at Butler. He’s working with the team’s receivers this year, after coaching cornerbacks in 2009. Sears was a three-time, first team All-Heartland Conference wide receiver (2004-06) and a two-time conference Special Teams MVP (2004 & 2006) as a punt and kick returner. He finished his collegiate career with more than 2,000 receiving yards and 20 career touchdown catches. The four-year letterwinner for the Grizzlies compiled over 5,000 all-purpose yards, 2003-06. Chris Davis Running Backs ack acks Chris Davis, who earned two varsity letters ers as an of offensive lineman with the Bulldogs, is in his third season on the Butler staff. He’s coaching Butler’s running backs, after working with the squad’s tight ends last season.. 5 Assistant Coaches/Staff Rob Noel Defensive Ends Matt Walker Tight Ends Rob Noel, a four-year letterwinner with the Bulldogs (2005-08), is in his second final 22 games. He earned AllLeague honors as an offensive lineman, and he was a two-time Academic All-League player. Noel earned his degree from Butler in 2009. Matt Walker, former head coach at DePauw University, is in his first season on Butler’s staff. He has responsibility. 
SUPPORT STAFF
Thorn Murphy, Student Assistant, Offense
Grant Lewis, Student Assistant, Defense
Chris Tinkey, Football Athletic Trainer
Lester Burris, Student Video Coordinator
Stephen Blowers, Student Manager
Ted Pajakowski, Student Manager
John Harding, Equipment Manager
Jennifer Johnson, Equipment Room
Jim Peal, Strength/Conditioning Coordinator

Butler Bowl
The Butler Bowl, home to Butler football for eight decades, received a major face-lift for the 2010 season. Located on the northeast side of Butler's campus just east of storied Hinkle Fieldhouse, the Butler Bowl project took a major step forward with the addition of new bleacher and chairback seats on the west side, visitor's seating on the east side and a new brick press box stretching between the 30-yard lines. The new seats boosted the stadium capacity to 5,647, and the new press box added main level areas for media, radio, coaches and boosters as well as an upper level video area and observation deck. The renovation of the Butler Bowl is part of a plan to build new student housing along the east side of the stadium.

2010 Bulldogs
Returning Letterwinners

33 BILL BORK
TIGHT END, 6-1, 225, Jr.
Dyer, Ind. (Lake Central)
YEAR    RUSH  YDS.  AVG.  TD  REC.  YDS.  TD
2008       0     0   0.0   0     2    11   0
2009       0     0   0.0   0     2    11   0
Career     0     0   0.0   0     4    22   0

14 NICK CALDICOTT
6-1, 225, Jr.
YEAR    ST  AT  TT  FOR LOSS  SACKS
2008     9   5  14  0-0       0
2009    39  31  70  8-22      1-7
Career  48  36  84  8-22      1-7

95 TAYLOR CLARKSON
DEFENSIVE TACKLE, 6-2, 275, So.
Fortville, Ind. (Mt. Vernon)
YEAR  ST  AT  TT  FOR LOSS  SACKS  FR
2009   2   0   2  0-0       0      0

YEAR    ST  AT  TT  FOR LOSS  SACKS  FR
2007     0   0   0  0-0       0      0
2008     6   5  11  4-20      1      0
2009    15  11  26  1.5-3     1      0
Career  21  16  37  4.5-23    2      0

15 ANDREW COTTRELL
LINEBACKER, 5-11, 215, Jr.
West Chester, Ohio (Lakota West/Grand Valley State)
2009: Played in 10 games, including eight as a starter...
finished fifth on the squad in tackles...wound up fourth on the team in assisted tackles...recorded a career-high six tackles in five different games...named to the PFL Academic Honor Roll...

YEAR  ST  AT  TT  FOR LOSS  SACKS  FR
2009  16  27  43  2.5-4     0      0

ROB COSLER
DEFENSIVE END, 6-1, 230, Sr.
Winter Park, Fla. (Winter Park)
2009: Saw action in all 12 games, including seven as a starter...recorded a career-high five tackles in Butler's victory over Davidson...had a quarterback sack in Butler's season-opening win over Albion...named to the PFL Academic Honor Roll...2008: Played in all 11 games...tied for fifth on the team in tackles for loss...had a season-high four tackles, including two tackles for loss, at Campbell...recorded a quarterback sack at Valparaiso...named to the PFL Academic Honor Roll...2007: Played in three varsity games in initial season at Butler...named to the Pioneer Football League Academic Honor Roll...Personal: Economics major...born 4/8/88...son of Steve and Lynne Cosler...High School: Three-year starter as a defensive lineman at Winter Park High School...team captain and Most Outstanding Defensive Lineman as a senior...earned All-Metro honors in 2006...ranked third in central Florida with 12 quarterback sacks as a senior...helped high school team to a 12-2 record and a state runner-up finish as a junior...earned three varsity football letters...High School Coach: Larry Gergley.

LINEBACKER
Oak Park, Ill. (Fenwick)
2009: Named Butler's Defensive MVP in 2009...second team All-PFL...starter in all 12 games...led the Bulldogs in tackles (70) and solo tackles (39)...topped the squad in forced fumbles (3)...tied for second on the team in pass interceptions with three...recorded a career-high 11 tackles in Butler's victory over Davidson...had a career-best eight solo tackles against Davidson...posted eight tackles against Central Connecticut State in the Gridiron Classic...named to the PFL Academic Honor Roll...2008: Saw action in five games...posted a season-high four tackles against both Franklin and Drake...had a season-best four solo tackles against Franklin...named to the PFL Academic Honor Roll...2007: Did not see varsity action for the Bulldogs...Personal: Management Information Systems.

ANDY DAUCH
DEFENSIVE BACK, 5-10, 180, Jr.
Bloomfield Hills, Mich. (Lahser)
2009: Saw action in all 12 games...had a career...

YEAR    ST  AT  TT  FOR LOSS  INT.  BREAKUPS
2008     3   1   4  0-0       0-0   0
2009     5   5  10  0-0       1-19  1
Career   8   6  14  0-0       1-19  1

STEVEN DEPOSITAR
RUNNING BACK, 5-10, 195, So.
Mishawaka, Ind. (Penn)
2009: Played in all 12 games...finished fourth on the team in rushing...had a career-best 50 rushing yards in Butler's victory at Franklin, including a career-long 41-yard carry...rushed for 47 yards against Hanover...named to the PFL Academic Honor Roll...2008: Didn't see varsity action in his first season with the Bulldogs...named to the PFL Academic Honor Roll...Personal: Finance.

YEAR  RUSH  YDS.  AVG.  TD  REC.  YDS.  TD
2009    19   130   6.8   0     0     0   0

TADD DOMBART
DEFENSIVE BACK, 5-10, 185, Sr.
Cincinnati, Ohio (Lakota West)
2009: Played in 11 games, including 10 as a starter...finished second on the team in pass breakups (3) and fourth in pass interceptions (2)...led the Bulldogs and ranked ninth in the PFL in punt returns...had a season-high five tackles against both Drake and Central Connecticut State...Personal: Finance.

YEAR    ST  AT   TT  FOR LOSS  INT.  BREAKUPS
2007    31   8   39  1-1       3-47   3
2008    32  10   42  0.5-2     2-35   5
2009    20  10   30  2.5-4     2-9    3
Career  83  28  111  4-7       7-91  11

Bulldogs Picked Second In PFL Preseason Poll
Butler, coming off a Pioneer Football League co-championship, was tabbed second in the PFL's 2010 preseason coaches' poll. Ten-time PFL champion Dayton was picked as the coaches' preseason favorite. Dayton and Jacksonville, chosen third, each received four first place votes in the poll, while the Bulldogs received two. Picked in order behind the top three teams were Drake, San Diego, Marist, Davidson, Campbell, Morehead State and Valparaiso.

MATTHEW FOOR
DEFENSIVE BACK, 5-11, 205, So.
West Chester, Ohio (Lakota West)
2009: Played in 10 games...had a career-high three tackles in Butler's victory over Central Connecticut State...named to the PFL Academic Honor Roll...2008: Did not see varsity action in first season with the Bulldogs...Personal: Finance.

YEAR  ST  AT  TT  FOR LOSS  INT.  BREAKUPS
2009   7   6  13  0-0       0-0   0

16 MARK GIACOMANTONIO
DEF. BACK, 5-11, 190, Jr.
Carmel, Ind.
(Carmel)
2009: Honorable mention All-PFL...saw action in nine games, all as a starter...led the PFL and ranked ninth in the NCAA Division I FCS in pass interceptions...finished fourth on the team in tackles...had a career-high 10 tackles in Butler's victory over Central Connecticut State...named to the 2009 ESPN The Magazine Academic All-District Team and picked first team Academic All-PFL...named to the PFL Academic Honor Roll...2008: Played in all 11 games...recorded a season...Personal: Criminology.

YEAR    ST  AT  TT  FOR LOSS  INT.  BREAKUPS
2008    12   9  21  0.5-2     1-30  0
2009    30  15  45  4-11      5-45  2
Career  42  24  66  4.5-13    6-75  2

64 DONNIE GILMORE
OFFENSIVE LINE, 6-3, 295, Sr.
Westphalia, Ind. (North Knox)
2009: First team All-PFL offensive lineman...Butler "Battle of the Trenches" Award winner...starter in all 12 games...named to the PFL Academic Honor Roll...Personal: Math/Mechanical Engineering major...born 2/12/88...son of Don and Patricia Gilmore...High School: All-State lineman at North Knox High School...two-way starter for high school squad...all-conference and all-area performer...received Academic All-Area recognition...helped team to a 7-2 record as a senior...earned four varsity football letters...High School Coach: Shawn McDowell.

22 SCOTT GRAY
5-8, 190, Sr.
2009: First team All-PFL running back...starting tailback in all 12 games...led the Bulldogs and ranked third in the PFL in rushing...finished fifth on the team in scoring...rushed for a season-high 117 yards against Davidson, including a season-long 48-yard rush...had 113 rushing yards against San Diego...rushed for 109 yards against Dayton...named first team Academic All-PFL and chosen to the PFL Academic Honor Roll...

YEAR   RUSH  YDS.  AVG.  TD  REC.  YDS.  TD
2007    141   547   3.9  15     6    12   0
2008     25   101   4.0   2     0     0   0
2009    169   868   5.1   5    13   134   1
Total   335  1516   4.5  22    19   146   1

ARTIS HAILEY III
LINEBACKER, 6-0, 205, So.
Hammond, Ind.
(Hammond)
2009: Played in all 12 games as a running back...rushed for a career-best 29 yards in Butler's victory over Hanover...had a career-best two pass receptions at Franklin...2008: Didn't see varsity action in his first season with the Bulldogs...earned Butler's Offensive "Unsung Hero" Award (Scout Team Player of the Year)...Personal: Computer Engineering/Economics.

YEAR  RUSH  YDS.  AVG.  TD  REC.  YDS.  TD
2009    13    65   5.0   0     3     4   0

2 RUNNING BACK
Lowell, Ind. (Lowell/Ball State)

Homework
The Bulldogs set a school record with seven victories at home in 2009. It was Butler's first unbeaten home record as an NCAA Division I program, and the Bulldogs' first undefeated mark in the Butler Bowl since 1992. The Bulldogs were the lone team in the Pioneer Football League to go unbeaten at home in 2009. Butler's all-time record in the Butler Bowl is 257-139-11.

STUART HARVEY
WIDE RECEIVER, 6-1, 190, Jr.
Carmel, Ind. (Westfield)
2009: Did not see varsity action...named to the PFL Academic Honor Roll...2008: Saw action in all 11 games...finished fifth on the team in receptions and receiving yards...caught a career-high four passes, including a seven-yard touchdown reception, against Missouri S & T...had a career-best 32 receiving yards against both Missouri S & T and Campbell...2007: Did not see varsity action in first season with the Bulldogs...Personal: Philosophy/Education.

Jr., Bedford, Ind. (Bedford-North Lawrence)
2009: Played in 10 games...finished third on the squad in rushing...wound up seventh in pass receptions...rushed for a season-high 60 yards against Drake...scored a pair of rushing touchdowns against Valparaiso...named to the PFL Academic Honor Roll...Personal: Media Arts.

YEAR    RUSH  YDS.  AVG.  TD  REC.  YDS.  TD
2008      64   352   5.3   5    12   107   1
2009      55   346   6.3   4     9    50   0
Career   120   698   5.8   9    21   157   1

53 ROB HOBSON
OFFENSIVE LINE, 6-4, 280, Sr.
Arcadia, Ind.
(Sheridan) 2009: Starting center in all 12 Butler games...2008: Saw action in three games...helped the Bulldogs lead the PFL in total offense and rank second in both rushing offense and scoring offense...2007: Did not see varsity action in his initial season at Butler...Personal: Marketing/Digital Media. 9 2010 Bulldogs 10 ANDREW HUCK 6-2, 190, Jr. QUARTERBACK Bloomington, Ind. (Bloomington North) 2009: Named Butler’s Offensive Most Valuable Player and Most Improved Offensive Player...earned honorable mention All-PFL honors...selected as the Most Valuable Player of the 2009 Gridiron Classic...four-time PFL Offensive Player of the Week...named to the College Sporting News National AllStar List on Nov. 9, following a five-touchdown performance at Dayton...starting quarterback in all 12 games...posted the third-highest single season passing total (2,454 yards) in Butler football history...eighth quarterback in Butler history to pass for more than 2,000 yards in a season...finished with the third-highest total for touchdown passes (21) in Butler history...led the PFL in pass completions (233) and pass attempts (371) and finished second in the league in passing yards...wound up third in the league in passing average (204.5)...finished second in the PFL and 27th in the NCAA Division I FCS in total offense (2,921)... 
ranked 34th in the NCAA in passing efficiency (131.11)...had the league's highest single game pass completion percentage in 2009 with a 13 of 15 (.867) performance against Hanover...recorded a league-high 33 pass completions in Butler's victory over Franklin...completed 20 or more passes in six of Butler's 12 games...threw for a career-high 327 yards in Butler's victory over Albion...tied Butler's single game record with a career-best five touchdown passes in the Albion win...had a career-high 395 yards in total offense against Albion...completed 33 of 48 passes (with no interceptions) and threw for 316 yards in Butler's win at Franklin...tossed four touchdown passes each against Franklin and Morehead State...threw 14 touchdown passes in his first four collegiate starts...ran for a career-high three touchdowns and threw two more against Dayton...completed 15 of 21 passes for 182 yards and rushed for three touchdowns in Butler's Gridiron Classic victory over Central Connecticut State...rushed for a career-best 81 yards on a career-high 16 carries against Davidson...tied for the team scoring lead and tied for fifth in the PFL in scoring with 10 rushing touchdowns...named first team Academic All-PFL...chosen to the PFL Academic Honor Roll...

YEAR   ATT.  COMP.  PCT.  INT.  TD  YDS.  RUSH  YDS.  AVG.  TD
2008      1      0  .000     0   0     0     5    55  11.0   1
2009    371    233  .628    11  21  2454   102   467   4.6  10
Total   372    233  .626    11  21  2454   107   522   4.9  11

98 GRANT HUNTER
DEFENSIVE END, 6-3, 235, Jr.
Liberty Twp., Ohio (Lakota West/Robert Morris) 2009: Second team Academic All-America...named second team All-PFL...starting defensive end in nine games...missed three games with illness...finished second in the PFL and 11th in the NCAA Division I FCS in quarterback sacks...led the Bulldogs in tackles for loss and quarterback sacks...finished eighth on the team in total tackles...had a season-high seven tackles against Morehead State and Valparaiso...recorded three quarterback sacks against Morehead State...tied for third on the team in pass breakups (3)...named first team Academic All-PFL, chosen to the ESPN The Magazine Academic All-District squad and selected to the PFL Academic Honor Roll..-high eight tackles, including four quarterback sacks and a career-high six 10: Criminology 2009 Career ST 27 19 46 AT 15 20 35 TT 42 39 81 FOR LOSS SACKS 18-109 14-97 11.5-52 7.5-42 29.5-161 21.5-139 12 MATT KOBLI 6-3, 220, Sr. FR 1 0 1 QUARTERBACK Whiting, Ind. (Whiting) 2009: Sat out the 2009high 48 pass attempts against Franklin...had a career-high 350 total offense yards against Davidson...passed for more than 200 yards in eight of 11 games...had at least 21 pass completions in eight different games...rushed for more/Crim Preseason All-Americans Butler’s Grant Hunter and Zach Watkins each were named to a Preseason All-America Team for 2010. Hunter was chosen as a third team defensive end on the College Sporting News FCS Preseason All-America Team, while Watkins was named a third team wide receiver on The Sports Network Preseason All-America Team. 2010 Bulldogs 1 JORDAN KOOPMAN 5-8, 185, Jr. 2009: Honorable mention All-PFL return specialist...led the Bulldogs and ranked second in the PFL in kick return average (22.7)...ranked fifth in the league in all-purpose yards (1,122)... 
finished fourth on the team in receiving...had a career-high 146 yards on six kickoff returns at Jacksonville...recorded a career-high six receptions for a career-high 77 yards and a touchdown at Franklin...2008: Saw action in eight games...led the team in kickoff returns and kick return yards...had a career-best six kickoff returns for a season-high 112 yards against Jacksonville...finished eighth on the team in receptions...caught a season-high four passes for a season-high 48 yards against Morehead State...Personal: Exercise Science.

YEAR    REC.  YDS.  AVG.  TD  KR  YDS.  AVG.
2008      11    77   7.0   1  20   385  19.2
2009      24   258  10.8   1  31   703  22.7
Career    35   335   9.6   2  51  1088  21.3

ROBERT KOTEFF
PLACEKICKER/PUNTER, 6-0, 220, Jr.
Lowell, Ind. (Lowell)

YEAR  PAT    FG    LONG  PTS.
2009  33-43  7-11  39    54

ST  AT  TT  FOR LOSS  SACKS  FR
 3   1   4  0.5-0     0      0
 0   1   1  0-0       0      0
 3   2   5  0.5-0     0      0

WILLIAM LAMAR
LINEBACKER, 5-10, 195, So.
Oak Park, Ill. (Oak Park-River Forest)
2009: Saw action in all 12 games...had a career-high four tackles against Valparaiso and Davidson...posted a career-high three solo tackles against Jacksonville...had a blocked punt against Morehead State...2008: Did not see varsity action in his initial season at Butler...named to the PFL Academic Honor Roll...Personal: Mechanical...Conference performer...earned two varsity football letters...High School Coach: Jim Nudera.

YEAR  ST  AT  TT  FOR LOSS  SACKS  FR
2009  10  14  24  1-2       0      0

JEFF LARSEN
WIDE RECEIVER, 6-2, 190, Jr.
Chicago, Ill. (Notre Dame)
2009: Saw action in all 12 games...tied for fifth on the squad in receptions...had a career-high four catches for a career-best 42 yards against Jacksonville...recorded three receptions each against Franklin and Campbell...named to the PFL Academic Honor Roll...2008: Did not play in any games...named to the PFL Academic Honor Roll...2007: Did not see varsity action...named to the Pioneer Football League Academic Honor Roll...Personal: Finance.

54 DAVID LANG
6-0, 215, Jr.
2009: Saw action in all 12 games as Butler’s placekicker... two-time PFL Special Teams Player of the Week...wound up third on the team and tenth in the PFL in scoring...third in the PFL in kick scoring...tied for fifth in the league in field goal percentage and finished seventh in the circuit in field goals... kicked a game-winning, 37-yard field goal with 0:01.9 left to lift the Bulldogs past San Diego, 25-24...hit a 27-yard field goal with one second remaining to give Butler a 20-17 victory over Drake and a share of the PFL season championship...had a career-long 39-yard field goal against San Diego...was perfect on six PAT attempts against Albion and Hanover...named first team Academic All-PFL...2008: Played in all 11 games, handling team kickoff chores...averaged 54.7 yards on 57 kickoffs...had one touchback...named to the PFL Academic Honor Roll...Personal: Exercise Science. LINEBACKER Lexington, Ky. (Paul L. Dunbar) 2009: Played in 10 games...named to the PFL Academic Honor Roll... YEAR 2008 2009 Career 35 WIDE RECEIVER Chula Vista, Calif. (Eastlake) REC. 13 72 YDS. 112 AVG. 8.6 PETE MATTINGLY 6-5, 290, Jr. TD 0 OFFENSIVE GUARD Zionsville, Ind. (Cathedral). 11 2010 Bulldogs 81 EDDIE McHALE 6-2, 190, Sr. 2009: Played in all 12 games, including 11 as a starter... 
finished third on the squad and tenth in the PFL in receiving yards per game...third on the team in receptions...matched his career-high with seven catches at Franklin...had five receptions for a season-high 83 yards against Drake...2008: Saw action in 10 games...finished third on the team in receptions, second in receiving yards, and tied for third in scoring...had a career-high seven catches for 63 yards and a touchdown against Albion...caught four passes for a career-high 100 yards and two touchdowns at Valparaiso...had a career-long 54-yard reception for a TD at Valparaiso...grabbed a 19-yard TD pass against Morehead State...2007: Played in 10 games in initial season with the Bulldogs...had a season-best three catches at Drake...transfer from Ohio University (didn't play football at Ohio)...Personal: Biology major...born 9/22/87...son of Kevin and Teri McHale...High School: Two-sport athlete at St. Xavier High School...earned three varsity football letters as a wide receiver...had 35 receptions as a senior...helped lead St. Xavier to a two-year record of 26-1, two conference titles and a state championship, 2004-05...earned three letters in track...High School Coach: Steve Specht.

YEAR   REC.  YDS.  AVG.  TD
2007      9    53   5.9   0
2008     30   366  12.2   6
2009     38   493  13.0   3
Total    77   912  11.8   9

25 JACK McKENNA
6-0, 180, Jr.

YEAR    ST  AT  TT  FOR LOSS  INT.  BREAKUPS
2008     2   6   8  0-0       0-0   0
2009    31  11  42  1-1       3-30  5
Career  33  17  50  1-1       3-30  5

47 BOB OLSZEWSKI
LINEBACKER, 6-1, 210, Sr.
LaGrange, Ill. (Lyons Township)
2009: Named Butler's "Unsung Hero" (Scout Team Player of the Year) on defense...played in all 12 games...recorded first career tackle at Franklin...

YEAR  ST  AT  TT  FOR LOSS  SACKS  FR
2009   1   0   1  0-0       0-0    0

JEFF POSS
DEFENSIVE END, 5-8, 230, Jr.
Salt Lake City, Utah (Scripps Ranch, Calif.)
2009: Saw action in all 12 games, including 10 as a starter...
finished seventh on the team in tackles...wound up 15th in the PFL in tackles among defensive linemen...fifth on the team in quarterback sacks...had a season-high six tackles against both Dayton and Drake...named to the PFL Academic Honor Roll...

YEAR    ST  AT  TT  FOR LOSS  SACKS  FR
2008    12   8  20  6.5-17    1-8    0
2009    24  17  41  7.5-19    3-13   0
Career  36  25  61  14-36     4-21   0

DEFENSIVE BACK
Elmwood Park, Ill. (Rochester)
2009: Named Butler's Most Improved Player...starter in all 12 games...tied for second on the team in pass interceptions...led the team in pass breakups...tied for 10th in the PFL in passes defended...finished sixth on the team in tackles...had a pass interception on the final play of the 2009 Gridiron Classic to seal Butler's first postseason victory...recorded a career-high seven tackles, all solo, and a pass theft in Butler's victory over Dayton...named to the PFL Academic Honor Roll...2008: Saw action in all 11 games...had a season-high three tackles at Campbell...named to the PFL Academic Honor Roll...2007: Did not see varsity action...

52 WIDE RECEIVER
Cincinnati, Ohio (St. Xavier/Ohio)

59 JORDAN RIDLEY
LINEBACKER, 5-11, 230, So.
Indianapolis, Ind. (Lawrence Central)
2009: Saw action in all 12 games...finished 10th on the squad in tackles...second on the team in tackles for loss and tied for third on the squad in quarterback sacks...had a career...Star Game...two-time team captain...four-year Scholar-Athlete...four-year football letterwinner...High School Coach: Jayson West.

YEAR  ST  AT  TT  FOR LOSS  SACKS  FR
2009  20  16  36  8.5-35    4-22   1

76 RYAN SECRIST
OFFENSIVE LINE, 6-4, 275, Sr.
Rochester, Ind. (Rochester)
2009: Missed the entire season with injury...2008: Butler's starting center in the first five games, before being sidelined.

97 TYLER SKAGGS
DEFENSIVE LINE, 6-2, 240, Sr.
Winona Lake, Ind.
2009: Saw action in 10 games, including seven as a starter...first collegiate tackle was a tackle for loss against Hanover...

YEAR  ST  AT  TT
2009   1   3   4
(Warsaw)
FOR LOSS  SACKS  FR
0.5-1     0-0    0

MIKE STANIEWICZ
OFFENSIVE TACKLE, 6-5, 290, Sr.
Hebron, Ind. (Lowell)
2009: Second team All-PFL offensive lineman...starting offensive tackle in all 12 games...helped the Bulldogs lead the PFL in total offense and rank second in both rushing offense and scoring offense...named second team Academic All-PFL...selected to the PFL Academic Honor Roll...Personal: Finance major...born 3/12/89...son of April Staniewicz...High School: Two...Conference performer...earned four varsity football letters...High School Coach: Kirk Kennedy.

LOGAN SULLIVAN
DEFENSIVE BACK, 6-0, 195, Jr.
Winston-Salem, N.C. (Horizon, Ariz.)
2009: Played in all 12 games, including four as a starter...finished ninth on the team in tackles...second on the squad in pass breakups...had a career-high seven tackles against Valparaiso...came up with first career pass interception in Butler's title-clinching victory over Drake...scored two points on a PAT run at Morehead State to tie the game and send it into overtime...2008: Saw action in all 11 games...had a season...three-time AIA Scholar Athlete Award winner...earned three varsity football letters...High School Coach: Steve Casey.

YEAR    ST  AT  TT  FOR LOSS
2008     5   3   8  0.5-1
2009    22  16  38  1-1
Career  27  19  46  1.5-2

55 JACE TENNANT
DEFENSIVE END, 6-1, 215, So.
Champaign, Ill. (Central)
2009: Played in all 12 games...finished second on the team in quarterback sacks...ranked 19th in the PFL in sacks...recorded first collegiate sack at Franklin...had a pair of quarterback sacks against Hanover...had a career-high three tackles against Albion and Franklin...named to the PFL Academic Honor Roll...Personal: Finance major...born 7/16/10...son of Mike and Judy Tennant...High School:.

YEAR  ST  AT  TT  FOR LOSS  SACKS  FR
2009  10   6  16  4.5-19    5-17   0

90 LARRY THOMAS
5-10, 250, Jr.

YEAR    ST  AT  TT  FOR LOSS  SACKS  FR
2008     7  13  20  1.5-3     0      0
2009     9  20  29  4-14      2-8    0
Career  16  33  49  5.5-17    2-8    0

DEFENSIVE TACKLE
Sammamish, Wash.
(Eastlake)
2009: Saw action in all 12 games, including four as a starter...recorded a career-high five tackles against San Diego...2008: Played in all 11 games...had a season-high four tackles against Morehead State and Jacksonville...Personal: MIS and Marketing.

INT.  BREAKUPS
0-0   0
1-10  4
1-10  4

78 ROSS TEARE
DEFENSIVE TACKLE, 6-1, 290, Jr.
West Chester, Ohio (St. Xavier)
2009: Saw action in 11 games...finished sixth on the team in tackles for loss...recorded a career-high five tackles in Butler's Gridiron Classic victory over Central Connecticut State...had four tackles, including a quarterback sack, at Campbell...2008: Played in all 11 games...had a season-high two tackles in five different games...named to the PFL Academic Honor Roll...Personal: Economics.

YEAR    ST  AT  TT  FOR LOSS  SACKS  FR
2008     5   8  13  2-18      2-18   0
2009     8   9  17  5.5-18    1-2    0
Career  13  17  30  7.5-36    3-20   0

94 CARTER WALLEY
TIGHT END, 6-3, 245, So.
Peru, Ind. (Rochester)
2009: Played in all 12 games, including 11 as a starter...scored first collegiate touchdown on a 16-yard pass for Butler's final score (and eventual game-winner) at Dayton...had a career-high two receptions at Dayton...2008: Did not see varsity action in his first season with the Bulldogs...Personal: Marketing.

YEAR  REC.  YDS.  AVG.  TD
2009     5    42   8.4   1

20 ZACH WATKINS
6-2, 205, Jr.

YEAR    REC.  YDS.  AVG.  TD
2008      28   329  11.8   0
2009      78   918  11.8  10
Career   106  1247  11.8  10

RYAN WEBB
DEFENSIVE END, 6-3, 235, So.
Batavia, Ill. (Batavia)
2009: Played in 10 games as a tight end...had one reception for two yards at Albion...Personal: Marketing major...born 8/31/90...son of Ken and Dawn Webb...High School: All-Conference.

YDS. 2378  AVG. 37.7  LONG 69

Additional Returning Players

63 NICK ATKINSON
OFFENSIVE LINE, 6-2, 270, R-So.
Kokomo, Ind. (Western)
2009: Played in Butler's first four games...named to the PFL Academic Honor Roll...2008: Did not see varsity action during his initial season with the Bulldogs...
Personal: Psychology major...born 8/25/89...son of Scott Atkinson...High School: Three-year...

40 MATT BENSON
RUNNING BACK, 6-2, 215, R-Fr.
Traverse City, Mich. (St. Francis)
2009: Did not see varsity action during his initial season with the Bulldogs...Personal: Business major...born 12/13/90...son of Charles and Barbara Benson...High School: All-Conference tight end at Traverse City St. Francis High School...had 22 receptions for 501 yards and 10 touchdowns...earned nine varsity letters in three sports...three-year team captain and All-State performer on high school rugby team...High School Coaches: Josh Sellers/Greg Vaughan.

PUNTS 63

88 WIDE RECEIVER
Chicago, Ill. (St. Ignatius)

PUNTER
Grand Rapids, Mich. (East Grand Rapids)
2009: Butler's Special Teams Most Valuable Player...honorable mention All-PFL punter...ranked seventh in the PFL in punting average...led the league in number of punts inside the 20-yard line (29)...dropped four punts inside the 20-yard line in Butler's victory at Dayton...had seven punts of 50 or more yards with a long punt of 69 yards against Davidson...named to the PFL Academic Honor Roll...back-to-back state championships in 2005 and 2006...named to the 2007 Dream Team...set a Michigan state high school record for most PATs (179)...averaged 41.0 yards as a punter in 2007...honor student...High School Coach: Peter Stuursma.

11 MICHAEL WILSON
6-0, 180, So.

CALVIN BLAIR
QUARTERBACK, 6-2, 210, R-So.
Grand Rapids, Mich. (East Grand Rapids)
2009: Saw action in Butler's first four games...completed 12 of 17 passes (.706) for 77 yards...threw an eight-yard touchdown pass in Butler's victory over Hanover and a 10-yard scoring pass in the Bulldogs' win at Franklin...named to the PFL Academic Honor Roll...2008: Did not see varsity action for the Bulldogs...named to the Pioneer Football League Academic Honor Roll...Personal: Business exploratory.

21 CHRIS BURNS
DEFENSIVE BACK, 5-9, 190, So.
Aurora, Ill.
(Waubonsee Valley). 69 JOSH DORFMAN OFFENSIVE LINE 6-0, 260, Jr. Durham, N.C. (C. E. Jordan) 2009: Played in Butler’s first three games...named to the PFL Academic Honor Roll...2008: Did not see varsity action...named to the PFL Academic Honor Roll...2007: Did not see varsity action in his initial season with the Bulldogs... Personal: Finance. 27 SEAN GRADY DEFENSIVE BACK 5-10, 180, R-Fr. Geneva, Ill. (Geneva) 2009: Did not see varsity action in first season with the Bulldogs...named to the PFL Academic Honor Roll...Personal: Education major...born 8/10/91...son of Brian and Maureen Grady...High School:. 29 DAN HABER DEFENSIVE BACK 6-0, 175, R-Fr. Calabasas, Calif. (Chaminade Prep) 2009: Chosen Butler’s Special Teams “Unsung Hero” (Scout Team Player of the Year)...did not see varsity action in first season with the Bulldogs...named to the PFL Academic Honor Roll...Personal: Business major...born 4/2/91...son of Rosemary Haber and Mitchell Haber...High School: Four-year Scholar-Athlete Award winner... recorded 32 tackles and eight pass breakups as a senior...earned varsity letters in football, basketball, baseball and volleyball...baseball team captain...High School Coach: Anthony Harris. 91 TAYLOR HARRIS DEFENSIVE TACKLE T 6-5, 290, Jr. Logansport, Ind. (Logansport) 2009: Did not see varsity action... Captains Corner Offensive guard Donnie Gilmore and defensive back Tadd Dombart were named captains for the 2010 Bulldogs, by a vote of their teammates. Gilmore, a fifth-year senior, has started every Butler game for the past three seasons, while Dombart has played in 33 games over the past three years, including 24 as a starter. 17 SCOTT HARVEY WIDE RECEIVER 6-3, 190, R-Fr. Carmel, Ind. (Westfield) 2009: Did not see varsity action in his initial season at Butler...named to the PFL Academic Honor Roll...Personal: Finance major...born 10/8/90...son of Bart and Jayne Harvey...High School: Starting quarterback. 3 TOM JUDGE QUARTERBACK 6-2, 190, R-Fr. Elmhurst, Ill. 
(York) 2009: Did not see varsity action in his initial season at Butler...Personal: Undecided. 13 T. J. LUKASIK DEFENSIVE BACK 5-8, 180, So. Lowell, Ind. (Lowell) 2009: Did not see varsity action...named to the PFL Academic Honor Roll... 30 BOBBY McDONALD DEFENSIVE BACK 6-0, 190, R-Fr. Orland Park, Ill. (Providence Catholic). 79 RYAN MYERS OFFENSIVE LINE 6-2, 285, So. Fort Wayne, Ind. (Bishop Dwenger) 2009: Saw action in Butler’s first three games...2008: Did not see varsity action... Personal:. 15 2010 Bulldogs 43 ALEX PERRITT LINEBACKER 6-1, 215, R-Fr. Whiteland, Ind. (Whiteland) 2009: Did not see varsity action in his first season at Butler...Personal: Biology major...born 2/8/91...son of Dan and Robin Perritt...High School: Honorable mention All-State performer at Whiteland. 75 DOUG PETTY OFFENSIVE LINE 6-4, 220, R-Fr. Indianapolis, Ind. (Southport) 2009: Did not see varsity action in his initial season at Butler...Personal: Biology/PreMed. 24 ANDREW PRATT DEFENSIVE BACK 5-10, 180, Jr. Ridge Farm, Ill. (Georgetown-Ridge Farm) 2009: Played in Butler’s first two games...credited with a pass breakup against Albion...2008: Did not see varsity action...2007: Did not see varsity action... named a 2007 Josten’s Scholar-Athlete...Personal: Business/Finance. 26 MIKE ROSE DEFENSIVE BACK 6-2, 200, R-Fr. Canton, Mich. (Plymouth) 2009: Did not see varsity action in his first season at Butler...named to the PFL Academic Honor Roll...Personal: Business major...born 11/28/90...son of Francine Rose...High School: All-Conference safety at Plymouth High School...helped high school squad to a Division championship as a senior...earned All-Conference honors on high school baseball squad...High School Coach: Mike Sawchuk. 9 LUCAS RUSKE QUARTERBACK 6-1, 190, R-Fr. Wilmette, Ill. (Loyola Academy) 2009: Did not see varsity action...Personal: Undecided major...born 7/15/91... 
son of Gary and Elizabeth Ruske...High School: ...more than 1,000 yards...had 13 rushing touchdowns and 15 passing TDs...Academic All-State and Scholar-Athlete award recipient...High School Coach: John Holecek.

77 NICK SCHIRMANN
OFFENSIVE LINE, 6-1, 275, So.
Cincinnati, Ohio (Anderson)
2009: Played in five of Butler's first six games...Personal: Business major...born 3/26/91...son of Mike and Patty Schirmann...High School: All-State offensive lineman...helped lead Anderson to a 13-2 record and a state championship in 2007 and a 12-3 mark and a state runner-up finish in 2008...two-year team captain...two-year All-Conference, All-City and All-District performer...received Anthony Munoz Foundation DII Offensive Lineman of the Year Award as a senior...earned three varsity football letters...High School Coach: Jeff Giesting.

87 CHARLIE SCHMELZER
TIGHT END, 6-7, 255, So.
Bloomington, Ill. (Bloomington)
2009: Played in Butler's first three games and in four total contests...named to the PFL Academic Honor Roll...Athlete...earned two varsity letters each in football and basketball...High School Coach: Rigo Schmelzer.

65 PAUL SCIORTINO
OFFENSIVE LINE, 6-1, 265, R-Fr.
Wilmette, Ill. (Loyola Academy)
2009: Did not see varsity action in his first season with the Bulldogs...named to the PFL Academic Honor Roll...Personal: Business major...born 6/30/91...son of Bill and Celeste Sciortino...High School: ...in baseball...earned two varsity letters each in football and baseball...High School Coach: John Holecek.

84 BRENDAN SHANNON
WIDE RECEIVER, 5-11, 190, R-Fr.
Lombard, Ill. (Montini Catholic)
2009: Did not see varsity action during his initial season with the Bulldogs...named to the PFL Academic Honor Roll...Personal: Undecided.

73 MATT STOREY
OFFENSIVE LINE, 6-3, 275, So.
St. Louis, Mo. (St. Louis University H.S.).

23 DAVID THOMAS
RUNNING BACK, 5-9, 185, R-Fr.
Chicago, Ill. (North Shore Country Day).

39 BRETT THOMASTON
PLACEKICKER/PUNTER, 6-5, 205, So.
Frankfort, Ill.
(Lincoln-Way East) 2009: Played in five of Butler’s first six contests, handling kickoffs...Personal: Pre-Physical Therapy.
82 JEFF URCH TIGHT END 6-4, 250, R-Fr. Avon, Ind. (Avon) 2009: Did not see varsity action in his initial season with the Bulldogs...Personal: Undecided.
Newcomers
GREG AMBROSE, OL, 6-3, 275, Fr., Dayton, Ohio (Oakwood)...Three-year starting offensive lineman at Oakwood High School...named conference Lineman of the Year...first team All-Southwest Ohio...HM All-State...team captain as a senior...High School Coach: Paul Stone.
BRYCE BERRY, DB, 6-2, 190, Fr., St. Charles, Ill. (St. Charles East)...All-Conference.
JAY BRUMMEL, DB, 6-1, 210, R-Fr., Cedar Falls, Iowa (Cedar Falls/Iowa State)...Transfer from Iowa State where he sat out initial collegiate season as a redshirt...started one season at defensive back and one season at running back at Cedar Falls High School...2008 second team INA All-State selection...helped high school squad to a 12-2 record and a 4A state runner-up finish as a senior...played on an 11-1 squad as a junior...four-year track and three-year wrestling letterwinner...High School Coaches: Pat Mitchell and Brad Remmert.
DAVID BURKE, DB, 6-2, 180, Fr., Cincinnati, Ohio (Lakota West)...Two-year varsity starter at Lakota West High School...named Most Improved Varsity Defensive Player as a senior...second team All-Conference performer...helped high school squad to its first conference championship and a school-record nine wins in 2009...All-Academic Award winner...High School Coach: Larry Cox.
BILL BUSCH, LB, 6-0, 238, Fr., St. Louis, Mo. (St. Louis Priory)...All-State linebacker at St. Louis Priory...named first team All-ABC League on defense...led high school squad in tackles in junior and senior seasons...helped lead St. Louis Priory to a 9-4 record and the state semifinals in 2009...played on high school teams that compiled a 26-10 record over three seasons...High School Coach: Marty Combs.
JOHN CANNOVA, OL, 6-0, 260, Fr., Wheaton, Ill. (Benet Academy)...All-Conference and All-Area offensive lineman at Benet Academy...team captain as a senior...three-year letterwinner on high school track and field squad...High School Coach: Gary Goforth.
JOSEPH CIANCIO, DB, 5-10, 190, Fr., Darien, Ill. (Downers Grove South)...
CALEB CONWAY, LB, 5-9, 190, Fr., River Forest, Ill. (Oak Park-River Forest)...Two-year letterwinner as a linebacker at Oak Park-River Forest High School...team captain as a senior...finished second on the team in tackles (69) as a senior...tied for the team lead in quarterback sacks (8) in 2009...earned Academic All-Conference recognition...sprinter on high school track squad...High School Coach: Jim Nudera.
KEVIN COOK, DB, 6-1, 180, Fr., Fishers, Ind. (Hamilton Southeastern)...All-State defensive back at Hamilton Southeastern High School...three-time All-Conference and two-time All-County performer...team captain as a senior...recorded 11 career pass interceptions...had a school-record 100-yard kickoff return for a touchdown...member of high school’s lacrosse and baseball squads...High School Coach: Scott May.
CHRISTIAN EBLE, LB, 5-11, 220, Fr., Naperville, Ill. (Neuqua Valley)...Most Valuable Defensive Lineman as a nose guard at Neuqua Valley High School in 2009...HM All-State...first team All-Conference and All-City as a senior...HM All-Area and All-Region...named a “Top 50” player by both the Chicago Tribune and the Chicago Daily Herald...recorded 50 tackles as a senior, including 22 tackles for loss and five quarterback sacks...earned Academic All-Conference honors...High School Coach: Bryan Wells.
GREG EGAN, LB, 5-11, 180, Fr., Naperville, Ill. (Neuqua Valley)...Two-year starting safety at Neuqua Valley High School...team captain and HM All-Conference performer...recorded 54 tackles and scored one defensive touchdown as a senior...
helped high school track squad to a state runner-up finish in 2009...High School Coach: Bryan Wells.
BRANDON GRUBBE, RB, 5-11, 195, Fr., Lowell, Ind. (Lowell)...All-time leading rusher at Lowell High School...first team All-State running back in 2009...named to the Indiana Football Coaches Association (IFCA) “Top 50” as a senior...three-time first team All-Conference and first team All-Area performer...two-time Junior All-State pick...team captain and Most Valuable Player...helped lead high school squad to a 39-5 record, three conference championships and two state runner-up finishes in three seasons...received Top Scholar-Athlete Award in 2007...varsity letterwinner in basketball and baseball...High School Coach: Kirk Kennedy.
TRAE HEETER, RB, 5-9, 185, Fr., Indianapolis, Ind. (Lawrence North)...All-State running back at Lawrence North High School...team captain, MVP and All-Conference as a senior...Junior All-State and HM All-Conference in 2008...rushed for 933 yards and nine TDs as a senior...scored 12 touchdowns as a junior...set a school record with a 99-yard kickoff return for a touchdown...Academic Scholar-Athlete...member of high school track and field squad...High School Coach: Tom Dilley.
MATT HITTINGER, WR, 6-2, 180, Fr., Valparaiso, Ind. (Valparaiso)...Two-way starter as a wide receiver and defensive back at Valparaiso High School...three-time HM All-Area on both offense and defense...first team All-Conference...two-time HM All-State...IFCA Region One North-South All-Star...team captain, MVP and Mental Attitude Award winner...had 43 career receptions and seven career pass interceptions...helped team to a school record for victories (9) and a conference championship in 2008...High School Coach: Mark Hoffman.
JAY HOWARD, OL, 6-4, 245, Fr., Dayton, Ohio (Oakwood)...
KYLE JACHIM, WR, 6-0, 175, Fr., Chicago, Ill. (St. Rita)...
DYLAN JOHNSON, TE, 6-3, 215, So., Bloomington, Ill.
(Central Catholic/Lindenwood)...Transferred to Butler from Lindenwood University...
WADE MARKLEY, QB, 6-4, 210, Fr., Fort Wayne, Ind. (Dwenger)...
JT MESCH, WR, 6-2, 185, Fr., Glen Ellyn, Ill. (Glenbard West)...
ARTHUR MONACO, RB, 5-9, 175, Fr., Bloomingdale, Ill. (Lake Park)...touchdown...helped Lake Park to a 7-1 conference record and an 8-3 overall mark in 2009...All-Conference performer on high school baseball squad...High School Coach: Andy Livingston.
NICK NYKAZA, OL, 6-0, 235, Fr., Wilmette, Ill. (New Trier)...Three-year starting lineman at New Trier High School...named to the All-Central Suburban South Team as an offensive lineman...chosen as an IHSA Scholar Athlete...played on high school’s lacrosse team...High School Coach: Matt Irvin.
DEREK O’CONNOR, WR, 6-0, 180, Fr., Bourbannais, Ill. (Bishop McNamara)...
CHARLES PERRECONE, OL, 6-2, 285, Fr., Roselle, Ill. (Lake Park)...
LOGAN PERRY, LB, 5-11, 205, Fr., Columbus, Ind. (Columbus East)...had unbeaten regular seasons in 2008 and 2009...Academic Letterman...High School Coach: Bob Gaddis.
PHILLIP POWELL, LB, 6-2, 210, Fr., Indianapolis, Ind. (Lawrence Central)...All-County linebacker at Lawrence Central High School...team captain and second team All-Conference pick...named Lawrence Central Male Athlete of the Year in 2009-10...recorded 203 tackles and 10 quarterback sacks...three-year varsity starter...two-time All-County baseball player...High School Coach: Jayson West.
JOEY PURZE, WR, 6-0, 195, Fr., Clayton, Mo. (Clayton)...Two-year starter as a wide receiver at Clayton High School...two-time HM All-Conference...participated in the U.S. Army Combine...compiled 545 receiving yards in three seasons...member of high school’s golf team...High School Coach: Sam Horell.
WILL SCHIERHOLZ, QB, 6-0, 215, Fr., St. Louis, Mo. (MICDS)...
JIMMY SCHWABE, DB, 5-11, 190, Fr., Downers Grove, Ill.
(Downers Grove South)...Two-year starter at Downers Grove South High School...missed all but three games of his senior season with injury...two-time team captain...helped high school squad to a pair of unbeaten conference championships...Academic All-Conference performer...High School Coach: John Belskis.
JEREMY STEPHENS, DL, 6-1, 267, So., Indianapolis, Ind. (Lawrence Central)...Transfer from Thomas More College...played in nine games as a freshman at Thomas More and helped the team to an 11-1 season...recorded 22 tackles, including two quarterback sacks and four tackles for loss, in first collegiate season...earned three varsity letters as a defensive lineman at Lawrence Central High School...High School Coach: Jayson West.
DON STEWART, DB, 5-10, 175, Fr., St. Louis, Mo. (Clayton)...
JAYME SZAFRANSKI, DB, 6-1, 190, Fr., Barrington, Ill. (Fremd)...
MIKE WENDAHL, OL, 6-5, 290, Jr., Valparaiso, Ind. (Valparaiso/Indianapolis)...Transfer from the University of Indianapolis...will sit out the 2010 season...two-time All-State offensive lineman at Valparaiso High School...named All-Conference and All-Area in junior and senior seasons...team captain in final prep season...chosen to play in the Indiana North-South All-Star game...earned three varsity letters in football...member of high school basketball team...High School Coach: Mark Hoffman.
DANIEL WILSON, PK, 5-9, 198, Fr., Zionsville, Ind. (Zionsville)...Two-year letterwinner as a placekicker at Zionsville High School...tied a school record with a 44-yard field goal...earned Academic All-State recognition...High School Coach: Larry McWhorter.
PAUL YANOW, LB, 6-0, 200, Fr., Cincinnati, Ohio (Sycamore)...Two-year leading tackler at Sycamore High School...named high school team’s Interior Defensive Player of the Year in back-to-back seasons...two-year second team All-Conference...HM All-City as a senior...Sycamore Old Spice Player of the Year in 2009...chosen to play in the Ohio East-West All-Star Game...four-time Academic All-Conference honoree...two-year varsity letterwinner in baseball...High School Coach: Scott Datillo.
2009 Gridiron Classic
Butler 28, Central Connecticut State 23
Safety Mark Giacomantonio led Butler’s defensive effort against Central Connecticut State with a team-high 10 tackles.
Running back Scott Gray rushed for a team-high 88 yards and scored the Bulldogs’ first touchdown in the Gridiron Classic.
Quarterback Andrew Huck scored the final of his three touchdowns to clinch Butler’s first postseason football victory.
Quarterback Andrew Huck had over 200 yards in total offense and was named the Most Valuable Player of the 2009 Gridiron Classic.
Wide receiver Dan Bohrer had a team-high seven receptions for 69 yards and defensive end Grant Hunter added five tackles against CCSU.
Linebacker Nick Caldicott had eight tackles in Butler’s winning effort in the 2009 Gridiron Classic.
2010 Butler Roster
Numerical Roster/Pronunciation Guide
1 2 3 4 5 6-D 6-O 7-D 7-O 8 9-D 9-O 10 11 12 13 14 15-D 15-O 16 17-D 17-O 18 19 20 21-D 21-O 22 23-D 23-O 24 25 26 27 28 29 30 31-D 31-O 32-D 32-O 33 34-D 34-O 35 36 37 38 39 40 41 42 43 44
Jordan Koopman Stuart Harvey Tom Judge Matt Foor Artis Hailey III Logan Sullivan Wade Markley Jimmy Schwabe (Shwab-ee) Jeff Larsen Ryan Hitchcock Andy Dauch (DOWK) Lucas Ruske (Russ-KEE) Andrew Huck Zach Watkins Matt Kobli (KOE-blee) T. J.
Lukasik (Loo-CAY-sik) Nick Caldicott (KAL-dah-cott) Andrew Cottrell Will Schierholz (SHEER-holz) Mark Giacomantonio (JOK-ah-mon-TOE-nee-oh) Bryce Berry Scott Harvey Calvin Blair Joseph Ciancio (See-AHN-SEE-oh) Michael Wilson Chris Burns Joseph Purze (PURRS) Scott Gray David Burke David Thomas Andrew Pratt Jack McKenna Mike Rose Sean Grady Tadd Dombart Dan Haber Bobby McDonald Caleb Conway Brandon Grubbe (GROO-bee) Don Stewart Trae Heeter William Bork Jay Brummel Arthur Monaco David Lang Steven Depositar Jayme Szafranski (ZAH-fran-skee) Kevin Cook Brett Thomaston Matt Benson Bill Busch Phillip Powell Alex Perritt Greg Egan NO. 66 63 17-D 40 18 NAME Greg Ambrose Nick Atkinson Bryce Barry Matt Benson Calvin Blair POS. OL OL DB RB QB HT. 6-3 6-2 6-2 6-2 6-2 WT. 275 280 190 220 205 YR. Fr. So. Fr. R-Fr. So. HOMETOWN/HIGH SCHOOL Dayton, Ohio/Oakwood Kokomo, Ind./Western St. Charles, Ill./St. Charles East Traverse City, Mich./St. Francis Grand Rapids, Mich./East Grand Rapids 33 34-D 23-D 21-D 41 William Bork** Jay Brummel David Burke Chris Burns Bill Busch TE DB DB DB LB 6-1 6-1 6-2 5-9 6-0 225 210 180 180 238 Jr. R-Fr. Fr. So. Fr. Dyer, Ind./Lake Central Cedar Falls, Iowa/Cedar Falls/Iowa State Cincinnati, Ohio/Lakota West Aurora, Ill./Waubonsee Valley St. Louis, Mo./St. Louis Priory 14 71 19 95 31-D Nick Caldicott** John Cannova Joseph Ciancio Taylor Clarkson* Caleb Conway LB OL DB DT LB 6-1 6-0 5-10 6-2 5-9 225 260 190 275 190 Jr. Fr. Fr. So. Fr. Oak Park, Ill./Fenwick Wheaton, Ill./Benet Academy Darien, Ill./Downers Grove South Fortville, Ind./Mt. Vernon River Forest, Ill./Oak Park-River Forest 38 46 15-D 9-D 36 Kevin Cook Rob Cosler** Andrew Cottrell* Andy Dauch** Steven Depositar* DB DE LB CB RB 6-1 6-1 5-11 5-10 5-10 180 230 215 180 195 Fr. Sr. Jr. Jr. So. 
Fishers, Ind./Hamilton Southeastern Winter Park, Fla./Winter Park West Chester, Ohio/Lakota W./Grand Valley Bloomfield Hills, Mich./Lahser Mishawaka, Ind./Penn 28 69 49 44 4 Tadd Dombart*** Josh Dorfman Christian Eble Greg Egan Matthew Foor* CB OL LB LB DB 5-10 6-0 5-11 5-11 5-11 185 260 220 180 205 Sr. Jr. Fr. Fr. So. Cincinnati, Ohio/Lakota West Durham, N.C./C.E. Jordan Naperville, Ill./Neuqua Valley Naperville, Ill./Neuqua Valley West Chester, Ohio/Lakota West 16 64 27 22 31-O Mark Giacomantonio** Donnie Gilmore*** Sean Grady Scott Gray*** Brandon Grubbe SS OL CB RB RB 5-11 6-3 5-10 5-8 5-11 190 295 190 185 195 Jr. Sr. R-Fr. Sr. Fr. Carmel, Ind./Carmel Westphalia, Ind./North Knox Geneva, Ill./Geneva Lowell, Ind./Lowell/Ball State Lowell, Ind./Lowell 29 5 91 17-O 2 Dan Haber Artis Hailey III* Taylor Harris Scott Harvey Stuart Harvey* CB LB DT WR WR 6-0 6-0 6-5 6-3 6-1 180 215 285 185 190 R-Fr. So. Jr. R-Fr. Jr. Calabasas, Calif./Chaminade Prep Hammond, Ind./Hammond Logansport, Ind./Logansport Carmel, Ind./Westfield Carmel, Ind./Westfield 32-O 8 86 53 67 Trae Heeter Ryan Hitchcock** Matt Hittinger Rob Hobson* Jay Howard RB RB WR OL OL 5-9 5-7 6-2 6-4 6-4 185 200 180 280 245 Fr. Jr. Fr. Sr. Fr. Indianapolis, Ind./Lawrence North Bedford, Ind./Bedford-North Lawrence Valparaiso, Ind./Valparaiso Arcadia, Ind./Sheridan Dayton, Ohio/Oakwood 10 98 80 93 3 Andrew Huck* Grant Hunter** Kyle Jachim Dylan Johnson# Tom Judge QB DE WR TE QB 6-2 6-3 6-0 6-3 6-2 195 240 175 215 190 Jr. Jr. Fr. So. R-Fr. Bloomington, Ind./Bloomington North Liberty Twp., Ohio/Lakota West/Robert Morris Chicago, Ill./St. Rita Bloomington, Ill./Cent. Catholic/Lindenwood Elmhurst, Ill./York 12 1 51 54 35 Matt Kobli** Jordan Koopman** Robert Koteff* William Lamar* David Lang** QB WR LB LB PK/P 6-3 5-8 6-0 5-11 6-0 240 190 225 200 220 Sr. Jr. Jr. So. Jr. Whiting, Ind./Whiting Chula Vista, Calif./Eastlake Lexington, Ky./Paul L. 
Dunbar Oak Park, Ill./Oak Park-River Forest Lowell, Ind./Lowell 7-O 13 6-O 72 30 Jeff Larsen* T. J. Lukasik Wade Markley Pete Mattingly** Bobby McDonald WR DB QB OL DB 6-1 5-8 6-4 6-5 6-0 190 180 210 290 200 Jr. So. Fr. Jr. R-Fr. Chicago, Ill./Notre Dame Lowell, Ind./Lowell Fort Wayne, Ind./Dwenger Zionsville, Ind./Cathedral Orland Park, Ill./Providence Catholic 2010 Butler Roster NO. 81 25 89 34-O 79 NAME Eddie McHale*** Jack McKenna** JT Mesch Arthur Monaco Ryan Myers POS. WR CB WR RB OL HT. 6-3 6-0 6-2 5-9 6-2 WT. 205 180 185 175 290 YR. Sr. R-Jr. Fr. Fr. So. HOMETOWN/HIGH SCHOOL Cincinnati, Ohio/St. Xavier/Ohio Elmwood Park, Ill./St. Patrick Glen Ellyn, Ill./Glenbard West Bloomingdale, Ill./Lake Park Fort Wayne, Ind./Bishop Dwenger 68 83 47 62 43 Nick Nykaza Derek O’Connor Bob Olszewski* Charles Perrecone Alex Perritt OL WR LB OL LB 6-0 6-0 6-1 6-2 6-1 235 180 205 285 215 Fr. Fr. Sr. Fr. R-Fr. Wilmette, Ill./New Trier Bourbannais, Ill./Bishop McNamara LaGrange, Ill./Lyons Township Roselle, Ill./Lake Park Whiteland, Ind./Whiteland 48 75 52 42 24 Logan Perry Doug Petty Jeff Poss** Phillip Powell Andrew Pratt LB OL DE LB FS 5-11 6-4 6-2 6-2 5-10 205 240 240 210 180 Fr. R-Fr. Jr. Fr. Jr. Columbus, Ind./Columbus East Indianapolis, Ind./Southport Salt Lake City, Utah/Scripps Ranch (Calif.) Indianapolis, Ind./Lawrence Central Ridge Farm, Ill./Georgetown-Ridge Farm 21-O 59 26 9-O 15-O Joseph Purze Jordan Ridley* Mike Rose Lucas Ruske Will Schierholz WR LB FS QB QB 6-0 5-11 6-2 6-1 6-0 195 230 200 190 215 Fr. So. R-Fr. R-Fr. Fr. Clayton, Mo./Clayton Indianapolis, Ind./Lawrence Central Canton, Mich./Plymouth Wilmette, Ill./Loyola Academy St. Louis, Mo./MICDS 77 87 7-D 65 76 Nick Schirmann Charlie Schmelzer Jimmy Schwabe Paul Sciortino Ryan Secrist** OL TE DB OL OL 6-1 6-7 5-11 6-1 6-4 275 255 190 265 275 So. So. Fr. R-Fr. Sr. 
Cincinnati, Ohio/Anderson Bloomington, Ill./Bloomington Downers Grove, Ill./Downers Grove South Wilmette, Ill./Loyola Academy Rochester, Ind./Rochester 84 97 78 56 32-D Brendan Shannon Tyler Skaggs* Mike Staniewicz*** Jeremy Stephens# Don Stewart WR DT OL DL DB 5-11 6-2 6-6 6-1 5-10 190 240 285 267 175 R-Fr. Sr. Sr. So. Fr. Lombard, Ill./Montini Catholic Winona Lake, Ind./Warsaw Hebron, Ind./Lowell Indianapolis, Ind./Lawrence C./Thomas More St. Louis, Mo./Clayton 73 6-D 37 57 55 Matt Storey Logan Sullivan** Jayme Szafranski Ross Teare** Jace Tennant* OL SS DB DT DE 6-3 6-0 6-1 6-1 6-1 275 195 190 290 225 So. Jr. Fr. Jr. So. St. Louis, Mo./St. Louis University H.S. Winston-Salem, N.C./Horizon (Ariz.) Barrington, Ill./Fremd Sammamish, Wash./Eastlake Champaign, Ill./Central 23-O 90 39 82 94 David Thomas Larry Thomas Brett Thomaston Jeff Urch Carter Walley* RB DT K/P TE TE 5-9 5-10 6-5 6-4 6-2 185 250 205 250 255 R-Fr. Jr. So. R-Fr. So. Chicago, Ill./North Shore Country Day West Chester, Ohio/St. Xavier Frankfort, Ill./Lincoln-Way East Avon, Ind./Avon Peru, Ind./Rochester 11 88-D 61 96 20 45 Zach Watkins** Ryan Webb* Mike Wendahl# Daniel Wilson Michael Wilson* Paul Yanow WR DE OL PK P LB 6-2 6-3 6-5 5-9 6-0 6-0 205 235 290 198 180 200 Jr. So. So. Fr. So. Fr. Chicago, Ill./St.
Ignatius Batavia, Ill./Batavia Valparaiso, Ind./Valparaiso/Indianapolis Zionsville, Ind./Zionsville Grand Rapids, Mich./East Grand Rapids Cincinnati, Ohio/Sycamore 45 46 47 48 49 51 52 53 54 55 56 57 59 61 62 63 64 65 66 67 68 69 71 72 73 75 76 77 78 79 80 81 82 83 84 86 87 88 89 90 91 93 94 95 96 97 98 Paul Yanow (YEAH-now) Rob Cosler Bob Olszewski (OLE-chev-skee) Logan Perry Christian Eble (EHBLE) Robert Koteff (COE-teff) Jeff Poss Rob Hobson William Lamar Jace Tennant Jeremy Stephens Ross Teare (TEER) Jordan Ridley Mike Wendahl Charles Perrecone (Purr-CONE-ee) Nick Atkinson Donnie Gilmore Paul Sciortino (SCORE-in-teen-oh) Greg Ambrose Jay Howard Nick Nykaza (NYE-kah-zah) Josh Dorfman John Cannova (Cah-NO-vah) Pete Mattingly Matt Storey Doug Petty Ryan Secrist Nick Schirmann Mike Staniewicz (STAN-ah-witts) Ryan Myers Kyle Jachim (YAH-kum) Eddie McHale Jeff Urch Derek O’Connor Brendan Shannon Matt Hittinger (HIT-in-jer) Charlie Schmelzer Ryan Webb JT Mesch Larry Thomas Taylor Harris Dylan Johnson Carter Walley (WALLY) Taylor Clarkson Daniel Wilson Tyler Skaggs Grant Hunter *—Varsity letters earned. #--Not eligible until 2011. O--Number on offense. D--Number on defense. 
HEAD COACH: Jeff Voris (DePauw ’89)
ASSISTANT COACHES: Joe Cheshire, Tim Cooper, Nick Tabacca, David Kenney, Nick Anderson, Danny Sears, Chris Davis, Rob Noel, Matt Walker
2009 Butler Statistics
SCORING (G, TD, PAT-1, PAT-2, FG, PTS)
Zach Watkins 12, 10, 0-0, 0-0, 0-0, 60
Andrew Huck 12, 10, 0-0, 0-0, 0-0, 60
David Lang 12, 0, 33-43, 0-0, 7-11, 54
Dan Bohrer 12, 7, 0-0, 0-0, 0-0, 42
Scott Gray 12, 6, 0-0, 0-0, 0-0, 36
Ryan Hitchcock 10, 4, 0-0, 0-0, 0-0, 24
Eddie McHale 12, 3, 0-0, 0-0, 0-0, 18
Ricky Trujillo 11, 2, 0-0, 0-0, 0-0, 12
Jordan Koopman 12, 2, 0-0, 0-0, 0-0, 12
Carter Walley 12, 1, 0-0, 0-0, 0-0, 6
Andy Dauch 12, 1, 0-0, 0-0, 0-0, 6
Logan Sullivan 12, 0, 0-0, 1-2, 0-0, 2
Michael Wilson 12, 0, 0-1, 0-0, 0-0, 0
Butler 12, 46, 33-44, 1-2, 7-11, 332
Opponents 12, 30, 24-24, 2-6, 7-11, 229
FGs (G, 10-19, 20-29, 30-39, 40-49, 50+, LG)
David Lang 12, 0-0, 4-4, 3-6, 0-1, 0-0, 39
Butler 12, 0-0, 4-4, 3-6, 0-1, 0-0, 39
Opponents 12, 0-0, 2-2, 3-4, 2-4, 0-1, 44
RUSHING (G, ATT, GAIN, LOSS, NET, TD, AVG)
Scott Gray 12, 169, 913, 45, 868, 5, 5.1
Andrew Huck 12, 102, 580, 113, 467, 10, 4.6
Ryan Hitchcock 10, 55, 350, 4, 346, 4, 6.3
Steven Depositar 12, 19, 132, 2, 130, 0, 6.8
Ricky Trujillo 11, 24, 127, 2, 125, 2, 5.2
Dan Bohrer 12, 13, 90, 0, 90, 0, 6.9
Artis Hailey 12, 13, 65, 0, 65, 0, 5.0
Jordan Koopman 12, 5, 62, 5, 57, 1, 11.4
Calvin Blair 4, 6, 21, 4, 17, 0, 5.7
Jordan Ridley 12, 1, 13, 0, 13, 0, 13.0
Team 12, 7, 0, 65, -65, 0, --
Butler 12, 414, 2353, 240, 2113, 22, 5.1
Opponents 12, 425, 1709, 310, 1399, 13, 3.3
PASSING Andrew Huck Calvin Blair Team Butler Opponents G 12 4 12 12 12 ATT COMP PCT. 371 233 .628 17 12 .706 2 0 .000 390 245 .628 365 196 .537 INT 11 0 0 11 18 YDS. 2454 77 0 2531 2265
RECEIVING Zach Watkins Dan Bohrer Eddie McHale Jordan Koopman Scott Gray Jeff Larsen Ryan Hitchcock Ricky Trujillo Carter Walley Artis Hailey William Bork Ryan Webb Butler Opponents G 12 12 12 12 12 12 10 11 12 12 12 10 12 12 NO. 78 52 38 24 13 13 9 7 5 3 2 1 245 196
PUNTING Michael Wilson Team Butler Opponents G 12 12 12 12 NO. 63 1 64 74 YDS. 2378 0 2378 2637 AVG.
37.7 0.0 37.2 35.6 LG 69 0 69 63 PUNT RETURNS Tadd Dombart Jordan Koopman Logan Sullivan Andy Dauch Team Butler Opponents G 12 12 12 12 12 12 12 NO. 11 8 1 1 2 23 24 YDS. 66 104 15 11 0 196 147 AVG. 6.0 13.0 15.0 11.0 0.0 8.5 6.1 LG 16 34 0 11 0 34 25 22 YDS. TD 918 10 471 7 493 3 258 1 134 1 112 0 50 0 36 0 42 1 4 0 11 0 2 0 2531 23 2265 14 TD 21 2 0 23 14 AVG. LG 11.8 62 9.1 28 13.0 42 10.8 34 10.3 71 8.6 27 5.6 35 5.1 10 8.4 16 1.3 5 5.5 10 2.0 2 10.3 71 11.6 62 KICKOFF RETURNS Jordan Koopman Andy Dauch Zach Watkins Butler Opponents G 12 12 12 12 12 NO. 31 10 1 42 51 TEAM STATISTICS First Downs By Rush By Pass By Penalty Rush Yards Per Game Pass Yards Per Game Points Per Game Total Plays Total Offense Offense Per Play Offense Per Game Fumbles/Lost Penalties/Yards Third Down Conversions Fourth Down Conversions Time of Possession/Game PLAYER Nick Caldicott Derek Guggenberger Spencer Summerville Mark Giacomantonio Andrew Cottrell Jack McKenna Jeff Poss Grant Hunter Logan Sullivan Jordan Ridley Jacob Fritz Tadd Dombart Brian Adika Ross Teare Nick Comotto Rob Cosler William Lamar Pete Xander Larry Thomas Jace Tennant Matt Foor Thorn Murphy Derek Bradford Andy Dauch Dan Bohrer Jeff Larsen Tyler Skaggs Zach Watkins Taylor Clarkson Kevin Credille Jordan Koopman Brian Crable Pete Mattingly Bob Olszewski David Lang Andrew Huck Donnie Gilmore Eddie McHale Robert Koteff Ben Jones Brett Thomaston Andrew Pratt YDS. 703 174 0 877 998 BUTLER 255 111 118 26 176.1 210.9 27.7 804 4644 5.8 387.0 19/8 57/575 69/164 6/15 30:24 G 12 12 12 9 10 12 12 9 12 12 12 11 10 12 12 12 12 12 11 12 10 8 12 12 12 12 10 12 11 2 12 12 12 12 12 12 12 12 10 4 5 2 ST 39 31 36 30 16 31 24 19 22 20 18 20 10 9 12 15 10 7 8 10 7 6 6 5 3 2 1 3 2 2 0 1 1 1 1 1 0 0 0 0 0 0 AVG. 22.7 17.4 0.0 20.9 19.6 LG 62 28 0 62 46 OPPONENTS 202 78 110 14 116.6 188.8 19.1 790 3664 4.6 305.3 20/8 68/607 57/174 8/22 29:36 SCORE BY QUARTERS 1 2 Butler 47 108 Opponents 48 45 3 81 79 2009 RESULTS (11-1, 7-1 PFL) BU S. 
5 - ALBION 42 S. 12 - At Franklin 49 S. 19 - HANOVER 42 S. 26 - At Morehead State* (OT) 28 O. 3 - SAN DIEGO* 25 O. 17 - VALPARAISO* 23 O. 24 - At Campbell* 23 O. 31 - DAVIDSON* 14 N. 7 - At Dayton* 31 N. 14 - At Jacksonville* 7 N. 21 - DRAKE* 20 D. 5 - CENTRAL CONNECTICUT ST.# 28 4 89 57 OT 7 0 OPP. 3 19 21 21 24 14 16 7 28 36 17 23 T 332 229 ATTD. 2,886 2,500 2,215 3,880 4,218 1,723 4,851 2,568 4,012 1,933 2,122 1,577 *--Pioneer Football League game. #--Gridiron Classic. BUTLER UNIVERSITY DEFENSIVE STATISTICS TACKLES QB FORCED FUMBLE PASS AT TOTAL FOR LOSS SACKS FUMBLE REC. INT. 31 70 8-22 1-7 3 1 3-25 38 69 8-25 4-19 0 1 0-0 32 68 0-0 0-0 0 0 1-22 15 45 4-11 0-0 1 1 5-45 27 43 2.5-4 0-0 0 0 0-0 11 42 1-1 0-0 0 0 3-30 17 41 7.5-19 3-13 0 0 0-0 20 39 11.5-52 10-42 1 0 0-0 16 38 1-1 0-0 0 0 1-10 16 36 8.5-35 4-22 1 1 1-15 16 34 0-0 0-0 0 1 0-0 10 30 2.5-4 0-0 0 0 2-9 19 29 0-0 0-0 0 0 0-0 20 29 4-14 2-8 0 0 0-0 16 28 0.5-0 0-0 0 0 0-0 11 26 1.5-3 1-2 0 0 0-0 14 24 1-2 0-0 0 0 0-0 11 18 4-15 1-9 0 1 0-0 9 17 5.5-18 1-2 1 0 0-0 6 16 5-19 5-19 0 0 0-0 6 13 0-0 0-0 0 1 0-0 6 12 0-0 0-0 0 0 1-8 6 12 0-0 0-0 0 0 0-0 5 10 0-0 0-0 1 0 1-19 3 6 0-0 0-0 0 0 0-0 3 5 0-0 0-0 0 0 0-0 3 4 0.5-1 0-0 0 0 0-0 0 3 0-0 0-0 0 0 0-0 0 2 0-0 0-0 0 0 0-0 0 2 0-0 0-0 0 0 0-0 2 0 0 0-0 0-0 0 0 0-0 PASS BRKUP 1 0 2 2 0 5 1 3 4 0 2 3 2 0 2 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 Blocked Kicks: Logan Sullivan (Punt vs. Albion), Spencer Summerville (Punt vs. Morehead St.), William Lamar (Punt vs. Morehead St.) 2009 Butler Results BUTLER 42, ALBION 3 Sept. 
5 at Butler Bowl (A - 2,886)
Score By Quarters 1 2 3 4 F
Albion 0 0 3 0 3
Butler 0 28 14 0 42
SCORING SUMMARY
BU - Eddie McHale, 10 pass from Andrew Huck (David Lang kick)
BU - Scott Gray, 71 pass from Huck (Lang kick)
BU - Zach Watkins, 48 pass from Huck (Lang kick)
BU - Zach Watkins, 62 pass from Huck (Lang kick)
AC - Mychal Galla, 29 field goal
BU - Zach Watkins, 19 pass from Huck (Lang kick)
BU - Andrew Huck, 3 run (Lang kick)
RUSHING: BU - Huck, 8-68; Gray, 9-34; Depositar, 6-33; Hailey, 4-17; Hitchcock, 3-17; Blair, 4-17; Trujillo, 2-9; Bohrer, 1-5. AC - Orr, 13-36; Frisbey, 7-22; Harris, 3-minus 8; Fusee, 1-minus 10.
PASSING: BU - Huck, 18-25-1 for 327 yards. AC - Harris, 10-25-1 for 59 yards; Fusee, 9-16-1 for 61 yards.
RECEIVING: BU - Watkins, 7-183; McHale, 3-30; Gray, 2-74; Bohrer, 2-14; Trujillo, 2-9; Koopman, 1-16; Webb, 1-2; Larsen, 1-1. AC - McRobb, 8-52; Wunderlich, 5-36; Pointer, 2-13; Cruse, 2-12; Mayhoe, 1-6; Orr, 1-1.
BUTLER 49, FRANKLIN 19
Sept. 12 at Red Faught Stadium (A - 2,500)
Score By Quarters 1 2 3 4 F
Butler 10 13 14 12 49
Franklin 0 7 6 6 19
SCORING SUMMARY
BU - David Lang, 28 field goal
BU - Zach Watkins, 3 pass from Andrew Huck (Lang kick)
BU - Dan Bohrer, 9 pass from Huck (Kick failed)
FC - Ryan Momberger, 1 pass from Kyle Ray (M. Magdalinos kick)
BU - Eddie McHale, 8 pass from Huck (Lang kick)
BU - Watkins, 7 pass from Huck (Lang kick)
BU - Ricky Trujillo, 8 run (Lang kick)
FC - Nick Purichia, 8 run (Pass failed)
BU - Andrew Huck, 5 run (Kick failed)
FC - A. Mellencamp, 62 pass from Purichia (Pass failed)
BU - Jordan Koopman, 10 pass from Calvin Blair (Kick failed)
RUSHING: BU - Depositar, 5-50; Huck, 8-45; Gray, 7-27; Trujillo, 7-20; Hailey, 3-19; Bohrer, 1-10; Team, 1-minus 10. FC - Purichia, 6-21; Snellenbarger, 3-8; Cook, 1-4; Downs, 3-3; Heller, 3-3; Ellis, 1-minus 9; Ray, 7-minus 16; Team, 2-minus 31.
PASSING: BU - Huck, 33-48-0 for 316 yards; Blair, 5-5 for 25 yards.
FC - Ray, 19-36-2 for 261 yards; Purichia, 6-16-0 for 98 yards.
RECEIVING: BU - Watkins, 8-91; Bohrer, 8-72; McHale, 7-62; Koopman, 6-77; Larsen, 3-19; Gray, 3-11; Hailey, 2-4; Trujillo, 1-5. FC - Deffner, 7-152; Mellencamp, 6-113; Momberger, 5-49; Walton, 3-20; Zmich, 2-21; Cook, 1-7; Downs, 1-minus 3.
BUTLER 42, HANOVER 21
Sept. 19 at Butler Bowl (A - 2,215)
Score By Quarters 1 2 3 4 F
Hanover 7 0 7 7 21
Butler 14 21 7 0 42
SCORING SUMMARY
BU - Andy Dauch, 18 pass interception return (David Lang kick)
HC - S. Gibson, 8 run (P. Polochanin kick)
BU - Andrew Huck, 8 run (Lang kick)
BU - Ricky Trujillo, 12 run (Lang kick)
BU - Dan Bohrer, 3 pass from Huck (Lang kick)
BU - Scott Gray, 1 run (Lang kick)
BU - Dan Bohrer, 8 pass from Calvin Blair (Lang kick)
HC - B. Smart, 15 pass from D. Passafiume (P. Polochanin kick)
HC - C. Zeck, 5 pass from D. Seay (P. Polochanin kick)
RUSHING: BU - Depositar, 8-47; Gray, 6-35; Huck, 4-31; Hailey, 6-29; Trujillo, 4-27; Blair, 2-4. HC - Seay, 7-19; Cook, 7-11; Zeck, 1-7; Gibson, 13-1; Juett, 2-1; Armstrong, 2-minus 7; Team, 1-minus 1.
PASSING: BU - Huck, 13-15-0 for 134 yards; Blair, 6-11-0 for 50 yards. HC - Gibson, 11-15-0 for 92 yards; Seay, 1-2-0 for 5 yards; Passafiume, 1-1 for 15 yards; Armstrong, 1-1 for 6 yards; Polochanin, 1-1 for 14 yards; Stewart, 0-1.
RECEIVING: BU - Watkins, 5-53; Bohrer, 5-45; Koopman, 5-36; McHale, 2-47; Gray, 1-3; Hailey, 1-0. HC - Passafiume, 6-27; Robinette, 2-33; Miller, 2-28; Smart, 2-22; Zeck, 2-16; Cason, 1-6.
BUTLER 28, MOREHEAD STATE 21 (OT)
Sept. 26 at Jayne Stadium (A - 3,880)
Score By Quarters 1 2 3 4 OT F
Butler 0 0 13 8 7 28
Morehead State 21 0 0 0 0 21
SCORING SUMMARY
MS - D. Sawyer, 60 pass from E. Sawyer (R. Duzan kick)
MS - D. Harkness, 63 pass interception return (Duzan kick)
MS - D. Morgan, 28 pass from E.
Sawyer (Duzan kick)
BU - Zach Watkins, 9 pass from Andrew Huck (David Lang kick)
BU - Dan Bohrer, 2 pass from Huck (Kick blocked)
BU - Zach Watkins, 7 pass from Huck (Logan Sullivan rush)
BU - Eddie McHale, 22 pass from Huck (Lang kick)
RUSHING: BU - Gray, 14-59; Bohrer, 1-18; Trujillo, 2-5; Hitchcock, 1-0; Huck, 5-minus 12; Team, 1-minus 39. MS - Pendleton, 11-57; E. Sawyer, 18-38; Morgan, 1-9; Smart, 1-6; Bodrick, 1-4; Cox, 3-1; Lewis, 1-minus 4; McDermott, 1-minus 6.
PASSING: BU - Huck, 26-49-3 for 208 yards. MS - E. Sawyer, 9-20-1 for 149 yards; Lewis, 2-7-2 for 14 yards.
RECEIVING: BU - Watkins, 13-130; Bohrer, 5-22; McHale, 3-27; Hitchcock, 3-2; Gray, 1-13; Larsen, 1-4. MS - McDermott, 3-23; Yoshimura, 2-15; Bodrick, 2-13; D. Sawyer, 1-60; Morgan, 1-28; Williams, 1-21; Pendleton, 1-3.
BUTLER 25, SAN DIEGO 24
Oct. 3 at Butler Bowl (A - 4,218)
Score By Quarters 1 2 3 4 F
San Diego 3 14 0 7 24
Butler 0 9 6 10 25
SCORING SUMMARY
SD - Mike Levine, 44 field goal
BU - David Lang, 39 field goal
SD - JT Rogan, 1 run (Levine kick)
BU - Scott Gray, 12 run (Kick failed)
SD - Patrick Doyle, 13 pass from Sam Scudellari (Levine kick)
BU - Ryan Hitchcock, 43 run (Kick failed)
SD - Matt Jelmini, 11 run (Levine kick)
BU - Zach Watkins, 21 pass from Andrew Huck (Lang kick)
BU - David Lang, 37 field goal (0:01)
RUSHING: BU - Gray, 16-113; Hitchcock, 5-50; Huck, 11-35; Ridley, 1-13; Bohrer, 1-4. SD - Jelmini, 22-112; Rogan, 12-49; Scudellari, 8-49; Fontenberry, 1-minus 1.
PASSING: BU - Huck, 25-38-1 for 260 yards. SD - Scudellari, 17-28-0 for 175 yards.
RECEIVING: BU - Watkins, 8-97; Bohrer, 8-79; Koopman, 4-57; McHale, 2-18; Walley, 1-8; Hitchcock, 1-1; Gray, 1-0. SD - Fiege, 5-30; Doyle, 4-65; McGough, 4-49; Brown, 2-15; Smith, 1-16; Jelmini, 1-0.
BUTLER 23, VALPARAISO 14 Oct.
17 at Butler Bowl (A - 1,723)
Score By Quarters 1 2 3 4 F
Valparaiso 0 0 14 0 14
Butler 3 0 7 13 23
SCORING SUMMARY
BU - David Lang, 38 field goal
VU - Eli Crawford, 60 pass interception return (Andrzej Skiba kick)
VU - Quinn Schafer, 1 run (Skiba kick)
BU - Ryan Hitchcock, 4 run (Lang kick)
BU - Scott Gray, 7 run (Lang kick)
BU - Ryan Hitchcock, 2 run (Kick failed)
RUSHING: BU - Gray, 22-99; Huck, 9-76; Hitchcock, 7-53; Trujillo, 2-11; Bohrer, 2-9; Team, 1-minus 1. VU - Wildermuth, 23-87; Lynn, 7-37; Wysocki, 5-7; Morelli, 5-6; Jones, 1-5; Schafer, 4-minus 15.
PASSING: BU - Huck, 10-20-3 for 77 yards. VU - Schafer, 3-11-1 for 74 yards; Wysocki, 5-10-1 for 48 yards.
RECEIVING: BU - McHale, 4-25; Watkins, 3-22; Hitchcock, 1-20; Walley, 1-6; Bohrer, 1-4. VU - Henton, 2-63; McCarty, 2-25; Myers, 1-15; Bennett, 1-14; Jones, 1-4; Wildermuth, 1-1.
BUTLER 23, CAMPBELL 16
Oct. 24 at Barker-Lane Stadium (A - 4,851)
Score By Quarters 1 2 3 4 F
Butler 13 7 0 3 23
Campbell 0 10 3 3 16
SCORING SUMMARY
BU - Scott Gray, 5 run (Kick failed)
BU - Zach Watkins, 22 pass from Andrew Huck (David Lang kick)
BU - Dan Bohrer, 1 pass from Huck (Lang kick)
CU - CJ Oates, 9 pass from Daniel Polk (Adam Willets kick)
CU - Adam Willets, 38 field goal
CU - Adam Willets, 43 field goal
BU - David Lang, 21 field goal
CU - Adam Willets, 37 field goal
RUSHING: BU - Huck, 6-56; Hitchcock, 8-34; Gray, 13-31; Trujillo, 1-28; Koopman, 1-14; Bohrer, 1-6; Team, 1-minus 2. CU - Polk, 15-63; Oates, 6-25; Smith, 7-21; Jordan, 3-11; Brown, 2-8; Cramer, 1-5; Kirtz, 2-4; Bryant, 1-3.
PASSING: BU - Huck, 22-30-0 for 214 yards. CU - Polk, 15-31-1 for 161 yards.
RECEIVING: BU - Watkins, 5-54; Larsen, 3-34; Bohrer, 3-28; Koopman, 3-26; Gray, 3-12; McHale, 2-43; Bork, 2-11; Walley, 1-6. CU - Jordan, 4-66; Stallings, 3-37; Stryffeler, 3-22; Oates, 2-17; Smith, 2-12; Murphy, 1-7.
BUTLER 14, DAVIDSON 7 Oct.
31 at Butler Bowl (A - 2,568) Score By Quarters 1 2 3 4 F Davidson 0 0 7 0 7 Butler 7 0 0 7 14 SCORING SUMMARY BU - Ryan Hitchcock, 8 run (David Lang kick) DC - Kenny Mantuo, 9 run (Ben Behrendt kick) BU - Andrew Huck, 1 run (David Lang kick) RUSHING: BU - Gray, 14-117; Huck, 16-81; Hitchcock, 8-58; Bohrer, 2-9; Trujillo, 1-4; Team, 1-minus 1. DC - Mantuo, 19-147; Blanchard, 9-41; Williams, 7-15. PASSING: BU - Huck, 14-26-0 for 136 yards. DC - Blanchard, 12-23-1 for 91 yards. RECEIVING: BU - Watkins, 4-33; McHale, 3-55; Bohrer, 2-23; Koopman, 2-20; Hitchcock, 2-minus 7; Larsen, 1-12. DC - Hanabury, 5-40; Aldrich, 4-37; Mantuo, 2-13; Williams, 1-1. BUTLER 31, DAYTON 28 Nov. 7 at Welcome Stadium (A - 4,012) Score By Quarters 1 2 3 4 F Butler 0 13 6 12 31 Dayton 7 0 7 14 28 SCORING SUMMARY UD - Dan Jacob, 4 run (Nick Glavin kick) BU - Andrew Huck, 7 run (David Lang kick) BU - Dan Bohrer, 7 pass from Andrew Huck (Kick failed) BU - Andrew Huck, 7 run (Run failed) UD - Justin Watkins, 50 pass from Steve Valentino (Glavin kick) BU - Andrew Huck, 36 run (Kick failed) UD - Steve Valentino, 4 run (Pass failed) BU - Carter Walley, 16 pass from Huck (Kick failed) UD - Nick Collins, 8 pass from Valentino (Luke Bellman pass) RUSHING: BU - Gray, 19-109; Huck, 12-54; Koopman, 2-30; Hitchcock, 3-23; Bohrer, 1-14; Trujillo, 1-5. UD - Valentino, 12-44; Mack, 9-39; Jacob, 7-32; Watkins, 1-minus 5; Team, 1-0. PASSING: BU - Huck, 12-29-0 for 146 yards. UD - Valentino, 29-44-2 for 413 yards; Team, 0-1-0. RECEIVING: BU - McHale, 3-21; Bohrer, 2-35; Walley, 2-22; Watkins, 2-9; Hitchcock, 1-35; Koopman, 1-14; Trujillo, 1-10. UD - Watkins, 7-163; Jonard, 7-98; Collins, 7-82; Bellman, 4-51; Mack, 2-12; Papp, 1-9; Millio, 1-minus 2. JACKSONVILLE 36, BUTLER 7 Nov. 14 at Milne Field (A - 1,933) Score By Quarters 1 2 3 4 F Butler 0 7 0 0 7 Jacksonville 0 14 22 0 36 SCORING SUMMARY BU - Dan Bohrer, 7 pass from Andrew Huck (David Lang kick) JU - Christopher Kuck, 10 pass from Josh McGregor (D. 
Curry kick) JU - Josh Philpart, 42 pass from McGregor (Donovan Curry kick) JU - Christopher Kuck, 2 pass from McGregor (Kuck run) JU - Rudell Small, 54 run (Curry kick) JU - Brian Valdez, 42 pass interception return (Curry kick) RUSHING: BU - Gray, 19-87; Hitchcock, 8-37; Huck, 8-15; Bohrer, 2-11; Trujillo, 1-5. JU - Small, 18-103; McGregor, 5-26; Laster, 2-4; Harris, 2-4 PASSING: BU - Huck, 21-35-2 for 220 yards; Team, 0-1-0. JU - McGregor, 10-19-0 for 157 yards; Stepelton, 1-1 for 4 yards; Curry, 0-1. RECEIVING: BU - Watkins, 8-96; Bohrer, 5-42; Larsen, 4-42; McHale, 2-30; Koopman, 1-5; Trujillo, 1-5. JU - Thompson, 2-65; Philpart, 2-46; Sumter, 2-14; Kuck, 2-12; Laster, 1-12; Davis, 1-8; Louissaint, 1-4. BUTLER 20, DRAKE 17 Nov. 21 at Butler Bowl (A - 2,122) Score By Quarters 1 2 3 4 F Drake 3 0 7 7 17 Butler 0 3 7 10 20 SCORING SUMMARY DU - Brandon Wubs, 23 field goal BU - David Lang, 29 field goal DU - Pat Cashmore, 7 run (Wubs kick) BU - Jordan Koopman, 18 run (David Lang kick) BU - Zach Watkins, 32 pass from Andrew Huck (Lang kick) DU - Joey Orlando, 53 pass from Mike Piatkowski (Wubs kick) BU - David Lang, 27 field goal (0:01) RUSHING: BU - Gray, 15-69; Hitchcock, 7-60; Koopman, 1-18; Trujillo, 1-2; Huck, 6-minus 23; Team, 1-minus 11. DU - Cashmore, 10-39; Piatkowski, 13-26; Morse, 5-7; Broman, 3-5; Platek, 1-3; Kostek, 7-3. PASSING: BU - Huck, 24-35-1 for 234 yards; Team, 0-1. DU - Piatkowski, 17-29-3 for 166 yards. RECEIVING: BU - Watkins, 11-109; McHale, 5-83; Bohrer, 4-38; Trujillo, 2-7; Hitchcock, 1-minus 1; Gray, 1-minus 2. DU - Platek, 6-37; Orlando, 4-70; Pucher, 2-24; Blackmon, 2-23; Kostek, 2-8; Broman, 1-4. BUTLER 28, CENTRAL CONNECTICUT STATE 23 Dec. 5 at Butler Bowl (A - 1,577) Score By Quarters 1 2 3 4 F Central Conn. State 7 0 3 13 23 Butler 0 7 7 14 28 SCORING SUMMARY CC - E. 
Richardson, 1 run (Joe Izzo kick) BU - Scott Gray, 2 run (David Lang kick) CC - Joe Izzo, 32 field goal BU - Andrew Huck, 10 run (Lang kick) BU - Andrew Huck, 19 run (Lang kick) CC - James Mallory, 5 run (Izzo kick) BU - Andrew Huck, 7 run (Lang kick) CC - E. Richardson, 2 run (Run failed) RUSHING: BU - Gray, 15-88; Huck, 9-41; Hitchcock, 5-14; Trujillo, 2-9; Bohrer, 1-4; Koopman, 1-minus 5; Team, 1-minus 1. CC - Mallory, 21-109; Fowler, 5-39; Norris, 10-36; Paul, 2-24; Richardson, 6-15; Spadaro, 2-8. PASSING: BU - Huck, 15-21-0 for 182 yards. CC - Norris, 17-26-1 for 202 yards. RECEIVING: BU - Bohrer, 7-68; Watkins, 4-41; McHale, 2-42; Gray, 1-23; Koopman, 1-7. CC - Paul, 3-57; Grochowski, 3-37; Colagiovanni, 3-37; Fisher, 3-35; Easley, 2-15; Mallory, 2-15; Fowler, 1-6. 2010 Opponents Albion Youngstown State Taylor San Diego September 4 Sprankle-Sprandel Stadium Albion, Mich. September 11 Stambaugh Stadium Youngstown, Ohio September 18 Butler Bowl Indianapolis, Ind. September 25 Torero Stadium San Diego, Calif. Location: Albion, Mich. Enrollment: 1,635 Nickname: Britons Colors: Purple and Gold Conference: Michigan Intercollegiate Stadium: Sprankle-Sprandel Stadium (4,244) Athletic Director: Greg Polnasek SID: Bobby Lee Office Phone: (517) 629-0434 Fax: (517) 629-0566 E-Mail: blee@albion.edu Web: Location: Youngstown, Ohio Enrollment: 14,682 Nickname: Penguins Colors: Red and White Conference: Missouri Valley Football Stadium: Stambaugh Stadium (20,630) Athletic Director: Ron Strollo SID: Trevor Parks Office Phone: (330) 941-3192 Fax: (330) 941-3191 E-Mail: tparks@ysu.edu Web: Location: Upland, Ind. Enrollment: 1,975 Nickname: Trojans Colors: Purple and Gold Conference: Mid-States Football Assoc. Stadium: Jim Wheeler Memorial Stadium (4,000) Athletic Director: Dr. Angie Fincannon SID: Eric Smith Office Phone: (765) 998-4569 Fax: (765) 998-4590 E-Mail: ersmith@taylor.edu Web: Location: San Diego, Calif. 
Enrollment: 7,800 Nickname: Toreros Colors: Torero Blue, Navy and White Conference: Pioneer Football League Stadium: Torero Stadium (7,000) Athletic Director: Ky Snyder SID: Ted Gosen Office Phone: (619) 260-4745 Fax: (619) 260-2990 E-Mail: tgosen@sandiego.edu Web: Head Coach: Craig Rundle Alma Mater: Albion ’74 Career Record: 132-100-1 (24 yrs.) Office Phone: (517) 629-0459 Assistant Coaches: Dustin Beurer, Anthony Cole, Greg Polnasek, D. J. Rehberg, Ron Parker 2009 Record: 4-6 2009 Results (4-6) Butler Thiel At Millikin Central At Olivet Hope Adrian At Trine Kalamazoo At Alma AC OPP 20 6 16 0 13 30 12 13 10 5 21 28 6 28 7 30 23 20 22 34 2010 Schedule Sept. 4 BUTLER Sept. 11 WHEATON Sept. 18 At Greenville Sept. 25 At Wis.-Stevens Point Oct. 2 KALAMAZOO Oct. 9 At Alma Oct. 16 HOPE Oct. 23 At Olivet Oct. 30 ADRIAN Nov. 13 At Trine 24 Head Coach: Eric Wolford Alma Mater: Kansas State ’94 Career Record: First Season Office Phone: (330) 941-3478 Assistant Coaches: Tom Sims, Shane Montgomery, Rick Kravitz, Louie Matsakis, Phil Longo, Frank Buffano, Carmen Bricillo, Andre Coleman, Rollen Smith, Ron Stoops Jr. 2009 Record: 6-5 2009 Results (6-5) At Pittsburgh Austin Peay At Northeastern At Indiana State Missouri State Western Illinois At Southern Illinois South Dakota State At Northern Iowa Illinois State At North Dakota State YSU OPP 3 38 38 21 38 21 28 0 7 17 31 21 8 27 3 17 7 28 30 18 39 35 2010 Schedule Sept. 4 At Penn State Sept. 11 BUTLER Sept. 18 CENT. CONN. STATE Sept. 25 SOUTHERN ILLINOIS Oct. 2 At Missouri State Oct. 9 NORTH DAKOTA STATE Oct. 16 At Western Illinois Oct. 23 At South Dakota State Oct. 30 NORTHERN IOWA Nov. 6 At Illinois State Nov. 13 INDIANA STATE Head Coach: Ron Korfmacher Alma Mater: Taylor ’82 Career Record: 9-11 (2 yrs.) Office Phone: (765) 998-5311 Assistant Coaches: Greg Wolfe, Pete Demorest, Dale Perine, Tony Kijanko, Mike Miley, Lance Brookshire 2009 Record: 7-3 2009 Results (7-3) Anderson At William Penn At St. Francis (Ill.) St. 
Xavier At Malone Walsh Trinity International At Marian At Olivet Nazarene Saint Francis (Ind.) TU OPP 31 16 16 42 38 23 35 51 23 17 12 31 48 7 36 35 45 35 23 16 2010 Schedule Sept. 2 At Anderson Sept. 11 WILLIAM PENN Sept. 18 At Butler Sept. 25 NOTRE DAME COLLEGE Oct. 2 At St. Xavier Oct. 9 MALONE Oct. 16 At Walsh Oct. 23 At Trinity International Oct. 30 MARIAN Nov. 6 OLIVET NAZARENE Nov. 13 SAINT FRANCIS (IND.) Head Coach: Ron Caragher Alma Mater: UCLA ‘90 Career Record: 22-9 (3 yrs.) Office Phone: (619) 260-4740 Assistant Coaches: Sam Anno, Keith Carter, Tanner Engstrand, Gabe Franklin, Jerome Pathon, Mike Rish, Andrew Rolin, Joe Staab, Jon Sumrall, Dorian Keller 2009 Record: 4-7 2009 Results (4-7) At Azusa Pacific At Northern Colorado Marist At Butler At Valparaiso Drake Jacksonville At Dayton Davidson At Morehead State Southern Utah USD OPP 24 12 12 31 17 10 24 25 48 7 14 21 16 34 14 21 27 34 13 7 32 37 2010 Schedule Sept. 4 AZUSA PACIFIC Sept. 11 At Southern Utah Sept. 18 UC DAVIS Sept. 25 BUTLER Oct. 2 At Jacksonville Oct. 9 DAYTON Oct. 16 At Marist Oct. 23 VALPARAISO Oct. 30 At Drake Nov. 6 MOREHEAD STATE Nov. 13 At Davidson 2010 Opponents Campbell Davidson Dayton Morehead State October 2 Butler Bowl Indianapolis, Ind. October 9 Richardson Stadium Davidson, N.C. October 16 Butler Bowl Indianapolis, Ind. October 23 Butler Bowl Indianapolis, Ind. Location: Buies Creek, N.C. Enrollment: 6,834 Nickname: Fighting Camels Colors: Orange and Black Conference: Pioneer Football League Stadium: CU Football Stadium (5,000) Athletic Director: Stan Williamson SID: Joe Prisco Office Phone: (910) 893-1369 Fax: (910) 893-1330 E-Mail: priscoj@campbell.edu Web: Location: Davidson, N.C. 
Enrollment: 1,800 Nickname: Wildcats Colors: Red and Black Conference: Pioneer Football League Stadium: Richardson Stadium (4,500) Athletic Director: Jim Murphy SID: Marc Gignac Office Phone: (704) 894-2123 Fax: (704) 894-2636 E-Mail: magignac@davidson.edu Web: Location: Dayton, Ohio Enrollment: 7,700 Nickname: Flyers Colors: Red and Blue Conference: Pioneer Football League Stadium: Welcome Stadium (11,000) Athletic Director: Ted Wabler SID: Doug Hauschild Office Phone: (937) 229-4390 Fax: (937) 229-4461 E-Mail: sid@udayton.edu Web: Location: Morehead, Ky. Enrollment: 9,046 Nickname: Eagles Colors: Blue and Gold Conference: Pioneer Football League Stadium: Jayne Stadium (10,000) Athletic Director: Brian Hutchinson SID: Drew Dickerson Office Phone: (606) 783-2557 Fax: (606) 783-2550 E-Mail: a.dickerson@moreheadstate. edu Web: Head Coach: Dale Steele Alma Mater: South Carolina ‘76 Career Record: 4-18 (2 yrs.) Office Phone: (910) 893-1874 Assistant Coaches: Nick Cavallo, Andre Fontenette, Art Link, Landon Mariani, Oscar Olejniczak, Greg Williams 2009 Record: 3-8 Head Coach: Tripp Merritt Alma Mater: UNC Charlotte ’90 Career Record: 23-28 (5 yrs.) Office Phone: (704) 894-2378 Assistant Coaches: Brett Hayford, Ryan Heasley, Meade Clendaniel, Amish Patel, Jay Poag, Jimmy Means, Robert Wilk, Josh Lustig, Frank Swart, Steve Leahy 2009 Record: 3-7 Head Coach: Rick Chamberlin Alma Mater: Dayton ’80 Career Record: 18-5 (2 yrs.) Office Phone: (937) 229-4423 Assistant Coaches: Dave Whilding, Chris Ochs, Craig Turner, Tony Davis, Patrick Henry, James Stanley, Landon Fox, Kris Ketron, Kevin Hoyng, Trevor Zeiders 2009 Record: 9-2 2009 Results (3-8) CU OPP Methodist 48 28 At Birmingham-So. 
(OT) 28 35 At Davidson 7 24 At Marist 13 34 Dayton 17 35 At Old Dominion 17 28 Butler 16 23 Morehead State 31 22 At Drake 6 49 At Valparaiso 17 3 Jacksonville 14 34 2009 Results (3-7) At Elon Lenoir-Rhyne Campbell At Jacksonville Morehead State At Dayton Drake At Butler At San Diego Marist 2009 Results (9-2) Urbana At Robert Morris Duquesne At Morehead State At Campbell Davidson At Valparaiso San Diego Butler At Drake Marist 2010 Schedule Sept. 4 At Virginia-Wise Sept. 11 OLD DOMINION Sept. 18 DAVIDSON Sept. 25 GEORGIA STATE Oct. 2 At Butler Oct. 16 DRAKE Oct. 23 At Dayton Oct. 30 MARIST Nov. 6 VALPARAISO Nov. 13 At Jacksonville Nov. 20 At Morehead State 2010 Schedule Sept. 4 GEORGETOWN Sept. 11 At Lenoir-Rhyne Sept. 18 At Campbell Sept. 25 JACKSONVILLE Oct. 9 BUTLER Oct. 16 At Morehead State Oct. 23 At Drake Oct. 30 DAYTON Nov. 6 At Marist Nov. 13 SAN DIEGO Nov. 20 At Presbyterian DC OPP 0 56 0 42 24 7 21 27 16 10 0 17 16 21 7 14 34 27 6 14 UD OPP 10 13 21 14 24 17 30 15 35 17 17 0 36 7 21 14 28 31 23 6 27 16 2010 Schedule Sept. 4 ROBERT MORRIS Sept. 11 At Duquesne Sept. 18 MOREHEAD STATE Sept. 25 CENTRAL STATE Oct. 2 VALPARAISO Oct. 9 At San Diego Oct. 16 At Butler Oct. 23 CAMPBELL Oct. 30 At Davidson Nov. 6 DRAKE Nov. 13 At Marist Head Coach: Matt Ballard Alma Mater: Gardner-Webb ’79 Career Record: 125-109 (22 yrs.) Office Phone: (606) 783-2020 Assistant Coaches: John Gilliam, Gary Dunn, Rob Tenyer, Chris Garner, Paul Humphries 2009 Record: 3-8 2009 Results (3-8) MSU OPP Southern Virginia 61 10 At St. Francis (Pa.) 0 31 At N. Carolina Cent. (2OT) 13 10 Butler (OT) 21 28 Dayton 15 30 At Davidson 10 16 At Jacksonville 0 39 Marist 14 24 At Campbell 22 31 San Diego 7 13 At Valparaiso 29 6 2010 Schedule Sept. 4 At James Madison Sept. 11 ST. FRANCIS (PA.) Sept. 18 At Dayton Sept. 25 At Marist Oct. 2 At Georgia State Oct. 16 DAVIDSON Oct. 23 At Butler Oct. 30 JACKSONVILLE Nov. 6 At San Diego Nov. 13 VALPARAISO Nov. 
20 CAMPBELL 25 2010 Opponents Valparaiso Jacksonville Drake Marist October 30 Brown Field Valparaiso, Ind. November 6 Butler Bowl Indianapolis, Ind November 13 Drake Stadium Des Moines, Iowa PFL Member Not on 2010 Butler Football Schedule Location: Valparaiso, Ind. Enrollment: 3,980 Nickname: Crusaders Colors: Brown and Gold Conference: Pioneer Football League Stadium: Brown Field (5,000) Athletic Director: Mark LaBarbera SID: Ryan Wronkowicz Office Phone: (219) 464-5232 Fax: (219) 464-5762 E-Mail: ryan.wronkowicz@valpo.edu Web: Location: Jacksonville, Fla. Enrollment: 3,436 Nickname: Dolphins Colors: Green and Gold Conference: Pioneer Football League Stadium: D. B. Milne Field (5,000) Athletic Director: Alan Verlander SID: Joel Lamp Office Phone: (904) 256-7409 Fax: (904) 256-7424 E-Mail: jlamp@ju.edu Web: Location: Des Moines, Iowa Enrollment: 5,668 Nickname: Bulldogs Colors: Blue and White Conference: Pioneer Football League Stadium: Drake Stadium (14,557) Athletic Director: Sandy Hatfield Clubb SID: Mike Mahon Office Phone: (515) 271-3014 Fax: (515) 271-3015 E-Mail: mike.mahon@drake.edu Web: Location: Poughkeepsie, N.Y. Enrollment: 4,256 Nickname: Red Foxes Colors: Red and White Conference: Pioneer Football League Stadium: Tenney Stadium (5,000) Athletic Director: Tim Murray SID: Mike Ferraro Office Phone: (845) 575-3321 Fax: (845) 471-0466 E-Mail: michael.j.ferraro@marist.edu Web: Head Coach: Dale Carlson Alma Mater: Concordia-Chicago ‘78 Career Record: 110-103-3 (21 yrs.) Office Phone: (219) 464-5513 Assistant Coaches: Bob Muckian, Tony Pierce, Robert Lee, Rob Hansen, Cliff Glover, Marcus Knight, John Dahman 2009 Record: 1-10 Head Coach: Kerwin Bell Alma Mater: Florida ‘87 Career Record: 19-16 (3 yrs.) Office Phone: (904) 256-7470 Assistant Coaches: Jerry Odom, Andy McCleod, Kerry Webb, Ernie Logan, Ernie Mills, Danny Verpaele, Jerry Crafts, Jim Stomps 2009 Record: 7-4 Head Coach: Chris Creighton Alma Mater: Kenyon ‘91 Career Record: 14-8 (2 yrs.) 
Office Phone: (515) 271-2104 Assistant Coaches: Tim Allen, Rick Fox, Ben Needham, Bill Charles, Jeff Martin, Aaron Selby, Kyle York, Mark Watson 2009 Record: 8-3 Head Coach: Jim Parady Alma Mater: Maine ’83 Career Record: 96-88-1 (18 yrs.) Office Phone: (845) 575-3699 Assistant Coaches: Scott Rumsey, Tom Kelly, Larry Riley, Bill Roos, Casey Lorenz, Pete Mahoney, Tom Taylor, Nate Fields, Jason Tillery, Clarence Johnson 2009 Record: 7-4 2009 Results (1-10) At St. Joseph’s (Ind.) At Concordia (Wis.) Carthage At Drake San Diego At Butler Dayton At Marist At Jacksonville Campbell Morehead State 2009 Results (7-4) At Webber Int’l At Samford Old Dominion Davidson At Marist Morehead State At San Diego At Drake Valparaiso Butler At Campbell 2009 Results (8-3) Grand View At Marist At South Dakota Valparaiso Missouri S & T At San Diego At Davidson Jacksonville Campbell Dayton At Butler VU OPP 6 31 20 17 24 34 14 34 7 48 14 23 7 38 0 24 20 49 3 17 6 29 2010 Schedule Sept. 2 At Western Illinois Sept. 11 At Franklin Sept. 18 ST. JOSEPH’S (IND.) Sept. 25 DRAKE Oct. 2 At Dayton Oct. 9 MARIST Oct. 16 JACKSONVILLE Oct. 23 At San Diego Oct. 30 BUTLER Nov. 6 At Campbell Nov. 13 At Morehead State JU OPP 40 24 0 27 27 28 27 21 27 31 39 0 34 16 38 45 49 20 36 7 34 14 2010 Schedule Sept. 4 At Old Dominion Sept. 11 At Appalachian State Sept. 18 WEBBER INT’L Sept. 25 At Davidson Oct. 2 SAN DIEGO Oct. 9 DRAKE Oct. 16 At Valparaiso Oct. 23 MARIST Oct. 30 At Morehead State Nov. 6 At Butler Nov. 13 CAMPBELL DU OPP 22 0 34 6 21 51 34 14 19 0 21 14 21 16 45 38 49 6 6 23 17 20 2010 Schedule Sept. 4 LEHIGH Sept. 11 At Missouri S & T Sept. 18 At Montana State Sept. 25 At Valparaiso Oct. 2 MARIST Oct. 9 At Jacksonville Oct. 16 At Campbell Oct. 23 DAVIDSON Oct. 30 SAN DIEGO Nov. 6 At Dayton Nov. 
13 BUTLER 2009 Results (7-4) At Sacred Heart Drake At San Diego At Bucknell Campbell Jacksonville At Morehead State Valparaiso Georgetown At Davidson At Dayton MC OPP 31 12 6 34 10 17 16 17 34 13 31 27 24 14 24 0 23 21 14 6 16 27 2010 Schedule Sept. 3 SACRED HEART Sept. 11 BUCKNELL Sept. 25 MOREHEAD STATE Oct. 2 At Drake Oct. 9 At Valparaiso Oct. 16 SAN DIEGO Oct. 23 At Jacksonville Oct. 30 At Campbell Nov. 6 DAVIDSON Nov. 13 DAYTON Nov. 20 At Georgetown Opponents/PFL The Pioneer Football League All-Time Series Records Opponent Akron Ala.-Birmingham Albion Anderson Ashland Austin Peay Ball State Bethany Bradley California-Davis Campbell Centenary Central Conn. State Central State (Ohio) Chicago Cincinnati Clinch Valley Danville Normal Davidson Dayton Denison DePauw Drake Duquesne Earlham Eastern Illinois Evansville Ferris State Findlay Florida International Franklin Georgetown (D.C.) Georgetown (Ky.) George Washington Grand Valley State Hanover Haskell Indians Hillsdale Hofstra Howard Payne Illinois Illinois State Illinois Wesleyan Indiana Indianapolis Indiana State Iowa Jacksonville Kentucky State Lindenwood (Mo.) Lombard Louisville Loyola Manchester Marquette Marshall Miami, Ohio Michigan Michigan State Millikin Minnesota Missouri S & T Moores Hill Morehead State W 0 0 5 2 8 1 22 1 6 0 2 1 1 0 0 3 1 4 1 9 1 44 6 0 19 3 35 1 1 0 36 0 10 1 1 21 1 6 0 2 1 0 1 2 19 27 0 0 3 1 0 2 1 3 0 0 4 0 1 1 0 2 3 3 L 8 2 5 0 5 2 14 1 2 1 0 1 0 1 0 6 1 0 5 25 0 13 12 2 8 3 12 3 0 2 16 1 1 1 4 1 3 4 2 0 8 1 0 5 7 7 1 5 1 1 2 2 1 0 2 1 4 1 1 4 2 2 0 10 T 0 0 0 0 0 0 2 0 0 0 0 0 0 0 1 1 0 0 0 1 0 3 1 0 1 0 2 0 0 0 2 0 0 0 0 1 0 0 0 0 0 0 0 0 2 2 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Opponent New York University North Central Northeast Mo. State Northern Michigan Northern Illinois Northwestern Northwood Institute Notre Dame Ohio Ohio Wesleyan Pittsburg State Purdue Quincy (Ill.) Robert Morris Rose Hulman Saginaw Valley San Diego St. Ambrose St. Francis (Ind.) St. Francis (Pa.) St. 
Joseph’s St. Louis St. Norbert St. Xavier Southern Utah Taylor Tennessee-Martin Thomas More Tiffin Toledo Towson State Transylvania Tufts Valparaiso Wabash Wash. & Jefferson Washington (Mo.) Wayne State Wesley (Del.) Western Kentucky Western Michigan Western Reserve Wilmington Winona Wis.-Stevens Point Wittenberg Xavier Youngstown State W 0 1 1 2 0 0 2 0 1 1 0 3 4 0 8 2 5 2 0 4 34 0 2 1 0 1 0 0 1 2 0 1 1 44 42 1 9 5 2 1 3 2 2 4 2 3 0 0 L 1 0 0 1 2 2 0 3 8 1 1 7 1 4 9 1 12 0 4 0 19 1 1 0 1 0 1 1 3 1 2 1 0 24 17 0 12 1 0 7 10 5 0 0 2 7 3 0 T 0 0 0 0 0 0 0 0 1 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 8 0 1 0 0 0 0 1 0 0 0 0 0 0 Butler University....................................................Indianapolis, Ind. Campbell University .......................................... Buies Creek, N.C. Davidson College ...................................................Davidson, N.C. University of Dayton....................................................Dayton, Ohio Drake University..................................................Des Moines, Iowa Jacksonville University ....................................... Jacksonville, Fla. Marist College .................................................Poughkeepsie, N.Y. Morehead State University ......................................Morehead, Ky. University of San Diego........................................San Diego, Calif. Valparaiso University..............................................Valparaiso, Ind. The 2010 season marks the 18th year for the Pioneer Football League – the nation’s only non-scholarship NCAA Football Championship Subdivision conference. The league expanded to 10 members a year ago, with Marist College joining as a full-time member. Currently the PFL plays an eight-game league schedule, rotating the ninth league opponent on a two-year basis. 
The PFL is one of only three conferences that sponsor football as its only sport (the Missouri Valley Football Conference and the Great West Football Conference being the others). However, league for the 2008 season. In January 1991 the NCAA passed legislation to require Division I institutions to sponsor all intercollegiate sports at the Division I level. The five charter members (Evansville the fifth classification within Division I and adopted the moniker of Pioneer based on the intent to become the first league in that new division. All 10 current members are committed to the non-scholarship football model. The PFL began play in 1993 with Dayton winning the league’s first crown. The league spent its first season in 1993 under the administrative guidance of the Midwestern Collegiate Conference, and the office moved to St. Louis in 1994 when current commissioner Patty Viverito was named the PFL commissioner, a leadership position she continues to fill. 2009 Pioneer Football League Standings (Conference W-L-T, Pct.; All Games W-L-T, Pct.; Home; Away): Butler 7-1-0, .875; 11-1-0, .917; 7-0; 4-1. Dayton 7-1-0, .875; 9-2-0, .818; 4-2; 5-0. Drake 6-2-0, .750; 8-3-0, .727; 5-1; 3-2. Jacksonville 6-2-0, .750; 7-4-0, .636; 4-1; 3-3. Marist 5-3-0, .625; 7-4-0, .636; 4-1; 3-3. San Diego 3-5-0, .375; 4-7-0, .364; 1-4; 3-3. Davidson 3-5-0, .375; 3-7-0, .300; 2-3; 1-4. Campbell 2-6-0, .250; 3-8-0, .273; 2-3; 1-5. Morehead State 1-7-0, .125; 3-8-0, .273; 1-4; 2-4. Valparaiso 0-8-0, .000; 1-10-0, .091; 0-5; 1-5. Butler Records INDIVIDUAL HONORS 2006 2003 1998 1995 1994 1993 1989 1988 1976 1975 1974 1963 1961 1959 1958 1957 - All-America Chris Marzotto (Football Gazette) Brandon Martin (Football Gazette, Sports Network) Ryan Zimpleman (Sports Network) Arnold Mickens (Sports Network) Arnold Mickens (Sports Network, A. P.) Richard Johnson (Sports Network) Steve Roberts (Kodak) Steve Roberts (Kodak) Bill Lynch (A.P. Little All-America HM) Bill Lynch (A.P. Little All-America HM) Bill Lynch (A.P. Little All-America HM) Lee Grimm (A.P. 
Little All-America) Don Benbow (A.P. Little All-America) Walt Stockslager (A. P. Little All-America) Paul Furnish (A.P. Little All-America) John Harrell (A.P. Little All-America HM) 2009 1999 1998 1984 - Academic All-America Grant Hunter Mike Goletz Nick Batalis, Mike Goletz Steve Kollias National Scholar Athlete (National Football Foundation Hampshire Honor Society) 2008 - Mike Bennett (National Football Foundation/College Hall of Fame) 1998 - Nick Batalis 2003 1991 1989 1988 1977 1976 1975 1974 - Conference Player Of The Year Brandon Martin (PFL North, Defensive) Paul Romanowski (MIFC) Steve Roberts (HCC) Steve Roberts (HCC) Mike Chrobot (HCC) Bill Lynch (ICC) Bill Lynch (ICC) Bill Lynch (ICC) 1983 1976 1975 1974 1973 1972 1970 1968 1966 1962 1960 1954 - Conference Back Of The Year Paul Romanowski (MIFC Offense) Steve Roberts (HCC Offense) Chuck Orban (HCC Defense) Steve Roberts (HCC Offense) Todd Yeoman (HCC Defense) Eric Chapman (HCC Offense) Bill Lynch (ICC) Bill Lynch (ICC) Bill Lynch (ICC) Keith Himmel (ICC) Noble York (ICC) Dan Nolan (ICC) Larry Gilbert (ICC) Dan Warfel (ICC) Ron Adams (ICC) John Skirchak (ICC) Leroy Thompson (ICC) 1992 1989 1988 1987 1985 1983 1980 1978 1977 1975 1974 1972 1971 1962 1959 1958 1953 - Conference Lineman Of The Year Ruben DeLuna (MIFC Offense) Greg Mariacher (HCC Defense) Mark Allanson (HCC Offense) Todd Jones (HCC Offense) Jeff Palmer (HCC Defense) John Carwile (HCC Offense) Tony Pence (HCC Defense) Mike Chrobot (HCC) Mike Chrobot (HCC) Dave Swihart (ICC) Dave Swihart (ICC) Mike McDevitt (ICC) Tom Redmond (ICC) Lee Grimm (ICC) Jim Ringer (ICC) Paul Furnish (ICC) George Freyn (ICC) 1991 1989 1988 - 2009 2008 2007 2006 2005 - 28 Butler MVP’s Andrew Huck (Offense) Nick Caldicott (Defense) Matt Kobli (Offense) Grant Hunter (Defense) Scott Gray (Offense) Chris Marzotto (Defense) T. J. 
Brown (Offense) Mike Marzotto (Defense) Rick Tyson (Offense) Chris Marzotto (Defense) Individual MOST GAMES PLAYED Career: 45, Brian Adika, 2006-09 MOST POINTS Game: 36, Steve Roberts vs. St. Ambrose, 11/4/89 Consecutive Games: 54, Scott Gray vs. Albion (30), 9/1/07, and Hanover (24), 9/8/07 Season: 142, Steve Roberts, 1988 Career: 386, Steve Roberts, 1986-89 MOST TOUCHDOWNS Game: 6, Steve Roberts vs. St. Ambrose, 11/4/89 Consecutive Games: 9, Scott Gray vs. Albion (5), 9/1/07, and Hanover (4), 9/8/07 Season: 23, Steve Roberts, 1988 Career: 63, Steve Roberts, 1986-89 MOST EXTRA POINTS Game: 8, Bob Ligda vs. St. Joseph’s, 11/9/74; Tim Witmer vs. St. Ambrose, 11/4/89 Season: 42, Jordan Quiroz, 2008 Career: 114, John Jenkins, 1985-88 MOST FIELD GOALS Game: 5, John Jenkins vs. Dayton, 9/26/87 Season: 11, John Jenkins, 1987; Tim Witmer, 1990 Career: 34, John Jenkins, 1985-88 MOST YARDS RUSHING Game: 295, Arnold Mickens vs. Valparaiso, 10/8/94 Season: 2,255, Arnold Mickens, 1994 Career: 4,623, Steve Roberts, 1986-89 MOST RUSHING ATTEMPTS Game: 56, Arnold Mickens vs. Valparaiso, 10/8/94 Season: 409, Arnold Mickens, 1994 Career: 1,026, Steve Roberts, 1986-89 MOST YARDS PASSING Game: 497, DeWayne Ewing vs. San Diego, 10/21/00 Season: 3,182, DeWayne Ewing, 2000 Career: 8,094, DeWayne Ewing, 1998-2001 MOST PASSES ATTEMPTED Game: 50, Curt Roy vs. St. Joseph’s, 11/7/81; Eli Stoddard vs. San Diego, 10/11/97 Season: 388, DeWayne Ewing, 2000 Career: 1,094, DeWayne Ewing, 1998-2001 MOST PASSES COMPLETED Consecutive: 12, DeWayne Ewing vs. Valparaiso, 10/6/01; Matt Kobli at Albion, 9/6/08 Game: 36, DeWayne Ewing vs. St. Francis (Ind.), 9/2/00 Season: 246, DeWayne Ewing, 2000 Career: 656, DeWayne Ewing, 1998-2001 MOST TOUCHDOWN PASSES Game: 5, Ron Kiolbassa vs. Valparaiso, 10/13/90; DeWayne Ewing vs. Quincy, 11/3/01; Andrew Huck vs. Albion, 9/5/09 Season: 23, Matt Kobli, 2008 Career: 60, Bill Lynch, 1972-76 MOST TOTAL OFFENSE Game: 465, DeWayne Ewing vs. 
San Diego, 10/21/00 Season: 3,079, DeWayne Ewing, 2000 Career: 7,743, DeWayne Ewing, 1998-2001 MOST PASS RECEPTIONS Game: 15, Tom Redmond vs. Ball State, 9/23/72 Season: 78, Zach Watkins, 2009 Career: 192, Dan Bohrer, 2006-09 MOST YARDS RECEIVING Game: 260, Kyle Conner vs. San Diego, 10/21/00 Season: 1,031, Tom Redmond, 1972 Career: 2,241, Dan Bohrer, 2006-09 MOST TOUCHDOWN PASSES CAUGHT Game: 4, Dave Oliver vs. St. Joseph’s, 11/9/74; Dan Bohrer vs. Franklin, 9/13/08 Season: 11, Dan Bohrer, 2008 Career: 22, Dan Bohrer, 2006-09 MOST PUNT RETURNS Game: 9, Eric Voss vs. Valparaiso, 10/19/91; Eric Voss vs. St. Joseph’s, 9/12/92 Season: 42, Noble York, 1972 Career: 103, Eric Voss, 1990-93 MOST PUNT RETURN YARDS Game: 136, Steve Roberts vs. Northwood, 9/9/89 Season: 356, Noble York, 1972 Career: 723, Eric Voss, 1990-93 MOST KICKOFF RETURNS Game: 8, Lou Andreadis vs. San Diego, 11/4/95 Season: 38, Justin Campbell, 2002 Career: 110, Justin Campbell, 2001-04 MOST KICKOFF RETURN YARDS Game: 243, Justin Campbell vs. San Diego, 10/19/02 Season: 985, Justin Campbell, 2002 Career: 2,512, Justin Campbell, 2001-04 MOST PUNTS Game: 14, Ron Stryzinski vs. Wittenberg, 9/25/82 Season: 80, Ron White, 1990 Career: 255, Ron White, 1990-93 MOST YARDS PUNTING Game: 493, Ron Stryzinski vs. Wittenberg, 9/25/82 Season: 3,122, Ron White, 1990 Career: 9,731, Ron White, 1990-93 BEST PUNTING AVERAGE Season: 43.2, Shawn Wood, 1998 Career: 39.5, Shawn Wood, 1995-98 MOST SOLO TACKLES Game: 21, Kevin Johnson vs. Ashland, 11/2/91 Season: 101, Chuck Orban, 1990 Career: 269, Chuck Orban, 1987-90 MOST ASSISTED TACKLES Game: 16, John Doctor vs. Ashland, 10/23/82; Joe Miles vs. Dayton, 10/4/97 Season: 81, Chuck Orban, 1989 Career: 236, Dave Ginn, 1981-84 MOST TOTAL TACKLES Game: 29, Kevin Johnson vs. Ashland, 11/2/91; Kevin Johnson vs. Pittsburg State, 11/26/91 Season: 181, Chuck Orban, 1990 Career: 487, Chuck Orban, 1987-90 Butler Records MOST QUARTERBACK SACKS Game: 5, Elgin Reese vs. St. 
Joseph’s, 9/12/92 Season: 17, Scott Cook, 1983 Career: 39, Tony Pence, 1978-81 MOST FUMBLE RECOVERIES Game: 3, Chuck Orban vs. Dayton, 9/24/88 Season: 4, Steve Torrence, 1980; Chuck Orban, 1988; Bob Espich, 1988 Career: 8, Steve Torrence, 1980-83 MOST PASSES COMPLETED Game: 36 vs. St. Francis (Ind), 9/2/00 Season: 250, 2000 2004 - MOST PASSES, TWO TEAMS Game: 96, Butler (44) vs. Franklin (52), 10/9/82 2002 - MOST TOTAL OFFENSE Game: 673 vs. St. Ambrose, 11/4/89 Season: 4,644, 2009 2000 - MOST TOTAL PLAYS Game: 94 vs. Franklin, 10/26/85 Season: 804, 2009 LONGEST PLAYS Run: 96, Harry Muta vs. Valparaiso, 10/11/75; Don Kelly vs. Indiana State, 11/17/51 Pass: 88, John Moses to John Harrell vs. Washington (Mo.), 9/17/56; Mike McGeorge to Tom Wallace vs. Ashland, 10/18/80 Field Goal: 57, Bob Ligda vs. St. Joseph’s, 10/16/76 Punt: 72, Mike Morgan vs. Morehead State, 10/20/07 Kickoff Return: 98, D’Andre Dowdy vs. Valparaiso, 10/15/05 Punt Return: 84, Noble York vs. Ball State, 9/23/72 Int. Return: 89, Derek Bradford vs. Valparaiso, 10/18/08 Fumble Return: 86, Buck Ulery vs. Dayton, 10/23/04 MISCELLANEOUS TEAM Best Record: 9-0, 1959, 1961 Most Wins: 11, 2009 Consecutive Wins, One Season: 9, 1959, 1961, 2009 Consecutive Wins, Multiple Seasons: 16, 1960-62 Consecutive Games Without A Loss: 19, 1960-62 Most Shutouts: 6, 1905, 1922, 1937 Most Shutouts (Modern Record): 3, 1947, 1958 Consecutive Shutouts: 6, 1937 Consecutive Shutouts (Modern Record): 2, 1947 Team MOST POINTS Game: 122 vs. Hanover, 1921 Game (Modern Record): 68 vs. Indianapolis, 1948 Season: 356, 2000 By Butler Opponent: 81, Minnesota, 1926 By Butler Opp. (Modern Record): 65, Ball State, 1967 Butler + Opp.: 122, Butler (122) vs. Hanover (0), 1921 Butler + Opp. (Modern Record): 113, Butler (56) vs. Georgetown (57) - OT, 9/23/00 In a Loss: 56, Butler vs. Georgetown (57), 9/23/00 2003 - 2001 - 1999 1998 - MOST PASS INTERCEPTIONS Game: 3, George Yearsich vs. St. Joseph’s, 10/16/71; Dan Fuhs vs. 
Indianapolis, 11/8/80; Weylin Stewart vs. Grand Valley State, 9/21/91; Brandon Martin vs. Davidson, 9/19/03 Season: 9, Noble York, 1972 Career: 17, Brandon Martin, 2000-03 Note: An item carried in Believe It or Not by Ripley noted Kenneth Booz of Butler University punted 110 yards against Ft. Harrison on Oct. 10, 1928. Butler MVP’s (Continued) MOST FIRST DOWNS Game: 31 vs. Valparaiso, 10/1/83; UAB, 11/6/93 Season: 255, 2009 1997 1996 1995 1994 1993 1992 1991 1990 1989 1988 1987 1986 1985 - The Last Time... 1984 1983 - Interception Returned For A Touchdown Butler: Andy Dauch (19 yds.) vs. Hanover, 9/19/09 Opp.: Brian Valdez (42 yds.), Jacksonville, 11/14/09 Fumble Returned For A Touchdown Butler: Collin McGann vs. Robert Morris, 66 yds, 9/16/06 Opp.: Shaun Lewis, Jacksonville, 38 yards, 11/1/08 Punt Returned For A Touchdown Butler: Tadd Dombart, 75 yds vs. Morehead St., 10/25/08 Opp.: Matt Loula, Missouri-Rolla, 5 yds, 9/22/07 Kickoff Returned For A Touchdown Butler: D’Andre Dowdy, 98 yards vs. Valparaiso, 10/15/05 Opp.: Jason Jones, Drake, 73 yards, 10/16/04 200-Yard Rushing Game Butler: Naim Sanders (244) vs. St. Francis (Pa.), 9/19/98 Opp.: Justin Williams (225), Davidson, 10/27/07 1982 1981 1980 1979 1978 1977 1976 1975 1974 1973 1972 1971 1970 - MOST YARDS RUSHING Game: 406 vs. Valparaiso, 10/15/88 Season: 2,544, 1975 100-Yard Receiving Game Butler: Zach Watkins (109) vs. Drake, 11/21/09 Opp.: Justin Watkins (163), Dayton, 11/7/09 1969 - MOST RUSHING ATTEMPTS Game: 80 vs. Franklin, 10/23/76 Season: 531, 1959 300-Yard Passing Game Butler: Andrew Huck (316) vs. Franklin, 9/12/09 Opp.: Steve Valentino (413), Dayton, 11/7/09 1966 - MOST YARDS PASSING Game: 497 vs. San Diego, 10/21/00 Season: 3,233, 2000 Safety Scored Butler: vs. Jacksonville, 11/1/08 Opp.: Drake, 10/6/07 MOST PASSES ATTEMPTED Game: 54 vs. Ball State, 9/23/72 Season: 404, 2000 Defensive 2-Point Conversion Butler: Jacob Fritz vs. 
Davidson, 10/27/07 Opponent: Tony Tobias, Lindenwood, 11/14/98 1968 1967 - 1965 1964 1963 1962 1961 1960 1959 1958 1957 1956 1955 1954 - Billy Nardini (Offense) Jim Hart (Defense) Bob Leonard (Offense) Brandon Martin (Defense) Dale Jennings (Offense) Lou Capizzi (Defense) DeWayne Ewing (Offense) Nick Ober (Defense) DeWayne Ewing (Offense) Nick Ober (Defense) David Bailey (Offense) Steve Hall (Defense) Nick Batalis, Naim Sanders (Offense) Kevin Ward (Defense) Eli Stoddard, Jeremy Harkin (Offense) Kevin Ward (Defense) Naim Sanders (Offense) Ron Griswold (Defense) Arnold Mickens (Offense) Adam Borrell (Defense) Arnold Mickens (Offense) Chris Toner, Cameron McDaniel (Defense) Richard Johnson (Offense) Dave Kathman (Defense) Kevin Kimble, Ruben DeLuna (Offense) Don DeCraene (Defense) Paul Romanowski (Offense) Kevin Johnson (Defense) Todd Roehling (Offense) Chuck Orban (Defense) Steve Roberts (Offense) Chuck Orban (Defense) Steve Roberts (Offense) Todd Yeoman (Defense) Steve Roberts (Offense) Joe Annee (Defense) Paul Page (Offense) Mark Ribordy (Defense) Paul Page (Offense) Ron Bunt (Defense) Jim Hoskins (Offense) Dave Ginn (Defense) Curt Roy (Offense) Steve Torrence (Defense) Andy Howard, Dave Newcomer (Offense) Dave Ginn, Chris McGary (Defense) Curt Roy (Offense) Tony Pence (Defense) Andy Howard (Offense) Tony Pence (Defense) Ken LaRose (Offense) Paul Harrington (Defense) Mike Chrobot (Offense) Mike Daughty (Defense) Bruce Scifres (Back) Kevin Greisl (Lineman) Bill Lynch (Back) Bruce Ford (Lineman) Bill Lynch (Back) Dave Swihart (Lineman) Bill Lynch (Back) Andy Wetzel (Lineman) Steve Clayton (Back) Phil Schluge (Lineman) Tom Redmond (Back) Ron Cooper, Fred Powell (Lineman) Leonard Brown (Back) Phil Fitzsimmons, Ephraim Smiley (Lineman) Dan Nolan (Back) Mike Caito (Lineman) Dick Reed (Back) Mike Caito (Lineman) Dick Reed (Back) Steve Orphey (Lineman) Mike Harrison (Back) Vic Wukovits (Lineman) Dan Warfel (Back) Larry Fairchild (Lineman) Dick Dullaghan Dick Dullaghan Lee 
Grimm Ron Adams Phil Long John Skirchak Bob Stryzinski Paul Furnish John Harrell Robert Eichholtz Leroy Thompson, Jim Baker Les Gerlach 29 All-Time Butler Leaders Single Game DeWayne Ewing Rushing Yards 2,255, Arnold Mickens, 1994 1,558, Arnold Mickens, 1995 1,535, Richard Johnson, 1993 1,490, Steve Roberts, 1987 1,450, Steve Roberts, 1989 1,427, Steve Roberts, 1988 1,284, Naim Sanders, 1996 1,190, Kevin Kimble, 1992 1,175, Kevin Kimble, 1991 1,163, Naim Sanders, 1998 Rushing Yards 4,623, Steve Roberts, 1986-89 3,813, Arnold Mickens, 1994-95 3,666, Naim Sanders, 1995-98 2,879, Leroy Thompson, 1953-56 2,802, Kevin Kimble, 1989-92 2,691, Andy Howard, 1979-82 2,445, Harry Muta, 1972-75 2,013, Bruce Scifres, 1975-78 1,920, Richard Johnson, 1990-93 1,889, Kevin McDevitt, 1973-76 Rushing Attempts 56, Arnold Mickens vs. Valparaiso, 1994 54, Arnold Mickens vs. Dayton, 1994 47, Arnold Mickens vs. San Diego, 1994 46, Arnold Mickens vs. UW-Stevens Pt., ‘94 45, Arnold Mickens vs. Drake, 1994 44, Arnold Mickens vs. Georgetown, 1994 44, Steve Roberts vs. Dayton, 1988 44, Steve Roberts vs. Indianapolis, 1989 44, Naim Sanders vs. Evansville, 1996 43, Arnold Mickens vs. 
UAB, 1994; Howard Payne, 1995; Valparaiso, 1995

Season Rushing Attempts
409, Arnold Mickens, 1994
354, Arnold Mickens, 1995
333, Steve Roberts, 1987
325, Steve Roberts, 1989
322, Richard Johnson, 1993
315, Steve Roberts, 1988
307, Kevin Kimble, 1991
288, Naim Sanders, 1996
275, Kevin Kimble, 1992
268, Kevin McDevitt, 1976

Career Rushing Attempts
1,026, Steve Roberts, 1986-89
843, Naim Sanders, 1995-98
763, Arnold Mickens, 1994-95
754, Andy Howard, 1979-82
701, Kevin Kimble, 1989-92
581, Bruce Scifres, 1975-78
479, Eric Chapman, 1981-84
478, Harry Muta, 1972-75
460, Kevin McDevitt, 1973-76
415, Richard Johnson, 1990-93

Season Passing Yards
3,182, DeWayne Ewing, 2000
2,518, Matt Kobli, 2008
2,454, Andrew Huck, 2009
2,337, DeWayne Ewing, 2001
2,214, Paul Romanowski, 1991
2,179, Mike Lee, 1986
2,118, Eli Stoddard, 1997
2,023, Rob Cutter, 1985
2,012, Ron Kiolbassa, 1990
1,994, Curt Roy, 1983

Career Passing Yards
8,094, DeWayne Ewing, 1998-2001
5,909, Bill Lynch, 1972-76
4,830, Steve Clayton, 1970-73
4,561, Curt Roy, 1979-83
4,071, Rob Cutter, 1984-87
3,790, Jason Stahl, 1992-94
3,735, Ron Kiolbassa, 1987-90
3,527, Mike Lee, 1982-86
3,170, Dick Reed, 1966-69
2,995, Eli Stoddard, 1994-97

Season Receiving Yards
1,031, Tom Redmond, 1972
998, Paul Page, 1986
971, Dan Bohrer, 2008
918, Zach Watkins, 2009
876, Kyle Conner, 2000
875, Paul Page, 1985
818, Dave Oliver, 1975
760, Adam Lafferty, 2001
756, Jim Hoskins, 1984
747, John Barron, 1987

Career Receiving Yards
2,241, Dan Bohrer, 2006-09
2,176, Eric Voss, 1990-93
2,070, Adam Lafferty, 1999-2002
1,937, Paul Page, 1983-86
1,933, Tom Redmond, 1970-72
1,801, Kyle Conner, 1997-2000
1,759, Jon Hill, 1990-93
1,702, Mike Chrobot, 1975-78
1,634, Dave Swihart, 1972-75
1,566, Tom Wallace, 1979-82

Season Receptions
78, Zach Watkins, 2009
73, Tom Redmond, 1972
71, Dan Bohrer, 2008
63, Kyle Conner, 2000
55, Paul Page, 1985
55, Mike Chrobot, 1978
54, Dave Oliver, 1975
53, Todd Roehling, 1990
52, Dan Bohrer, 2009
52, Nate Miller, 2006
52, Paul Page, 1986

Career Receptions
192, Dan Bohrer, 2006-09
142, Eric Voss, 1990-93
137, Mike Chrobot, 1975-78
124, Kyle Conner, 1997-2000
120, Steve Roberts, 1986-89
117, Tom Redmond, 1970-72
117, Paul Page, 1983-86
112, Adam Lafferty, 1999-2002
112, Dave Swihart, 1972-75
111, Jon Hill, 1990-93

Single-Game Receiving Yards
260, Kyle Conner vs. San Diego, 2000
223, Todd Roehling vs. Dayton, 1990
211, Tom Redmond vs. Ball State, 1972
208, Eric Voss vs. Valparaiso, 1990
183, Zach Watkins vs. Albion, 2009
181, Tom Wallace vs. Georgetown, 1981
176, Paul Page vs. Dayton, 1986
175, Jim Hoskins vs. Franklin, 1984
169, Nate Miller vs. Valparaiso, 2006
165, John Barron vs. Grand Valley State, '87

Single-Game Receptions
15, Tom Redmond vs. Ball State, 1972
13, Zach Watkins vs. Morehead St., 2009
13, Mike Chrobot vs. Dayton, 1978
12, Todd Roehling vs. Dayton, 1990
11, Zach Watkins vs. Drake, 2009
11, Nate Miller vs. Hanover, 2006
11, Kyle Conner vs. San Diego, 2000
11, Kyle Conner vs. Morehead State, 2000
11, Kyle Conner vs. Albion, 1999
11, Steve Roberts vs. Dayton, 1989

Single-Game Scoring
36, Steve Roberts vs. St. Ambrose, 1989
30, Scott Gray vs. Albion, 2007
24, Dan Bohrer vs. Franklin, 2008
24, Scott Gray vs. Hanover, 2007
24, Dale Jennings vs. UW-Stevens Pt., 2002
24, Dale Jennings vs. Drake, 2002
24, Naim Sanders vs. Valparaiso, 1996
24, Arnold Mickens vs. Valparaiso, 1995
24, Steve Roberts vs. Northwood, 1989
24, Steve Roberts vs. St. Joseph's, 1989
24, Steve Roberts vs. St. Joseph's, 1988
24, Steve Roberts vs. Kentucky State, 1988
24, Steve Roberts vs. Northwood, 1988
24, Steve Roberts vs. Franklin, 1987
24, Dave Oliver vs. St. Joseph's, 1974
24, Kevin McDevitt vs. Franklin, 1976

Single-Game Rushing Yards
295, Arnold Mickens vs. Valparaiso, 1994
293, Arnold Mickens vs. Drake, 1994
288, Arnold Mickens vs. UW-Stevens Pt., '94
271, Steve Roberts vs. St. Ambrose, 1989
263, Arnold Mickens vs. San Diego, 1994
259, Arnold Mickens vs. Valparaiso, 1995
258, Steve Roberts vs. Indianapolis, 1989
251, Naim Sanders vs. Evansville, 1996
246, Steve Roberts vs. N.E. Missouri, 1987
244, Arnold Mickens vs. Evansville, 1994

Single-Game Passing Yards
497, DeWayne Ewing vs. San Diego, 2000
419, Travis Delph vs. UW-Stevens Pt., 2002
384, Paul Romanowski vs. Hillsdale, 1991
364, Curt Roy vs. St. Joseph's, 1981
350, DeWayne Ewing vs. St. Francis (Ind.), '00
349, DeWayne Ewing vs. Albion, 2001
346, DeWayne Ewing vs. Georgetown, 2000
341, DeWayne Ewing vs. Dayton, 2000
331, Rob Cutter vs. Georgetown, 1985
329, Ron Kiolbassa vs. Kentucky State, '89

Season Scoring
142, Steve Roberts, 1988
132, Steve Roberts, 1989
120, Dale Jennings, 2002
108, Arnold Mickens, 1994
104, Steve Roberts, 1987
100, Kevin McDevitt, 1976
90, Scott Gray, 2007
84, Ryan Zimpleman, 2000
84, John Skirchak, 1960
78, Harry Muta, 1975
72, Naim Sanders, 1996

Career Scoring
386, Steve Roberts, 1986-89
216, John Jenkins, 1985-88
202, Leroy Thompson, 1953-56
186, Naim Sanders, 1995-98
176, Arnold Mickens, 1994-95
175, Jordan Quiroz, 2005-08
172, Kevin McDevitt, 1973-76
162, Dale Jennings, 1998-2002
162, Ryan Zimpleman, 1997-2000
161, Tim Witmer, 1989-92

Butler Records

BUTLER'S FIRST TEAM ALL-CONFERENCE PLAYERS
Indiana Collegiate Conference
1952: Fred Davis, QB; Ralph London, T; John Riddle, HB; Bill Norkus, C; Norm Ellenberger, FB; Don Kelly, HB
1953: Fred Davis, QB; Bob Eichholtz, G; George Freyn, E; Gene Kuzmic, HB; Ralph London, T; Leroy Thompson, FB
1954: Ralph London, T; Leroy Thompson, FB
1956: Bob Eichholtz, G; Ken Nicholson, T; Leroy Thompson, FB
1957: Paul Furnish, G; Phil Mercer, HB
1958: Paul Furnish, G; John Moses, QB; Cliff Oilar, HB; Ken Spraetz, E; Kent Stewart, FB; Bob White, C
1959: Jim Ringer, C; Walt Stockslager, T
1960: John Skirchak, HB; Ed McCauley, G; Don Benbow, T; Phil Long, QB; Gary Green, FB
1961: Phil Long, QB; Don Benbow, T; Larry Shook, HB
1962: Tim Renie, E; Lee Grimm, G; Ron Adams, QB; John Brown, HB
1963: Lee Grimm, G; Rich Florence, E
1965: Dick Dullaghan, HB; Ken Leffler, C; Tom Sayer, T
1966:
Larry Fairchild, T; Mark Steinmetz, G; Dan Warfel, LB
1968: Howard Cline, C; Eddie Bopp, DB; Pat Kress, T; Larry Gilbert, HB
1969: Rich Gray, DB; Phil Whisner, DT; Andy Carson, C; Dick Reed, QB
1970: Dan Nolan, HB
1971: Tom Redmond, E; Mike McDevitt, C
1972: Mike McDevitt, C; Noble York, DB; Dave Swihart, TE; Andy Wetzel, G; Tom Redmond, FL; Bob Rykovich, E; Fred Powell, T; Ron Cooper, MG
1973: Dave Swihart, TE; Phil Schluge, T; Steve Clayton, QB; Bill Kuntz, DE; Keith Himmel, DB
1974: Dave Swihart, TE; Bill Lynch, QB; Andy Wetzel, T; Bob Ligda, K; Tim O'Banion, E; Lee Schluge, T
1975: Dave Swihart, TE; Bill Lynch, QB; Harry Muta, RB; Dave Oliver, FL; Bob Ligda, K; Rob Goshert, DE; Dave Cunningham, C; Mark Chappuis, LB
1976: Bill Lynch, QB; Kevin McDevitt, FB
Heartland Collegiate Conference
1977: Mike Chrobot, TE; Ken LaRose, T; Chuck Schwanekamp, G; Joe Chaulk, C; Bruce Scifres, HB; Bill Ginn, FB; Ed Thompson, K; Rob Goshert, DE; Mike Minczeski, LB; Bruce Ford, T
1978: Mike Chrobot, TE; Scot Shaw, DB; Mike Daugherty, DE
1979: Ken LaRose, T; Tony Pence, MG; Paul Harrington, LB; Mike Shibinski, DB
1980: Andy Howard, RB; Tim Flanigan, OG; Tony Pence, MG
1981: Landy Breeden, DE; Tony Pence, MG
1982: Dave Newcomer, OT; Landy Breeden, DE; Dave Ginn, LB; Mike Peconge, MG; Tony Sales, DB
1983: Jim Bell, OG; Brian Bertke, MG; John Carwile, C; Eric Chapman, RB; Scott Cook, DT; Dave Ginn, LB; John Doctor, LB; Dino Merlina, DB; Scott Olinger, OT; Noble Parks, FB; Curt Roy, QB; John Warne, TE; Steve Torrence, DE
1984: Jim Hoskins, WR; Scott Olinger, OT; Jay Barnhorst, FB; Dave Ginn, LB; Dino Merlina, DB
1985: Paul Page, WR; Jay Barnhorst, FB; Mark Ribordy, DE; Ron Bunt, LB; Jeff Palmer, DT
1986: Paul Page, WR; Mark Ribordy, DE; Todd Jones, OT; George Dury, C; Mike Hegwood, DB
1987: Todd Jones, OT; Rusty Melzoni, OG; Mark Allanson, C; Steve Roberts, RB; John Jenkins, K; Jack Fillenwarth, DE; Tom Klusman, MG; Joe Annee, LB; Todd Yeoman, DB
1988: John Barron, WR; Tom Maheras, OT; Dan Shirey, OG; Mark Allanson, C; Steve Roberts, RB; Randy Renners, DT; Ron Baird, DE; Chuck Orban, LB; Todd Yeoman, DB
1989: Dan Shirey, OG; Mark Allanson, C; Ron Roembke, TE; Steve Roberts, RB; Greg Mariacher, DT; Jerry Pianto, MG; Chuck Orban, LB; Dax Gonzalez, DB; Kevin Shomber, P
Midwest Intercollegiate Football Conference
1990: Kevin Enright, OT; Jerry Pianto, NG; Chuck Orban, LB
1991: Todd Larson, OT; Paul Romanowski, QB; Kevin Kimble, RB; Dave Kathman, DE; Kevin Johnson, LB; Dax Gonzalez, DB
1992: Ruben DeLuna, OT; Kevin Kimble, RB; Dave Kathman, LB; Don DeCraene, DB
Pioneer Football League
1993: Eric Voss, WR; Jeff Burks, OT; Richard Johnson, RB; Terry Bolen, DT; Dave Kathman, LB
1994: Damon Black, OT; Keith Rossell, OT; Arnold Mickens, RB; Brian Sanders, DT; Marty Erschen, LB; Chris Toner, LB; Cameron McDaniel, DB
1995: Arnold Mickens, RB; Ron Griswold, NG
1996: Naim Sanders, RB; Ron Griswold, NG; Nick Winings, LB
1997: Nick Batalis, C; Naim Sanders, RB; Ron Griswold, NG; Kevin Ward, LB
1998: Nick Batalis, C; Naim Sanders, RB; Adam Timm, WR; Kevin Ward, LB; Shawn Wood, P
1999: Kyle Conner, WR
2000: Kyle Conner, WR; Brandon Willett, TE; Grant Veith, OL; DeWayne Ewing, QB
2001: Kyle Derickson, WR; Brandon Willett, TE; Roman Speron, RB; Nick Ober, LB
2002: Carl Erickson, OL; Dale Jennings, RB; Tim Lytle, DL; Parker Smith, LB; Russ Mann, DB; Justin Campbell, RS
2003: Brandon Martin, DB; John Leininger, DL; Justin Campbell, RS
2004: Dave McMahon, DL; Andy Nelson, DB
2005: Chris Marzotto, DL
2006: Chris Marzotto, DL
2008: Grant Hunter, DL
2009: Spencer Summerville, DB; Donnie Gilmore, OL; Scott Gray, RB; Zach Watkins, WR

Conference Champions
Indiana Collegiate Conference
1951 Valparaiso (4-0)
1952 BUTLER, Valparaiso (3-1-1)
1953 BUTLER (5-0)
1954 Valparaiso (5-1)
1955 St. Joseph's, Evansville (5-1)
1956 St. Joseph's (6-0)
1957 St.
Joseph's (6-0)
1958 BUTLER (5-1)
1959 BUTLER (6-0)
1960 BUTLER (5-1)
1961 BUTLER (6-0)
1962 BUTLER (4-1-1)
1963 BUTLER (6-0)
1964 BUTLER, Ball State, Evansville, Valparaiso, Indiana State (4-2)
1965 Ball State (6-0)
1966 Ball State (5-0-1)
1967 Ball State (5-1)
1968 Valparaiso (4-0)
1969 Valparaiso, Evansville (3-1)
1970 Evansville (4-0)
1971 St. Joseph's (4-0)
1972 BUTLER, Evansville (4-1)
1973 BUTLER (4-1)
1974 BUTLER (6-0)
1975 BUTLER (6-0)
1976 Evansville, St. Joseph's (4-1)
Heartland Collegiate Conference
1977 BUTLER, St. Joseph's (3-1)
1978 Indianapolis (4-2)
1979 St. Joseph's (4-1)
1980 Ashland, Franklin (5-2)
1981 Franklin, Indianapolis (6-1)
1982 Ashland (6-1)
1983 BUTLER (5-0-1)
1984 Ashland (6-1)
1985 BUTLER, Ashland (5-1)
1986 Ashland (5-1)
1987 BUTLER (4-0-1)
1988 BUTLER (3-0-1)
1989 BUTLER (4-0)
Midwest Intercollegiate Football Conference
1990 Grand Valley State (9-1)
1991 BUTLER (9-1)
1992 BUTLER, Ferris State, Grand Valley State (8-2)
Pioneer Football League
1993 Dayton (5-0)
1994 BUTLER (4-1), Dayton (4-1)
1995 Drake (5-0)
1996 Dayton (5-0)
1997 Dayton (5-0)
1998 Drake (4-0)
1999 Dayton (4-0)
2000 Dayton, Drake, Valparaiso (3-1)
2001 Dayton (4-0)
2002 Dayton (4-0)
2003 Valparaiso (3-1)
2004 Drake (4-0)
2005 San Diego (4-0)
2006 San Diego (7-0)
2007 Dayton (6-1), San Diego (6-1)
2008 Jacksonville (7-1)
2009 Butler, Dayton (7-1)

Season-By-Season Records
1887 3-0-0 Clint Howe; 1888 (No football played); 1889 2-0-0 Clint Howe; 1890 3-0-1 Clint Howe; 1891 4-3-0 (Coach Unknown); 1892 5-1-0 (Coach Unknown); 1893 1-0-0 (Coach Unknown); 1894 6-1-0 Joseph Flint; 1895 2-2-0 Joseph Flint; 1896 4-3-0 (Coach Unknown)
1897 3-0-0 James Zink; 1898 2-1-1 T. Williamson; 1899 1-3-0 Walter Kelly; 1900 1-3-0 Walter Kelly; 1901 1-0-0 Walter Kelly; 1902 1-3-0 Walter Kelly; 1903 0-3-0 Walter Kelly; 1904 6-1-0 R. Wingard; 1905 7-2-1 R. Wingard; 1906 1-0-0 Don Robinson
1907 1-3-2 John McKay; 1908 5-0-1 John McKay; 1909 5-3-0 Walter Gipe; 1910 4-3-1 John McKay; 1911 4-3-1 Dave Allerdice; 1912 5-3-0 G. Cullen Thomas; 1913 2-4-1 G. Cullen Thomas; 1914 4-2-0 G. Cullen Thomas; 1915 1-6-0 G. Cullen Thomas; 1916 3-5-0 G. Cullen Thomas
1917 3-3-0 G. Cullen Thomas; 1918 2-1-1 G. Cullen Thomas; 1919 0-5-1 Joe Mullane; 1920 7-1-0 Pat Page; 1921 6-2-0 Pat Page; 1922 8-2-0 Pat Page; 1923 7-2-0 Pat Page; 1924 4-5-0 Pat Page; 1925 5-2-2 Pat Page; 1926 3-6-0 Paul D. (Tony) Hinkle
1927 4-3-1 George Potsy Clark; 1928 6-2-0 George Potsy Clark; 1929 4-4-0 George Potsy Clark; 1930 2-7-0 Harry M. Bell; 1931 3-5-0 Harry M. Bell; 1932 2-4-1 Fred Mackey; 1933 2-6-0 Fred Mackey; 1934 6-1-1 Fred Mackey; 1935 7-1-0 Tony Hinkle; 1936 6-0-2 Tony Hinkle
1937 5-2-1 Tony Hinkle; 1938 4-4-0 Tony Hinkle; 1939 7-0-1 Tony Hinkle; 1940 4-4-1 Tony Hinkle; 1941 5-4-0 Tony Hinkle; 1942 2-7-0 Frank (Pop) Hedden; 1943 (No football, World War II); 1944 (No football, World War II); 1945 3-3-0 Frank (Pop) Hedden; 1946 7-1-0 Tony Hinkle
1947 5-3-1 Tony Hinkle; 1948 3-5-0 Tony Hinkle; 1949 2-6-0 Tony Hinkle; 1950 4-4-1 Tony Hinkle; 1951 4-4-1 Tony Hinkle; 1952 5-3-1 Tony Hinkle; 1953 6-2-0 Tony Hinkle; 1954 4-4-1 Tony Hinkle; 1955 3-5-0 Tony Hinkle; 1956 6-2-0 Tony Hinkle
1957 7-2-0 Tony Hinkle; 1958 8-1-0 Tony Hinkle; 1959 9-0-0 Tony Hinkle; 1960 8-1-0 Tony Hinkle; 1961 9-0-0 Tony Hinkle; 1962 5-2-2 Tony Hinkle; 1963 8-1-0 Tony Hinkle; 1964 4-4-1 Tony Hinkle; 1965 6-3-0 Tony Hinkle; 1966 4-5-0 Tony Hinkle
1967 2-7-0 Tony Hinkle; 1968 2-7-0 Tony Hinkle; 1969 3-6-0 Tony Hinkle; 1970 3-6-1 Bill Sylvester; 1971 3-7-0 Bill Sylvester; 1972 5-5-0 Bill Sylvester; 1973 5-5-0 Bill Sylvester; 1974 8-2-0 Bill Sylvester; 1975 9-1-0 Bill Sylvester; 1976 6-4-0 Bill Sylvester
1977 5-5-0 Bill Sylvester; 1978 5-5-0 Bill Sylvester; 1979 5-5-0 Bill Sylvester; 1980 5-5-0 Bill Sylvester; 1981 3-7-0 Bill Sylvester; 1982 7-3-0 Bill Sylvester; 1983 9-1-1 Bill Sylvester; 1984 6-4-0 Bill Sylvester; 1985 8-2-0 Bill Lynch; 1986 5-5-0 Bill Lynch
1987 8-1-1 Bill Lynch; 1988 8-2-1 Bill Lynch; 1989 7-2-1 Bill Lynch; 1990 5-5-1 Bob Bartolomeo; 1991 9-2-0 Bob Bartolomeo; 1992 8-2-0 Ken LaRose; 1993 4-6-0 Ken LaRose; 1994 7-3-0 Ken LaRose; 1995 2-8-0 Ken LaRose; 1996 3-7-0 Ken LaRose
1997 6-4-0 Ken LaRose; 1998 4-6-0 Ken LaRose; 1999 5-5-0 Ken LaRose; 2000 2-8-0 Ken LaRose; 2001 5-5-0 Ken LaRose; 2002 4-6-0 Kit Cartwright; 2003 2-9-0 Kit Cartwright; 2004 1-10-0 Kit Cartwright; 2005 0-11-0 Kit Cartwright; 2006 3-8-0 Jeff Voris
2007 4-7-0 Jeff Voris; 2008 6-5-0 Jeff Voris; 2009 11-1-0 Jeff Voris
Summary: 120 years; Won 539, Lost 417, Tied 35

BUTLER FOOTBALL COACHING
RECORDS
Coach (Years): Seasons; W-L-T; Pct.
Clint Howe (1887-1890): 3; 8-0-1; .944
Joseph Flint (1894-1895): 2; 8-3-0; .727
James Zink (1897): 1; 3-0-0; 1.000
T. Williamson (1898): 1; 2-1-1; .625
Walter Kelly (1899-1903): 5; 4-12-0; .250
R. Wingard (1904-1905): 2; 13-3-1; .794
Don Robinson (1906): 1; 1-0-0; 1.000
John McKay (1907-1908, 1910): 3; 10-6-4; .600
Walter Gipe (1909): 1; 5-3-0; .625
Dave Allerdice (1911): 1; 4-3-1; .563
G. Cullen Thomas (1912-1918): 7; 20-24-2; .457
John Mullane (1919): 1; 0-5-1; .083
Harland (Pat) Page (1920-1925): 6; 37-14-2; .717
George Potsy Clark (1927-1929): 3; 14-9-1; .604
Harry Bell (1930-1931): 2; 5-12-0; .294
Fred Mackey (1932-1934): 3; 10-11-2; .478
Frank (Pop) Hedden (1942-1945): 2; 5-10-0; .333
Paul D. (Tony) Hinkle (1926, 35-41, 46-69): 32; 165-99-13; .619
Bill Sylvester (1970-1984): 15; 84-65-2; .563
Bill Lynch (1985-89): 5; 36-12-3; .735
Bob Bartolomeo (1990-91): 2; 14-7-1; .659
Ken LaRose (1992-2001): 10; 46-54-0; .460
Kit Cartwright (2002-05): 4; 7-36-0; .163
Jeff Voris (2006- ): 4; 24-21-0; .533

Butler Year-By-Year Scores
1887 Coach: Clint Howe (3-0-0): 45-5 Purdue; 48-8 Franklin; 24-10 Hanover
1888 (No Football)
1889 Coach: Clint Howe (2-0-0): 32-0 Hanover; 14-0 Purdue
1890 Coach: Clint Howe (3-0-1): 0-0 DePauw; 18-0 DePauw; 22-6 Wabash; 12-10 Purdue
1891 Coach: Unknown (4-3-0): 20-32 DePauw; 52-6 State Univ.; 6-42 Michigan; 26-6 State Univ.; 28-6 Wabash; 34-10 Cincinnati; 0-58 Purdue
1892 Coach: Unknown (5-1-0): 18-6 Earlham; 10-6 Indiana; 14-12 Wabash; 12-0 Dayton; 6-40 Purdue; 20-18 DePauw
1893 Coach: Unknown (1-0-0): 28-24 Wabash
1894 Coach: Joseph Flint (6-1-0): 0-30 Purdue; 38-6 DePauw; 64-0 Kokomo; 12-0 Danville Normal; 34-0 Rose Poly; 58-0 Wabash; 6-4 Indianapolis Art
1895 Coach: Joseph Flint (2-2-0): 6-10 Wabash; 0-6 Miami; 34-2 Indiana; 18-0 Knightstown
1896 Coach: Unknown (4-3-0): 22-6 Franklin; 6-22 Indiana; 18-0 Rose Hulman; 0-6 Miami; 16-4 Miami; 13-0 Earlham; 0-14 Indianapolis Ath. Club
1897 Coach: James Zink (3-0-0): 24-6 Franklin; 16-4 Miami; 4-0 Earlham
1898 Coach: T. Williamson (2-1-1): 4-0 Earlham; 11-0 Franklin; 0-10 Earlham; 0-0 DePauw
1899 Coach: Walter Kelly (1-3-0): 11-0 Franklin; 0-28 DePauw; 0-17 Franklin; 0-10 Earlham
1900 Coach: Walter Kelly (1-3-0): 0-17 Franklin; 0-33 Earlham; 0-37 Franklin; 10-7 DePauw
1901 Coach: Walter Kelly (1-0-0): 30-0 Earlham
1902 Coach: Walter Kelly (1-3-0): 0-6 Cincinnati; 17-12 Wabash; 0-32 DePauw; 0-5 Franklin
1903 Coach: Walter Kelly (0-3-0): 0-18 DePauw; 0-46 Wabash; 0-31 Rose Hulman
1904 Coach: R. Wingard (6-1-0): 47-0 State Normal; 32-0 Miami; 56-5 Danville Normal; 17-0 State Normal; 28-8 Earlham; 16-6 Hanover; 0-51 Wabash
1905 Coach: R. Wingard (7-2-1): 101-0 State Normal; 6-0 Wabash; 0-31 Indiana; 31-0 Winona Tech; 47-0 Shelbyville; 6-6 Rose Hulman; 10-12 Wittenberg; 17-0 Miami; 18-17 DePauw; 64-0 Franklin
1906 Coach: Don Robinson (1-0-0): 17-0 Irvington Ath. Club
1907 Coach: John McKay (1-3-2): 0-0 State Normal; 5-16 Rose Hulman; 5-0 Winona; 0-0 Franklin; 6-34 Earlham; 0-22 Hanover
1908 Coach: John McKay (5-0-1): 22-5 Winona; 18-0 Hanover; 31-0 Earlham; 23-0 Franklin; 10-0 Hanover; 6-6 Rose Hulman
1909 Coach: Walter Gipe (5-3-0): 25-0 Winona; 18-0 Franklin; 23-5 Hanover; 6-0 Earlham; 6-12 DePauw; 6-12 Rose Hulman; 0-22 Cincinnati; 12-0 Wabash
1910 Coach: John McKay (4-3-1): 34-0 Georgetown; 5-3 Hanover; 18-0 Moores Hill; 0-48 Wabash; 0-33 Indiana; 3-0 DePauw; 6-17 Earlham; 0-0 Miami
1911 Coach: Dave Allerdice (4-3-1): 19-0 Franklin; 0-0 Transylvania; 9-27 Notre Dame; 22-12 Cincinnati; 6-39 Earlham; 3-0 DePauw; 6-11 Rose Hulman; 45-0 Moores Hill
1912 Coach: G. Cullen Thomas (5-3-0): 54-0 Hanover; 25-0 Franklin; 0-47 Wabash; 13-0 Earlham; 52-14 Moores Hill; 27-0 Transylvania; 3-17 DePauw; 6-13 Rose Hulman
1913 Coach: G. Cullen Thomas (2-4-1): 7-21 Kentucky State; 10-6 Wabash; 14-7 Franklin; 0-0 Earlham; 0-21 Louisville; 0-12 DePauw; 19-20 Rose Hulman
1914 Coach: G. Cullen Thomas (4-2-0): 0-13 Georgetown; 7-6 Earlham; 17-16 Hanover; 0-47 Transylvania; 7-0 DePauw; 6-0 Franklin
1915 Coach: G. Cullen Thomas (1-6-0): 0-33 Kentucky State; 16-20 Franklin; 0-7 Rose Hulman; 7-35 Wabash; 0-39 DePauw; 20-7 Hanover; 19-34 Earlham
1916 Coach: G. Cullen Thomas (3-5-0): 3-39 Kentucky State; 83-0 Meron; 27-0 Earlham; 0-56 Wabash; 7-19 Louisville; 0-20 DePauw; 14-39 Franklin; 13-7 Rose Hulman
1917 Coach: G. Cullen Thomas (3-3-0): 0-33 Kentucky State; 0-34 Millikin; 27-7 Hanover; 6-0 Franklin; 6-0 Earlham; 0-25 Rose Hulman
1918 Coach: G. Cullen Thomas (2-1-1): 32-6 Hanover; 6-2 Franklin; 0-52 Miami; 0-0 Rose Hulman
1919 Coach: Joe Mullane (0-5-1): 0-67 Wabash; 0-76 DePauw; 7-21 Rose Hulman; 0-0 Hanover; 0-6 Earlham; 0-14 Franklin
1920 Coach: Pat Page (7-1-0): 7-20 Wittenberg; 53-7 Hanover; 74-0 Wilmington; 13-7 Earlham; 21-10 Franklin; 35-7 Rose Hulman; 9-0 Chicago YMCA; 39-0 Georgetown
1921 Coach: Pat Page (6-2-0): 19-6 Denison; 70-6 Rose Hulman; 122-0 Hanover; 33-7 Earlham; 0-14 Wabash; 7-14 Chicago YMCA; 3-2 Michigan Aggies; 28-6 Franklin
1922 Coach: Pat Page (8-2-0): 6-0 Wilmington; 14-0 Franklin; 16-0 Chicago YMCA; 10-7 Illinois; 57-0 Earlham; 9-7 Wabash; 19-0 Rose Hulman; 19-0 DePauw; 3-32 Notre Dame; 7-29 Bethany
1923 Coach: Pat Page (7-2-0): 39-0 Hanover; 7-21 Illinois; 2-0 Wabash; 13-0 DePauw; 16-0 Bethany; 13-7 Franklin; 7-34 Notre Dame; 19-13 Haskell; 26-6 Chicago YMCA
1924 Coach: Pat Page (4-5-0): 21-6 Hanover; 10-7 Franklin; 10-40 Illinois; 7-9 Centenary; 12-0 Wabash; 26-7 DePauw; 0-7 Iowa; 0-24 Ohio Wesleyan; 7-20 Haskell
1925 Coach: Pat Page (5-2-2): 28-0 Earlham; 6-6 DePauw; 13-16 Illinois; 23-0 Franklin; 0-0 Wabash; 38-0 Rose Hulman; 7-33 Minnesota; 10-7 Dayton; 9-0 Centenary
1926 Coach: Tony Hinkle (3-6-0): 38-0 Earlham; 70-0 Hanover; 7-38 Illinois; 7-0 Franklin; 10-21 DePauw; 0-18 Lombard; 0-13 Wabash; 0-81 Minnesota; 6-20 Dayton
1927 Coach: George Potsy Clark (4-3-1): 46-12 Muncie Normal; 58-0 VALPARAISO; 0-58 At Illinois; 7-7 FRANKLIN; 25-6 DePauw; 6-19 LOMBARD; 13-6 Wabash; 0-25 At Michigan State
1928 Coach: George Potsy Clark (6-2-0): 0-14 At Northwestern; 55-0 FRANKLIN; 40-0 DANVILLE C. NORMAL; 13 WASHINGTON (MO.)
0; 12-7 BALL STATE; 0-14 ILLINOIS; 24-0 EARLHAM; 26-3 TUFTS
1929 Coach: George Potsy Clark (4-4-0): 13-9 ILLINOIS WESLEYAN; 0-13 At Northwestern; 6-13 HASKELL; 6-13 At New York Univ.#; 13-0 At DePauw; 14-0 WABASH; 0-6 MILLIKIN; 33-13 LOYOLA (LA.)
#Game played at Yankee Stadium.
1930 Coach: Harry M. Bell (2-7-0): 46-0 INDIANA CENTRAL*; 7-12 OHIO; 0-27 Illinois; 0-7 ST. LOUIS; 13-7 Wabash; 0-33 At Loyola (La.); 0-33 Purdue; 0-27 Haskell; 0-25 Marquette
*First night game in the Butler Bowl.
1931 Coach: Harry M. Bell (3-5-0): 6-7 Franklin; 0-40 At Ohio; 34-0 BALL STATE; 61-6 Louisville; 2-26 Dayton; 13-0 WABASH; 0-21 MARQUETTE; 7-32 At Washington (D.C.)
1932 Coach: Fred Mackey (2-4-1): 13-12 Ball State; 7-13 Cincinnati; 7-13 Millikin; 0-34 Wabash; 14-0 Franklin; 0-0 Drake; 0-7 Dayton
1933 Coach: Fred Mackey (2-6-0): 2-16 Franklin; 19-2 Ball State; 6-26 Drake; 24-6 Evansville; 0-12 Wabash; 7-34 Cincinnati; 7-20 Valparaiso; 12-36 Washington
1934 Coach: Fred Mackey (6-1-1): 13-4 Ball State; 25-0 Franklin; 50-0 Danville C. Normal; 12-0 Indiana State; 0-0 Wabash; 7-32 Washington; 6-0 Manchester; 12-7 Valparaiso
1935 Coach: Tony Hinkle (7-1-0): 29-0 Louisville; 12-0 Evansville; 71-7 Hanover; 33-7 Indiana State; 39-0 Valparaiso; 20-0 Wabash; 18-0 Franklin; 7-19 Western State
1936 Coach: Tony Hinkle (6-0-2): 40-0 Evansville; 12-12 Cincinnati; 6-6 Chicago; 26-0 Manchester; 9-7 Wabash; 64-0 Franklin; 41-0 Valparaiso; 13-7 Western State
1937 Coach: Tony Hinkle (5-2-1): 7-33 At Purdue; 13-0 At Cincinnati; 33-0 VALPARAISO; 51-0 EVANSVILLE; 12-0 WASH. & JEFFERSON; 13-0 At DePauw; 0-0 WABASH; 13-14 WESTERN STATE
1938 Coach: Tony Hinkle (4-4-0): 12-6 Ball State; 6-21 Purdue; 0-26 George Washington; 12-0 DePauw; 35-0 Ohio Wesleyan; 27-0 Wabash; 0-13 Western State; 21-27 Washington (Mo.)
1939 Coach: Tony Hinkle (7-0-1): 16-0 BALL STATE; 12-7 OHIO; 34-0 INDIANA STATE; 13-6 GEORGE WASHINGTON; 33-0 At DePauw; 6-6 WASHINGTON (MO.); 55-0 WABASH; 12-0 At Western State
1940 Coach: Tony Hinkle (4-4-1): 27-6 ST. JOSEPH'S; 0-28 At Purdue; 7-7 At Ohio; 6-13 XAVIER; 19-12 At Wabash; 19-27 At Washington (Mo.); 32-6 DePAUW; 25-0 BALL STATE; 7-20 TOLEDO
1941 Coach: Tony Hinkle (5-4-0): 6-13 ST. JOSEPH'S; 7-40 Xavier; 6-14 Western State; 13-6 Ball State; 20-6 DePauw; 7-20 Ohio; 26-0 Wabash; 18-2 Toledo; 40-13 Washington (MO)
1942 Coach: Pop Hedden (2-7-0): 14-21 XAVIER; 0-53 Indiana; 0-67 Illinois; 0-6 Ohio; 0-6 WABASH; 7-13 Western Michigan; 39-0 DePAUW; 12-0 TOLEDO; 0-6 ST. JOSEPH'S
1943-44 (No football)
1945 Coach: Pop Hedden (3-3-0): 7-12 Eastern Illinois; 56-7 Earlham; 32-6 Franklin; 56-0 Manchester; 2-16 Ball State; 0-6 Valparaiso
1946 Coach: Tony Hinkle (7-1-0): 19-12 Eastern Illinois; 13-7 Indiana State; 0-19 Western Michigan; 41-6 DePauw; 20-6 Ball State; 25-7 Wabash; 31-6 St. Joseph's; 25-0 Valparaiso
1947 Coach: Tony Hinkle (5-3-1): 6-6 BALL STATE; 7-14 At Ohio; 21-0 ST. JOSEPH'S; 14-0 At Wabash; 21-20 WESTERN MICHIGAN; 35-0 DePAUW; 0-6 WESTERN RESERVE; 27-6 VALPARAISO; 19-26 At Cincinnati
1948 Coach: Tony Hinkle (3-5-0): 68-7 Indiana Central; 14-13 Evansville; 0-6 Western Reserve; 0-7 Washington (MO); 7-16 Cincinnati; 20-7 Wabash; 7-20 Western Michigan; 6-14 Ohio
1949 Coach: Tony Hinkle (2-6-0): 7-24 Evansville; 14-7 Wabash; 6-28 Western Reserve; 47-14 Indiana State; 0-7 Washington (Mo.); 0-14 ILLINOIS STATE; 6-40 Western Michigan; 0-14 Ohio
1950 Coach: Tony Hinkle (4-4-1): 12-14 At Evansville; 7-7 WABASH; 14-21 OHIO; 33-7 At Ball State; 7-42 MIAMI (OHIO); 25-14 At Western Reserve; 13-34 At Western Michigan; 25-20 WASHINGTON (MO.); 32-0 At Indiana State
1951 Coach: Tony Hinkle (4-4-1): 7-41 VALPARAISO; 7-6 WEST. RESERVE; 26-26 At Wabash; 20-14 BALL STATE; 6-12 At St. Joseph's; 27-12 EVANSVILLE; 0-20 WEST. MICHIGAN; 13-20 At Washington (Mo.); 14-7 At Indiana State
1952 Coach: Tony Hinkle (5-3-1): 25-20 At Evansville; 47-6 NORTH CENTRAL; 25-27 WABASH; 28-6 At Ball State; 33-0 ST. JOSEPH'S; 13-13 INDIANA STATE; 13-14 At Valparaiso; 33-20 WASHINGTON (Mo.); 14-42 At Western Reserve
1953 Coach: Tony Hinkle (6-2-0): 27-0 EVANSVILLE; 24-20 At Wabash; 25-7 BALL STATE; 47-13 At St. Joseph's; 47-12 INDIANA STATE; 32-20 VALPARAISO; 14-27 At Washington (Mo.); 20-21 WESTERN RESERVE
1954 Coach: Tony Hinkle (4-4-1): 21-14 At Evansville; 14-21 WABASH; 13-26 At Ball State; 40-12 ST. JOSEPH'S; 38-26 INDIANA STATE; 7-39 At Valparaiso; 6-25 WASHINGTON (MO.); 13-7 INDIANA CENTRAL; 13-13 At Western Reserve
1955 Coach: Tony Hinkle (3-5-0): 14-45 EVANSVILLE; 26-19 At Indiana State; 20-13 BALL STATE; 13-28 At St. Joseph's; 18-7 DePAUW; 14-24 VALPARAISO; 12-14 At Wabash; 20-41 At Washington (Mo.)
1956 Coach: Tony Hinkle (6-2-0): 34-7 At Evansville; 32-0 INDIANA STATE; 28-12 At Ball State; 6-31 ST. JOSEPH'S; 19-13 At DePauw; 20-6 At Valparaiso; 26-7 WABASH; 20-21 WASHINGTON (MO.)
1957 Coach: Tony Hinkle (7-2-0): 0-13 BRADLEY; 14-6 At Wabash; 13-34 At St. Joseph's; 27-0 INDIANA STATE; 27-0 At Valparaiso; 27-7 BALL STATE; 19-7 At Evansville; 26-13 DePAUW; 41-13 WASHINGTON (MO.)
1958 Coach: Tony Hinkle (8-1-0): 39-19 At Bradley; 40-6 WABASH; 6-0 ST. JOSEPH'S; 31-8 At Indiana State; 34-0 VALPARAISO; 7-14 At Ball State; 28-14 EVANSVILLE; 30-0 At DePauw; 20-12 At Washington (Mo.)
1959 Coach: Tony Hinkle (9-0-0): 27-8 BRADLEY; 28-8 At Wabash; 20-7 At St. Joseph's; 41-6 INDIANA STATE; 10-7 At Valparaiso; 27-0 BALL STATE; 33-14 At Evansville; 21-3 DePAUW; 48-13 WASHINGTON (MO.)
1960 Coach: Tony Hinkle (8-1-0): 18-12 At Bradley; 40-7 WABASH; 6-24 ST. JOSEPH'S; 20-13 At Indiana State; 27-20 VALPARAISO; 27-0 At Ball State; 34-6 EVANSVILLE; 13-6 At DePauw; 33-6 At Washington (Mo.)
1961 Coach: Tony Hinkle (9-0-0): 34-23 BRADLEY; 48-6 BALL STATE; 34-7 At Wabash; 12-6 At DePauw; 27-7 ST. JOSEPH'S; 26-0 At Indiana State; 14-2 VALPARAISO; 30-7 At Evansville; 26-7 WASHINGTON (MO)
1962 Coach: Tony Hinkle (5-2-2): 34-16 At Bradley; 28-28 At Ball State; 14-14 WABASH; 21-18 DePAUW; 0-6 At St. Joseph's; 41-20 INDIANA STATE; 16-14 At Valparaiso; 41-0 EVANSVILLE; 13-26 At Marshall
1963 Coach: Tony Hinkle (8-1-0): 13-31 At Morehead State; 35-27 BRADLEY; 13-0 BALL STATE; 26-21 At Wabash; 14-12 At DePauw; 27-0 ST. JOSEPH'S; 7-6 At Indiana State; 27-12 VALPARAISO; 32-14 At Evansville
1964 Coach: Tony Hinkle (4-4-1): 7-26 MOREHEAD STATE; 21-28 At Bradley; 14-28 At Ball State; 7-7 WABASH; 9-6 DePAUW; 41-2 At St. Joseph's; 7-2 INDIANA STATE; 14-23 At Valparaiso; 48-21 EVANSVILLE
1965 Coach: Tony Hinkle (6-3-0): 41-6 TAYLOR; 27-7 At Indiana State; 21-12 ST. JOSEPH'S; 21-23 At Valparaiso; 42-0 EVANSVILLE; 7-22 At Ball State; 14-8 DePAUW; 7-14 At Akron; 27-20 WEST. KENTUCKY
1966 Coach: Tony Hinkle (4-5-0): 6-34 Northern Illinois; 28-6 Indiana State; 20-7 St. Joseph's; 12-15 Valparaiso; 26-7 Evansville; 14-17 Ball State; 14-7 DePauw; 14-20 Akron; 7-35 Western Kentucky
1967 Coach: Tony Hinkle (2-7-0): 7-24 NO. ILLINOIS; 7-23 At Indiana State; 27-2 ST. JOSEPH'S; 7-21 At Valparaiso; 7-24 EVANSVILLE; 7-65 At Ball State; 20-21 DePAUW; 14-0 At Wabash; 14-36 WEST. KENTUCKY
1968 Coach: Tony Hinkle (2-7-0): 7-32 AKRON; 0-35 At Western Kentucky; 12-28 INDIANA STATE; 49-14 At St. Joseph's; 7-10 VALPARAISO; 7-44 At Evansville; 21-24 BALL STATE; 7-30 At DePauw; 26-9 WABASH
1969 Coach: Tony Hinkle (3-6-0): 0-52 AKRON; 57-0 INDIANA CENTRAL; 7-36 At Ball State; 34-23 At DePAUW; 6-17 At Wabash; 17-20 At St. Joseph's; 31-54 INDIANA STATE; 9-14 At Evansville; 38-20 VALPARAISO
1970 Coach: Bill Sylvester (3-6-1): 0-34 At Akron; 13-26 BALL STATE; 14-6 At DePauw; 21-21 WABASH; 24-26 ST. JOSEPH'S; 0-61 At Indiana State; 18-31 EVANSVILLE; 34-31 At Valparaiso; 0-14 At Western Kentucky; 35-0 INDIANA CENTRAL
1971 Coach: Bill Sylvester (3-7-0): 0-24 At Akron; 0-27 At Ball State; 15-13 DePAUW; 14-0 At Wabash; 6-24 At St. Joseph's; 21-14 INDIANA STATE; 8-21 At Evansville; 12-48 VALPARAISO; 0-31 WEST. KENTUCKY; 12-17 INDIANA CENTRAL
1972 Coach: Bill Sylvester (5-5-0): 7-34 AKRON; 41-50 BALL STATE; 34-7 At DePauw; 55-6 WABASH; 33 ST.
JOSEPH'S 8; 21-49 At Indiana State; 6-10 EVANSVILLE; 17-10 At Valparaiso; 6-35 At Western Kentucky; 8-7 At Indiana Central
1973 Coach: Bill Sylvester (5-5-0): 19-51 At Akron; 14-52 At Ball State; 13-7 At St. Joseph's; 13-7 At Wabash; 12-6 VALPARAISO; 13-41 INDIANA STATE; 36-21 DePAUW; 34-35 EVANSVILLE; 6-48 WEST. KENTUCKY; 21-14 INDIANA CENTRAL
1974 Coach: Bill Sylvester (8-2-0): 21-14 WAYNE STATE; 0-45 BALL STATE; 31-15 At Valparaiso; 22-17 WABASH; 24-20 At DePauw; 29-26 INDIANA CENTRAL; 27-56 At Indiana State; 39-16 At Evansville; 64-26 At St. Joseph's; 35-28 FRANKLIN
1975 Coach: Bill Sylvester (9-1-0): 21-19 At Evansville; 20-12 ROSE HULMAN; 37-8 ST. JOSEPH'S; 44-7 At Indiana Central; 39-8 VALPARAISO; 35-0 At Wabash; 17-21 At Wayne State; 14-7 DePAUW; 51-20 At Franklin; 28-15 ST. NORBERT
1976 Coach: Bill Sylvester (6-4-0): 28-31 EVANSVILLE; 34-28 HILLSDALE; 18-21 At Wittenberg; 29-49 At Valparaiso; 24-6 INDIANA CENTRAL; 18-28 At St. Joseph's; 35-7 FRANKLIN; 23-7 At DePauw; 35-12 WABASH; 28-27 EASTERN ILLINOIS
1977 Coach: Bill Sylvester (5-5-0): 13-45 DAYTON; 7-20 At Hillsdale; 3-14 WITTENBERG; 14-7 VALPARAISO; 11-30 At Indiana Central; 17-7 ST. JOSEPH'S; 7-42 At Franklin; 31-13 At Eastern Illinois; 41-0 DePAUW; 28-20 At Evansville
1978 Coach: Bill Sylvester (5-5-0): 3-42 EASTERN ILLINOIS; 17-14 HILLSDALE; 6-31 At Dayton; 24-20 At Valparaiso; 20-6 INDIANA CENTRAL; 17-21 At St. Joseph's; 13-14 FRANKLIN; 6-20 At St. Norbert; 13-12 At DePauw; 9-7 EVANSVILLE
1979 Coach: Bill Sylvester (5-5-0): 0-38 At Eastern Illinois; 9-10 At Hillsdale; 0-24 DAYTON; 25-24 VALPARAISO; 13-14 At Indiana Central; 23-28 ST. JOSEPH'S; 14-10 At Franklin; 24-7 ST. NORBERT; 24-14 DePAUW; 21-10 At Evansville
1980 Coach: Bill Sylvester (5-5-0): 17-10 HILLSDALE; 0-29 At Dayton; 13-25 At Valparaiso; 21-26 FRANKLIN; 7-14 At Georgetown; 10-0 At Ashland; 31-20 EVANSVILLE; 14-35 WITTENBERG; 7-6 INDIANA CENTRAL; 0-24 At St. Joseph's
1981 Coach: Bill Sylvester (3-7-0): 0-37 At Hillsdale; 0-27 DAYTON; 10-31 At Evansville; 16-0 VALPARAISO; 21-25 At Franklin; 34-6 GEORGETOWN; 10-38 ASHLAND; 42-14 At Wittenberg; 31-33 ST. JOSEPH'S; 14-16 At Indiana Central
1982 Coach: Bill Sylvester (7-3-0): 20-7 WAYNE STATE; 20-14 At Dayton; 7-17 WITTENBERG; 27-3 At Valparaiso; 6-10 FRANKLIN; 39-0 At Georgetown; 6-8 At Ashland; 20-9 EVANSVILLE; 31-16 At St. Joseph's; 14-7 INDIANA CENTRAL
1983 Coach: Bill Sylvester (9-1-1): 19 At Wayne State, 20 DAYTON, 14 At Wittenberg, 41 VALPARAISO, 31 At Franklin, 38 GEORGETOWN, 24 ASHLAND, 21 At Evansville, 24 ST. JOSEPH'S, 26 At Indiana Central (opponent points: 6 3 3 17 14 9 21 7 6 25); NCAA Division II Playoffs: 6-35 At UC-Davis
1984 Coach: Bill Sylvester (6-4-0): 6-7 KENTUCKY STATE; 20-9 WITTENBERG; 0-34 At Dayton; 33-7 GEORGETOWN; 22-17 At St. Joseph's; 16-13 At Evansville; 13-14 ASHLAND; 34-27 At Franklin; 28-27 VALPARAISO; 10-20 At Indiana Central
1985 Coach: Bill Lynch (8-2-0): 24-0 At Kentucky State; 23-24 At Wittenberg; 27-14 DAYTON; 31-18 At Georgetown; 31-3 ST. JOSEPH'S; 21-20 EVANSVILLE; 7-17 At Ashland; 39-10 FRANKLIN; 26-15 At Valparaiso; 18-17 INDIANA CENTRAL
1986 Coach: Bill Lynch (5-5-0): 16-17 At Dayton; 28-30 GRAND VALLEY ST.; 36-0 ANDERSON; 32-22 At St. Joseph's; 28-9 At Evansville; 14-24 ASHLAND; 20-21 At Franklin; 28-6 VALPARAISO; 25-28 At Indianapolis; 31-12 FINDLAY
1987 Coach: Bill Lynch (8-1-1): 19-24 At Grand Valley State; 64-0 At Anderson; 15-10 DAYTON; 27-22 At Northeast Missouri; 14-3 ST. JOSEPH'S; 28-28 EVANSVILLE; 31-6 At Ashland; 49-14 FRANKLIN; 29-22 At Valparaiso; 35-20 INDIANAPOLIS
1988 Coach: Bill Lynch (8-2-1): 29-13 FERRIS STATE; 34-11 NORTHWOOD; 10-55 At Central State; 34-17 At Dayton; 43-20 ST. JOSEPH'S; 35-14 KENTUCKY STATE; 56-0 At Valparaiso; 13-13 At Indianapolis; 17-10 ASHLAND; 27-0 At St. Ambrose; NCAA Division II Playoffs: 6-23 At Tennessee-Martin
1989 Coach: Bill Lynch (7-2-1): 3-28 At Ferris State; 35-21 At Northwood Institute; 18-27 GRAND VALLEY ST.; 23-23 DAYTON; 43-28 At St. Joseph's; 23-19 At Kentucky State; 31-0 VALPARAISO; 31-25 INDIANAPOLIS; 17-6 At Ashland; 56-28 ST. AMBROSE
1990 Coach: Bob Bartolomeo (5-5-1): 9-10 At Northern Michigan; 17-10 ST. JOSEPH'S; 0-35 At Grand Valley State; 10-14 At Dayton; 9-9 INDIANAPOLIS; 16-7 At Wayne State; 37-0 VALPARAISO; 18-27 At Ferris State; 17-3 ASHLAND; 27-16 HILLSDALE; 7-17 At Saginaw Valley State
1991 Coach: Bob Bartolomeo (9-2-0): 28-0 NO. MICHIGAN; 37-10 At St. Joseph's; 33-0 GRAND VALLEY ST.; 22-3 At Indianapolis; 42-7 WAYNE STATE; 22-2 At Valparaiso; 6-7 FERRIS STATE; 14-12 At Ashland; 26-20 At Hillsdale; 13-10 SAGINAW VALLEY; NCAA Division II Playoffs: 16-26 At Pittsburg State
33 0 EVANSVILLE At St. Joseph's 31 49
1992 Coach: Ken LaRose (8-2-0): 14-0 At Northern Michigan; 33-7 ST. JOSEPH'S; 10-21 At Grand Valley State; 28-6 INDIANAPOLIS; 31-6 At Wayne State; 42-13 VALPARAISO; 7-35 At Ferris State; 24-21 ASHLAND; 28-17 HILLSDALE; 37-0 At Saginaw Valley State
1993 Coach: Ken LaRose (4-6-0): 19-20 At Hofstra; 24-21 At Georgetown; 28-3 DRAKE; 7-29 HILLSDALE; 10-0 VALPARAISO; 6-28 At Dayton; 27-28 At San Diego; 14-12 At Evansville; 27-31 UAB; 21-34 At Indianapolis
1997 Coach: Ken LaRose (6-4-0): 10-9 At Howard Payne; 21-26 ROBERT MORRIS; 38-12 At St. Francis (Pa.); 17-3 CLINCH VALLEY; 7-42 At Dayton; 14-24 At San Diego; 17-19 VALPARAISO; 38-35 At Evansville; 14-13 DRAKE; 20-13 ST. JOSEPH'S
1998 Coach: Ken LaRose (4-6): 21-44 ALBION; 17-55 At Morehead State; 13-0 ST. FRANCIS (PA.); 26-24 At Wesley (Del.); 27-31 DAYTON; 29-27 SAN DIEGO; 10-17 At Valparaiso; 7-41 At Drake; 39-0 At Quincy; 33-34 LINDENWOOD (OT)
1999 Coach: Ken LaRose (5-5): 27-20 At Albion (OT); 34-56 MOREHEAD STATE; 21-7 At St. Francis (Pa.); 34-19 WESLEY (DEL.); 7-42 At Dayton; 20-38 VALPARAISO; 14-42 At San Diego; 6-53 DRAKE; 27-12 QUINCY; 27-16 At Lindenwood
2000 Coach: Ken LaRose (2-8): 37-56 ST. FRANCIS (IND.); 43-46 At Morehead State; 41 ST. FRANCIS (PA.)
7 56 At Georgetown (DC) (OT) 57 26 DAYTON 43 23 At Albion 24 7 At Valparaiso 33 37 SAN DIEGO 38 41 At Drake 62 45 At Quincy 12 1994 Coach: Ken LaRose (7-3-0) 0 HOFSTRA 41 42 At St. Xavier 6 31 GEORGETOWN 21 28 At Wis.-Stevens Point 16 28 At Drake 20 14 At Valparaiso 20 31 DAYTON 24 38 SAN DIEGO 21 49 EVANSVILLE 14 14 At UAB 19 2001 Coach: Ken LaRose (5-5) 20 At St. Francis (Ind.) 41 29 MOREHEAD STATE 27 23 DUQUESNE 33 31 ALBION 28 38 VALPARAISO 21 7 At Dayton 45 19 At San Diego 16 39 DRAKE 41 48 QUINCY 27 24 At Southern Utah 49 1995 Coach: Ken LaRose (2-8-0) 17 HOWARD PAYNE 7 3 At Towson State 34 15 At Millikin 27 0 WIS.-STEVENS PT. 37 8 DRAKE 29 42 VALPARAISO 44 13 At Dayton 49 29 THOMAS MORE 37 14 At Evansville 13 16 At San Diego 37 2002 Coach: Kit Cartwright 54 TIFFIN 0 At Florida Internat’l 43 UW-STEVENS PT. 20 At Morehead State 0 At Dayton 23 AUSTIN PEAY 26 SAN DIEGO 48 DRAKE 52 At Valparaiso 28 At Quincy 1996 Coach: Ken LaRose 3 TOWSON STATE 0 At Robert Morris 42 MILLIKIN 6 At Clinch Valley 7 At Drake 29 At Valparaiso 10 DAYTON 34 SAN DIEGO 2003 Coach: Kit Cartwright (2-9) 6 At Tiffin 42 0 At Duquesne 49 27 DAVIDSON 45 7 At UW-Stevens Point 56 7 At Austin Peay 28 16 ST. FRANCIS (IND.) 47 7 At Drake 24 0 DAYTON 40 (3-7-0) 14 38 7 34 51 50 30 3 (4-6) 31 42 29 53 41 40 35 44 22 37 7 25 17 At San Diego VALPARAISO ST. JOSEPH’S 53 21 13 2004 Coach: Kit Cartwright (1-10) 12 At Albion 13 3 TIFFIN 48 7 At Morehead State 15 14 At Davidson 21 21 AUSTIN PEAY 14 7 At St. Francis (Ind.) 35 6 DRAKE 43 10 At Dayton 49 12 SAN DIEGO 41 26 At Valparaiso 31 0 ST. JOSEPH’S 34 2005 Coach: Kit Cartwright (0-11) 23 ALBION* 28 7 At Tiffin 30 13 At Robert Morris 49 21 At Jacksonville 55 10 MOREHEAD STATE 58 7 At San Diego 49 21 VALPARAISO 34 7 ST. JOSEPH’S 23 10 At Drake 56 7 DAYTON 41 28 MISSOURI-ROLLA 45 *--At Univ. of Indianapolis. 
2006 Coach: Jeff Voris (3-8)
10 At Albion 31
30 HANOVER 10
14 ROBERT MORRIS 35
7 JACKSONVILLE 31
23 DAYTON 20
3 At San Diego 56
0 DRAKE 29
32 VALPARAISO 10
20 At Missouri-Rolla 35
7 MOREHEAD STATE 14
10 At Davidson 50

2007 Coach: Jeff Voris (4-7)
42 ALBION 14
44 At Hanover 14
11 ST. JOSEPH’S 8
28 MISSOURI-ROLLA (OT) 21
9 SAN DIEGO 56
19 At Drake 37
37 At Valparaiso 42
3 At Morehead State 24
32 DAVIDSON 44
0 At Dayton 61
16 At Jacksonville 24

2008 Coach: Jeff Voris (6-5)
20 At Albion 6
28 FRANKLIN 31
41 At Missouri S & T 15
21 DRAKE 15
56 At Campbell 7
48 At Valparaiso 21
31 MOREHEAD STATE 21
9 JACKSONVILLE 45
21 DAYTON (OT) 28
17 At San Diego 34
34 At Davidson 46

2009 Coach: Jeff Voris (11-1)
42 ALBION 3
49 At Franklin 19
42 HANOVER 21
28 At Morehead State (OT) 21
25 SAN DIEGO 24
23 VALPARAISO 14
23 At Campbell 16
14 DAVIDSON 7
31 At Dayton 28
7 At Jacksonville 36
20 DRAKE 17
28 CENT. CONN. STATE# 23
#--Gridiron Classic.

All-Time Letterwinners
A
Abel, Chris, 1989, 90 Abplanalp, Pat, 1987, 88 Abts, Henry, 1938, 39, 40 Ackman, Eric, 1992, 93, 94 Adams, Bob, 1963 Adams, Joe, 1952, 53 Adams, Mark, 2000, 01 Adams, Ron, 1960, 62, 63 Adika, Brian, 2006, 07, 08, 09 Agnew, Ralph, 1916 Ahrendts, Dick, 1954, 55 Albea, Mark, 1973, 74, 75 Alcala, Richard, 2004, 05, 06 Alcorn, Chad, 1987, 88, 89 Aldrich, Jeff, 2003, 04 Alenduff, Marty, 1964 Allanson, Mark, 1987, 88, 89 Allegretti, Carl, 1982 Allegretti, Jim, 1991 Allegretti, Paul, 1986 Allen, Edwin, 1928, 29, 30 Anderson, Cory, 2007 Andreadis, Lou, 1994, 95 Andress, Dave, 1972 Annee, Joe, 1985, 86, 87 (C) Annee, Louis, 1957 Anthony, Jim, 1969 Anthony, Leslie, 1903 Arbo, John, Jr., 1998 Armstrong, Scott, 1933, 34 Aronson, Mike, 1968 Atkins, Rick, 1991 Atkins, V.
A., 1986, 88 Aton, Ross, 1995 Attaway, Al, 1968, 69, 70 (C) Austin, Gene, 1988, 89, 90, 91 Austin, Steve, 1992, 93 Avington, Ken, 1956, 57, 58 B Babinec, John, 1971, 73 Badger, Everett, 1912 Bailey, David, 1997, 98, 99 Bailey, Van, 1964, 65, 66, 67 Baird, Ron, 1987, 88, 89, 90 Baker, Bill, 1978, 79, 80, 82 Baker, Charles, 1889, 90, 91, 92 Baker, James, 1954, 55 Baker, Stephen, 1928 Baldwin, Chris, 1993 Barclay, Jim, 1976, 77 Barker, Ted, 1967, 68 Barnard, Scott, 1979, 80, 81 (C) Barnes, Chris, 1987 Barnes, George, 1996 Barney, Doug, 1962, 63 Barnhorst, Jay, 1984, 85, 86 Barr, Richard, 1949 Barron, John, 1986, 87, 88 Barthel, Tim, 1983, 84 Bartolomeo, Bob, 1974, 75, 76 Bastian, Bob, 1920 Baszner, Adam, 2002 Batalis, Nick, 1995, 96, 97, 98 Batrich, Donald, 1945 Batts, Roscoe, 1933, 34, 45 Bauer, Brian, 2000 Bauermeister, Carl, 1929, 30 Bays, Shawn, 1991, 92, 93 Baytala, Mike, 1998 Bearden, Cody, 2007 Beatty, Bart, 1996, 97 Beaverson, Neil, 1973, 75, 76 Beck, Art, 1962, 63, 64 Beck, Jeff, 1992 Beimesche, Andy, 1994 Belcher, William, 1936 Belden, Jim, 1962, 63 Belden, Randy, 1968, 69, 70 (C) Bell, Jim, 1981, 82, 83, 84 Belmonte, Efres, 1978, 79, 80 Benbow, Don, 1959, 60, 61 (C) Benjamin, John, 1955 Bennett, Mike, 2005, 06, 07, 08 Bennett, Paul, 1951, 53 Bennett, Richard, 1947, 48 Berglund, Brent, 1990, 91, 92 Berndt, Dick, 1952, 53, 54 Berninger, Mark, 1996 Bertke, Brian, 1982, 83 Bertuglia, Lennie, 1977, 78 Betz, David, 2000 36 Biddle, Herbert, 1962 Bidstrup, Dick, 1948, 49 Billick, Larry, 1978, 80 Biven, James, 1945 Black, Arthur, 1921, 25 Black, Damon, 1993, 94 Blackaby, Inman, 1935, 36, 37 (C) Blanchford, Vince, 1993, 94 Bland, Eugene (Tiny), 1945 Blanks, Derek, 1982, 83 Blare, George, 1940 Blasenak, Jason, 1995, 96, 97 Blessing, Robert, 1922, 23 Blevins, Rob, 2007, 08 Blum, Norm, 1968 Boa, Andy, 1935, 36, 37 Bohnert, Mark, 1972, 74, 75 Bohrer, Dan, 2006, 07, 08, 09 (C) Bole, Randy, 1972, 74, 75 Bolen, Terry, 1991, 92, 93 Bolger, Grove, 1984, 85, 86 Boltin, 
Charles, 1952, 53, 54 Bonham, Earl, 1916 (C) Booz, Ken, 1930, 31 (C) Bopp, Ed, 1966, 67, 68 (C) Bork, Bill, 1957, 58, 59 Bork, William, 2008, 09 Borrell, Adam, 1993, 94, 95 Bovenkerk, Greg, 1998 Bowers, David, 1995, 96 Bradford, Derek, 2007, 08, 09 Brand, Harry, 1903 Brandt, Ralph, 1930, 32 Branson, DeWayne, 1985, 86 Branyon, Jared, 2006, 07, 08 Braun, Gregg, 2000, 01 Breeden, Landy, 1979, 80, 81, 82 (C) Britt, Kevin, 1979, 81, 82 Brocchi, Mario, 1999, 00, 01 Brock, Bob, 1967, 68, 69 Brock, Ray, 1929, 30 Brocklesby, Ryan, 2001, 02, 03 Broden, Tom, 1942 Broderick, Charles, 1936, 37, 38 (C) Brodine, Jeff, 1964, 65 Brolsma, Kevin, 2003, 04, 05, 07 Brown, Archie, 1916 Brown, Charles, 1936 Brown, John, 1955, 61, 62 Brown, Leonard, 1969, 70, 71 Brown, Mark, 1903 Brown, Pat, 1970 Brown, Paul, 1919, 20, 21 Brown, Phil, 1920 (C), 21, 22 Brown, Robert, 1933, 34, 35 Brown, T. J., 2005, 06, 07 Bruner, Ralph, 1920 Bruton, Kyle, 2001, 02 Buchanan, Bill, 1887 Buchanan, Jerry, 1981, 83 Buchanan, Phil, 2003, 04, 05, 06 Buckner, Lawrence, 2002 Bugg, Bill, 1928 Bunn, Heath, 1994, 95 Bunnell, Kermit, 1932, 33, 34 Bunt, Ron, 1982, 83, 84, 85 (C) Burdette, Cody, 1935, 36 Burgner, Dan, 1964, 65, 66 Burker, John, 1967, 68 Burkett, Kip, 1977, 78 Burkhart, Clarence, 1911, 12 Burks, Jeff, 1992, 93 Burton, Chris, 1980 Burton, Kelly, 1995 Bush, Dave, 1960, 61, 62 Bush, Kyle, 2007 Butcher, Mark, 1975 Butler, Bert, 1960, 61, 62 Butler, Jerry, 1960 Butler, Mike, 1982 Butters, Tom, 1969 C Cahill, Andy, 2002 Caito, Mike, 1969, 70, 71 Caldicott, Nick, 2008, 09 Calvert, Mark, 1977, 78 Cameron, Harry, 1891 Campbell, Don, 1951, 52, 53 Campbell, Justin, 2001, 02, 03 (C), 04 Campbell, Travis, 1995, 96, 97 Canfield, Vincent, 1924 Capizzi, Lou, 2000, 01, 02 (C) Caporale, Egidio, 1958, 59 Caporale, Louis, 1954 Captain, Ron, 1962, 63, 64 Caranddo, Richard, 1966 Carbone, Dean, 1964 Carlberg, Chad, 1996, 97, 98, 99 (C) Carr, Don, 1960 Carson, Andy, 1968, 69 Carter, Ben, 1932 Carter, Glen, 1922 Carwile, John, 
1980, 81, 82, 83 Casselman, Bob, 1977 Catanella, Ken, 1968 Cavosie, John, 1928, 29 Cecil, Carl, 1922, 24, 25 Celarek, Frank, 1939 Celarek, Kevin, 1970 Chakos, Ted, 1981, 82 Chaleff, Boris, 1942 Chandler, Scott, 1953, 54, 55 Chapman, Eric, 1982, 83, 84 Chapman, Jerry, 1974, 75 Chapman, Ralph, 1947, 48, 49 Chappuis, Mark, 1973, 74, 75 Chase, Anton, 2000, 01 Chaulk, Joe, 1974, 75, 77 Chelminiak, John, 1948, 49, 50 Chelovich, Colin, 2001, 02, 03, 04 (C) Cheviron, Mike, 1982, 83, 84, 85 Christner, Phil, 2003, 04, 05 (C) Chrobot, Mike, 1975, 76, 77, 78 Chrobot, Rob, 1984 Cilella, Mike, 1976, 77 Clark, David, 1982, 83 Clark, George, 1887 Clark, Jon, 2003, 04 Clarkson, Nick, 2006, 07, 08 Clarkson, Taylor, 2009 Clausen, Frank, 1903 Clayton, Steve, 1970, 71, 72, 73 (C) Cline, Howard, 1967, 68 Coachys, Jim, 1967 Cochran, Mike, 1987, 88, 89, 90 (C) Coddington, Addison, 1934 Coe, John, 1955 Coffman, Chris, 2000 Cohen, Bennie, 1940 Cole, Ryan, 2003 Collier, George, 1925 Collier, Harrison, 1926, 27 (C) Collins, Matt, 2000, 01, 02 (C) Collins, Rob, 1984, 85, 86 Colway, Gene, 1921 Comotto, Nick, 2007, 08, 09 Compton, John, 1931, 32 Compton, Mel, 1903 Compton, Todd, 1998 Condon, Kyle, 1994, 95, 96, 97 Condon, Louis, 1946 Conley, Bob, 1968 Conn, Scott, 2005, 06, 07 Conner, Dave, 2001 Conner, Kyle, 1997, 99, 00 (C) Connor, Bill, 1936, 37, 38 Connor, Bob, 1937, 38, 39 (C) Connor, Bob, 1978, 79, 80 Conrad, Carson, 1931 Constantino, Silvio, 1937, 38 Cook, Brandon, 2002 Cook, Scott, 1980, 81, 82, 83 Cooks, Jeff, 1994 Cooper, Howard, 1946 Cooper, Jerry, 1978, 79 Cooper, Ron D., 1970 Cooper, Ron L., 1970, 71, 72 Corbett, Mark, 1970, 71, 72, 73 Cornelius, Ed, 1946 Cornelius, George, 1916 Cornelius, Pem, 1946, 49, 50 Cornell, Andrew, 1992 Cosgrove, Walter, 1932 Cosler, Rob, 2008, 09 Costas, Spero, 1934, 35, 36 (C) Cottrell, Andrew, 2009 Coughlin, Kevin, 1971 Cougill, Brad, 1989, 90, 91 Cousino, Case, 2006 Crable, Brian, 2007, 08, 09 Craney, Jason, 1996, 97, 98, 00 (C) Craver, Jim, 1966, 67, 
68 Crawford, Bob, 1964, 65 Crawford, George, 1935, 36 Crawford, John, 1936, 37, 38 Crawford, Stanley, 1938, 39, 40 Crawforth, Tim, 1949, 50 (C) Creach, Bill, 1996 Credille, Kevin, 2009 Crockett, Chuck, 1978, 79, 80, 81 Crosley, Howard, 1928, 29 Cross, James, 1890 Cross, Tom, 1950 Crouch, Dan, 1968, 69 Crumley, Jim, 1950, 51 Cruse, Glen, 1908 Cullum, Carl, 1890, 91 Cunningham, Dan, 1963 Cunningham, Dave, 1974, 75 Cunningham, Eric, 2004, 05 Cunnings, Robert, 1945 Currey, Doug, 1977, 78, 79, 80 Curtis, Richard, 1934, 35 Curto, Stephen, 2003 Cutter, Rob, 1985, 86, 87 (C) D Dainton, Ken, 1971 Daniels, Fred, 1916 Dauch, Andy, 2008, 09 Daugherty, Mike, 1975, 76, 77, 78 Davidson, Frank, 1889, 90 Davidson, Mike, 1982, 83, 84 Davies, Thomas, 1928 Davis, Alex, 2005, 06, 07 Davis, Chester, 1916 Davis, Chris, 2006, 07 Davis, Fred, 1951, 52, 53 (C) Davis, Gordon, 1928 Davis, John, 1952 Davis, Kent, 1999, 00, 01 Day, Bob, 1958, 59, 60 (C) Day, Derek, 2005, 2006, 07, 08 Dayton, Charles, 1931 DeCraene, Don, 1990, 91, 92 DeJaegher, Joe, 1992, 93, 94 Delaney, Dave, 1973 Del Busto, Mike, 1980, 81, 82 DeLuna, Ruben, 1989, 90, 91, 92 (C) Delph, Travis, 2001, 02 Dennison, Chuck, 1964, 65, 68 Depositar, Steven, 2009 Derickson, Kyle, 1999, 00, 01, 02 (C) DeShone, Scott, 1988, 89 DeTrude, Keith, 1973, 74, 75 DeWald, Steve, 1942 Dezelan, Joseph, 1938, 39, 40 (C) Dezelan, Joseph, Jr., 1964, 65 Dick, Andy, 1974 Diggins, Ryan, 2003, 05, 06 Dillon, Husani, 1995, 96, 97 Dimancheff, Boris, 1941, 42 Dinn, George, 1973, 74, 76 Dinwiddie, Will, 1995, 96 Dirksing, Chris, 1990 Disney, James, 1966 Disser, Kelly, 2000, 02, 03 Dixon, Travis, 1992, 93, 94 Dixon, Jim, 1987, 88, 89 Dobkins, Knute, 1942, 46, 47, 48 Doctor, John, 1980, 81, 82, 83 Dodds, Ronald, 1945 Dodson, Roger, 1973, 74, 75 Dold, Leslie, 1942, 46 Dombart, Tadd, 2007, 08, 09 Donovan, Brian, 1994 Donhauser, Dick, 1969 Donner, Chase, 2006, 07, 08 Doss, Bill, 1984, 85, 86 Douglas, Jim, 1957, 58, 59 d’Ouville, Paul, 1987, 88 Dovey, Parmalee, 
1934 Dowd, Joe, 1973, 74, 75 Dowdy, D’Andre, 2005 Downham, Bob, 1963, 64 Doyle, William, 1941 Dugger, Doyle, 1939, 40 Dukes, Scott, 1941 Dullaghan, Dan, 1968, 69 Dullaghan, Dick, 1963, 64, 65 Duncan, Don, 1953 Durig, Chris, 2002, 03, 04, 05 Dury, George, 1984, 85, 86 Duttenhaver, Harry, 1920, 21, 22 (C) Dykhuizen, Joe, 1986 E Eagan, Bernard, 1952 All-Time Letterwinners Eaton, Joe, 1985, 86, 87 Eckerle, Mark, 1969, 70 Eckert, Kevin, 1998 Eddie, Mike, 2005 Egbers, Dan, 1979, 80, 81 Eichholtz, Bob, 1951, 52, 53, 54 (C) Eldridge, Gary, 1988, 89, 90, 91 Ellenberger, Norm, 1951, 52, 53 (C) Ellis, Jeff, 1981 Elser, Earl, 1930, 32 Elson, Brian, 1987, 88 Elson, Dave, 1991, 92, 93 Ely, Jeremy, 1995 Emmert, Dan, 2002, 03 England, George, 1951 Ennis, Willard, 1930, 32 Enrico, Jim, 1976, 77 Enright, Dave, 1963, 64, 65 Enright, Kevin, 1988, 89, 90 Ent, Harry, 1942 Eppard, John, 1980, 81 Epperson, Stan, 1968, 69, 70 Erickson, Carl, 1999, 00, 01, 02 (C) Errett, Zach, 1997, 98 Erschen, Marty, 1993, 94 Ertel, Mike, 1988, 89 Esary, Les, 1947, 48 Espich, Robert, 1985, 86, 87 Ewald, Clarence, 1953 Ewing, DeWayne, 1998, 99, 00, 01 (C) Evans, Jon, 1988 Eynotten, Bob, 1932, 33 F Fagan, Mark, 1981, 82 Fairchild, Larry, 1964, 65, 66 (C) Farmer, Bob, 2003, 04, 05 Farmer, Harry, 1947, 48, 49 Fattore, Len, 1961 Feichter, Harold, 1940 Ferree, John, 1922 Ferrell, John, 1914, 16 Fickert, Steve, 1970, 71, 72 (C) Fields, Randy, 1976, 77, 78 Fields, Tom, 1921 Fike, Edward, 1947, 49 Fillenwarth, Jack, 1985, 86, 87 Fine, Marion, 1945 Finney, Charles, 1996, 97 Fischer, Tom, 1973, 74 Fish, Guy, 1950, 51 (C) Fisher, Fred, 1947, 48, 49, 50 Fitzsimmons, Phillip, 1970, 71 (C) Flanigan, Tim, 1978, 79, 80 Fleck, Leslie, 1916 Fleming, Doug, 1985, 86 Fletcher, Bill, 1996 Fletcher, Francis, 1923, 24, 25 Fletcher, Nick, 1997, 99 Flick, Paul, 1995 Florence, Rich, 1962, 63 Flowers, Chris, 1990 Flowers, Dave, 1957, 58, 59 Flowers, Jim, 1989, 90 Floyd, Walter, 1923 Flynn, Mike, 1979 Fodor, Carl, 1955 Foor, Matthew, 
2009 Ford, Bruce, 1974, 75, 76, 77 Ford, Jamie, 1994, 95, 96, 97 Forsythe, Chester, 1903 Fouty, John, 1950, 51 Frasca, Joe, 2003, 04, 05 Freas, Tom, 1972, 73 Frendenberger, George, 1928, 29 Freeman, Ken, 1960, 61 Freeman, Phil, 1894, 95 Freeman, Vince, 1985, 86 Freuchtenicht, Dick, 1939, 40 Freyn, George, 1953 (C) Friend, Jason, 2004 Fritz, Jacob, 2007, 08, 09 Fromuth, Alan, 1928 Fuhs, Dan, 1978, 79, 80 Fulaytar, Don, 1960 Fulk, Ralph, 1947 Fuller, Tom, 1990, 91, 92 Furnish, Paul, 1953, 56, 57, 58 (C) Furrey, Bryan, 1995, 96 Fus, Mike, 1985, 86 G Gallagher, Dan, 1959, 61 Galloy, Ryan, 1996, 98 Gamblin, Bill, 1954, 55, 56, 59 Gard, Marty, 1999 Garrett, Harvey, 1925 Garvey, Pat, 1967 Garwood, James, 1938, 39, 40 Gates, Damon, 1966, 67 Gatlin, Dan, 1987, 88, 89 Gatto, Joseph, 1945 Gauer, Greg, 1990, 91, 92 Gegner, Mike, 1983, 84 Geiman, Ken, 1942, 46 Geisert, Herman, 1928 (C) Gerlach, Les, 1951, 52, 53, 54 Giacomantonio, Mark, 2008, 09 Gilbert, Larry, 1966, 67, 68 Gillespie, Jim, 1967 Gillum, Joe, 1987, 88 Gillum, Mike, 1989 Gilmore, Donnie, 2007, 08, 09 Gilson, James, 1940, 41, 42 (C) Gilson, John, 1951, 54, 55, 56 Ginn, Bill, 1974, 75, 76, 77 Ginn, Dave, 1981, 82, 83, 84 (C) Glasgow, Joe, 1998 Glunt, Warren, 1928 Goad, Ryan, 1997, 98, 99 Goens, Larry, 1959, 60, 61 Goens, Mike, 1982, 83, 84 Goettler, Logan, 1998 Goettler, Steve, 1997 Goletz, Mike, 1996, 97, 98, 99 (C) Gollner, Bob, 1951 Golomb, Larry, 1965 Gongwer, Ed, 1887 Gonzales, Dax, 1988, 89, 90, 91 Gooch, Thomas, 1945 Good, Charles, 1912 Good, Noel, 1945 Goshert, Rob, 1975, 76, 77 Graham, Alva, 1920, 21 Gray, Rich, 1969, 70 Gray, Robert, 1962 Gray, Scott, 2007, 08, 09 Green, Gary, 1959, 60, 61 Green, Paul, 1934 Greene, Carlton, 1964 Greenwald, Jake, 2005, 06 Greer, Kent, 1990, 91, 92 Greer, Shane, 1992, 93 Greisl, Kevin, 1977 Grenda, Bob, 1971, 72, 73, 74 (C) Gribbins, Kevin, 1991, 93, 94 Grider, Chris, 2006, 07 Griggs, Haldane, 1921, 22, 24 Grimes, Chris, 1999 Grimes, Ricardo, 1976, 77, 78 Grimm, Lee, 1961, 
62, 63 (C) Grissom, Joe, 1951, 57, 58, 59 Griswold, Ron, 1994, 95, 96, 97 (C) Gross, Jim, 1977 Gruber, John, 1972, 73 Guggenberger, Derek, 2006, 07, 08, 09 Guyer, Rick, 1947 H Hacker, Dillon, 1903 Haggard, Gordon, 1928 Hailey, Artis, 2009 Haley, Jr., Joe, 2000, 01, 02, 03 Hall, Archie, 1889 Hall, Bob, 1889, 90 Hall, Richard, 1921 Hall, Steve, 1997, 98, 99 Hall, Tom, 1889, 90, 91 Hallam, Ron, 1950, 51 Halloran, Dan, 1972 Hamilton, Bob, 1942, 46, 47, 48 Hamman, Bruce, 1950, 51 Hanson, Tyler, 2001, 02 Hardee, Craig, 1986, 87, 88 Harding, Thomas, 1937, 38, 39 Harkin, Jeremy, 1995, 96, 97, 98 (C) Harrell, John, 1955, 56, 57 (C) Harrington, Paul, 1976, 77, 78, 79 (C) Harris, Bill, 1973, 74 Harris, Justin, 2001, 02, 03 Harrison, Mike, 1965, 66, 67 (C) Hart, Jim, 2002, 03, 04 (C) Hart, Scott, 1995 Hartley, Mike, 1985, 86, 87 Hartley, Tim, 1908, 09 Hartman, Ray, 1908 Harvey, Stuart, 2008 Haste, Mark, 1985 Hatzikostantis, Alex, 1998, 99, 00 Haugh, Otis, 1903 Hauss, Craig, 1965, 66 Hauser, Craig, 1994, 95 Hauss, Jim, 1936, 37 Hauss, John, 1977, 78 Hayes, James, 1938 Haynes, Chris, 2003, 04, 05 (C) Hazelwood, Nathan, 1999, 00 Heacox, Richard, 1945 Healey, Brian, 1996 Healy, Brendan, 1998, 99, 00, 01 Heard, Sidney, 2003 Hearne, Brian, 1980 Heberet, Fred, 1971 Heckman, Drew, 2002, 04 Hedden, Frank, 1929 Hegwood, Mike, 1984, 85, 86, 87 Hein, Dave, 1972 Heines, Tom, 1955 Heintzman, Rob, 1984 Heise, Jonathan, 2003, 04 (C) Helmerich, Matt, 1998, 99 Helms, Larry, 1959, 60, 61 Helton, Carter, 1922, 23, 24, 25 Hensel, Hiram, 1922, 23, 24, 25 Hess, Ben, 2003, 04 Hess, Dick, 1955 Hess, Jim, 2004, 05, 07 Hiday, Mike, 2005, 06, 07 Hildebrand, Keith, 2005, 06 Hill, Erick, 1990, 91 Hill, Jon, 1991, 92, 93 Hilligrass, Frank, 1916 Hillring, Oscar, 1940, 41 Himmel, Keith, 1970, 71, 72, 73 (C) Hinchman, Hubert, 1928, 29, 30 Hinkle, Don, 1947, 48, 49 Hitch, Ralph, 1924, 25 Hitchcock, Randy, 1979, 80, 81 Hitchcock, Ryan, 2008, 09 Hobson, Rob, 2009 Hockett, Dave, 1962, 63 Hoffman, Jason, 1994, 95 
Hoffman, Mark, 1969, 70, 71 Holliday, Keen, 1894, 95 Hollingsworth, Joe, 1947 Hollstegge, Dan, 1984 Holman, Rob, 1983, 84 Holmes, Larry, 1934, 36 Holok, Al, 1967, 68, 69 Holsclaw, Jim, 1995, 96, 97 Holst, Dick, 1961 Holthaus, Brian, 2003, 04, 05, 06 Holthaus, Matt, 1998, 99 Holthaus, Scott, 2000, 01, 02 Hoover, John, 1986 Horvath, Bill, 1942 Horvath, Steve, 1947 Hosier, Maurice, 1928, 29 (C) Hoskins, Jim, 1982, 83, 84 Hosmer, Jordan, 2006, 07 Housand, Kevin, 2004, 05 Howard, Andy, 1979, 80, 81, 82 Howard, Bill, 1941, 42 Huber, Dan, 1990 Huber, Joe, 1992 Huck, Andrew, 2009 Huddleston, Lew, 1887 Huff, Todd, 1996, 97, 98, 99 Huffman, Harold, 1970, 71 Hughes, Wayne, 1931 Hughett, Bill, 1953 Hummell, Frank, 1889 Humphrey, Pete, 1978, 79, 80, 81 Hungate, Harold, 1920, 21, 22, 23 (C) Hunt, Rick, 1992 Hunter, Grant, 2008, 09 Hunter, Ken, 1980 Hurrle, Bob, 1950, 55 Hurrle, Ott, 1946, 47 (C), 48 Hurrle, Otto, 1972, 73 Hurley, Marshall, 1954 Hutson, Mike, 1988, 90, 91 Hutt, Matt, 1991 Hysong, James, 1966, 67 (C) I Inman, Dave, 2003, 04, 05 (C), 06 Iozzo, Tom, 1966 Isenbarger, Tom, 1972, 73 Isom, Justin, 1999 J Jackson, John, 1960, 61 Jackson, Tim, 1964 Jacobs, Jay, 1952 Jacobs, Toby, 1993, 94, 95 (C) Jarrett, Robert, 1929 Jasinski, Mike, 2004, 05 Jefferson, Bill, 1974 Jenkins, John, 1985, 86, 87, 88 Jennings, Dale, 1999, 2002 Jennings, Floyd, 1950 Jennings, Tom, 1989 Jensen, Phil, 1984, 85, 86 Jeter, Mel, 1962, 63 Johnson, Ben, 1950, 53 Johnson, Brandon, 1998, 99 Johnson, Charles, 1951, 52 (C) Johnson, E. 
Paul, 1972, 73 Johnson, Kevin, 1990, 91 Johnson, Luke, 2007, 08 Johnson, Richard, 1991, 92, 93 Johnson, Spurgeon, 1933 Johnson, Walter, 1930 Johnston, Greg, 1986 Johnston, John, 1965, 66 Joiner, Doug, 1995 Jones, Ben, 2001 Jones, Gary, 1962 Jones, Larry, 1947 Jones, Richard, 1956 Jones, Todd, 1985, 86, 87 Jones, Tom, 1963 Jones, Walter, 1903 K Kalm, Philip, 1945 Kammer, Don, 1941 Kantor, Jerry, 1956 Kappen, Steve, 1985, 86 Kathman, Dave, 1991, 92, 93 Katris, Pete, 1977, 78, 79 Kavanaugh, Jim, 1946 Kavanaugh, Joe, 1946 Kazmierczak, Paul, 1977, 78, 79, 80 Keach, Bob, 1922, 23, 24, 25 Kealing, Marshall, 1931, 32 Keckler, Al, 1958 Keeling, Cliff, 1903 Kehrer, Dick, 1967 Kelleher, Joe, 2003, 04 Keller, Todd, 1976, 77, 78, 79 (C) Keller, Tyler, 2005, 06 Kelly, Don, 1951, 52 Kelly, Joe, 1957, 58 Kelly, Pat, 1993, 94 Kelly, Ryan, 1998, 99, 00 Keltner, Ken, 1962 Kennedy, Mannert, 1953, 54, 56 Kenney, Howard, 1972 Kerbox, Will, 1946 Kerins, Craig, 1999, 00, 01, 02 Kiel, Kip, 1984 Kilgore, David, 1924, 25 Kilgore, Charles, 1932 Kilgore, Marc, 1975 Kimble, Kevin, 1990, 91, 92 (C) Kincaid, Jeff, 2005, 06, 07 King, Pat, 1977 King, Robert, 1945 King, Ron, 1950 Kingsburty, John, 1903 Kiolbassa, Ron, 1988, 89, 90 Kirk, Mike, 1976, 77, 78 Kirk, Pat, 1974, 76, 77 Kirkhoff, Louis, 1912 Kirn, Art, 1969, 70 Kirschner, Arnold, 1968, 69, 70 Kiser, Dwight, 1920, 32 Kiser, William, 1919, 20, 21 (C), 23 Kisselman, Harry, 1967 Kistler, Matt, 1991, 92, 93 Klawitter, Gordon, 1966 Klett, Mike, 1992, 93, 94 Kline, Frank, 1935 Klug, Greg, 1980 Klusman, Tom, 1984, 85, 86, 87 Knapik, Rob, 2002, 03 Knieper, Steve, 1982, 83, 84 Knight, John, 1995, 96 Knock, Reginald, 1931 Knox, James, 1954 Kobli, Matt, 2007, 08 Koblinski, Kyle, 2006 37 All-Time Letterwinners Koch, Jim, 1977, 78 Kocur, Carl, 1988, 89, 90 Kodba, Joe, 1942 Koehnen, Joe, 1987, 88, 89 Koehler, Kurt, 1976, 77 Koenig, Russ, 1975, 76 Kokinda, Jack, 1967 Kokonas, Eric, 2006 Kolkmeyer, Tim, 1979, 80, 81 Kollias, Steve, 1982, 83, 84 Kollins, 
John, 1957, 58, 59 Konold, David, 1921, 22, 24, 25 Koopman, Jordan, 2008, 09 Kosior, Casey, 1977, 78, 90 Koss, Harry, 1933 Koteff, Robert, 2009 Kotulic, Wayne, 1967 Kozlowski, Ron, 1964, 65, 66 Krause, Frank, 1962, 63 Kreag, Bill, 1937, 38, 39 Krebs, Jack, 1961 Kremer, David, Jr., 1995, 96, 97, 98 Kress, Pat, 1967, 68 (C) Kroger, Bob, 1985, 86 Kruse, William, 1941 Kubal, James, 1937, 38 Kuhn, Chris, 1998, 99 Kuntz, Bill, Jr., 1973, 74 Kuntz, Bill, Sr., 1946, 47, 48, 49 (C) Kuntz, Joe, 1989 Kutschke, Jim, 1965 Kuykendall, Bob, 1950 Kuzmic, Gene, 1953 Kwiatkowski, Mark, 2005, 06, 07, 08 Kyvik, Curtis, 1946, 47, 48, 49 L Lafferty, Adam, 2000, 01, 02 Lamar, William, 2009 Lambert, Bob, 1977, 78, 79 Lanahan, Victor, 1937, 38, 39 Landry, Greg, 1974, 75 Landry, Virgil, 1950, 51 Lane, C. D., 1903 Lang, David, 2008, 09 Langston, Ken, 1992 LaRose, Ken, 1976, 77, 78, 79 (C) Larsen, Jeff, 2009 Larson, Todd, 1988, 89, 90, 91 LaVine, Dave, 1942, 46 Lawson, Rob, 2005 Lawyer, John, 1950, 53 Laymon, Clarence, 1933, 34 (C), 35 (C) Laymon, Dan, 1891 League, Vincent, 1966, 67 Leamon, Charles, 1948 Lee, Mike, 1983, 84, 85 (C), 86 (C) Lees, Ed, 1983, 84 Leffler, Jim, 1968, 69 Leffler, K. 
C., 1990 Leffler, Ken, 1963, 64, 65 Lehane, Dan, 1954, 55 Leininger, John, 2000, 01, 02, 03 (C) Leonard, Bob, 2001, 02, 03 (C) Leonard, Dennis, 1972, 73, 74 Leslie, John, 1920, 21, 22 Lewis, Edwin, 1911, 12 (C) Lewis, Joe, 1964 Lewis, Richard, 1967, 69, 70 Ligda, Bob, 1974, 75, 76 Lill, Jim, 1967, 68, 69 Lister, John, 1894 (C), 95 (C) Livingston, Rob, 1993, 94 Livorsi, Mike, 1949, 50 Lockhart, Arthur, 1912 Loeffler, Greg, 1981, 82 Logan, Greg, 1968, 69 Logan, Mike, 1984, 85 Logsdon, Tim, 1975, 76, 77 London, Ralph, 1951, 52, 53, 54 (C) Long, Karl, 1957, 58, 59 Long, Phil, 1959, 60, 61 Loop, C., 1898 Loop, Marvin, 1894, 95, 98 Lopez, Davey, 2004 Lord, Jack, 1962, 63 Lumpkin, Tijuan, 1994 Lupus, Peter, 1951 Lutkewitte, Mike, 2005 Lynch, Bill, 1972, 74, 75, 76 (C) Lynch, Jim, 1962, 63 Lyon, Jim, 1966 Lyons, Greg, 1991, 92 38 Lyster, Kevin, 1996, 97, 98 Lytle, Tim, 1999, 00, 01, 02 M Macek, Joseph, 1935, 37 Macharaschwili, Dave, 1988, 89, 90 Maglish, Joe, 1981, 82, 83 Magnuson, Bob, 1960 Maheras, Tom, 1987, 88 Maki, Wilho, 1928 Mahoney, Leo, 1952, 53, 54, 55 Mallonee, John, 1975, 76 Malone, Zach, 1995 Mangeot, Dan, 1990, 91, 92, 93 Mangin, Gene, 1951, 52, 53, 54 Manka, John, 1950, 51 Mann, Henry, 1889 (C), 90 (C) Mann, Russ, 2000, 01, 02 (C) Mariacher, Greg, 1987, 88, 89 (C) Marienthal, Matt, 2001, 02 Marion, Cecil, 1933 Marmion, Mike, 1951, 52 Marrs, Joel, 1999 Marrs, Lucas, 1995, 96, 97, 98 Martin, Brandon, 2000, 01, 02, 03 (C) Martin, Gary, 1977 Martin, Luther, 1934, 35 Martin, Vic, 1996, 97 Marzotto, Chris, 2004, 05, 06, 07 (C) Marzotto, Mike, 2005, 06, 07 Masarachia, Vincent, 1935, 36, 37 Massey, Jeff, 1987, 89 Masters, Nolan, 1954, 55, 56 Maternowski, Charles, 1947, 48, 49 Mattingly, Dan, 1979, 80, 81 Mattingly, Pete, 2008, 09 Maxey, Bob, 1955, 56 McCalip, Robert, 1941, 42 McCanna, Matt, 2003, 04, 05 McCarthy, William, 1928, 29 McCauley, Ed, 1960 McClafin, Bill, 1920, 21 McClarnon, Kevin, 1972, 73 McClelland, Harry, 1935 McClure, Brian, 2005, 06 McConnell, Kurt, 
1985, 86, 87 McCool, Chris, 1988 McCowan, George, 1973, 74 McCray, Jim, 1979, 80, 81 McCullough, Joe, 1994 McDaniel, Cameron, 1992, 93, 94 (C) McDaniels, Charles, 1935, 36 McDevitt, Kevin, 1973, 74, 75, 76 McDevitt, Mike, 1970, 71, 72 McDowell, Charles, 1938 McElderry, Tim, 1986, 87 McGann, Colin, 2003, 04, 05, 06 McGary, Chris, 1979, 80, 81, 82 McGeorge, Mike, 1978, 79, 80 McGinley, Mike, 1961, 62, 63 McHale, Eddie, 2007, 08, 09 McHugh, Jeke, 1946 McInerney, Will, 2005, 06 McIntire, Jim, 1958 McKay, Bob, 1908 (C), 1909 McKenna, Jack, 2008, 09 McKenzie, Marshall, 1957 McLinn, Jim, 1946 McMahon, Dave, 2002, 03, 04, 06 (C) McManamon, Eugene, 1931 McNerney, Chester, 1933 McSemek, Ray, 1946, 47, 48, 49 Means, Adell, 1993, 94 Mecum, Ralph, 1930 Medford, Andrew, 2005, 06, 07 Meeker, Ray, 1887 (C), 89, 90 Meier, Frank, 1955 Melloh, Nick, 1991 Meloy, Jim, 1980 Melzoni, Rusty, 1985, 86, 87 Mench, Tom, 1972, 73 Menely, Ron, 1989 Mercer, Phil, 1957, 61 Merlina, Dino, 1981, 82, 83, 84 Merrill, William, 1935, 36, 37 Metrick, Mike, 1995 Metzelaars, Charles, 1940 Metzinger, Dave, 1973 Mewborn, Mike, 1985, 86 Meyers, Claude, 1903 Meyers, David, 2005, 06 Michelakis, Mike, 1992, 93, 94 Michielutti, Eric, 1999, 00, 02 Mickens, Arnold, 1994, 95 (C) Middlekauff, Lance, 1961 Middlesworth, Hugh, 1920, 21, 22, 23 Miedema, Ernie, 1951 Mike, David, 1975 Miles, Joey, 1995, 96, 97 Miller, Andy, 1991, 92 Miller, Doug, 1996, 97 Miller, George, 1889, 90, 91 Miller, Harold, 1941, 42 Miller, James, 1945 Miller, Merle, 1925 Miller, Nate, 2004, 05, 06, 07 Miller, Ray, 1930 Mills, Tom, 1984 Minczeski, Matt, 1976, 77 Minnick, Kelly, 1980, 81, 82, 83 Mitchell, James, 1942 Mitchell, Steve, 1976 Mitschelem, Lyle, 1962, 63, 64 (C) Moan, Steve, 1991 Moore, Bill, 1967, 68 Moore, Damon, 1994 Moore, Jeremy, 1996, 97 Moore, John, 1887 Moore, Ken, 1920 Moore, Paul, 1932, 33, 34 Moore, R. 
E., 1887 Moore, Ralph, 1931, 32 Morales, Seth, 1998 Morelli, Mark, 1973, 74, 75 Morgan, John, 1911, 12 Morgan, Mike, 2007 Moriarity, Francis, 1942, 46, 47, 48 (C) Morris, Kevin, 1992, 93, 94 Morrison, Ino, 1887 Moseley, Keith, 1984, 85, 86 Moses, John, 1956, 57, 58 Mossey, Harold, 1939, 40 Mozingo, Ernest, 1931 Muckerheide, Lynn, 1968, 69, 70 Mueller, Derk, 1997, 98 Mulholland, George, 1923, 24, 25 Mullis, Don, 1997, 98, 99 Mullane, Dan, 1911, 12 Mullane, J., 1911 Mullane, Price, 1916 Mulvihill, Tom, 2006, 07 Murphy, Chris, 1995, 96, 98 Murphy, John, 1946, 47, 48, 49 Murphy, Kevin, 1984 Murphy, Mike, 1981, 82, 83 Murphy, Scott, 2004, 05 Murphy, Thorn, 2008, 09 Murray, Gene, 1993 Murzyn, Dale, 2004, 05, 06, 07 Muse, Frank, 1887, 89 Musgrave, Emerson, 1934, 35, 36 Muta, Harry, 1973, 74, 75 Myatt, Gene, 1972, 73 Myers, Steve, 2002, 03 N Nackenhorst, John, 1935, 37 Nardini, Billy, 2003, 04, 05 (C), 06 Nardo, Nicholas, 1954, 55, 56, 57 Natfzger, George, 1928, 29 Naylor, Mickey, 1981, 82, 84, 85 Neeme, Emil, 1942 Nelson, Andy, 2002, 03, 04 (C) Nelson, Ian, 2002, 03, 04 Neumeier, Eric, 2002, 03, 04, 05 Newcomer, Dave, 1980, 81, 82 (C) Newell, Rick, 1973, 74 Ney, Bill, 1957 Nichols, John, 1889, 90, 91 Nicholson, Ken, 1953, 54, 55, 56 Niemeyer, John, 1966, 67, 68 Nipper, Bob, 1922, 23, 24, 25 Nixon, Ken, 1945 Noel, Rob, 2005, 06, 07, 08 Nolan, Dan, 1968, 69, 70 Norkus, Bill, 1952 Norman, Clyde, 1937, 38 Norris, Belmont, 1931 Norris, Elwood, 1940, 41 Norris, Paul, 1972, 73 North, Derek, 2004, 05, 06, 07 Northam, John, 1922, 23, 25 Notestine, Eric, 2005, 06, 07, 08 Novack, Mitch, 2002 Nulf, George, 1928 Nyers, Jim, 1951 O O’Banion, Elmer, 1958, 59, 60 O’Banion, Tim, 1973, 74 Ober, Nick, 1998, 99, 00, 01 (C) Oberheiman, John, 1962 Oberting, Dave, 1961 O’Brien, Tom, 1952 Ochs, Kyle, 1990, 91, 92 (C) O’Connor, Charles, 1935, 37 O’Connor, Ed, 1934, 35, 36 O’Connor, Frank, 2003, 04, 05 Offerle, Mike, 1965, 66 Oilar, Cliff, 1957, 58, 59 O’Leary, Tim, 1974, 75, 76 Olinger, Scott, 
1982, 83, 84 (C) Olinghouse, Dave, 1954 Oliphant, Frank, 1942 Oliver, Dave, 1973, 74, 75 Olszewski, Bob, 2009 Olszta, Rick, 1999, 00 Opatkiewicz, Mark, 1976, 77 Opel, Doug, 1978, 79, 80 Oppenlander, Ben, 1971, 74 Orban, Chuck, 1988, 89, 90 (C) O’Rourke, Jason, 1992, 93, 94 Orphey, Steve, 1966, 67, 68 (C) Osborne, Pat, 1894, 95 P Pachacz, Greg, 2004, 05, 06, 07 (C) Page, Paul, 1984, 85, 86 Paliska, Steve, 2004 Palmer, Jeff, 1983, 84, 85 Parker, Ben, 1995, 96, 97 Parker, Ed, 1894, 95 Parks, Noble, 1980, 81, 82, 83 Paul, Gordon, 1922, 23, 24, 25 Paul, Judson, 1928 Paul, Justus, 1911, 12 Paulson, Craig, 1974, 75 Pavey, Jesse, 1912 Peconge, Mike, 1979, 80, 81, 82 Pedigo, Bob, 1956, 57 Peebles, Julian, 1967 Pence, Tony, 1978, 79, 80, 81 (C) Pendleton, Matt, 2000, 01 Pendleton, Tim, 1993 Perkins, Dave, 1989, 90 Perkins, Harry, 1916 Perozzi, David, 1995, 96 Perrone, Mel, 1941, 42, 46 Perry, Bob, 1966 Perry, George, 1936, 37 Person, Robbie, 2008 Peters, David, 1983, 84, 85 Peterson, Dave, 1947, 48, 49 Phillips, Mark, 1988, 89, 90 Phillips, Wallace, 1903 Pianto, Jerry, 1987, 88, 89, 90 Pierce, Jesse, 1999 Piety, Jeff, 1978 Piko, Jim, 1994, 95, 96 (C) Piko, Joe, 2000, 01, 02 Pinkston, Chad, 1999 Pittman, Joe, 2007 Place, Ashley, 1999, 00, 01 Pollizotto, Sam, 1930 Popa, Ryan, 1994, 95 Poremba, Matt, 2000, 01, 02, 03 Poss, Jeff, 2008, 09 Potter, Wally, 1941, 42, 46 (C) Powell, Ames, 1959 Powell, Fred, 1970, 71, 72 Powell, Zane, 1940, 41 Prather, Brad, 1983, 85 Presecan, Nick, 1937 Pritchard, Brian, 1990 Pryor, Dave, 1971 Puchley, Tom, 1982, 83 Puett, James, 1928, 29, 30 Purichia, Joe, 1964, 65 Purkhiser, Bob, 1938, 39, 40 Puskas, Steve, 1956 Q Quale, Dan, 1974 Quiesser, Tim, 1975, 76 Quigg, Ron, 1964 Quiroz, Jordan, 2005, 06, 07, 08 R Raber, Nelson, 1930, 32 Rabold, John, 1937, 39, 40 Rains, Darrell, 1971 Ramsey, Robert, 1954 All-Time Letterwinners Rashevich, Steve, 1991, 92, 93 Ratliff, Vern, 1960, 61 Ray, Cecil, 1931, 32, 33 (C) Ray, Jack, 1887 Read, Scott, 1975, 76, 77, 78 
Reagan, Kevin, 2000, 01 Reddle, Scott, 2001, 02, 03 (C) Redmon, Bud, 1887 Redmond, Tom, 1970, 71, 72 (C) Reed, Dave, 1969 Reed, Dick, 1967, 68, 69 Reed, Jason, 1997, 98, 99 Reese, Elgin, 1991, 92, 93, 94 Reichel, Louis, 1922, 23, 24, 25 (C) Reiff, Todd, 1985, 86 Reisler, Phil, 1939 Renie, Tim, 1961, 62 Renner, Jack, 1949 Renners, Randy, 1986, 87, 88 Reno, John, 1940 Renschen, Randy, 1996, 97 Reynolds, Cleon, 1928, 29, 30 Ribordy, Mark, 1984, 85, 86 (C) Richmond, Warren, 1967, 68, 69 Riddle, John, 1951, 52 Ridley, Jordan, 2009 Reigle, Charles, 1967, 68 Ringer, Jim, 1957, 58, 59 (C) Roach, Robert, 1980, 81, 82, 83 Robbins, Patrick, 2003, 04, 05 Roberts, Bob, 1939, 40, 41 (C) Roberts, Dick, 1957, 58, 59 Roberts, Don, 1949 Roberts, Steve, 1986, 87, 88, 89 (C) Roberts, Tim, 1989 Robertson, Troy, 1986 Rodick, Don, 1949, 50 Rodick, Joe, 1941 Rodman, Mark, 1977, 78 Roeder, Jim, 1990, 91, 92 Roehling, Todd, 1989, 90 Roembke, Ron, 1986, 87, 88, 89 Rohrabaugh, Tom, 1954, 55 Romanowski, Paul, 1990, 91 (C) Rooney, Pat, 1987, 88, 89 Rose, Brandon, 1995, 96, 97 Rosenstihl, James, 1947 Rosner, Barney, 1964 Ross, Larry, 1954 Rossell, Keith, 1991, 1992, 93, 94 Rossell, Tim, 1991, 92 Rothhaar, Karl, 1975, 76 Rowley, Mike, 1956 Roy, Curt, 1980, 81, 83 (C) Royce, Francis, 1928, 29 Rudd, Donald, 1938, 39 Rudnicky, John, 1941 Rufli, Lewis, 1929, 30 Runyon, Robert, 1948, 49, 50 Rush, Joel, 2000, 01, 02 Rush, Mike, 1978, 79, 80 Russell, Kevin, 1995 Ruth, Greg, 2005 Rykovich, Bob, 1971, 72 Rykovich, Tom, 1969 S Saam, Tim, 1988, 89 Sadler, Steve, 1964, 65, 66 Salvi, Chris, 2008 Sanders, Brian, 1992, 93, 94 Sanders, Naim, 1995, 96, 97, 98 (C) Safford, Bob, 1951, 52 Sales, Andy, 1983 Sales, Tony, 1981, 82, 83 Sayler, Tom, 1964, 65 (C) Schaffer, Jim, 1975, 76 Schankerman, Maurice, 1949, 50 Scheller, Tom, 1984, 85 Schluge, Lee, 1972, 73, 74 Schluge, Phil, 1971, 72, 73 (C) Schmidtz, Ryan, 2005, 06, 07, 08 Schmitz, Harold, 1971 Schofield, Byron, 1935, 37 Schopf, Robert, 1928, 29 Schuesler, John, 
1949, 50 Schuler, Mike, 1991 Schultz, Steve, 1986, 87 Schwanekamp, Brent, 1999, 00, 01 Schwanekamp, Chuck, 1974, 75, 76, 77 Schwecke, Joe, 1977, 78, 79, 80 (C) Schwingendorf, Kyle, 1998 Scifres, Bruce, 1976, 77, 78 Scola, Steve, 1999 Scott, Charles, 1945 Scott, Taylor, 2007 Seal, Mickey, 1960, 61 Sebo, Eric, 1984, 85 Secrist, Ryan, 2007, 08 Seidl, Mike, 2004, 05 Sells, Tom, 1958 Sewards, Bill, 1998 Shafer, Ron, 1957 Shaffer, Andy, 1998, 99 Shaffer, Todd, 1990, 91, 92, 93 Shanteau, Craig, 1977 Sharp, Steve, 1984, 85, 86 Shaw, Aaron, 2006, 07 Shaw, Scot, 1975, 76, 77, 78 Sheehan, Dan, 1955 Sheley, Craig, 2001 Shelton, Derek, 1991 Shepherd, Jim, 1960 Sheridan, Hansel, 1960, 61, 62 Sherwood, Dick, 1947 Shibinski, Mike, 1977, 78, 79, 80 (C) Shirey, Dan, 1987, 88, 89 Shomber, Kevin, 1987, 88, 89 Shook, Larry, 1960, 61, 62 Shultz, Jerry, 1960, 61 Siefert, Mel, 1983, 84 Simpson, Ralph, 1933, 34 Skaggs, Tyler, 2009 Skirchak, John, 1958, 59, 60 Slama, Brett, 1999, 00 Slatter, Anthony, 1998, 99 Sleet, Tom, 1941, 42, 46, 47 Smiley, Ephraim, 1970, 71 (C) Smith, Parker, 2000, 01, 02, 04 (C) Smith, Wayne, 1983, 84, 85 Smock, Ken, 1947, 48 Smothers, Joe, 1967 Snyder, Dan, 1971 Snyder, Leroy, 1945 Sohl, Charles, 1930, 32 (C) Sorrentino, Joe, 1977, 78, 79 Southern, John, 1925 Speron, Roman, 2000, 01, 02 Sporer, Albert, 1937, 38 Spraetz, Ken, 1956, 57, 58 Springer, Bob, 1903 Springer, Dennis, 1989, 90, 91 (C) St. 
Clair, Steve, 1976, 77 Stahl, Jason, 1992, 93, 94 (C) Stahley, Wayne, 1970, 71, 72 Stalcup, Bill, 1937 Staller, Eldon, 1934, 35, 36 Stallings, Ramon, 1992, 93, 94 Staniewicz, Mike, 2007, 08, 09 Stayer, Tom, 1975, 76, 77 Stearns, Jeff, 1973 Steinmetz, Mark, 1965, 66 Stephenson, Mawrie, 1920 Stermont, Winifred, 1911, 12 Stevenson, Tom, 1894, 95 Stewart, Donald, 1940, 41 Stewart, George, 1965 Stewart, James, 1931, 32, 33 Stewart, Kent, 1957, 58, 59 Stewart, Pete, 1962 Stewart, Robert, 1932, 33, 34 Stewart, Weylin, 1990, 91, 92 Stockslager, Walt, 1957, 58, 59 Stoddard, Eli, 1995, 96, 97 (C) Stone, Robert, 1945 Stout, Waldo, 1934, 35, 36 Stoyko, Steve, 1940, 41, 42 Strahl, James, 1928, 29 Stratman, Rick, 1983 Stratton, Bill, 1945 Straub, Robert, 1949 Streiff, Rick, 1980, 81, 82, 83 Strickland, Richard, 1921, 22, 23 Strole, Gerald, 1922, 23, 24, 25 Stropes, Claude, 1940 Stryzinski, Bob, 1959 Stryzinski, Ron, 1981, 82, 83, 84 Stubbs, Dwayne, 1989, 90 Stump, Jeremy, 2000, 01 Sturgeon, Harland, 1950, 51 Sturm, Don, 1958 Sullivan, Joe, 1930 Sullivan, Logan, 2008, 09 Summerlin, Harold, 1911, 12 Summerville, Spencer, 2006, 07, 08, 09 (C) Sutphin, Dave, 1964, 65 Sutphin, Karl, 1934 Swager, Ralph, 1938, 39, 40 Sweet, Eric, 1979 Sweet, Jeff, 1986, 87 Swift, Cliff, 1934, 35, 36 Swihart, Dave, 1972, 73, 74, 75 (C) Sykes, Lamar, 2003, 04 Sylvester, Bill, 1946, 47, 48, 49 Sylvester, Bill, 1981, 82, 83 Sypult, Chuck, 1982, 83 Sypult, Gene, 1950, 53 T Taber, Matt, 2002, 03, 04 Talarico, Sam, 1988, 89 Tanner, Gordon, 1942 Tapscott, Ralph, 1912 Teague, Frank, 1926 Teague, Jeff, 1986, 87 Teare, Ross, 2008, 09 Templeton, Harold, 1930 Tennant, Jace, 2009 Thaung, Gunner, 1925 Thein, Ernie, 1983 Thomas, Cullen, 1908, 09 (C), 11 (C) Thomas, Larry, 2008, 09 Thomas, Ryan, 2002, 03 Thomas, William, 1933, 34 Thompson, Bill, 1966 Thompson, Brad, 1996, 97, 98 Thompson, Ed, 1894, 95, 98 Thompson, Ed, 1977, 78 Thompson, Leroy, 1953, 54, 55, 56 Thompson, Phillip, 1934, 35 Thompson, Terry, 1979, 80, 81, 
82 Thompson, Wes, 1962, 63 Timm, Adam, 1996, 97, 98 Tinder, Ed, 1969, 70 Tiscusan, John, 1939 Toelle, Lowell, 1939, 40, 41 Toner, Chris, 1992, 93, 94 Toner, Dave, 1970 Toon, Herod, 1945 Toran, Derrick, 1986 Torchia, Bill, 1964, 65 Torrence, Steve, 1980, 81, 82, 83 (C) Trabant, Nick, 1998, 99, 00, 01 Tracey, Justin, 2004, 05 Trott, Edward, 1934, 35, 36 Trujillo, Ricky, 2006, 07, 08, 09 Tucker, Albert, 1912 Turley, Ryan, 1994 Turner, B. J., 1996 Tyson, Rick, 2004, 05, 06 (C) U Uhl, Steve, 1992, 93 Ulery, Buck, 2004, 05, 06, 07 Unser, Emil, 1939 Updegraph, Hughes, 1921 Urick, Tom, 1995 V Vandawark, Floyd, 1916 Vandermeer, Mel, 1937, 38, 39 VanDeursen, Matt, 2002, 03 Veith, Grant, 1997, 98, 99, 00 Vermilion, Aaron, 1990, 91, 92 (C) Vermilion, Ryan, 1994, 95, 96 Villani, Mark, 1991, 92, 93 Vlasic, Jerry, 1957, 59 Vonderhaar, Rich, 1971 Vorndran, Phil, 2003, 04, 05 Vosioh, Channing, 1938, 39 Voss, Eric, 1990, 91, 92, 93 W Waggoner, Brandon, 2003 Wagoner, Fred, 1916 Walker, Tim, 1990, 91 Wallace, Brian, 1978, 79, 80 Wallace, Jim, 1967, 68, 69 Wallace, Tom, 1979, 80, 81, 82 Walley, Carter, 2009 Walls, Wayne, 1951, 52 (C) Walsh, John, 1928, 29 Walsman, Bob, 1967, 68 Walsman, Tom, 1970 Walters, Denny, 1965 Walton, Jesse, 1997, 98, 99 (C) Ward, Chuck, 1988, 89, 90 Ward, Kevin, 1996, 97, 98 (C) Warfel, Dan, 1964, 65, 66 (C) Warne, John, 1981, 82, 83 Warrenburg, James, 1947, 49 Watford, Alonzo, 1928, 29 Wathen, Ron, 1955, 56, 57 Watkins, Chris, 2000 Watkins, Zach, 2008, 09 Watson, Marlon, 2000 Webb, Adam, 2003, 04 (C) Webb, Ryan, 2009 Weber, Lou, 1968 Weesner, Ron, 1957 Weger, Jake, 1935, 36, 37 Weger, Ralph, 1932, 33 Weidekamp, Flavian, 1948, 49, 50 Weiss, Mike, 2004 Wells, Charles, 1963, 64 Wells, Rusty, 1981 Welton, Frank, 1936, 37, 38 Wenzler, Morris, 1961, 62, 63 Westerlind, Kyle, 2001, 02, 03, 04 Western, Bryan, 1999 Wetzel, Andy, 1972, 73, 74 Wetzel, Tom, 1977, 78, 79 Wheeler, James, 1937, 38 Wheeler, Sam, 2000 Wheeler, Tom, 1953, 54 Whisner, Phil, 1969, 70 White, Alex, 
2006, 08 White, Robert, 1956, 57, 58 White, Ron, 1990, 91, 92, 93 Whitfield, Dave, 1989 Whitt, Carol, 1971 Wiggins, Dionte, 2008 Willett, Brandon, 1999, 00, 01 Williams, Andy, 1941 Williams, Brandon, 2003 Williams, Jason, 1996, 98, 99, 00 Williams, John, 2001, 02 Williams, Norm, 1941, 42, 46 Williams, Orville, 1946, 47 Williams, P. K., 1987, 88, 89 Williams, Rasheed, 1993 Wilms, Larry, 1968, 69, 70 Wilson, Jesse, 1996, 97, 98, 99 Wilson, Michael, 2009 Wilson, Norm, 1952, 53, 54, 55 Windsor, Nick, 2003 Winings, Nick, 1995, 96, 97 Winnings, Ben, 1947 Winters, Larry, 1992, 93, 94 Wise, Glen, 1912 Wise, Verl, 1912 Witman, Bob, 1973, 74 Witmer, Tim, 1989, 90, 91, 92 Wolfe, Richard, 1929, 30 Wood, A. C., 1917 Woodring, Homer, 1924, 25 Woodring, Bob, 1991 Woods, David, 1940 Woods, Gerald, 1921, 22, 24 (C) Wood, Shawn, 1995, 96, 97, 98 Worth, Willard, 1928, 29 Woznick, John, 1951 Wray, Evan, 2008 Wright, Tom, 1947, 48 Wrona, Al, 1975, 76 Wuest, Joe, 1937 Wukovits, Vic, 1966, 67 (C) Wulle, James, 1934, 35 Wymer, Jack, 1959 X Xander, Peter, 2007, 08, 09 Y Yeoman, Todd, 1986, 87, 88 Young, Andrew, 1988, 89, 90, 91 Z Zavela, Dan, 1939, 40, 41 Zavela, George, 1940, 41 Zentz, Tom, 1966 Zimmer, Zach, 2004, 05, 06, 07 Zimmerman, George, 1931, 34 Zimmerman, Scott, 1997, 98, 99 Zimpleman, Ryan, 1998, 99, 00 (C)

(C)--Captain.

Butler University

The five faculty and 113 students present when Butler University opened in 1855 laid a solid foundation for 155 years of creative change and progress. Today’s more than 4,000 students continue to look ahead while treasuring the traditions unique to Butler. The young school, originally named North Western Christian University, was unusually innovative. It was the first in Indiana, and only the third in the nation, to admit women on an equal basis with men.
With the appointment in 1858 of Catherine Merrill as Demia Butler Professor of English, the institution became the second in the country to appoint a woman faculty member, the first to establish an endowed chair specifically for a female professor and the first to establish a professorship in English literature. The school was also the first in Indiana to allow its students, with parental consent, to choose subjects under a new “elective” system. As Indianapolis grew, the city’s commercial district began to penetrate the heavily-wooded campus at what is now the corner of Thirteenth Street and College Avenue. In 1873, the board of directors decided to sell the downtown campus and accept a gift of 25 acres in Irvington, then a suburb east of Indianapolis. In 1877, North Western Christian University became Butler University, taking the name of its founder, Indianapolis attorney Ovid Butler. Butler moved again 50 years later, as the “Circle City” continued to grow. In 1928, classes were held for the first time in Jordan Hall, an imposing new Gothic structure erected on the beautiful Fairview Park Site, a wooded tract north of the city on the White River and the Inland Waterway Canal. Today’s students come from nearly every state in the nation and from many foreign countries to enroll in degree programs in the College of Liberal Arts and Sciences, or in one of four professional colleges -- Business Administration, Education, Fine Arts or Pharmacy and Health Sciences. Butler is one of only 21 private schools in the country offering a pharmacy program. True to the vision of its founders, the University continues to offer an array of professional and pre-professional programs within the context of a strong commitment to the traditional arts and sciences and to the values of liberal education. Butler continues to welcome highly motivated, intellectually curious men and women, and to prepare them for lives of professional and community service and creative, ethical action. 
Butler is one of the top 20 U.S. colleges for producing business executives, is in the top 10% for preparing future Ph.D.s and is located in the #2 city for college graduates starting a career.

Administrative Support

Barry Collier, Director of Athletics
Tom Crowley, Associate A.D. for Internal Operations
Beth Goetz, Associate A.D. for Administration/S.W.A.
Jim McGrath, Associate A.D. for Communications
Mike Freeman, Associate A.D. for External Operations
Joe Gentry, Director of Corporate Sponsorship
Sonya Hopkins, Assistant A.D./Student Development
Carl Heck, Assistant A.D./Facilities and Events
Stephanie Martin, Athletic Business Manager
Ryan Galloy, Director of Sports Medicine
Lindsay Martin, Manager, Marketing and Promotions
Matt Harris, Manager of Fan Development
Josh Rattray, Assistant Sports Information Director
Kyle Smith, Assistant Director of the Bulldog Club

2010 Bulldogs, front row, left to right: Mark Giacomantonio, Tadd Dombart, Josh Dorfman, Eddie McHale, Tyler Skaggs, Donnie Gilmore, Mike Staniewicz, Rob Hobson, Matt Kobli, Ryan Secrist, Rob Cosler, Logan Sullivan, Andrew Pratt and Scott Gray. Second row: Trae Heeter, Caleb Conway, Larry Thomas, David Thomas, Christian Eble, David Burke, Brandon Grubbe, Andy Dauch, Greg Egan, Matt Foor, Chris Burns, Steven Depositar, Jordan Ridley, Arthur Monaco, Don Stewart, Daniel Wilson, T. J. Lukasik, Ryan Hitchcock and Jordan Koopman.
Third row: Chris Tinkey (Athletic Trainer), Ted Pajakowski (Student Manager), Stephen Blowers (Student Manager), Grant Lewis (Student Assistant Coach), Jim Peal (Strength and Conditioning Coordinator), Assistant Coach Nick Tabacca, Assistant Coach Matt Walker, Assistant Coach Joe Cheshire, Assistant Coach Danny Sear, Head Coach Jeff Voris, Assistant Coach Nick Anderson, Ryan Galloy (Head Athletic Trainer), Assistant Coach Chris Davis, Assistant Coach Rob Noel, Assistant Coach Tim Cooper, Assistant Coach David Kenney, Thorn Murphy (Student Assistant), Lester Burris (Video Coordinator), John Harding (Equipment Manager) and Missy Schultz (Athletic Trainer). Fourth row: Andrew Cottrell, Derek O’Connor, JT Mesch, Joseph Purze, Jack McKenna, Dan Haber, Sean Grady, Bill Bork, Bob Olszewski, Logan Perry, Bobby McDonald, Brendan Shannon, Will Schierholz, William Lamar, Kyle Jachim, and Paul Yanow. Fifth row: Artis Hailey, Matt Hittinger, Jeff Larsen, Zach Watkins, Robert Koteff, Nick Nykaza, Jay Brummel, Jeremy Stephens, Alex Perritt, Joseph Ciancio, David Lang, Michael Wilson, Nick Caldicott, Jace Tennant, John Cannova, Jayme Szafranski and Jimmy Schwabe. Sixth row: Matt Benson, Kevin Cook, Bryce Berry, Tom Judge, Cal Blair, Taylor Clarkson, Carter Walley, Scott Harvey, Andrew Huck, Mike Rose, Paul Sciortino, Nick Schirmann, Ross Teare, Phillip Powell, Charles Perrecone and Stuart Harvey. Top row: Jeff Poss, Ryan Myers, Doug Petty, Matt Storey, Jeff Urch, Grant Hunter, Jay Howard, Pete Mattingly, Charlie Schmelzer, Wade Markley, Mike Wendahl, Brett Thomaston, Nick Atkinson, Ryan Webb, Taylor Harris, Greg Ambrose and Dylan Johnson.

Directions To Butler

FROM NORTHWEST: I-65 south to I-465 east, exit I-465 at U.S. 31 (Meridian Street), follow Meridian Street south to 49th Street, turn right and follow 49th Street to Butler Bowl/Hinkle Fieldhouse (5 blocks)

FROM NORTH: U.S. 31 (Meridian Street) south to 49th Street, turn right and follow to Butler Bowl/Hinkle Fieldhouse (5 blocks)

FROM NORTHEAST: I-69 south (becomes Hwy 37 in Marion County) to 46th Street, turn right and follow 46th Street to Boulevard Place, turn right and follow to Butler Bowl/Hinkle Fieldhouse (3 blocks)

FROM EAST: I-70 west to I-65 north, exit at Meridian Street (U.S. 31) and follow Meridian Street north to 49th Street, turn left at 49th Street and follow to Butler Bowl/Hinkle Fieldhouse (5 blocks)

FROM SOUTH: I-65 north, exit at Meridian Street (U.S. 31) and follow Meridian Street north to 49th Street, turn left at 49th Street and follow to Butler Bowl/Hinkle Fieldhouse (5 blocks)

FROM WEST/SOUTHWEST: I-70 east to I-465 north, exit I-465 at 38th Street, follow 38th Street east to Clarendon Road (first street left after Michigan Road, approximately 7 miles from I-465), turn left at Clarendon Road and follow to Butler campus (7 blocks; Butler Bowl/Hinkle Fieldhouse is located on the northeast corner of the Butler campus)

Bulldogs on the Air

For the sixth straight season, Butler football games will be broadcast on XL 950 (AM). Play-by-play duties for the Bulldogs again will be handled by Brian Giffin, and he’ll again be joined in the booth by former Butler head football coach Ken LaRose. A former collegiate and semi-pro football player, Giffin has covered the Indianapolis Colts, Atlanta Falcons, Tennessee Titans and Tampa Bay Buccaneers for various radio stations and networks across the United States, and he has broadcast high school football in Indiana and Florida. He currently serves as Executive Producer/Engineer for the Atlanta Braves Radio Network. LaRose boasts more than two decades of Butler football experience as a player, assistant coach and head coach. He guided the Bulldogs for ten seasons, 1992-2001, and he was at the helm when Butler made the move from NCAA Division II football to NCAA Division I-AA.
A former Butler football Most Valuable Player, LaRose had the distinction as a coach of leading Butler to league championships in both Division II and Division I-AA. Until the end of the major league baseball season, Butler’s radio play-by-play duties will be handled by Brian Scott, former Butler baseball play-by-play announcer and Sports Director for Radio Brownsburg.

Media Information

WORKING CREDENTIALS: Requests for working press credentials should be directed to the Butler University Sports Information Office in advance of the game.

PHOTOGRAPHERS: Sideline passes are issued upon written request to certified news photographers. Requests should be made at least one week prior to the game. No one is allowed on the field without a sideline pass.

RADIO BROADCASTS: Space for up to four broadcasters is provided in the visiting radio booth at the Butler Bowl. One courtesy phone line is available for the designated “Official” radio station of the visiting team. Please direct any additional or special broadcast requests to the Butler Sports Information Office.

MID-WEEK INTERVIEWS: Every effort will be made to accommodate media requests for interviews during the week. Players will be made available as class schedules permit. Coaches are usually available prior to noon, or immediately following practice. All interview requests should be directed to Butler’s Sports Information Office.

For additional information, please contact:
Jim McGrath, Sports Information Director, Butler University
Office: (317) 940-9414
Fax: (317) 940-9808
jmcgrath@butler.edu