Hud blitting
Rainmaker replied to Rainmaker's topic in Graphics and GPU Programming:
Thanks. I had a feeling that was the case, but I had to ask first. Good thing the original code is still around. Also, the DX font interface runs like molasses. Is there a way to bitmap it or something with a function call to get a speed boost? I suspect it would be wiser to write my own...
Hud blitting
Rainmaker posted a topic in Graphics and GPU Programming:
I am working on a 3d project which uses DX9, and the team leader wants me to do the GUI/HUD rendering via blitting to the back-buffer (instead of in ortho mode). He thinks that should be at least just as fast as ortho mode. Is that true? One of the main issues that I can't seem to figure out is alpha testing or blending. Are either of those possible with blitting? How do most of you do HUDs and GUIs? Thanks.
- Hot damn, 522k+ polygons at 60 frames per second, including culling algorithms. That is all. Thanks ~ Paul Frazee.
- Thanks for the help. I actually knew most of what you guys are telling me, I just needed a few refreshers. With a test load of 130k polygons I get 40 fps without VBOs and 130 with. I ended up instituting a polygon range builder for each state change. Unfortunately, it has to work by polygon because my triangles are unsorted right now. Since my octree doesn't share triangles, I will sort the triangles (just indices) by node, which will make for faster culling. If I am really crafty, I may be able to use it to impose the range restrictions in RangeElements, but that is iffy. Then I will interleave my data, which will set my fetch right at 32 bytes. Silvermace, it rendered 300,000 triangles per second, or 300,000 triangles per hundredth of a second? You put tri/s, but then said 100fpsish, so that is why I ask. ~ Paul Frazee
- ok, thanks for the tips.
- Well if I want to render in chunks, then I am going to have to do something like build a listing of triangles to be rendered every frame to take advantage of culling. I could create a listing of ranges I guess... any better ideas on that? And just in case, is there _any_ way to salvage my indexed vertex mesh? Is that just a bad idea? Should I just go ahead and start crying over the lost time? ~ Paul Frazee
Slow rendering
Rainmaker posted a topic in Graphics and GPU Programming:
Age old problem - rendering is too slow. My system has two limitations on the way polygons are rendered: 1) I use an octree to do frustum culling, and 2) polygons have different rendering states (texture, etc). As a result, I cannot render in chunks - unless, of course, I ordered the triangles by render state within each octree node. I could try that, but I don't think the speed increase would warrant the work. So as it is, I use glArrayElements with vertex pointers to specify the rendering data (GL_TRIANGLES). Even if I remove the render state code and octree code, just rendering all of the polygons, I get 13 FPS rendering ~130k polygons on a Dell Inspiron XPS notebook (3GHz HT, 1GB RAM, ATI Radeon Mobility). If I draw it all at once with glDrawArrays, I get about 55 fps, so I know that it can be done. I tried switching to indexed vertices (I had a question thread on that earlier, but by spatially partitioning the polygons, I got it down to a suitable speed), but I found no speed increase at all, which seems odd to me. The idea is to lower the amount of data transferred to the card (and stored on the card). What am I doing wrong with that? Also keep in mind that I cannot draw in clumps if I use indexed vertices. Also, do VBOs work with glArrayElement? It works fine with glDrawArrays, but my app just kinda hangs if I use glArrayElement. Thanks ~ Paul Frazee
Vertex index generation
Rainmaker posted a topic in Graphics and GPU Programming:
I am writing an algorithm to convert uncompressed triangles (3*trianglecount vertices) to indexed triangles (no duplicate vertices, triangles consist of 3 indices), and need to optimize a bit. My existing algorithm is pretty simple:

for i = 0 to PolyCount-1
    for j = 0 to i-1
        if vertex[i] == vertex[j]
            isaduplicate()
        endif
    endfor
endfor

It works, but it is slow. O(nlogn), right? Any ideas as to a better algorithm? I considered some kind of spatial partition alg... but I can't be sure how much faster that would be. Any better ideas? ~Paul Frazee
Level editor texture coordinates
Rainmaker posted a topic in Graphics and GPU Programming:
I am working on a level editor, and need some help with the texture coordinate transforming. The only level editor I have ever really familiarized myself with is Worldcraft, so a lot of my designs are borrowed from it. It works in modes - first is the primitives mode, in which you create the major level geometry using combinations of cuboids, spheres, and cylinders. Then you go to material editing mode, which combines the primitives with CSG and allows you to set textures and align them. The problem is getting good texture coordinates, regardless of the vertices. Here is a picture of what I am talking about: As you can see, the texture is distorting. Right now, I am using the texture matrix to do it. I have tried different orders, but that screenshot was rotate, translate, and scale. Thanks in advance for any help.
Octree woes
Rainmaker replied to Rainmaker's topic in Graphics and GPU Programming:
Thanks for your help, and sorry I didn't respond - I set this thing to email me on responses, and it never did. I ended up realizing the solution while in bed, heh. Thank you very much for your help.
Octree woes
Rainmaker posted a topic in Graphics and GPU Programming:
I have a little bug in my frustum culling. I am sure this is addressed somewhere, but I can't find it. I made some purty pictures to illustrate my issue: Thoughts?
Part I: Scraping, cleaning, & clustering
Part II: Exploring clusters
This notebook was written using Python 3.5. If you're following along with this notebook, you should have roughly up to date versions of the following libraries (installable via pip, conda, or your package manager of choice):
The data can be obtained by running the script
MA_scraper.py.
It scrapes M-A for a listing of band names alphabetically (as seen here for example). The site uses AJAX to retrieve band names and display them in the page. It does so in chunks of 500 bands. From inspecting the page source it was easy to guess the location and naming of the URLs from which the band names are retrieved. Towards the bottom of the page source, you can find this line of JavaScript which generates the table of band names:
var grid = createGrid("#bandListAlpha", 500, 'browse/ajax-letter/l/A/json/1', ... );
This line uses data retrieved from
browse/ajax-letter/l/{A-Z}/json/1
to generate the grid of links displayed on the page. The link indicates that the data is probably returned in JSON format to the browser when generating the grid. This turns out to be the case and is what is assumed in
MA_scraper.py.
The output data is stored as a CSV file with the name
MA-band-names_YYYY-MM-DD.csv.
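A minimal sketch of how such a scraper might build its request URLs. The endpoint path is taken from the JavaScript quoted above; the iDisplayStart paging parameter is an assumption (it is the usual DataTables convention for offsets) and may differ from what MA_scraper.py actually sends.

```python
import string

def ajax_urls(base="https://www.metal-archives.com/",
              chunk=500, per_letter=1000):
    """Build candidate AJAX URLs for the alphabetical band listing.

    `chunk` is the 500-row page size mentioned above; `per_letter` is an
    assumed upper bound on rows per letter, used here only to size the
    offset loop for illustration.
    """
    urls = []
    for letter in string.ascii_uppercase:
        # The grid widget pages through results chunk-by-chunk, passing
        # the row offset as a query parameter.
        for start in range(0, per_letter, chunk):
            urls.append("%sbrowse/ajax-letter/l/%s/json/1?iDisplayStart=%d"
                        % (base, letter, start))
    return urls

urls = ajax_urls()
print(len(urls), urls[0])
```

Each URL would then be fetched with requests and its JSON payload parsed; in practice the scraper also needs to read the reported total per letter to know when to stop paging.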
import string
import re
import itertools

import requests
from bs4 import BeautifulSoup

import pandas as pd
import numpy as np
np.random.seed(10)

import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm, rgb2hex
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import seaborn as sns
sns.set_style('whitegrid')

from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, DBSCAN

from wordcloud import WordCloud, ImageColorGenerator

import networkx as nx
from circos import CircosPlot
data = pd.read_csv('MA-band-names_2016-04-01.csv', index_col=0) # Change the date to match the date you scraped!
data.head()
We have a little bit of cleaning up to do:
- The NameLink column contains the band name and its respective M-A link. We should pull out the link and band name and put them into their own columns.
- The Status column has some styling information that isn't necessarily useful to us, so if we can pull out just the text that would be better.
- The Genre column contains a string descriptor of the band's genre, which I'm not sure is standardized. Tokenizing these strings will help us in quantifying which terms occur most often.
Here we'll make use of the
map method of the
pd.Series class to achieve what we want. First the
NameLink column. We'll use BeautifulSoup to parse the HTML contained within, then use the results to construct the columns we desire (one column with the band name and another with it's corresponding M-A link).
data['NameSoup'] = data['NameLink'].map(lambda raw_html: BeautifulSoup(raw_html, 'html.parser'))
data['BandName'] = data['NameSoup'].map(lambda soup: soup.text)       # extracts band name
data['BandLink'] = data['NameSoup'].map(lambda soup: soup.a['href'])  # extracts link to band's page on M-A
data['StatusSoup'] = data['Status'].map(lambda raw_html: BeautifulSoup(raw_html, 'html.parser'))
data['Status'] = data['StatusSoup'].map(lambda soup: soup.text)
Now let's check to see that our mappings worked and the columns
BandName,
BandLink, and
Status have the correctly formatted information.
data[['BandName', 'BandLink', 'Status']].head()
Here we'll tokenize the string in the
Genre column to obtain a list of terms. This should help us in identifying all of the keywords used to describe bands in terms of genre. To do this, we'll replace all punctuation with spaces and then split the strings on spaces to get our list of terms.
def replace_punctuation(raw, replacement=' '):
    """Replaces all punctuation in the input string `raw` with a space.
    Returns the new string."""
    regex = re.compile('[%s]' % re.escape(string.punctuation))
    output = regex.sub(replacement, raw)
    return output

def split_terms(raw):
    """Splits the terms in the input string `raw` on all spaces and punctuation.
    Returns a tuple of the terms in the string."""
    replaced = replace_punctuation(raw, replacement=' ')
    output = tuple(replaced.split())
    return output
Now we can split the strings in the
Genre column into separate terms with a simple map. We'll also replace the term "Metal" itself with an empty string.
data['GenreTerms'] = data['Genre'].str.replace('Metal', '').map(split_terms)
Now that we have the genre descriptors tokenized, we'll compile a list of unique terms. This will make it easier to do some quantification later on. To do this we'll first flatten the column
GenreTerms into a list, then use the
np.unique() function to get the unique terms and their corresponding counts.
all_terms = list(itertools.chain.from_iterable(data['GenreTerms']))  # Flatten terms to a single list
unique_terms, counts = np.unique(all_terms, return_counts=True)      # Get unique terms & counts
genre_terms = pd.DataFrame({'Term': unique_terms, 'TotalCount': counts})  # Store in DataFrame for later
genre_terms.sort_values('TotalCount', ascending=False, inplace=True)      # Sort by count
Before we can feed our data to some unsupervised machine learning algorithms, we need to generate a mathematical representation of it.
One simple way to represent our data is with a binary matrix where each column represents the presence of a unique genre term and each row corresponds to a band. For example, a band with the genre descriptor
'Black/Death Metal', would have the tokenized genre terms
('Death', 'Black'), and a binarized vector with a 1 in the columns corresponding to Death and Black, but with zeros in every other column. We can do this with scikit-learn's
MultiLabelBinarizer class. The resulting feature matrix with have the shape
n_bands by
n_unique_terms. We'll use a subset of our data because it may be too large to process in a reasonable amount of time on a single machine.
# Take every Nth band from the scraped data set, use the whole set at your own risk!
subset = data['GenreTerms'][::5]
mlb = MultiLabelBinarizer()
mlb.fit([[x] for x in [term for term in unique_terms]])

# We can use 8-bit integers to save on memory since it is only a binary matrix.
binarized_terms = mlb.transform(subset).astype('int8')
binarized_terms.shape # (n_bands x n_unique_terms)
(21470, 293)
To get an idea of what our data now looks like, we'll make a visual representation of our binarized matrix using matplotlib's
imshow. Each black spot represents the presence of a unique genre descriptor for a given band.
fig, ax = plt.subplots(figsize=(8,8))
ax.imshow(binarized_terms, aspect='auto')
ax.grid('off')
ax.set_ylabel('Band')
ax.set_xlabel('Unique Term')
<matplotlib.text.Text at 0x7fd2acee9a90>
The vertical trends we're seeing indicate genre terms repeated over bands. What this suggests should be obvious: some terms are more common than others. We can verify this by looking at the counts for the top genre terms that we extracted above and stored in
genre_terms. The indices of the top terms should correspond with where they appear on the binary plot. Let's double check this for sanity.
genre_terms.head(10)
popular_term_locations = genre_terms.index[:10]

fig, ax = plt.subplots(figsize=(8,8))
ax.imshow(binarized_terms, aspect='auto', origin='lower')
ax.grid('off')
ax.set_ylabel('Band')
ax.set_xlabel('Unique Term')

for location in popular_term_locations:
    ax.axvline(location, lw=2, alpha=0.1)
The vertical lines we plotted do indeed line up with frequently used genre terms. Sanity checked!
As you can see our feature vector is one big matrix. The first step in almost any data exploration (IMO) should be to create a simple visualization of your data. Since each of our bands is described by a 293-dimension feature vector, some dimensionality reduction is in order.
While not technically an algorithm for dimensionality reduction (it's truly a decomposition algorithm), PCA can be used to find the axes in our feature space that capture the greatest amount of variance within the data. These axes are known as the principal components. Taking only the top components therefore is an effective way of "reducing" the dimensionality of your original data set.
pca = PCA()
components = pca.fit_transform(binarized_terms)
Now we can visualize combinations of the first three principal components to get an idea of any structure in our data set.
fig, ax = plt.subplots(1, 3, figsize=(14,4))
for i, (j, k) in enumerate([(0,1), (1,2), (0,2)]):
    ax[i].scatter(components[:,j], components[:,k], alpha=0.05, lw=0)
    ax[i].set_xlabel('Component %d' % (j+1))
    ax[i].set_ylabel('Component %d' % (k+1))
fig.tight_layout()
There appears to be some structure revealed in the top principal components. Let's use matplotlib's
hexbin to get an idea of the point density, which may not be apparent from the scatter plots.
fig, ax = plt.subplots(figsize=(6,4))
hb = ax.hexbin(components[:,0], components[:,1], cmap=plt.cm.viridis,
               extent=(-1.5, 1, -1.5, 1.5), norm=LogNorm());
ax.set_xlabel('Component 1')
ax.set_ylabel('Component 2')
fig.colorbar(hb)
<matplotlib.colorbar.Colorbar at 0x7fd2ad1a9ef0>
In some regions there are nearly 10,000 overlapping data points! We definitely have distinct clusters on our hands. How can we effectively detect them?
Clustering will allow us to detect groups of similar data points. By clustering the results from our dimensionality reduction, we should be able to meaningfully partition our data into different groups.
$k$-means is a relatively quick, low cost clustering algorithm that requires us to guess the number of clusters ($k$) beforehand (for an overview of how it works, you can check out my example implementation here). However we don't necessarily have a clear estimate for $k$. How many clusters are apparent in the
hexbin plot above? 2? 6? More? The truth is that there is no correct answer.
One way to come up with an estimate is to run $k$-means with different values for $k$ and evaluate how well the clusters fit the shape of the data. We can then use the "hockey stick" method (that's a technical term) and choose the value for $k$ where the most significant bend occurs in the plot. The measure of how well the clusters fit is known as the inertia and reflects the variability within clusters. This isn't the only way to determine the optimal number of clusters, but it's what we'll use here. We'll run this using the first 3 principal components.
distortions = []
for n in np.arange(1, 11):
    kmeans = KMeans(n_clusters=n)
    clusters = kmeans.fit_predict(components[:,[0,1,2]])
    distortions.append(kmeans.inertia_)

fig, ax = plt.subplots(figsize=(4,4))
ax.plot(np.arange(1,11), distortions, marker='o')
ax.set_xlabel('Number of clusters')
ax.set_ylabel('Inertia')
ax.set_ylim(0, ax.get_ylim()[1]);
Depending on what you think hockey sticks look like, $k$-means suggests either 4 or 7 clusters. Let's see what 4 clusters looks like.
n_clusters = 4
kmeans = KMeans(n_clusters=n_clusters)
clusters = kmeans.fit_predict(components[:,[0,1,2]])
fig, ax = plt.subplots(1, 3, figsize=(14,4))
for i, (j, k) in enumerate([(0,1), (1,2), (0,2)]):
    ax[i].scatter(components[:,j], components[:,k], alpha=0.1, lw=0,
                  color=plt.cm.spectral(clusters/n_clusters))
    ax[i].set_xlabel('Component %d' % (j+1))
    ax[i].set_ylabel('Component %d' % (k+1))
fig.tight_layout()
Please feel free to comment on errors, things you don't like and things you would like to see. If I don't get the comments then I can't take it forward, and the question you would like answered is almost certainly causing other people problems too.
If you are new to Ada and do not have an Ada compiler handy then why not try the GNAT Ada compiler. This compiler is based on the well-known GCC C/C++ and Objective-C compiler and provides a high quality Ada-83 and Ada-95 compiler for many platforms. Here is the FTP site; see if there is one for you.
One thing before we continue, most of the operators are similar, but you should notice these differences:
One of the biggest things to stop C/C++ programmers in their tracks is that Ada is case insensitive, so begin BEGIN Begin are all the same. This can be a problem when porting case sensitive C code into Ada.
Another thing to watch for in Ada source is the use of ', the tick. The tick is used to access attributes of an object; for instance, the following code assigns to a the size in bits of an integer.

int a = sizeof(int) * 8;

a : Integer := Integer'Size;

Another use for it is to access the attributes First and Last, so for an integer the range of possible values is Integer'First to Integer'Last. This can also be applied to arrays, so if you are passed an array and don't know its size you can use these attribute values to range over it in a loop (see section 1.1.5). The tick is also used for other Ada constructs as well as attributes, for example character literals, code statements and qualified expressions (1.1.8).
Note that 'objects' are declared in reverse order to C/C++: the object name comes first, then the object type. As in C/C++ you can declare lists of objects by separating them with commas.

int i;
int a, b, c;
int j = 0;
int k, l = 1;

i : Integer;
a, b, c : Integer;
j : Integer := 0;
k, l : Integer := 1;

The first three declarations are the same in both languages: they create the same objects, and the third assigns j the value 0 in both cases. However, the fourth example in C leaves k undefined and creates l with the value 1. In the Ada example it should be clear that both k and l are assigned the value 1.
Another difference is in defining constants.

const int days_per_week = 7;

days_per_week : constant Integer := 7;
days_per_week : constant := 7;

In the Ada example it is possible to define a constant without a type; the compiler then chooses the most appropriate type to represent it.
Ada is a strongly typed language, in fact possibly the strongest. This means that its type model is strict and absolutely stated. In C the use of typedef introduces a new name which can be used as a new type, though the weak typing of C and even C++ (in comparison) means that we have only really introduced a very poor synonym. Consider:
typedef int INT;

INT a;
int b;

a = b; // works, no problem

The compiler knows that they are both ints. Now consider:
type INT is new Integer;

a : INT;
b : Integer;

a := b; -- fails.

The important keyword is new, which really sums up the way Ada is treating that line: it can be read as "a new type INT has been created from the type Integer", whereas the C line may be interpreted as "a new name INT has been introduced as a synonym for int".
This strong typing can be a problem, and so Ada also provides you with a feature for reducing the distance between the new type and its parent, consider
subtype INT is Integer;

a : INT;
b : Integer;

a := b; -- works.

The most important feature of the subtype is to constrain the parent type in some way, for example to place an upper or lower boundary on an integer value (see the section on ranges below).
Ada provides the predefined type Integer, and compilers may add Long_Integer, Short_Integer, Long_Long_Integer etc. as needed. Ada-83 had no unsigned integer types as such, though some compilers provide a package such as System.Unsigned_Types which provides such a set of types.
Ada-95 has added a modular type which specifies the modulus, and also the feature that arithmetic is cyclic: underflow/overflow cannot occur. This means that if you have a modular type capable of holding values from 0 to 255, and its current value is 255, then incrementing it wraps it around to zero. Contrast this with range types (previously used to define unsigned integer types) in section 1.1.5 below. Such a type is defined in the form:

type BYTE is mod 256;
type BYTE is mod 2**8;

The first simply specifies the modulus directly; the second specifies it in a more 'precise' way, and the 2**x form is often used in system programming to specify bit mask types. Note: you are not required to use 2**x, you can use any value, so mod 10**10 is legal also.
The predefined type Character is defined in the package Standard {A.1} as an enumerated type (see section 1.1.5). There is an Ada equivalent of the C set of functions in ctype.h, which is the package Ada.Characters.Handling. Ada also defines a Wide_Character type for handling non-ASCII character sets.

The predefined type Boolean is also defined in Standard as an enumerated type (see below) as (FALSE, TRUE).
The predefined type String is also defined in Standard. There is a good set of Ada packages for string handling, much better defined than the set provided by C, and Ada has a & operator for string concatenation.

As in C the basis for the string is an array of characters, so you can use array slicing (see below) to extract substrings, and define strings of set length. What, unfortunately, you cannot do is use strings as unbounded objects, hence the following.
type A_Record is record
   illegal : String;
   legal   : String(1 .. 20);
end record;

procedure check(legal : in String);

The illegal element is illegal because Ada cannot use 'unconstrained' types in static declarations, so the string must be constrained by a size. Also note that the lower bound of the size must be greater than or equal to 1; the C/C++ array[4], which defines a range 0..3, cannot be used in Ada: 1..4 must be used. One way to specify the size is by initialisation, for example:

Name : String := "Simon";

This is the same as defining Name as a String(1..5) and assigning it the value "Simon" separately.
For parameter types unconstrained types are allowed, similar to passing
int array[] in C.
To overcome the constraint problem for strings Ada has a predefined package
Ada.Strings.Unbounded which implements a variable length string
type.
Floatand compilers may add
Long_Float, etc. A new Float type may be defined in one of two ways:
type FloatingPoint1 is new Float; type FloatingPoint2 is digits 5;The first simply makes a new floating point type, from the standard
Float, with the precision and size of that type, regardless of what it is.
The second line asks the compiler to create a new type, which is a floating point type "of some kind" with a minimum of 5 digits of precision. This is invaluable when doing numeric intensive operations and intend to port the program, you define exactly the type you need, not what you think might do today.
If we go back to the subject of the tick, you can get the number of digits which are actually used by the type by the attribute 'Digits. So having said we want a type with minimum of 5 digits we can verify this:
number_of_digits : Integer := FloatingPoint2'Digits;
Fixed point types are unusual: there is no predefined type 'Fixed', and such a type must be declared in the long form:

type Fixed is delta 0.1 range -1.0 .. 1.0;

This defines a type which ranges from -1.0 to 1.0 with an accuracy of 0.1. Each element (accuracy, low bound and high bound) must be defined as a real number.

There is a specific form of fixed point types (added by Ada-95) called decimal types. These add a digits clause, and the range clause becomes optional.

type Decimal is delta 0.01 digits 10;

This specifies a fixed point type of 10 digits with two decimal places. The number of digits includes the decimal part, and so the maximum range of values becomes -99,999,999.99 .. +99,999,999.99.
The declaration of Boolean as an enumerated type:

type Boolean is (FALSE, TRUE);

should give you a feeling for the power of the type.
You have already seen a range in use (for strings); it is expressed as low .. high and can be one of the most useful ways of expressing interfaces and parameter values, for example:

type Hours is new Integer range 1 .. 12;
type Hours24 is range 0 .. 23;
type Minutes is range 1 .. 60;

There is now no way that a user can pass us an hour outside the range we have specified, even to the extent that if we define a parameter of type Hours24 we cannot assign it a value of type Hours, even though the value can only be in range. Another feature is demonstrated here: for Hours we have said we want to restrict an Integer type to the given range; for the next two we have asked the compiler to choose a type it feels appropriate to hold the given range. This is a nice way to save a little finger tapping, but it should be avoided: Ada provides you a perfect environment to specify precisely what you want, so use it; the first definition leaves nothing to the imagination.
Now we come to the rules on subtypes for ranges; we will define the two Hours types again as follows:

type Hours24 is range 0 .. 23;
subtype Hours is Hours24 range 1 .. 12;

This limits the range even further, and, as you might expect, a subtype cannot extend the range beyond its parent, so range 0 .. 25 would have been illegal.
Now we come to the combining of enumerations and ranges, so that we might have:

type All_Days is (Monday, Tuesday, Wednesday, Thursday,
                  Friday, Saturday, Sunday);
subtype Week_Days is All_Days range Monday .. Friday;
subtype Weekend   is All_Days range Saturday .. Sunday;

We can now take a Day, and see if we want to go to work:

Day : All_Days := Today;

if Day in Week_Days then
   go_to_work;
end if;

Or you could use the form if Day in Monday .. Friday and we would not need the extra types.
Ada provides four useful attributes for enumeration type handling. Note these are used slightly differently than many other attributes, as they are applied to the type, not the object.

- 'Succ: the successor; the 'Succ value of an object containing Monday is Tuesday. If the object contains Sunday then an exception is raised; you cannot 'Succ past the end of the enumeration.
- 'Pred: the predecessor; the 'Pred value of an object containing Tuesday is Monday. Likewise, 'Pred of Monday is an error.
- 'Val: the value at a given position; 'Val(2) is Wednesday, and 'Val(0) is the same as 'First.
- 'Pos: the position of a given value; 'Pos(Wednesday) is 2. Note that All_Days'Last will work, and return Sunday.

All_Days'Succ(Monday) = Tuesday
All_Days'Pred(Tuesday) = Monday
All_Days'Val(0) = Monday
All_Days'First = Monday
All_Days'Val(2) = Wednesday
All_Days'Last = Sunday
All_Days'Succ(All_Days'Pred(Tuesday)) = Tuesday

Ada also provides a set of 4 attributes for range types; these are intimately associated with those above and are:
- 'First: the lower bound; for a range 0 .. 100, 'First is 0.
- 'Last: the upper bound; here 'Last is 100.
- 'Length: the number of values in the range; here 'Length is actually 101, not 100.
- 'Range: shorthand for 'First .. 'Last.
Some examples:

char name[31];
int track[3];
int dbla[3][10];
int init[3] = { 0, 1, 2 };
typedef char name_type[31];

track[2] = 1;
dbla[0][3] = 2;

Name : array (0 .. 30) of Character;
-- OR
Name : String (1 .. 30);

Track : array (0 .. 2) of Integer;

DblA : array (0 .. 2) of array (0 .. 9) of Integer;
-- OR
DblA : array (0 .. 2, 0 .. 9) of Integer;

Init : array (0 .. 2) of Integer := (0, 1, 2);

type Name_Type is array (0 .. 30) of Character;

track(2) := 1;
dbla(0,3) := 2;

-- Note: try this in C.
a, b : Name_Type;
a := b; -- will copy all elements of b into a.

Simple isn't it? You can convert C arrays into Ada arrays very easily. What you don't get is all the things you can do with Ada arrays that you can't do in C/C++.
Example : array (-10 .. 10) of Integer;
Note that an array can be indexed by any discrete range; the full form of the declaration is array (type range low .. high), which would make Example above array (Integer range -10 .. 10). Now you can see where we're going: take an enumerated type, All_Days, and you can define an array:

Hours_Worked : array (All_Days range Monday .. Friday);
type Vector is array (Integer range <>) of Float;

procedure sort_vector(sort_this : in out Vector);

Illegal_Variable : Vector;
Legal_Variable   : Vector(1 .. 5);

subtype SmallVector is Vector(0 .. 1);
Another_Legal : SmallVector;

This allows us great flexibility to define functions and procedures which work on arrays regardless of their size, so a call to sort_vector could take the Legal_Variable object or an object of type SmallVector, etc. Note that a variable of type SmallVector is constrained and so can be legally created, while Illegal_Variable, of the unconstrained type Vector, cannot.
Example : array (1 .. 10) of Integer;

for i in Example'First .. Example'Last loop
for i in Example'Range loop

Note that if you have a multi-dimensional array then the above notation implies that the returned values are for the first dimension; for other dimensions use the form Array_Name'Attribute(dimension), for example DblA'First(2).
Init : array (0 .. 3) of Integer := (0 .. 3 => 1);
Init : array (0 .. 3) of Integer := (0 => 1, others => 0);

The keyword others sets any elements not explicitly handled.
Large : array (1 .. 100) of Integer;
Small : array (0 .. 3) of Integer;

-- extract a section from one array into another.
Small(0 .. 3) := Large(10 .. 13);

-- swap top and bottom halves of an array.
Large := Large(51 .. 100) & Large(1 .. 50);

Note: both sides of the assignment must be of the same type, that is the same dimensions with each element the same. The following is illegal.

-- extract a section from one array into another.
Small(0 .. 3) := Large(10 .. 33);
--              ^^^^^^^^^^^^^^^ range too big.
struct _device {
    int major_number;
    int minor_number;
    char name[20];
};
typedef struct _device Device;

type struct_device is record
   major_number : Integer;
   minor_number : Integer;
   name         : String(1 .. 19);
end record;

type Device is new struct_device;

As you can see, the main difference is that the name we declare for the initial record is a type, and can be used from that point on. In C all we have declared is a structure name; we then require the additional step of typedef-ing to add a new type name.

Ada uses the same element reference syntax as C, so to access the minor_number element of an object lp1 of type Device we write lp1.minor_number.
Ada does allow, like C, the initialisation of record members at declaration.
In the code below we introduce a feature of Ada, the ability to name
the elements we are going to initialise. This is useful for clarity of code,
but more importantly it allows us to only initialise the bits we want.
Device lp1 = {1, 2, "lp1"};

lp1 : Device := (1, 2, "lp1");
lp2 : Device := (major_number => 1,
                 minor_number => 3,
                 name => "lp2");
tmp : Device := (major_number => 255,
                 name => "tmp");

When initialising a record we use an aggregate, a construct which groups together the members. This facility (unlike aggregates in C) can also be used to assign members at other times as well.
tmp : Device;
-- some processing
tmp := (major_number => 255, name => "tmp");

This syntax can be used anywhere where parameters are passed: initialisation (as above), function/procedure calls, variants and discriminants, and generics. The code above is most useful if we have a default value for minor_number, so that the fact we left it out won't matter. This is possible in Ada.
This facility improves readability and, as most Ada programmers believe, maintainability.
type struct_device is record
   major_number : Integer := 0;
   minor_number : Integer := 0;
   name         : String(1 .. 19) := (others => ' ');
   -- note: a default for a String component must be exactly
   -- the right length, hence the aggregate of spaces.
end record;

Structures/records like this are simple, and there isn't much more to say. The more interesting problem for Ada is modelling C unions (see section 1.1.10).
Ada access types are safer, and in some ways easier to use and understand, but they do mean that a lot of C code which uses pointers heavily will have to be reworked to use some other means.
The most common use of access types is in dynamic programming, for example in linked lists.
struct _device_event {
    int major_number;
    int minor_number;
    int event_ident;
    struct _device_event* next;
};

type Device_Event;
type Device_Event_Access is access Device_Event;

type Device_Event is record
   major_number : Integer := 0;
   minor_number : Integer := 0;
   event_ident  : Integer := 0;
   next         : Device_Event_Access := null;
   -- Note: the assignment to null is not required; Ada
   -- automatically initialises access types to null if
   -- no other value is specified.
end record;

The Ada code may look long-winded, but it is also more expressive: the access type is declared before the record, so a real type can be used for the declaration of the element next. Note: we have to forward declare the record before we can declare the access type. Is this extra line worth all the moans we hear from the C/C++ community that Ada is overly verbose?
When it comes to dynamically allocating a new structure the Ada allocator syntax is much closer to C++ than to C.
Event_1 := new Device_Event; Event_1.next := new Device_Event'(1, 2, EV_Paper_Low, null);There are three things of note in the example above. Firstly the syntax, we can say directly that we want a new thing, none of this malloc rubbish. Secondly that there is no difference in syntax between access of elements of a statically allocated record and a dynamically allocated one. We use the
record.element syntax for both. Lastly, we can initialise the values as we create the object; the tick is used again, not as an attribute but with parentheses, in order to form a qualified expression.
Ada allows you to assign between access types, and as you would expect it only changes what the access type points to, not the contents of what it points to. One thing to note again: Ada allows you to assign one structure to another if they are of the same type, and so a syntax is required to assign the contents of an access type. It is easier to read than write, so:
dev1, dev2 : Device_Event; pdv1, pdv2 : Device_Event_Access; dev1 := dev2; -- all elements copied. pdv1 := pdv2; -- pdv1 now points to contents of pdv2. pdv1.all := pdv2.all; -- !!What you may have noticed is that we have not discussed the operator to free the memory we have allocated, the equivalent of C's free() or C++'s delete.
There is a good reason for this, Ada does not have one.
To digress for a while: Ada was designed as a language to support garbage collection, that is, the runtime would manage deallocation of no-longer-required dynamic memory. However, at that time garbage collection was slow, required a large overhead in tracking dynamic memory, and tended to make programs erratic in performance, slowing as the garbage collector kicks in. The language specification therefore states {13.11} "An implementation need not support garbage collection ...". This means that you must, as in C++, manage your own memory deallocation.
Ada requires you to use the generic procedure
Unchecked_Deallocation
(see 1.3.4) to deallocate a dynamic object. This procedure
must be instantiated for each dynamic type and should not (ideally) be declared
in a public package spec, i.e. provide the client with a deallocation procedure
which uses
Unchecked_Deallocation internally.
type Thing is new Integer;

an_Integer : Integer;
a_Thing    : Thing;

an_Integer := a_Thing;          -- illegal
an_Integer := Integer(a_Thing);

This can only be done between similar types; the compiler will not allow such coercion between very different types. For this you need the generic function
Unchecked_Conversion (see 1.3.4), which takes one type as an argument and returns another. The only constraint on this is that they must be the same size.
In C/C++ there is the most formidable syntax for defining pointers to functions and so the Ada syntax should come as a nice surprise:
typedef int (*callback_func)(int param1, int param2); type Callback_Func is access function(param_1 : in Integer; param_2 : in Integer) return Integer;
type Event_Item is record
   Event_ID   : Integer;
   Event_Info : String(1 .. 80);
end record;

type Event_Array is array (Integer range <>) of Event_Item;

type Event_Log(Max_Size : Integer) is record
   Log_Opened : Date_Type;
   Events     : Event_Array(1 .. Max_Size);
end record;

First we declare a type to hold our event information, and an unconstrained array type to hold lists of such events (a record component may not be of an anonymous array type). We then declare a type which is a log of such events; this log has a maximum size, and rather than the C answer (define an array large enough for the largest case ever, or resort to dynamic allocation), the Ada approach is to give the record a discriminant and use it, at the point of declaration, to size the array.
My_Event_Log : Event_Log(1000);If it is known that nearly all event logs are going to be a thousand items in size, then you could make that a default value, so that the following code is identical to that above.
type Event_Log(Max_Size : Integer := 1000) is record
   Log_Opened : Date_Type;
   Events     : Event_Array(1 .. Max_Size);
end record;

My_Event_Log : Event_Log;

Again this is another way in which Ada helps, when defining an interface, to state precisely what we want to provide.
Ada variant records allow you to define a record which has two or more blocks of data, of which only one is visible at any time. The visibility of the block is determined by a discriminant which is then 'cased'.
type Transport_Type is (Sports, Family, Van);

type Car(Kind : Transport_Type) is record
   Registration_Date : Date_Type;
   Colour            : Colour_Type;
   case Kind is
      when Sports =>
         Soft_Top : Boolean;
      when Family =>
         Number_Seats : Integer;
         Rear_Belts   : Boolean;
      when Van =>
         Cargo_Capacity : Integer;
   end case;
end record;

(Note the discriminant is called Kind rather than Type: type is a reserved word in Ada.) So if you code

My_Car : Car(Family);

then you can ask for the number of seats in the car, and whether the car has seat belts in the rear, but you cannot ask if it is a soft top, or what its cargo capacity is.

I guess you've seen the difference between this and C unions. In a C union representation of the above, any block is visible regardless of what type of car it is: you can easily ask for the cargo capacity of a sports car, and C will use the bit pattern of the boolean to provide you with the cargo capacity. Not good.

To simplify things you can subtype the variant record with types which fix the variant (note in the example the use of the designator for clarity).

subtype Sports_Car is Car(Sports);
subtype Family_Car is Car(Kind => Family);
subtype Small_Van  is Car(Kind => Van);
Unlike C++, where an exception is identified by its type, in Ada exceptions are uniquely identified by name. To define an exception for use, simply write:
parameter_out_of_range : exception;

These look and feel like constants: you cannot assign to them; you can only raise an exception and handle an exception.
Exceptions can be argued to be a vital part of the safety of Ada code, they cannot easily be ignored, and can halt a system quickly if something goes wrong, far faster than a returned error code which in most cases is completely ignored.
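Raising such an exception is then a single statement; a minimal sketch (Value and Max_Value are hypothetical):

```ada
if Value > Max_Value then
   raise parameter_out_of_range;
end if;
```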
type BYTE is range 0 .. 255;
for BYTE'Size use 8;

This first example shows the most common form of system representation clause, the size attribute. We have asked the compiler to give us a range from 0 to 255, and the compiler is at liberty to provide the best type available to hold the representation. We then force this type to be 8 bits in size.
type DEV_Activity is (READING, WRITING, IDLE);
for DEV_Activity use (READING => 1, WRITING => 2, IDLE => 3);

Again this is useful for systems programming: it gives us the safety of enumeration range checking, so we can only put a correct value into a variable, but it allows us to define what the underlying values are if they are being used in a call which expects specific values.
type DEV_Available is new BYTE;
Avail_Flag : DEV_Available;
for Avail_Flag use at 16#00000340#;

An address clause is applied per object, not per type: the example above places the object Avail_Flag at memory address 340 (hex). Ada-95 also provides an attribute form of the same clause:

for Avail_Flag'Address use 16#00000340#;

Note that the address uses Ada's version of the C 0x340 notation; however, the general form is
base#number# where the base can be anything, including 2, so bit masks are very easy to define, for example:
Is_Available : constant BYTE := 2#1000_0000#; Not_Available: constant BYTE := 2#0000_0000#;Another feature of Ada is that any underscores in numeric constants are ignored, so you can break apart large numbers for readability.
type DEV_Status is range 0 .. 15;

type DeviceDetails is record
   status  : DEV_Activity;
   rd_stat : DEV_Status;
   wr_stat : DEV_Status;
end record;

for DeviceDetails use record at mod 2;
   status  at 0 range 0 .. 7;
   rd_stat at 1 range 0 .. 3;
   wr_stat at 1 range 4 .. 7;
end record;

This last example is the most complex. It defines a simple range type and a structure, and then tells the compiler two things. First, the mod clause sets the alignment for the structure, in this case on two-byte boundaries. The second part defines exactly the memory image of the record and where each element occurs: the number after the 'at' is the byte offset, and the range, or size, is specified in number of bits.
From this you can see that the whole structure is stored in two bytes: the first byte is stored as expected, but the second and third elements of the record share the second byte, low nibble and high nibble.
This form becomes very important a little later on.
Firstly we must look at the two ways unions are identified. Unions are used to represent the data in memory in more than one way, the programmer must know which way is relevant at any point in time. This variant identifier can be inside the union or outside, for example:
struct _device_input {
    int device_id;
    union {
        type_1_data from_type_1;
        type_2_data from_type_2;
    } device_data;
};
void get_data_func(struct _device_input* from_device);

union device_data {
    type_1_data from_type_1;
    type_2_data from_type_2;
};
void get_data_func(int *device_id, union device_data* from_device);

In the first example all the data required is in the structure: we call the function and get back a structure which holds the union and the identifier denoting which element of the union is active. In the second example only the union is returned and the identifier is separate.
The next step is to decide whether, when converting such code to Ada, you wish to maintain simply the concept of the union, or whether you are required to maintain the memory layout also. Note: the second choice is usually only if your Ada code is to pass such a structure to a C program or get one from it.
If you are simply retaining the concept of the union then you would not use the second form; use the first form with a variant record.
type Device_ID is new Integer;

type Device_Input(From_Device : Device_ID) is record
   case From_Device is
      when 1 =>
         From_Type_1 : Type_1_Data;
      when 2 =>
         From_Type_2 : Type_2_Data;
      when others =>
         null;   -- a variant part must cover all values of the discriminant
   end case;
end record;

The above code is conceptually the same as the first piece of C code; however, its layout will probably look very different. You could use the following representation clause to make it look like the C code (type sizes are not important).
for Device_Input use record From_Device at 0 range 0 .. 15; From_Type_1 at 2 range 0 .. 15; From_Type_2 at 2 range 0 .. 31; end record;You should be able to pass this to and from C code now. You could use a representation clause for the second C case above, but unless you really must pass it to some C code then re-code it as a variant record.
We can also use the abilities of
Unchecked_Conversion to convert
between different types (see 1.3.4). This allows us to
write the following:
type Type_1_Data is record Data_1 : Integer; end record; type Type_2_Data is record Data_1 : Integer; end record; function Type_1_to_2 is new Unchecked_Conversion (Source => Type_1_data, Target => Type_2_Data);This means that we can read/write items of type
Type_1_Data and when we need to represent the data as
Type_2_Data we can simply write
Type_1_Object : Type_1_Data := ReadData; : Type_2_Object : Type_2_Data := Type_1_to_2(Type_1_Object);
Note: All Ada statements can be qualified by a name; this will be discussed further in the section on Ada looping constructs, however it can be used anywhere to improve readability, for example:
A_Block:
begin
   Init_Code:
   begin
      Some_Code;
   end Init_Code;

   Main_Loop:
   loop
      if Some_Value then
         exit Main_Loop;
      end if;
   end loop Main_Loop;

   Term_Code:
   begin
      Some_Code;
   end Term_Code;
end A_Block;
{ declarations statements } declare declarations begin statement end;
Note: Ada does not require brackets around the expressions used in if, case or loop statements.
if (expression) { statement } else { statement } if expression then statement elsif expression then statement else statement end if;
switch (expression) { case value: statement default: statement }

case expression is
   when value => statement
   when others => statement
end case;

There is a point worth noting here. In C the end of a case's statement block is a break statement; without one we drop through into the next case. In Ada this cannot happen: each branch ends where the next when begins.
This leads to a slight problem, it is not uncommon to find a switch statement in C which looks like this:
switch (integer_value) {
case 1:
case 2:
case 3:
case 4:
    value_ok = 1;
    break;
case 5:
case 6:
case 7:
    break;
}

The Ada equivalent uses ranges (see 1.1.5) to select a set of values for a single operation; Ada also allows you to 'or' values together. Consider the following:
case integer_value is
   when 1 .. 4 =>
      value_ok := 1;
   when 5 | 6 | 7 =>
      null;
   when others =>
      null;   -- an Ada case must cover every possible value
end case;

You will also note that in Ada there must be a statement for each case, so we have to use the Ada
null statement as the target of the second selection.
loop ... endconstruct
loop statement end loop;
while (expression) { statement } while expression loop statement end loop;
do { statement } while (expression) -- no direct Ada equivalent.
for (init-statement ; expression-1 ; loop-statement) { statement } for ident in range loop statement end loop;However Ada adds some nice touches to this simple statement.
Firstly, the variable ident is actually declared by its appearance in the loop, it is a new variable which exists for the scope of the loop only and takes the correct type according to the specified range.
Secondly you will have noticed that to loop for 1 to 10 you can write the following Ada code:
for i in 1 .. 10 loop null; end loop;What if you want to loop from 10 down to 1? In Ada you cannot specify a range of
10 .. 1, as this is defined as a 'null range'. A for loop over a null range exits immediately, without executing. To loop from 10 down to 1 the code is:
for i in reverse 1 .. 10 loop null; end loop;
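Because the loop variable takes its type from the range, you can also iterate directly over a discrete type such as an enumeration; a sketch (Week_Day is hypothetical, and Ada.Text_IO is assumed to be visible):

```ada
type Week_Day is (Mon, Tue, Wed, Thu, Fri);

for D in Week_Day loop
   Put_Line(Week_Day'Image(D));   -- D is a Week_Day, local to the loop
end loop;
```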
while (expression) {
    if (expression1) { continue; }
    if (expression2) { break; }
}

This code shows how break and continue are used: you have a loop whose expression determines normal termination. Now assume that during execution of the loop you decide you have completed what you wanted to do and may leave early; the break forces a 'jump' to the first statement after the closing brace of the loop. A continue is similar, but it jumps back to the loop's controlling expression, in effect starting the next iteration.
In Ada there is no continue, and break is now exit.
while expression loop
   if expression2 then
      exit;
   end if;
end loop;

The Ada exit statement can, however, take with it the expression used to decide that it is required, and so the code below is often found.
while expression loop exit when expression2; end loop;This leads us onto the do loop, which can now be coded as:
loop statement exit when expression; end loop;Another useful feature which C and C++ lack is the ability to 'break' out of nested loops, consider
while (!feof(file_handle) && !percent_found) {
    for (char_index = 0; buffer[char_index] != '\n'; char_index++) {
        if (buffer[char_index] == '%') {
            percent_found = 1;
            break;
        }
    }
    // some other code, including get next line.
}

This sort of code is quite common: an inner loop spots the termination condition and has to signal it back to the outer loop. Now consider
Main_Loop: while not End_Of_File(File_Handle) loop for Char_Index in Buffer'Range loop exit when Buffer(Char_Index) = NEW_LINE; exit Main_Loop when Buffer(Char_Index) = PERCENT; end loop; end loop Main_Loop;
return value; // C++ return return value; -- Ada return
label: goto label; <<label>> goto label;
In C++ there is no exception type: when you raise an exception you pass out any sort of type, and selection of the exception is done on its type. In Ada, as seen above, there is a 'pseudo-type' for exceptions, and they are then selected by name.
First, let's see how you catch an exception. The code below shows the basic structure used to protect statement1, executing statement2 on detection of the specified exception.
try { statement1 } catch (declaration) { statement2 }

begin
   statement1
exception
   when ident =>
      statement2
   when others =>
      statement2
end;

Let us now consider an example: we will call a function which we know may raise a particular exception, but it may also raise some we don't know about, so we must pass anything else back up to whoever called us.
try {
    function_call();
}
catch (const char* string_exception) {
    if (!strcmp(string_exception, "the_one_we_want")) {
        handle_it();
    } else {
        throw;
    }
}
catch (...) {
    throw;
}

begin
   function_call;
exception
   when the_one_we_want =>
      handle_it;
   when others =>
      raise;
end;

This shows how much safer the Ada version is: we know exactly what we are waiting for and can immediately process it. In the C++ case all we know is that an exception of type 'const char*' has been raised; we must check it still further before we can handle it.
You will also notice the similarity between the Ada exception catching code and the Ada case statement; this extends to the fact that the when statement can catch multiple exceptions. Ranges of exceptions are not possible, but you can 'or' exceptions together, to get:
begin
   function_call;
exception
   when the_one_we_want | another_possibility =>
      handle_it;
   when others =>
      raise;
end;

This also shows the basic form for raising an exception: the throw statement in C++ and the raise statement in Ada. Both normally raise a given exception, but both, when invoked with no exception, re-raise the one currently being handled. To raise the exception above consider:
throw (const char*)"the_one_we_want"; raise the_one_we_want;
return_type func_name(parameters);
return_type func_name(parameters) { declarations statement }

function func_name(parameters) return return_type;
function func_name(parameters) return return_type is
   declarations
begin
   statement
end func_name;

Let us now consider a special kind of function, one which does not return a value. In C/C++ this is represented as a return type of void; in Ada it is called a procedure.
void func_name(parameters); procedure func_name(parameters);Next we must consider how we pass arguments to functions.
void func1(int by_value);
void func2(int* by_address);
void func3(int& by_reference); // C++ only.

These kinds of parameter are, I hope, well understood by C and C++ programmers; their direct Ada equivalents are:
type int is new Integer;
type int_star is access int;

procedure func1(by_value : in int);
procedure func2(by_address : in int_star);
procedure func3(by_reference : in out int);

(As in C, the pointer itself is passed by value in func2; the object it designates can still be modified through it.) Finally, a procedure or function which takes no parameters can be written in two ways in C/C++, though in only one way in Ada.
void func_name(); void func_name(void); int func_name(void); procedure func_name; function func_name return Integer;Ada also provides two features which will be understood by C++ programmers, possibly not by C programmers, and a third I don't know how C does without:
function Day return All_Days;
function Day(a_date : in Date_Type) return All_Days;

The first returns the day of the week for today; the second returns the day of the week of a given date. They are both allowed, and both visible. The compiler decides which one to use by looking at the types given to it when you call it.
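A sketch of calls resolving against the two declarations above (Some_Date is assumed to be an object of Date_Type):

```ada
Today    : All_Days := Day;             -- the parameterless Day is chosen
That_Day : All_Days := Day(Some_Date);  -- the one-parameter Day is chosen
```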
function "+"(Left, Right : in Integer) return Integer;

The operators available for overloading are: the logical operators and, or and xor; the relational operators =, <, <=, > and >= (defining = implicitly gives you /=); the adding operators +, - and &; the multiplying operators *, /, mod and rem; and **, abs and not.
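As a sketch of defining and using such an operator (the Pair type here is hypothetical, not from the text):

```ada
type Pair is record
   X, Y : Integer;
end record;

function "+"(Left, Right : in Pair) return Pair is
begin
   return (X => Left.X + Right.X, Y => Left.Y + Right.Y);
end "+";

Sum : constant Pair := (1, 2) + (3, 4);   -- infix call of our "+"
```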
void func(int by_value, int* by_pointer, int& by_reference);Ada provides two optional keywords to specify how parameters are passed,
in and
out. These are used like this:
procedure proc(Parameter : in Integer); procedure proc(Parameter : out Integer); procedure proc(Parameter : in out Integer); procedure proc(Parameter : Integer);If these keywords are used then the compiler can protect you even more, so if you have an
out parameter it will warn you if you use it before it has been set, and it will warn you if you assign to an
in parameter.
Note that you cannot mark parameters with
out in functions
as functions are used to return values; such side effects are disallowed.
procedure Create (File : in out File_Type;
                  Mode : in File_Mode := Inout_File;
                  Name : in String := "";
                  Form : in String := "");

This example is to be found in each of the Ada file-based IO packages; it creates a file, given the file 'handle', the mode, the name of the file and a system independent 'form' for the file. You can see that the simplest invocation of Create is
Create(File_Handle);

which simply provides the handle; all other parameters are defaulted (in the Ada library a file name of "" implies opening a temporary file). Now suppose that we wish to provide the name of the file also; we would have to write
Create(File_Handle, Inout_File, "text.file");

wouldn't we? The Ada answer is no. By using designators, as demonstrated above, we could use the form:
Create(File => File_Handle, Name => "text.file");

and we can leave the mode to pick up its default. This skipping of parameters is a feature C and C++ simply lack.
procedure Sort(Sort_This : in out An_Array) is procedure Swap(Item_1, Item_2 : in out Array_Type) is begin end Swap; begin end Sort;
procedure increment(A_Value : in out A_Type);
procedure increment(A_Value : in out A_Type; By : in Integer := 1);

If we call increment with one parameter, which of the two above is called? The compiler will report such ambiguities, but it does mean you have to think about where you use default values.
Ada is also commonly assumed to be a military language, with the US Department of Defense as its prime advocate. This is no longer the case: a number of commercial and government developments have now been implemented in Ada. Ada is an excellent choice if you wish to spend your development time solving your customers' problems, not hunting bugs in C/C++ which an Ada compiler would not have allowed.
Ada-95 has introduced these new features: object-oriented programming through tagged types, and procedural types. These make it more difficult to statically prove an Ada-95 program, but the language designers decided that such features merited their inclusion in the language to further another goal, that of high reuse.
Constraint_Error - raised when a value goes out of range, for example on dereference of a null access type.
Program_Error - raised on an erroneous action, for example reaching the end of a function without returning a value.
Storage_Error - raised when a call to new could not be satisfied due to lack of memory.
Tasking_Error - raised when an error occurs during inter-task communication.
Suppress, which can be used to stop certain run-time checks taking place. The pragma works from that point to the end of the innermost enclosing scope, or the end of the scope of the named object (see below).
Access_Check - Constraint_Error on dereference of a null access value.
Accessibility_Check - Program_Error on access to an inaccessible object or subprogram.
Discriminant_Check - Constraint_Error on access to an incorrect component in a discriminant record.
Division_Check - Constraint_Error on divide by zero.
Elaboration_Check - Program_Error on an unelaborated package or subprogram body.
Index_Check - Constraint_Error on an out of range array index.
Length_Check - Constraint_Error on an array length violation.
Overflow_Check - Constraint_Error on overflow from a numeric operation.
Range_Check - Constraint_Error on an out of range scalar value.
Storage_Check - Storage_Error if there is not enough storage to satisfy a new call.
Tag_Check - Constraint_Error if an object has an invalid tag for the operation.
pragma Suppress(Access_Check); pragma Suppress(Access_Check, On => My_Type_Ptr);The first use of the pragma above turns off checking for
null access values throughout the code (for the lifetime of the suppress), whereas the second only suppresses the check for the named data item.
The point of this section is that by default all of these checks are enabled, and so any such errors will be trapped.
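As a sketch of what this buys you, the default Range_Check turns a quiet out-of-range assignment into an immediate Constraint_Error:

```ada
procedure Check_Demo is
   subtype Percent is Integer range 0 .. 100;
   P : Percent := 100;
begin
   P := P + 1;   -- fails the range check, raising Constraint_Error
exception
   when Constraint_Error =>
      null;   -- trapped here, rather than silently wrapping as in C
end Check_Demo;
```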
Unchecked_Conversion
generic
   type Source (<>) is limited private;
   type Target (<>) is limited private;
function Ada.Unchecked_Conversion (Source_Object : Source) return Target;

and should be instantiated like the example below (taken from one of the Ada-95 standard library packages,
Interfaces.C).
function Character_To_char is new Unchecked_Conversion (Character, char);

and can then be used to convert an Ada character to a C char, thus
A_Char : Interfaces.C.char := Character_To_char('a');
Unchecked_Deallocation
generic
   type Object (<>) is limited private;
   type Name is access Object;
procedure Ada.Unchecked_Deallocation (X : in out Name);

This procedure, instantiated with two types, requires only one parameter when called:
type My_Type is new Integer; type My_Ptr is access My_Type; procedure Free is new Unchecked_Deallocation (My_Type, My_Ptr); Thing : My_Ptr := new My_Type; Free(Thing);
It is worth first looking at the role of header files in C/C++. Header files
are simply program text which by virtue of the preprocessor are inserted into
the compilers input stream. The
#include directive knows nothing
about what it is including and can lead to all sorts of problems, such as
people who
#include "thing.c". This sharing of code by the
preprocessor led to the
#ifdef construct as you would have
different interfaces for different people. The other problem is that C/C++
compilations can sometimes take forever because a included b included c ... or
the near fatal a included a included a ...
Stroustrup has tried (ref [9]), in vain as far as I can see, to convince C++ programmers to remove dependence on the preprocessor, but all the drawbacks are still there.
Any Ada package, on the other hand, consists of two parts: the specification (header) and the body (code). The specification, however, is a completely stand-alone entity which can be compiled on its own, and so must include specifications from other packages to do so. An Ada package body at compile time must refer to its package specification to ensure legal declarations, though in many Ada environments it will look up a compiled version of the specification.
The specification contains an explicit list of the visible components of a
package and so there can be no internal knowledge exploited as is often
the case in C code, i.e. module a contains a function aa() but does not export
it through a header file, module b knows how a is coded and so uses the
extern keyword to declare knowledge of it, and use it. C/C++
programmers therefore have to mark private functions and data as
static.
--file example.ads, the package specification. package example is : : end example; --file example.adb, the package body. package body example is : : end example;
#include "example.h", the Ada equivalent is a two stage process.
Working with the example package above let us assume that we need to include
another package, say
My_Specs into this package so that it may be
used. Firstly where do you insert it? Like C, package specifications can be
inserted into either a specification or body depending on who is the client.
Like a C header/code relationship any package included in the specification of
package A is visible to the body of A, but not to clients of A. Each package
is a separate entity.
-- Specification for package example
with Project_Specs;
package example is
   type My_Type is new Project_Specs.Their_Type;
end example;

-- Body for package example
with My_Specs;
package body example is
   type New_Type_1 is new My_Specs.Type_1;
   type New_Type_2 is new Project_Specs.Type_1;
end example;
You can see here the basic visibility rules, the specification has to include
Project_Specs so that it can declare
My_Type. The
body automatically inherits any packages included in its spec, so that you
can see that although the body does not include
Project_Specs
that package is used in the declaration of
New_Type_1. The body
also includes another package
My_Specs to declare the new type
New_Type_2, the specification is unaware of this include and so
cannot use
My_Specs to declare new types. In a similar way an
ordinary client of the package
example cannot use the inclusion of
Project_Specs, they would have to include it themselves.
To use an item, say the type
Type_1 you must name it
My_Specs.Type_1, in effect you have included the package name,
not its contents. To get the same effect as the C
#include you
must also add another statement to make:
with My_Specs; use My_Specs;
package body example is
   :
   :
end example;
It is usual in Ada to put the with and the use on the same line, for clarity.
There is much more to be said about Ada packages, but that should be enough to
start with. There is a special form of the
use statement
which makes just the operators of a single type from a package directly visible, consider:
use type Ada.Calendar.Time;
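This makes the predefined operators of Ada.Calendar.Time directly visible without a full use clause; a sketch:

```ada
with Ada.Calendar; use type Ada.Calendar.Time;

procedure Timing_Demo is
   Start : constant Ada.Calendar.Time := Ada.Calendar.Clock;
   Stop  : Ada.Calendar.Time;
begin
   Stop := Ada.Calendar.Clock;
   if Stop - Start > 1.0 then   -- "-" on Time is visible via use type
      null;
   end if;
end Timing_Demo;
```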
In C this is done by presenting the 'private type' as a
void*
which means that you cannot know anything about it, but implies that no one
can do any form of type checking on it. In C++ we can forward declare classes
and so provide an anonymous class type.
/* C code */ typedef void* list; list create(void); // C++ class Our_List { public: Our_List(void); private: class List_Rep; List_Rep* Representation; };You can see that as a C++ programmer you have the advantage that when writing the implementation of Our_List and its internal representation
List_Repyou have all the advantages of type checking, but the client still knows absolutely nothing about how the list is structured.
In Ada this concept is formalised into the 'private part' of a package. This private part is used to define items which are forward declared as private.
package Our_List is
   type List_Rep is private;
   function Create return List_Rep;
private
   type List_Rep is record
      null;   -- some data
   end record;
end Our_List;

As you can see, the way the Ada private part is usually used, the representation of
List_Rep is exposed, but because it is a private type the only operations that the client may use are = and /=; all other operations must be provided by functions and procedures in the package. Note: we can even restrict use of = and /= by declaring the type as
limited private, when you wish to have no predefined operators available.
You may not declare variables of the private type in the public part of the package specification, as the representation is not yet known. You can declare constants of the type, but you must declare them in both places: forward reference them in the public part with no value, and then declare them again in the private part to provide a value:
package Example is
   type A is private;
   B : constant A;
private
   type A is new Integer;
   B : constant A := 0;
end Example;

To get exactly the same result as the C++ code above you must go one step further: you must not expose the representation of
List_Rep, and so you might use:
package Our_List is type List_Access is limited private; function Create return List_Access; private type List_Rep; -- opaque type type List_Access is access List_Rep; end Our_List;We now pass back to the client an access type, which points to a 'deferred incomplete type' whose representation is only required to be exposed in the package body.
package Outer is
   package Inner_1 is
   end Inner_1;
   package Inner_2 is
   end Inner_2;
private
end Outer;

Ada-95 has added to this the possibility to define child packages outside the physical scope of a package, thus:
package Outer is
   package Inner_1 is
   end Inner_1;
end Outer;

package Outer.Inner_2 is
end Outer.Inner_2;

As you can see, Inner_2 is still a child of Outer but can be created at some later date, by a different team.
Consider:
with Outer;
with Outer.Inner_1;
package New_Package is
   package OI_1 renames Outer.Inner_1;
   type New_Type is new OI_1.A_Type;
end New_Package;

The use of OI_1 not only saves us a lot of typing; if Outer were the package Sorting_Algorithms and Inner_1 were Insertion_Sort, then we could have

   package Sort renames Sorting_Algorithms.Insertion_Sort;

and then at some later date, if you decide that a quick sort is more appropriate, you simply change the renames clause and the rest of the package spec stays exactly the same.
Similarly, if you want to include two functions with the same name from two different packages then, rather than relying on overloading, or simply to clarify your code, you could:
with Package1;
function Function1 return Integer renames Package1.Get;
with Package2;
function Function2 return Integer renames Package2.Get;

(Here both packages are assumed to declare a function called Get.) Another example of a renames clause is where you are using some complex structure and you want, in effect, a synonym for it during some processing. In the example below we have a device handler structure which contains some procedure types which we need to execute in turn. The first example contains a lot of text which we don't really care about; the second removes most of it, leaving bare the real work we are attempting to do.
for device in Device_Map loop
   Device_Map(device).Device_Handler.Request_Device;
   Device_Map(device).Device_Handler.Process_Function(Process_This_Request);
   Device_Map(device).Device_Handler.Relinquish_Device;
end loop;

for device in Device_Map loop
   declare
      Device_Handler : Device_Type renames Device_Map(device).Device_Handler;
   begin
      Device_Handler.Request_Device;
      Device_Handler.Process_Function(Process_This_Request);
      Device_Handler.Relinquish_Device;
   end;
end loop;
class. A class is an extension of the existing struct construct which we have reviewed in section 1.1.7 above. The difference with a class is that it contains not only data (member attributes) but code as well (member functions). A class might look like:
class A_Device {
public:
    A_Device(char*, int, int);
    char* Name(void);
    int   Major(void);
    int   Minor(void);
protected:
    char* name;
    int   major;
    int   minor;
};

This defines a class called A_Device, which encapsulates a Unix-like /dev entry. Such an entry has a name and a major and minor number; the actual data items are protected so a client cannot alter them, but the client can see them by calling the public interface functions.
The code above also introduces a constructor, a function with the same name as
the class which is called whenever the class is created. In C++ these may be
overloaded and are called either by the
new operator, or in local
variable declarations as below.
A_Device lp1("lp1", 10, 1);

A_Device* lp1;
lp1 = new A_Device("lp1", 10, 1);

Either form creates a new device object called lp1 and sets up the name and major/minor numbers.
Ada has also extended its equivalent of a struct, the
record but
does not directly attach the member functions to it. First the Ada equivalent
of the above class is
package Devices is
   type Device is tagged private;
   type Device_Type is access Device;

   function Create(Name : String; Major : Integer; Minor : Integer)
      return Device_Type;
   function Name (this : Device_Type) return String;
   function Major(this : Device_Type) return Integer;
   function Minor(this : Device_Type) return Integer;
private
   type Device is tagged record
      Name  : String(1 .. 20);
      Major : Integer;
      Minor : Integer;
   end record;
end Devices;

and the equivalent declaration of an object would be:

   lp1 : Devices.Device_Type := Devices.Create("lp1", 10, 1);
Adding tagged to the definition of the type Device is what makes it a class in C++ terms. The tagged type is simply an extension of the Ada-83 record type (in the same way that C++'s class is an extension of C's struct) which includes a 'tag' that can identify not only its own type but its place in the type hierarchy.
The tag can be accessed by the attribute
'Tag but should only
be used for comparison, ie
dev1, dev2 : Device;
...
if dev1'Tag = dev2'Tag then

This can identify the is-a relationship between two objects.
Another important attribute
'Class exists which is used in type
declarations to denote the class-wide type, the inheritence tree rooted
at that type, ie
subtype Device_Class is Device'Class;
-- or more normally
type Device_Class is access Device'Class;

The second declaration denotes a pointer to objects of type Device and any objects whose type has been inherited from Device.
char* name directly maps into Name : String.
A pure virtual member function maps onto an Ada subprogram marked with the keywords is abstract before the semicolon. When any abstract subprograms exist, the tagged type they operate on must also be declared abstract. Also, if an abstract tagged type has been introduced which has no data, then the following shorthand can be used:
type Root_Type is abstract tagged null record;
Create function which creates a new object and returns it. If you intend to use this method then the most important thing to remember is to use the same names throughout; Create, Copy, Destroy etc are all useful conventions.
Ada does provide a library package
Ada.Finalization which can
provide constructor/destructor like facilities for tagged types.
Note: See ref 6.
For example, let us now inherit the device type above to make a tape device, firstly in C++
class A_Tape : public A_Device {
public:
    A_Tape(char*, int, int);
    int Block_Size(void);
protected:
    int block_size;
};

Now let us look at the example in Ada.
package Devices.Tapes is
   type Tape is new Device with private;
   type Tape_Type is access Tape;

   function Create(Name : String; Major : Integer; Minor : Integer)
      return Tape_Type;
   function Block_Size(this : Tape_Type) return Integer;
private
   type Tape is new Device with record
      Block_Size : Integer;
   end record;
end Devices.Tapes;

Ada does not directly support multiple inheritance; ref [7] has an example of how to synthesise multiple inheritance.
This parallels the Device comparison above: the C++ class provided a public interface and a protected one; the Ada equivalent then provided an interface in the public part and the tagged type declaration in the private part. Because of the rules for child packages (see 2.4) a child of the Devices package can see the private part, and so can use the definition of the Device tagged type.
To mimic C++ private interfaces you can choose to use the method above, which in effect makes them protected, or you can make them really private by using opaque types (see 2.3).
class base_device {
public:
    char* name(void) const;
    int   major(void) const;
    int   minor(void) const;

    enum io_type { block, character, special };
    io_type type(void) const;

    virtual char read(void) = 0;
    virtual void write(char) = 0;

    static char* type_name(void);
protected:
    char* _name;
    int   _major;
    int   _minor;
    static const io_type _type;
    base_device(void);
private:
    int _device_count;
};

The class above shows off a number of C++ features; the Ada equivalent is:
package Devices is
   type Device is abstract tagged limited private;
   type Device_Type  is access Device;
   type Device_Class is access Device'Class;

   type IO_Type is (Block, Char, Special);

   function Name  (this : in Device_Type) return String;
   function Major (this : in Device_Type) return Integer;
   function Minor (this : in Device_Type) return Integer;
   function IOType(this : in Device_Type) return IO_Type;

   function  Read (this : Device_Class) return Character is abstract;
   procedure Write(this : Device_Class; Output : Character) is abstract;

   function Type_Name return String;
private
   type Device_Count;
   type Device_Private is access Device_Count;

   type Device is abstract tagged limited record
      Name  : String(1 .. 20);
      Major : Integer;
      Minor : Integer;
      Count : Device_Private;
   end record;

   Const_IO_Type   : constant IO_Type := Special;
   Const_Type_Name : constant String  := "Device";
end Devices;
Consider a simple C sort function:

void sort(int *array, int num_elements);

however, when you come to sort an array of structures you either have to rewrite the function, or you end up with a generic sort function which looks like this:

void sort(void *array,
          int element_size,
          int element_count,
          int (*compare)(void *el1, void *el2));

This takes a bland address for the start of the array, user-supplied parameters for the size of each element and the number of elements, and a function which compares two elements. C does not have strong typing, and here you have stripped away any help the compiler might have been able to give you by using void*.
Now let us consider an Ada generic version of the sort function:
generic
   type index_type is (<>);
   type element_type is private;
   type element_array is array (index_type range <>) of element_type;
   with function "<" (el1, el2 : element_type) return Boolean;
procedure Sort(the_array : in out element_array);

This shows us quite a few features of Ada generics and is a nice place to start. Note that we have specified a lot of detail about the thing we are going to sort: it is an array for which we don't know the bounds, so it is specified as range <>. We also can't expect that the range is an integer range, and so we must make the range type a parameter, index_type. Then we come to the element type; this is simply specified as private, so all we know is that we can test equality and assign one element to another. Now that we have specified exactly what it is we are going to sort, we must ask for a function to compare two elements. As in C, we ask the user to supply a function, but in this case we can ask for an operator function; notice that we use the keyword with before the function.
I think that you should be able to see the difference between the Ada code and C code as far as readability (and therefore maintainability) are concerned and why, therefore, Ada promotes the reuse philosophy.
Now let's use our generic to sort some MyType values.
type MyArray_Type is array (Integer range <>) of MyType;
MyArray : MyArray_Type(0 .. 100);
function LessThan(el1, el2 : MyType) return Boolean;

procedure SortMyType is new Sort(Integer, MyType, MyArray_Type, LessThan);

SortMyType(MyArray);

The first lines simply declare the array we are going to sort and a little function which we use to compare two elements (note: no self-respecting Ada programmer would define a function LessThan when they can use "<"; this is simply for this example). We then instantiate the generic procedure, declaring that we have an array type MyArray_Type whose elements are of type MyType, indexed by an Integer range, and that we have a function to compare two elements.
Now that the compiler has instantiated the generic we can simply call it using
the new name.
Note: The Ada compiler instantiates the generic and will ensure type safety throughout.
generic
   type Element_Type is private;
package Ada.Direct_IO is
   ...

This is the standard method for writing out binary data structures, and so one could write out to a file:

type My_Struct is record
   ...
end record;

package My_Struct_IO is new Ada.Direct_IO(My_Struct);
use My_Struct_IO;

Item : My_Struct;
File : My_Struct_IO.File_Type;
...
My_Struct_IO.Write(File, Item);

Note: see section 5.2 for a more detailed study of these packages and how they are used.
If the formal type is instead declared limited then even these abilities are unavailable.

Ada-95 does not allow the instantiation of generics with unconstrained types (such as String) unless the formal is written as type Element_Type(<>) is private; in which case you cannot declare data items of this type inside the generic, since no constraint (for example 0 .. 100) is known there.
with Generic_Tree;
generic
   with package A_Tree is new Generic_Tree(<>);
package Tree_Walker is
   -- some code.
end Tree_Walker;

This says that we have some package called Generic_Tree, which is a generic package implementing a tree of generic items. We want to be able to walk any such tree, and so we say that we have a new generic package which takes a parameter which must be an instantiated package, ie

package AST       is new Generic_Tree(Syntax_Element);
package AST_Print is new Tree_Walker(AST);
write() which takes any old thing and puts it out to a file; how can you write a function which will take any parameter, even types which will be introduced after it has been completed? Ada-83 took a two-pronged approach to IO, with the package Text_IO for simple, textual input/output, and the packages Sequential_IO and Direct_IO, which are generic packages for binary output of structured data.
The most common problem for C and C++ programmers is the lack of the printf family of IO functions. There is a good reason for their absence in Ada: the use in C of variable arguments, the '...' at the end of the printf function spec. Ada cannot support such a construct, as the type of each parameter would be unknown.
Ada.Text_IO. This provides a set of overloaded procedures called Put and Get to read and write to the screen or to simple text files. There are also subprograms to open and close such files, check end-of-file conditions, and do line and page management.
A simple program below uses Text_IO to print a message to the screen, including numerics! This is achieved by using the type attribute 'Image, which gives back a String representation of a value.
with Ada.Text_IO;
use  Ada.Text_IO;

procedure Test_IO is
begin
   Put_Line("Test Starts Here >");
   Put_Line("Integer is " & Integer'Image(2));
   Put_Line("Float is "   & Float'Image(2.0));
   Put_Line("Test Ends Here");
end Test_IO;

It is also possible to use one of the generic child packages of Ada.Text_IO, such as Ada.Text_IO.Integer_IO, which can be instantiated with a particular type to provide type-safe textual IO.
with Ada.Text_IO;

type My_Integer is new Integer;
package My_Integer_IO is new Ada.Text_IO.Integer_IO(My_Integer);
use My_Integer_IO;
with Ada.Direct_IO;
package A_Database is
   type File_Header is record
      Magic_Number      : Special_Stamp;
      Number_Of_Records : Record_Number;
      First_Deleted     : Record_Number;
   end record;

   type Row is record
      Key  : String(1 .. 80);
      Data : String(1 .. 255);
   end record;

   package Header_IO is new Ada.Direct_IO (File_Header);
   use Header_IO;

   package Row_IO is new Ada.Direct_IO (Row);
   use Row_IO;
end A_Database;

Now that we have some instantiated packages, we can read and write records and headers to and from a file. However, we want each database file to consist of a header followed by a number of rows, so we try the following:
declare
   Handle   : Header_IO.File_Type;
   A_Header : File_Header;
   A_Row    : Row;
begin
   Header_IO.Open(File => Handle, Mode => Inout_File, Name => "Test");
   Header_IO.Write(Handle, A_Header);
   Row_IO.Write(Handle, A_Row);    -- error!
   Header_IO.Close(Handle);
end;

The obvious error is that Handle is defined as a type exported from the Header_IO package, and so cannot be passed to the procedure Write from the package Row_IO. This strong typing means that both Sequential_IO and Direct_IO are designed only to work on files whose elements are all of the same type.
When designing a package, if you want to avoid this sort of problem (the designers of these packages did intend this restriction) then embed the generic part within an enclosing package, thus
package Generic_IO is
   type File_Type is limited private;
   procedure Create(File : File_Type; ...);
   procedure Close(...);

   generic
      type Element_Type is private;
   package Read_Write is
      procedure Read(File : File_Type; Element : out Element_Type; ...);
      procedure Write(...);
   end Read_Write;
end Generic_IO;

which would make our database package look something like:
with Generic_IO;
package A_Database is
   type File_Header is record
      Magic_Number      : Special_Stamp;
      Number_Of_Records : Record_Number;
      First_Deleted     : Record_Number;
   end record;

   type Row is record
      Key  : String(1 .. 80);
      Data : String(1 .. 255);
   end record;

   package Header_IO is new Generic_IO.Read_Write (File_Header);
   use Header_IO;

   package Row_IO is new Generic_IO.Read_Write (Row);
   use Row_IO;
end A_Database;

...

declare
   Handle   : Generic_IO.File_Type;
   A_Header : File_Header;
   A_Row    : Row;
begin
   Generic_IO.Open(File => Handle, Name => "Test");
   Header_IO.Write(Handle, A_Header);
   Row_IO.Write(Handle, A_Row);
   Generic_IO.Close(Handle);
end;
Interfaces, which define functions to allow you to convert data types between the Ada program and the external language routines. The full set of packages defined for interfaces is shown below.
Unlike C/C++ Ada defines a concurrency model as part of the language itself. Some languages (Modula-3) provide a concurrency model through the use of standard library packages, and of course some operating systems provide libraries to provide concurrency. In Ada there are two base components, the task which encapsulates a concurrent process and the protected type which is a data structure which provides guarded access to its data.
In the traditional Unix process model you use the fork function to start a process which is a copy of the current process, and so inherits its global variables. The problem with this model is that the global variables are now replicated in both processes; a change to one is not reflected in the other.
In a multi-threaded environment multiple concurrent processes are allowed within the same address space; that is, they can share global data. Usually there is a set of API calls such as StartThread, StopThread, etc which manage these processes.
Note: An Ada program with no tasks is really an Ada process with a single running task, the default code.
task X is
end X;

task body X is
begin
   loop
      -- processing.
   end loop;
end X;

As with packages, a task comes in two blocks: the specification and the body. Both are shown above; the task specification simply declares the name of the task and nothing more. The body of the task shows that it is a loop processing something. In many cases a task is simply a straight-through block of code which is executed in parallel, or it may be, as in this case, modelled as a service loop.
task type X is
end X;

Item  : X;
Items : array (0 .. 9) of X;

Note, however, that task objects behave as constants: you cannot assign to them and you cannot test them for equality.
The Ada tasking model defines methods for inter-task cooperation, and much more, in a system-independent way using a construct known as the rendezvous.
A rendezvous is just what it sounds like: a meeting place where two tasks arrange to meet up; if one task reaches it first then it waits for the other to arrive. In fact a queue is formed for each rendezvous of all the tasks waiting (in FIFO order).
An entry is declared much like a procedure (it may have in, out and in out parameters). It can take any number of parameters, but rather than the keyword procedure, the keyword entry is used. In the task body, however, the keyword accept is used, and instead of the procedure syntax of is ... begin, simply do is used. The reason for this is that rendezvous in a task are simply sections of the code in it; they are not separate elements as procedures are.
Consider the example below, a system of some sort has a cache of elements,
it requests an element from the cache, if it is not in the cache then
the cache itself reads an element from the master set. If this process of
reading from the master fills the cache then it must be reordered.
When the process finishes with the item it calls
PutBack which
updates the cache and if required updates the master.
task type Cached_Items is
   entry Request(Item : out Item_Type);
   entry PutBack(Item : in  Item_Type);
end Cached_Items;

task body Cached_Items is
   Log_File : Ada.Text_IO.File_Type;
begin
   -- open the log file.
   loop
      accept Request(Item : out Item_Type) do
         -- satisfy from cache or get new.
      end Request;
      -- if had to get new, then quickly
      -- check cache for overflow.

      accept PutBack(Item : in Item_Type) do
         -- replace item in cache.
      end PutBack;
      -- if item put back has changed
      -- then possibly update original.
   end loop;
end Cached_Items;

-- the client code begins here:
declare
   Cache : Cached_Items;
   Item  : Item_Type;
begin
   Cache.Request(Item);
   -- process.
   Cache.PutBack(Item);
end;

It is the sequence of processing which is important here. Firstly the client task (remember, even if the client is the main program it is still, logically, a task) creates the cache task, which executes its body. The first thing the cache (owner task) does is some procedural code, its initialisation, in this case to open its log file. Next we have an accept statement; this is a rendezvous, and in this case the two parties are the owner task, when it reaches the keyword accept, and the client task when it calls Cache.Request(Item).
If the client task calls
Request before the owner task has reached
the
accept then the client task will wait for the owner task.
However we would not expect the owner task to take very long to open a log file,
so it is more likely that it will reach the
accept first and
wait for a client task.
When both client and owner tasks are at the rendezvous then the owner task executes
the
accept code while the client task waits. When the owner
task reaches the end of the rendezvous both the owner and the client are set off
again on their own way.
If the client calls Request twice in a row then you have a deadly embrace: the owner task cannot get to Request before executing PutBack, and the client task cannot execute PutBack until it has satisfied the second call to Request.
To get around this problem we use a
select statement which
allows the task to specify a number of entry points which are valid at any time.
task body Cached_Items is
   Log_File : Ada.Text_IO.File_Type;
begin
   -- open the log file.
   accept Request(Item : out Item_Type) do
      -- satisfy from cache or get new.
   end Request;
   loop
      select
         accept Request(Item : out Item_Type) do
            -- satisfy from cache or get new.
         end Request;
      or
         accept PutBack(Item : in Item_Type) do
            -- replace item in cache.
         end PutBack;
      end select;
   end loop;
end Cached_Items;

We have done two major things. First, we have added the select construct, which says that during the loop a client may call either of the entry points. Second, we moved a copy of the entry point into the initialisation section of the task so that Request must be called before anything else. It is worth noting that we can have many accept statements for the same entry, and they may do the same thing or something different, but we only need one entry declaration in the task specification.
In effect the addition of the
select statement means that
the owner task now waits on the
select itself until one
of the specified
accepts are called.
Note: possibly more important is the fact that we have not changed the specification of the task at all yet!
We can also add a guard so that an accept may be valid only under given conditions, so:

select
   when Number_Requested < Cache_Size =>
      accept Request(Item : out Item_Type) do
         -- satisfy from cache or get new.
      end Request;
or
   when Number_Requested > 0 =>
      accept PutBack(Item : in Item_Type) do
         -- replace item in cache.
      end PutBack;
end select;

This (possibly erroneous) example adds two internal values, one to keep track of the number of items in the cache and one for the size of the cache. If no items have been read into the cache then you cannot logically put anything back.
We can also introduce a delay statement into a task. This statement has two modes: delay for a given amount of time, or delay until a given time. So:

delay 5.0;                          -- delay for 5 seconds
delay A_Time - Ada.Calendar.Clock;  -- delay until A_Time is reached
delay until A_Time;                 -- Ada-95 equivalent of the above

The first line is simple: delay the task for a given number, or fraction, of seconds. This mode takes a parameter of type Duration (declared in the package Standard). The next two both wait until a time is reached; the second line computes a Duration by subtracting the current clock value from a Time, while the third takes a parameter of type Time from package Ada.Calendar directly.
It is more interesting to note the effect of one of these when used in a select
statement. For example, if an
accept is likely to take a
long time you might use:
select
   accept An_Entry do
      ...
   end An_Entry;
or
   delay 5.0;
   Put("An_Entry: timeout");
end select;

This runs the delay and the accept concurrently; if the delay completes before the accept, then the accept is abandoned and the task continues at the statement after the delay, in this case the error message.
It is possible to protect procedural code in the same way, so we might amend our example:

loop
   select
      accept PutBack(Item : in Item_Type) do
         -- replace item in cache.
      end PutBack;
      select
         delay 2.0;
         -- abort the cache update code
      then abort
         -- if item put back has changed
         -- then possibly update original.
      end select;
   or
      accept Request(Item : out Item_Type) do
         -- satisfy from cache or get new.
      end Request;
      -- if had to get new, then quickly
      -- check cache for overflow.
   end select;
end loop;
end Cached_Items;

(The inner select ... then abort form is described below.)
The
else clause allows us to execute a non-blocking
select statement, so we could code a polling task, such
as:
select
   accept Do_Something do
      ...
   end Do_Something;
else
   -- do something else.
end select;

So that if no one has called the entry points specified, we continue rather than waiting for a client.
A select statement may also offer the alternative terminate, which executes a nice orderly cleanup of the task. (We can also kill a task in a more immediate way using the abort statement; this is NOT recommended.)
The
terminate alternative is used for a task to specify
that the run time environment can terminate the task if all its actions are
complete and no clients are waiting.
loop
   select
      accept Do_Something do
         ...
      end Do_Something;
   or
      terminate;
   end select;
end loop;

The abort statement is used by a client to terminate a task, possibly if it is not behaving correctly. It takes a task identifier as an argument, so using our example above we might say:

if Task_In_Error(Cache) then
   abort Cache;
end if;

The then abort clause is very similar to the delay example above: the code between then abort and end select is aborted if the delay clause finishes first.

select
   delay 5.0;
   Put("An_Entry: timeout");
then abort
   accept An_Entry do
      ...
   end An_Entry;
end select;
protected type Cached_Items is
   function  Request return Item_Type;
   procedure PutBack(Item : in Item_Type);
private
   Log_File         : Ada.Text_IO.File_Type;
   Number_Requested : Integer := 0;
   Cache_Size       : constant Integer := 50;
end Cached_Items;

protected body Cached_Items is
   function Request return Item_Type is
   begin
      -- initialise, if required
      -- satisfy from cache or get new.
      -- if had to get new, then quickly
      -- check cache for overflow.
   end Request;

   procedure PutBack(Item : in Item_Type) is
   begin
      -- initialise, if required
      -- replace item in cache.
      -- if item put back has changed
      -- then possibly update original.
   end PutBack;
end Cached_Items;

This is an implementation of our cache from the task discussion above. Note that Request and PutBack are now simply calls like any other. This does show some of the differences between tasks and protected types: because it is a passive object, the protected type above cannot completely initialise itself, so each procedure and/or function must check whether it has been initialised, and we must do all processing within the stated subprograms.
On Fri, Aug 10, 2012 at 3:29 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Fri, Aug 10, 2012 at 04:57:52PM +0200, Maarten Lankhorst wrote:
>> A dma-fence can be attached to a buffer which is being filled or consumed
>> by hw, to allow userspace to pass the buffer without waiting to another
>> device.
>>
>> A dma-fence is transient, one-shot deal.  It is allocated and attached
>> to one or more dma-buf's.  When the one that attached it is done, with
>> the pending operation, it can signal the fence.
>>
>> + dma_fence_signal()
>>
>> The dma-buf-mgr handles tracking, and waiting on, the fences associated
>> with a dma-buf.
>>
>> TODO maybe need some helper fxn for simple devices, like a display-
>> only drm/kms device which simply wants to wait for exclusive fence to
>> be signaled, and then attach a non-exclusive fence while scanout is in
>> progress.
>>
>> The one pending on the fence can add an async callback:
>> + dma_fence_add_callback()
>> The callback can optionally be cancelled with remove_wait_queue()
>>
>> Or wait synchronously (optionally with timeout or interruptible):
>> + dma_fence_wait()
>>
>> A default software-only implementation is provided, which can be used
>> by drivers attaching a fence to a buffer when they have no other means
>> for hw sync.  But a memory backed fence is also envisioned, because it
>> is common that GPU's can write to, or poll on some memory location for
>> synchronization.  For example:
>>
>>   fence = dma_buf_get_fence(dmabuf);
>>   if (fence->ops == &bikeshed_fence_ops) {
>>           dma_buf *fence_buf;
>>           dma_bikeshed_fence_get_buf(fence, &fence_buf, &offset);
>>           ...
tell the hw the memory location to wait on ...
>>   } else {
>>           /* fall-back to sw sync */
>>           dma_fence_add_callback(fence, my_cb);
>>   }
>>
>> On SoC platforms, if some other hw mechanism is provided for synchronizing
>> between IP blocks, it could be supported as an alternate implementation
>> with it's own fence ops in a similar way.
>>
>> To facilitate other non-sw implementations, the enable_signaling callback
>> can be used.  The handler of the enable_signaling
>> op should take a refcount until the fence is signaled, then release its ref.
>>
>> The intention is to provide a userspace interface (presumably via eventfd)
>> later, to be used in conjunction with dma-buf's mmap support for sw access
>> to buffers (or for userspace apps that would prefer to do their own
>> synchronization).
>
> I think the commit message should be cleaned up: Kill the TODO, rip out
> the bikeshed_fence and otherwise update it to the latest code.
>
>> v1: Original
>> v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided
>>     that dma-fence didn't need to care about the sw->hw signaling path
>>     (it can be handled same as sw->sw case), and therefore the fence->ops
>>     can be simplified and more handled in the core.
So remove the signal,
>>     add_callback, cancel_callback, and wait ops, and replace with a simple
>>     enable_signaling() op which can be used to inform a fence supporting
>>     hw->hw signaling that one or more devices which do not support hw
>>     signaling are waiting (and therefore it should enable an irq or do
>>     whatever is necessary in order that the CPU is notified when the
>>     fence is passed).
>>.
>>
>> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
>
> I like the design of this, and especially that it's rather simple ;-)
>
> A few comments to polish the interface, implementation and documentation a
> bit below.
>
>> ---
>>  Documentation/DocBook/device-drivers.tmpl |    2
>>  drivers/base/Makefile                     |    2
>>  drivers/base/dma-fence.c                  |  268 +++++++++++++++++++++++++++++
>>  include/linux/dma-fence.h                 |  124 +++++++++++++
>>  4 files changed, 395 insertions(+), 1 deletion(-)
>>  create mode 100644 drivers/base/dma-fence.c
>>  create mode 100644 include/linux/dma-fence.h
>>
>> diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
>> index 7514dbf..36252ac 100644
>> --- a/Documentation/DocBook/device-drivers.tmpl
>> +++ b/Documentation/DocBook/device-drivers.tmpl
>> @@ -126,6 +126,8 @@ X!Edrivers/base/interface.c
>>       </sect1>
>>       <sect1><title>Device Drivers DMA Management</title>
>>  !Edrivers/base/dma-buf.c
>> +!Edrivers/base/dma-fence.c
>> +!Iinclude/linux/dma-fence.h
>>  !Edrivers/base/dma-coherent.c
>>  !Edrivers/base/dma-mapping.c
>>       </sect1>
>>
>> diff --git a/drivers/base/dma-fence.c b/drivers/base/dma-fence.c
>> new file mode 100644
>> index 0000000..93448e4
>> --- /dev/null
>> +++ b/drivers/base/dma-fence.c
>> @@ -0,0 +1,268 @@
>> +/*
>> + */
>> +#include <linux/slab.h>
>> +#include <linux/sched.h>
>> +#include <linux/export.h>
>> +#include <linux/dma-fence.h>
>> +
>> +/**
>> + * dma_fence_signal - signal completion of a fence
>> + * @fence: the fence to signal
>> + *
>> + * All registered callbacks will be called directly (synchronously) and
>> + * all blocked waters will be awoken.
This should be always be called on
>> + * software only fences, or alternatively be called after
>> + * dma_fence_ops::enable_signaling is called.
>
> I think we need to be clearer here when dma_fence_signal can be called:
> - for a sw-only fence (i.e. created with dma_fence_create)
>   dma_fence_signal _must_ be called under all circumstances.
> - for any other fences, dma_fence_signal may be called, but it _must_ be
>   called once the ->enable_signalling func has been called and return 0
>   (i.e. success).
> - it may be called only _once_.
>
>> + */
>> +int dma_fence_signal(struct dma_fence *fence)
>> +{
>> +	unsigned long flags;
>> +	int ret = -EINVAL;
>> +
>> +	if (WARN_ON(!fence))
>> +		return -EINVAL;
>> +
>> +	spin_lock_irqsave(&fence->event_queue.lock, flags);
>> +	if (!fence->signaled) {
>> +		fence->signaled = true;
>> +		__wake_up_locked_key(&fence->event_queue, TASK_NORMAL,
>> +				     &fence->event_queue);
>> +		ret = 0;
>> +	} else
>> +		WARN(1, "Already signaled");
>> +	spin_unlock_irqrestore(&fence->event_queue.lock, flags);
>> +
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(dma_fence_signal);
>> +
>> +static void release_fence(struct kref *kref)
>> +{
>> +	struct dma_fence *fence =
>> +		container_of(kref, struct dma_fence, refcount);
>> +
>> +	BUG_ON(waitqueue_active(&fence->event_queue));
>> +
>> +	if (fence->ops->release)
>> +		fence->ops->release(fence);
>> +
>> +	kfree(fence);
>> +}
>> +
>> +/**
>> + * dma_fence_put - decreases refcount of the fence
>> + * @fence: [in] fence to reduce refcount of
>> + */
>> +void dma_fence_put(struct dma_fence *fence)
>> +{
>> +	if (WARN_ON(!fence))
>> +		return;
>> +	kref_put(&fence->refcount, release_fence);
>> +}
>> +EXPORT_SYMBOL_GPL(dma_fence_put);
>> +
>> +/**
>> + * dma_fence_get - increases refcount of the fence
>> + * @fence: [in] fence to increase refcount of
>> + */
>> +void dma_fence_get(struct dma_fence *fence)
>> +{
>> +	if (WARN_ON(!fence))
>> +		return;
>> +	kref_get(&fence->refcount);
>> +}
>> +EXPORT_SYMBOL_GPL(dma_fence_get);
>> +
>> +static int check_signaling(struct dma_fence
*fence)>> +{>> + bool enable_signaling = false, signaled;>> + unsigned long flags;>> +>> + spin_lock_irqsave(&fence->event_queue.lock, flags);>> + signaled = fence->signaled;>> + if (!signaled && !fence->needs_sw_signal)>> + enable_signaling = fence->needs_sw_signal = true;>> + spin_unlock_irqrestore(&fence->event_queue.lock, flags);>> +>> + if (enable_signaling) {>> + int ret;>> +>> + /* At this point, if enable_signaling returns any error>> + * a wakeup has to be performanced regardless.>> + * -ENOENT signals fence was already signaled. Any other error>> + * inidicates a catastrophic hardware error.>> + *>> + * If any hardware error occurs, nothing can be done against>> + * it, so it's treated like the fence was already signaled.>> + * No synchronization can be performed, so we have to assume>> + * the fence was already signaled.>> + */>> + ret = fence->ops->enable_signaling(fence);>> + if (ret) {>> + signaled = true;>> + dma_fence_signal(fence);>> I think we should call dma_fence_signal only for -ENOENT and pass all> other errors back as-is. E.g. on -ENOMEM or so we might want to retry ...>> + }>> + }>> +>> + if (!signaled)>> + return 0;>> + else>> + return -ENOENT;>> +}>> +>> +static int>> +__dma_fence_wake_func(wait_queue_t *wait, unsigned mode, int flags, void *key)>> +{>> + struct dma_fence_cb *cb =>> + container_of(wait, struct dma_fence_cb, base);>> +>> + __remove_wait_queue(key, wait);>> + return cb->func(cb, wait->private);>> +}>> +>> +/**>> + * dma_fence_add_callback - add a callback to be called when the fence>> + * is signaled>> + *>> + * @fence: [in] the fence to wait on>> + * @cb: [in] the callback to register>> + * @func: [in] the function to call>> + * @priv: [in] the argument to pass to function>> + *>> + * cb will be initialized by dma_fence_add_callback, no initialization>> + * by the caller is required. 
Any number of callbacks can be registered>> + * to a fence, but a callback can only be registered to one,>> + dma_fence_func_t func, void *priv)>> +{>> + unsigned long flags;>> + int ret;>> +>> + if (WARN_ON(!fence || !func))>> + return -EINVAL;>> +>> + ret = check_signaling(fence);>> +>> + spin_lock_irqsave(&fence->event_queue.lock, flags);>> + if (!ret && fence->signaled)>> + ret = -ENOENT;>> The locking here is a bit suboptimal: We grab the fence spinlock once in> check_signalling and then again here. We should combine this into one> critical section.Fwiw, Maarten had the same thought. I had suggested keep itclean/simple for now and get it working, and then go back and optimizeafter, so you can blame this one on me :-PI guess we could either just inline the check_signaling() code, but Ididn't want to do that yet. Or we could call check_signaling() withthe lock already hand, and just drop and re-acquire it around therelatively infrequent enable_signaling() callback.>> +>> + if (!ret) {>> + cb->base.flags = 0;>> + cb->base.func = __dma_fence_wake_func;>> + cb->base.private = priv;>> + cb->fence = fence;>> + cb->func = func;>> + __add_wait_queue(&fence->event_queue, &cb->base);>> + }>> + spin_unlock_irqrestore(&fence->event_queue.lock, flags);>> +>> + return ret;>> +}>> +EXPORT_SYMBOL_GPL(dma_fence_add_callback);>> I think for api completenes we should also have a> dma_fence_remove_callback function.We did originally but Maarten found it was difficult to deal withproperly when the gpu's hang. I think his alternative was just torequire the hung driver to signal the fence. I had kicked around theidea of a dma_fence_cancel() alternative to signal that could pass anerror thru to the waiting driver.. 
although not sure if the otherdriver could really do anything differently at that point.>> +>> +/**>> + * dma_fence_wait - wait for a fence to be signaled>> + *>> + * @fence: [in] The fence to wait on>> + * @intr: [in] if true, do an interruptible wait>> + * @timeout: [in] absolute time for timeout, in jiffies.>> I don't quite like this, I think we should keep the styl of all other> wait_*_timeout functions and pass the arg as timeout in jiffies (and also> the same return semantics). Otherwise well have funny code that needs to> handle return values differently depending upon whether it waits upon a> dma_fence or a native object (where it would us the wait_*_timeout> functions directly).We did start out this way, but there was an ugly jiffies roll-overproblem that was difficult to deal with properly. Using an absolutetime avoided the problem.> Also, I think we should add the non-_timeout variants, too, just for> completeness.>>> + *>> + * Returns 0 on success, -EBUSY if a timeout occured,>> + * -ERESTARTSYS if the wait was interrupted by a signal.>> + */>> +int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout)>> +{>> + unsigned long cur;>> + int ret;>> +>> + if (WARN_ON(!fence))>> + return -EINVAL;>> +>> + cur = jiffies;>> + if (time_after_eq(cur, timeout))>> + return -EBUSY;>> +>> + timeout -= cur;>> +>> + ret = check_signaling(fence);>> + if (ret == -ENOENT)>> + return 0;>> + else if (ret)>> + return ret;>> +>> + if (intr)>> + ret = wait_event_interruptible_timeout(fence->event_queue,>> + fence->signaled,>> + timeout);>> We have a race here, since fence->signaled is proctected by> fenc->event_queu.lock. There's a special variant of the wait_event macros> that automatically drops a spinlock at the right time, which would fit> here. 
Again, like for the callback function I think you then need to> open-code check_signalling to avoid taking the spinlock twice.yeah, this would work for thecall-check_signaling()-with-lock-already-held approach to get rid ofthe double lock..>> + else>> + ret = wait_event_timeout(fence->event_queue,>> + fence->signaled, timeout);>> +>> + if (ret > 0)>> + return 0;>> + else if (!ret)>> + return -EBUSY;>> + else>> + return ret;>> +}>> +EXPORT_SYMBOL_GPL(dma_fence_wait);>> +>> +static int sw_enable_signaling(struct dma_fence *fence)>> +{>> + /* dma_fence_create sets needs_sw_signal,>> + * so this should never be called>> + */>> + WARN_ON_ONCE(1);>> + return 0;>> +}>> +>> +static const struct dma_fence_ops sw_fence_ops = {>> + .enable_signaling = sw_enable_signaling,>> +};>> +>> +/**>> + * dma_fence_create - create a simple sw-only fence>> + * @priv: [in] the value to use for the priv member>> + *>> + * *priv)>> +{>> + struct dma_fence *fence;>> +>> + fence = kmalloc(sizeof(struct dma_fence), GFP_KERNEL);>> + if (!fence)>> + return NULL;>> +>> + __dma_fence_init(fence, &sw_fence_ops, priv);>> + fence->needs_sw_signal = true;>> +>> + return fence;>> +}>> +EXPORT_SYMBOL_GPL(dma_fence_create);>> diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h>> new file mode 100644>> index 0000000..e0ceddd>> --- /dev/null>> +++ b/include/linux/dma-fence.h>> @@ -0,0 +1,124 @@>> +/*>> + *_FENCE_H__>> +#define __DMA_FENCE_H__>> +>> +#include <linux/err.h>>> +#include <linux/list.h>>> +#include <linux/wait.h>>> +#include <linux/list.h>>> +#include <linux/dma-buf.h>>> +>> +struct dma_fence;>> +struct dma_fence_ops;>> +struct dma_fence_cb;>> +>> +/**>> + * struct dma_fence - software synchronization primitive>> + * @refcount: refcount for this fence>> + * @ops: dma_fence_ops associated with this fence>> + * @priv: fence specific private data>> + * @event_queue: event queue used for signaling fence>> + * @signaled: whether this fence has been completed yet>> + * 
@needs_sw_signal: whether dma_fence_ops::enable_signaling>> + * has been called yet>> + *>> + * Read Documentation/dma-buf-synchronization.txt for usage.>> + */>> +struct dma_fence {>> + struct kref refcount;>> + const struct dma_fence_ops *ops;>> + wait_queue_head_t event_queue;>> + void *priv;>> + bool signaled:1;>> + bool needs_sw_signal:1;>> I guess a comment here is in order that signaled and needs_sw_signal is> protected by event_queue.lock. Also, since the compiler is rather free to> do crazy stuff with bitfields, I think it's preferred style to use an> unsigned long and explicit bit #defines (ton ensure the compiler doesn't> generate loads/stores that leak to other members of the struct).yeah, good point.. I guess we should just change that to be a'unsigned long' bitmask.BR,-R>> +};>> +>> +typedef int (*dma_fence_func_t)(struct dma_fence_cb *cb, void *priv);>> +>> +/**>> + * struct dma_fence_cb - callback for dma_fence_add_callback>> + * @base: wait_queue_t added to event_queue>> + * @func: dma_fence_func_t to call>> + * @fence: fence this dma_fence_cb was used on>> + *>> + * This struct will be initialized by dma_fence_add_callback, additional>> + * data can be passed along by embedding dma_fence_cb in another struct.>> + */>> +struct dma_fence_cb {>> + wait_queue_t base;>> + dma_fence_func_t func;>> + struct dma_fence *fence;>> +};>> +>> +/**>> + * struct dma_fence_ops - operations implemented for dma-fence>> + * @enable_signaling: enable software signaling of fence>> + * @release: [optional] called on destruction of fence>> + *>> + * Notes on enable_signaling:>> + *. 
Any other errors will be treated as -ENOENT,>> + * and can happen because of hardware failure.>> + */>> +>> I think we need to specify the calling contexts of these two.>>> +struct dma_fence_ops {>> + int (*enable_signaling)(struct dma_fence *fence);>> I think we should mandate that enable_signalling can be called from atomic> context, but not irq context (since I don't see a use-case for calling> this from irq context).>>> + void (*release)(struct dma_fence *fence);>> Since a waiter might call ->release as a reaction to a signal, I think the> release callback must be able to handle any calling context, and> especially anything that calls dma_fence_signal.>>> +};>> +>> +struct dma_fence *dma_fence_create(void *priv);>> +>> +/**>> + * __dma_fence_init - Initialize a custom dma_fence.>> + * @fence: [in] The fence to initialize>> + * @ops: [in] The dma_fence_ops for operations on this fence.>> + * @priv: [in] The value to use for the priv member.>> + */>> +static inline void>> +__dma_fence_init(struct dma_fence *fence,>> + const struct dma_fence_ops *ops, void *priv)>> +{>> + WARN_ON(!ops || !ops->enable_signaling);>> +>> + kref_init(&fence->refcount);>> + fence->ops = ops;>> + fence->priv = priv;>> + fence->needs_sw_signal = false;>> + fence->signaled = false;>> + init_waitqueue_head(&fence->event_queue);>> +}>> +>> +void dma_fence_get(struct dma_fence *fence);>> +void dma_fence_put(struct dma_fence *fence);>> +>> +int dma_fence_signal(struct dma_fence *fence);>> +int dma_fence_wait(struct dma_fence *fence, bool intr, unsigned long timeout);>> +int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,>> + dma_fence_func_t func, void *priv);>> +>> +#endif /* __DMA_FENCE_H__ */>>>>>> _______________________________________________>> Linaro-mm-sig mailing list>> Linaro-mm-sig@lists.linaro.org>>>> --> Daniel Vetter> Mail: daniel@ffwll.ch> Mobile: +41 (0)79 365 57 48> _______________________________________________> dri-devel mailing list> 
dri-devel@lists.freedesktop.org> | http://lkml.org/lkml/2012/8/11/77 | CC-MAIN-2013-20 | refinedweb | 2,343 | 54.02 |
Qt Safe Renderer C++ Classes
Namespaces
Classes
Note: The implementation of the safety requirements concerns only content inside the SafeRenderer namespace.
Deprecated Qt Safe Renderer classes
The following Qt Safe Renderer C++ classes are now obsolete. Obsolete classes are no longer maintained. They are provided to keep old source code working, but they can be removed in a future release. We strongly advise against using these classes in new code.
Note: You can use QML states for changing the color and event positions. For more information, see Changing States and Important Concepts in Qt Quick - States, Transitions and Animations.
I have two entities, for example timing settings and orders.
@Entity
public class TimingSettings {
    @Basic
    private Long orderTimeout = 18000L; // for example, 18000 seconds
    ...
I'm at a loss of how to create a running sum of particular field after creating a criteria within a controller
I'm currently creating a set of records using:
...
Hi, I want to write an HQL query (in Hibernate). My problem is that the result of one query is produced using distinct; now I want the sum of one field carried out from that distinct query, but the distinct is also applied inside the sum method. Can I use a sub-table in HQL, e.g. ...
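All of these questions come down to aggregating one numeric field. As a rough illustration (the entity and field names below are invented, not taken from the questions): with JPA/Hibernate the usual answer for a total is to let the database do the work with a JPQL aggregate such as `select sum(o.amount) from Order o` (or `sum(distinct o.amount)` when duplicate values must be counted once), while a *running* sum over already-fetched records can be computed in plain Java:

```java
import java.util.List;

public class RunningSum {

    /** Cumulative (running) sum of already-fetched numeric values. */
    public static long[] runningSum(List<Long> values) {
        long[] out = new long[values.size()];
        long acc = 0;
        for (int i = 0; i < values.size(); i++) {
            acc += values.get(i); // add this record's value to the total so far
            out[i] = acc;
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. order amounts fetched by a criteria query (invented numbers)
        long[] sums = runningSum(List.of(10L, 20L, 5L));
        System.out.println(java.util.Arrays.toString(sums)); // [10, 30, 35]
    }
}
```

The design point: push plain totals down to the database, and only compute in Java what SQL/JPQL cannot express easily (such as a per-row cumulative sum on older Hibernate versions without window functions).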
Search - "happy birthday"
- I think I'll never going to get a devRant stress ball, so i made this instead with my pretty low budget (0.5$).
- I'm 40 years old today. Feels strange.. and getting older as a developer is not an advantage at all. But I got a wonderful little baby, a wife and a job I (almost always) like! So happy birthday to me 😊
- I was just scrolling down a Facebook feed and a part of a picture with Linus Torvalds appeared.
- Hell no! Not again, 2016, just fuck off already!!
And then... "Happy birthday!"
My heart skipped a beat.. that's how my girlfriend wished me a happy birthday.. she's a chemist
<?php
$happy="Happy";
$bday="Birthday";
$to="To";
$you="You";
$dear="Dear";
$boyfriend="Joseph";
for($i=0;$i<4;$i++){
if($i!=2)
echo $happy." ".$bday." ".$to." ".$you."<br>";
else
echo $happy." ".$bday." ".$dear." ".$boyfriend."<br>";
}
?>
- Every company should give a day off to employees for their birthdays.. Is it to much I'm asking for???
Well, happy birthday to me 🎉🎉
- Since no one has mentioned it - Happy birthday, Linux.
And thank you, who goes through fire and water debugging this enormous creature (this could've been worse, right?)
- Dad: "Happy birthday"
(hands over a box)
"here's your cake, now bake it"
Me: "Wha?"
fast forward to today, its now a linux meme
- 🎂🎈
Linus torvalds announcing his humble new (hobby) OS to minix newsgrp on this day 25yr ago .. Rest, my folks, is history!
Happy Birthday Linux
- theNox.age++
I wanted to screenshot some of the birthday wishes that were printed for me on the discord server early this morning but they're spread too much, thanks anyway! That was very cute of you all. Only people from devRant actually bothered to wish me a Happy Birthday this early (maybe I influenced it a little bit ... 😂) But it's true
Everyone remember that ThatDude's Birthday is on January 28th 😉
- So my girlfriend decided to surprise me with this cake... I'm happy with it but I feel violated as I'm PHP guy not .NET
- Steve Wilhite... The man who changed the world with his software: the GIF.
Today, is GIF's 30th birthday (15th June 1987)!!!!
Happy Birthday
- So it's December 31st. Also my birthday, but no one is going to celebrate it. Wish I could revert to an earlier version of the program to when people were happy to celebrate it like, friends and family.
- if (MONTH === 1 & DAY === 18) {
alert("Bro, it's your birthday");
} else {
alert("Bro, just go away, you're nothing special 😅");
}
- Happy birthday Linux well I'm not gonna be like other with cake and words compile it yourself that's what gentoo is for.
So just happy birthday.
- Skipped the night, I feel like an old person now.
15 minutes later my wife steps in, happy 56th birthday.
I feel ancient right now!
- The nerdiest way to say happy birthday to someone? Tell them to paste atob("SGFwcHkgQmlydGhkYXkh") in the chrome console/firebug.
- Happy birthday to me, happy birthday to me, happy biiirthday dear meee-eee, happy birthday to me! 🥳🎁🍰🎈
- I made a game (rather mod) called "wake up".
I should trademark it and whenever someone says or writes "wake up", I sue them
- Happy birthday devRant!
I miss those early days which had more rants but still, to the best dev community ever 🍺 🍻🍻🍻
- Today is my birthday and my company as a present will make me work on disgusting legacy code, stored procedures, impossible to debug and convoluted as fuck.
And everything needs to be released yesterday...
Happy birthday motherfucker.
- !Rant
ninePlusTen($date = '29/11/2017') {
if(preg_match("/(29.11)|(11.29)/",$date)){
$age++;
echo 'Happy birthday!';
}
}
- You know what sucks?
Having birthday...Literally not an hour ago and nobody showing up...At least nobody who you really cared about. Only people around are there because they knew you some years ago and expect to get a free drink...
Wanna know what sucks even more??
Being heartbroken and even though you felt horrible because of that person it is my minute of the year and the most I wish is her being by my site. Caring about me and just wishing me happy birthday.
Definitely the worst birthday ever but at least I'm drunk so that's that.
I just wanted to get that off my chest. Bye and have a nice evening y'all
- First: happy birthday, devRant.
Second: the more I use it, the more I would like to see implemented as a block-chain, a la Steemit
- Happy Birthday devRant!! 🎂🎉 Can't believe we're at 30,000 users!! This community feels much smaller (I mean that in a good way)
- 1. using "if... then... else..." When explaining something tru slack to non tech people
2. buying lamps i can program
3. dreaming abt my code
4. dreaming abt the solution
5. trying to make bot to send happy birthday msg
- For a very short moment i thought i have 585++, but then i realized i just liked this many rants :D
- Today is the birthday of Sir M. Visvesvaraya, one of the greatest engineers of all time. In India, we celebrate his birthday as Engineers' day. Happy Engineers' day to all engineers here:
- Happy birthday to the best community for developers. Thanks for all the laughs, friends and good times you have brought us all.
- Happy Birthday devRant. Still the best community for devs and future devs.
I hope this will still grow and become more recognized in the future
- void Birthday (int age){
cout<<"Happy Birthday\n";
if(age % 10 == 0)
{
cout<<"Holy shit, you've been alive for "<<age/10<<" decades";
}
}
- public class Celebrations {
public static void WishHappyBirthday(String toWho) {
if(toWho == null || toWho.isEmpty()){
System.out.println("Can't wish " +
"happy birthday to " +
"someone that doesn't exist");
return;
}
System.out.println("Happy Birthday " +
toWho + "!!!");
}
}
- When your code is accepted after you forgot about the comment about your boss being a Tw*t... Surprise.
- !rant
If (LocalDateTime.now().toString(DateTimeFormatter.format(“yyyy-MM-dd”)).equals(“2017/10/17”)){
human.age++;
System.out.println(“Happy birthday old bastard!”);
}
- I joined less that a week ago so have not had the pleasure of seeing devRant grow up from birth. Happy birthday devRant I hope to be around until your next 🎂
- Hey you just reminded me it's my Mom's birthday too! Happy birthday devRant (and Mom, but my Mom is not on devRant, that I know of, yet)!
- DateTime now = DateTime.Today;
int age = now.Year - bday.Year;
MessageBox.Show("Happy " + age.ToString() + " birthday, devRant!");
-
- They had a meeting without me for a production release that was happening tomorrow. Rescheduled the release on my wife's birthday without asking anyone. This was the response when I let this known "Happy Birthday honey, you get a night without me."
- It's my fucking chonky cat's (call him Cody, Chonko or fat bitch) birthday today so wish him happy birthday and send gifts 🐱🐱🐱
- Happy 5th Birthday CoderDojo 🎉😊
'You're opening doors for them' - The community that encourages kids to code is five today
-
- Happy Engineers' Day to all engineers here!
Sir M. Visvesvaraya, one of the greatest engineers of all time, was born this day in 1861. His contributions to the society through engineering is priceless. We celebrate his birthday as engineers' day here in India.
- Jordan Castillo: Happy birthday sire.
Eddie: Thank you Maestro.
Jordan Castillo: 😊 How do you feel now that your life's code just ran past an age++?
*10 minutes later.
Jordan Castillo: Oh what? I didn't see your reply.
Jordan Castillo: You maybe forgot to echo it out.
Eddie: Oh well Jordan, I feel like an iteration I guess.
/*
 * ipdsock.h
 *
 * IP Datagram: ipdsock.h,v $
 * Revision 1.11  2005/11/25 03:43:47  csoutheren
 * Fixed function argument comments to be compatible with Doxygen
 *
 * 2001/05/22 12:49:32  robertj
 * Did some seriously wierd rewrite of platform headers to eliminate the
 * stupid GNU compiler warning about braces not matching.
 *
 * Revision 1.6  1999/03/09 02:59:49  robertj
 * Changed comments to doc++ compatible documentation.
 *
 * Revision 1.5  1999/02/16 08:12:00  robertj
 * MSVC 6.0 compatibility changes.
 *
 * Revision 1.4  1998/11/14 06:28:09  robertj
 * Fixed error in documentation
 *
 * Revision 1.3  1998/09/23 06:20:43  robertj
 * Added open source copyright license.
 *
 * Revision 1.2  1996/09/14 13:09:20  robertj
 * Major upgrade:
 *   rearranged sockets to help support IPX.
 *   added indirect channel class and moved all protocols to descend from it,
 *   separating the protocol from the low level byte transport.
 *
 * Revision 1.1  1996/05/15 21:11:16  robertj
 * Initial revision
 *
 */

#ifndef _PIPDATAGRAMSOCKET
#define _PIPDATAGRAMSOCKET

#ifdef P_USE_PRAGMA
#pragma interface
#endif

/** Internet Protocol Datagram Socket class. */
class PIPDatagramSocket : public PIPSocket
{
  PCLASSINFO(PIPDatagramSocket, PIPSocket);

  protected:
    /**Create a TCP/IP protocol socket channel. If a remote machine address
       or a "listening" socket is specified then the channel is also opened.
     */
    PIPDatagramSocket();

  public:
  // New functions for class
    /**Read a datagram from a remote computer.

       @return TRUE if any bytes were sucessfully read.
     */
    virtual BOOL ReadFrom(
      void * buf,       ///< Data to be written as URGENT TCP data.
      PINDEX len,       ///< Number of bytes pointed to by #buf#.
      Address & addr,   ///< Address from which the datagram was received.
      WORD & port       ///< Port from which the datagram was received.
    );

    /**Write a datagram to a remote computer.

       @return TRUE if all the bytes were sucessfully written.
     */
    virtual BOOL WriteTo(
      const void * buf,     ///< Data to be written as URGENT TCP data.
      PINDEX len,           ///< Number of bytes pointed to by #buf#.
      const Address & addr, ///< Address to which the datagram is sent.
      WORD port             ///< Port to which the datagram is sent.
    );

// Include platform dependent part of class
#ifdef _WIN32
#include "msos/ptlib/ipdsock.h"
#else
#include "unix/ptlib/ipdsock.h"
#endif
};

#endif

// End Of File ///////////////////////////////////////////////////////////////
package org.apache.jetspeed.om.apps.coffees;

import org.apache.torque.om.Persistent;

/**
 * The skeleton for this class was autogenerated by Torque on:
 *
 * [Thu Apr 22 15:30:48 PDT 2004]
 *
 * You should add additional methods to this class to meet the
 * application requirements. This class will only be generated as
 * long as it does not already exist in the output directory.
 */
public class Coffees
    extends org.apache.jetspeed.om.apps.coffees.BaseCoffees
    implements Persistent
{
}
o Evaluate the advantages and disadvantages of the various decision-making tools listed (e.g., regular payback, discounted payback, net present value (NPV), internal rate of return (IRR), and modified internal rate of return).
o Describe a project scenario in which you would recommend one...
Ranking Projects: Using net present value (NPV) analysis, a manager can select those projects that will maximize the present value of future cash flows. NPV analysis is a powerful tool; however, surveys of current practice indicate that the internal rate of return (IRR) method enjoys
Calculating Net Present Value and Internal Rate of Return: Lancer Corp. has the following information available about a potential capital investment. Required: 1. Calculate and evaluate the net present value of this project. 2. Without making any calculations, determine whether
1. What do a positive net present value (NPV) and a negative NPV indicate about an investment? 2. When would you use an annuity factor in a net present value calculation instead of a present value factor for a single cash flow? 3. Explain how the internal rate of return and net present
Discuss the significance of recognizing the time value of money in the long-term impact of the capital budgeting decision. Or: Discuss how the internal rate of return (IRR) method differs from the net present value (NPV) method. Be sure to include an explanation of what the IRR method is and what
I started learning Java yesterday, and it's my first programming language to learn. I've covered most of the basics and such, but I'm stuck at one thing.
I have a program written that asks for a password.
I want it to compare the user input to the actual password using an if statement.
Here's what I have so far:
import java.util.Scanner;

public class UserInput {

    public static void main(String[] args) {
        System.out.println("Please Enter Your Name");
        Scanner name = new Scanner(System.in);
        String inName = name.nextLine();
        System.out.println("Welcome " + inName);

        System.out.println("Please enter your password");
        Scanner password = new Scanner(System.in);
        String inPassword = password.nextLine();

        if (inPassword.equals(password1)) { // i get an error here
            // ive tried making a string for the actual password but nothing seems to work
            System.out.println("access granted");
        } else {
            System.out.println("access denied");
        }
    }
}
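For what it's worth, the compiler error comes from `password1` never being declared anywhere. A sketch of one way to fix it (the constant name and the password value below are made-up placeholders, not anything from the original post):

```java
import java.util.Scanner;

public class UserInputFixed {

    // Placeholder secret; a real program would not hardcode this.
    static final String CORRECT_PASSWORD = "letmein";

    // Compare String contents with equals(), never with ==.
    static boolean isValidPassword(String entered) {
        return CORRECT_PASSWORD.equals(entered);
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.println("Please enter your password");
        if (in.hasNextLine()) {
            String inPassword = in.nextLine();
            System.out.println(isValidPassword(inPassword)
                    ? "access granted" : "access denied");
        }
    }
}
```

Putting the comparison in its own method also makes the behavior easy to test without typing input by hand.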
Tutorial: Groovy Functional Testing with Geb
Ellery Crane explores Groovy browser automation solution Geb and how it can be used to write easy and top-notch functional tests.
I have been developing web applications my entire career. Since the beginning, I've held automated testing to be one of the pinnacles of software development best practices. Unit testing in particular has been part of my development process since I was an undergraduate. One of the challenges I've experienced when developing for the web is that traditional unit testing only goes so far. Some of the most serious bugs in a web application tend to be things a unit test has no way to check for: a JavaScript error in a particular version of a specific browser, or perhaps an API change in a third party RESTful service that your application is consuming. Detecting these kinds of regressions requires testing on a wholly different level.
Enter functional testing. Rather than exercising individual units of code in isolation, a functional test drives the application from the outside, the way a user would. For example, a functional test might verify that navigating to a search engine page, inputting a particular search term in the search box, and then clicking the submit button takes you to a search results page that displays results with the selected term. The underlying application logic that actually performs the search is irrelevant to the test as long as the expected results show up.
This type of test can be exceptionally valuable if done correctly. If development on any component involved in the function being tested introduces a regression, the test will fail. It doesn’t matter whether the component lies in the front end (JavaScript or markup, primarily) or the back end—all of the application layers are being tested at the same time.
Unfortunately, tools for writing and executing functional tests have, historically, been notoriously cumbersome to use. Browser and environmental differences make it a challenge to run tests by developers on different computers, if they can be run at all. The intricacies of selecting and manipulating data in the DOM has also meant that, even after investing the time and effort into the writing of functional tests, they tend to be extremely brittle and hard for other developers to understand. As such, though I always yearned for my web applications to have a proper suite of automated functional tests to rely upon, I learned to do without.
Geb changed all of that. Not as clumsy or fragile as other browser automation frameworks, Geb is an elegant tool for a more civilized age. Geb's Page Objects and Groovy DSL make tests readable to the point that they're almost plain English. The encapsulation of content definition inside of those Page Objects also reduces duplication and fragility: when a page's markup changes, the tests need to be updated in only one place. What's more, Geb lets you define content using a powerful selector syntax familiar to anyone acquainted with jQuery.
In this article, I will provide an introduction to Geb and an overview of its use as a tool for functional testing. I will then present an example of Geb testing in action, and show how it can be used to mitigate problems in areas where other functional testing frameworks fall short.
Writing a Geb-Powered Functional Test
To get a sense of the power Geb brings to functional testing, let us consider a simple example application that consists only of a login page and user home page. The login page has a form that allows a user to enter their username and password. A successful form submission from the login page will log the user in and take them to the user home page. An unsuccessful form submission will redisplay the login page with an error message. Using Geb, we will write functional tests for both of these behaviors.
Listing 1 – login.html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "">
<html>
<head><title>Login Page</title></head>
<body>
    <h1 class="page-header">Login</h1>
    <ul class="errors">
        <!-- Only included if there were submission errors -->
        <!-- Any submission errors will be displayed as list item elements here -->
    </ul>
    <form id="login-form" action="checkLogin.html" method="post">
        <label for="username-field">Username</label>
        <input type="text" name="username" id="username-field" />
        <br />
        <label for="password-field">Password</label>
        <input type="password" name="password" id="password-field" />
        <br />
        <input type="submit" value="Submit" />
    </form>
</body>
</html>
Listing 1 contains the markup of our login page. While a functional test for this page could be written using any testing framework that supports Groovy, Geb has built in integrations with several of the most popular, including Spock, JUnit, TestNG, EasyB and Cuke4Duke. I will be using Spock for this example, as its highly human-readable specification language dovetails perfectly with Geb’s own Groovy DSL. We will begin with a simple test that bears resemblance to many traditional (and fragile) functional tests as an introduction to Geb’s Browser object and jQuery-ish Navigator API, and then revisit it later to demonstrate how using Geb’s Page Object pattern can reduce fragility and enhance readability.
Listing 2 – SimpleLoginSpec.groovy
import geb.spock.GebSpec

class SimpleLoginSpec extends GebSpec {

    def "should login with valid username and password"() {
        when:
        go "login.html"

        then:
        $(".page-header").text() == "Login"

        when:
        $("#login-form input[name=username]").value("user1@example.com")
        $("#login-form input[name=password]").value("goodpassword")
        $("#login-form input[type=submit]").click()

        then:
        $(".page-header").text() == "User Home Page"
    }

    def "should redisplay form with an error message when password is bad"() {
        when:
        go "login.html"

        then:
        $(".page-header").text() == "Login"

        when:
        $("#login-form input[name=username]").value("user1@example.com")
        $("#login-form input[name=password]").value("badpassword")
        $("#login-form input[type=submit]").click()

        then:
        $(".page-header").text() == "Login"
        $(".errors li").size() == 1
        $(".errors li")[0].text() == "Invalid username or password"
    }
}
Listing 2 contains the source code for our simple test class. Let’s begin by going over what’s happening. Our test class, SimpleLoginSpec, extends GebSpec, which is a Geb-furnished extension to Spock’s Specification class. For those unfamiliar with the anatomy of a Spock specification, it should be explained that each method defines a test case. Using a language feature of Groovy, the method names can be defined as strings to provide expressive, human readable descriptions of what each test is doing. Each method contains “when” and “then” blocks — the code in “when” blocks sets up data and performs actions, while each line in a “then” block is an implicit assert statement. If any line in a “then” block returns false (as defined using Groovy Truth), the test fails.
The first test case verifies that submitting a valid username and password takes the user to the user home page. The very first step in testing this behavior is navigating to the login page. We do this by using GebSpec’s go() method. Under the hood, this is actually a call to the go() method on an instance of Geb’s Browser class, which itself wraps an instance of WebDriver. GebSpec makes use of Groovy’s methodMissing() and propertyMissing() methods to forward method calls to the Browser object, reducing the noise in the test. The go() method drives the browser to the URL specified; in this case, since the URL is not an absolute path, it will be appended to the value of Geb’s baseUrl property. You can set the baseUrl either using a system property or by making use of Geb’s excellent Config Script.
After the browser has navigated to the page we specified in the “when” block, the first “then” block checks that the page we’re at is actually our login page. This is also our first taste of Geb’s “jQuery-ish” Navigator API, which is “a jQuery inspired mechanism for finding, filtering and interacting with DOM elements”. The $() method, like the go() method, is being delegated to the Browser object. The arguments to this method can be a combination of CSS selectors, indexes, and attribute/text matchers. Invoking the $() method returns one or more of Geb’s Navigator objects. Navigator objects provide access to the data contained in the matched content, as well as methods to allow additional filtering and matching. In this case, to verify that we are at the login page, we use the $() method to select the element on the page with the CSS class “page-header”, and then verify that the text in that element matches our that expected on our login page.
The second “when” block in our test uses the navigator API to select the username and password form inputs, and then uses the Navigator object’s value() method to set their values to those of the username and password we’re testing. If you want to read the value of a form input instead of write to it, calling value() without arguments will return the input’s current value. After filling in values for our username and password, we finish by selecting the submit button and instructing the browser to click on it using the click() method. Finally, the last “then” block simply verifies that the content of page header is now “User Home Page”, indicating that we have logged in successfully.
The second test case is very similar to the first, save that we are setting the value of the password input to a password we expect to be invalid. Rather than test that we have reached the user home page, the final “then” block confirms that we are still on the login page and that an error is displayed. The selection of the errors content demonstrates an important feature of the navigator API—namely, that in the case where the selector matches multiple elements, we can treat it as a collection and make use of all of the regular Groovy collection methods.
While our SimpleLoginSpec tests our login page’s functionality, it still leaves much to be desired. The test code is a mess of selectors and hard coded text values. In this case, most of the selectors are understandable semantically, but this is hardly something that can be guaranteed in every test we might write. Indeed, as applications grow in complexity, the selectors necessary to precisely identify content often become far too convoluted to be human readable. Even were that not the case, embedding the selectors and expected values directly in the test code like this makes the test extremely fragile. If, for example, the id of the login form element were to change in the markup, both test cases would fail regardless of whether or not the login form still worked successfully. Furthermore, fixing such a test failure would require changes to almost every line of test code! Thankfully, Geb provides a solution.
Understanding the Page Object Pattern
Geb’s implementation of the Page Object Pattern is one of its more compelling features from the perspective of a functional tester.
Page Objects can most easily be thought of as an object hierarchy modeling the pages in a web application. Most web applications can be conceptually broken into different pages very easily; most applications have pages a ‘login page’, a ‘home page’, a ‘user information page’ and so on. Using Geb, we would represent these pages with unique classes that extend Geb’s Page class. Each Page class implementation defines the content and functionality present on the web page it is modeling. From a semantic standpoint, the specific nature of the page content is irrelevant; it doesn’t matter whether the ‘submitButton’ content is actually a button, an input element of type ‘submit’, or even a hyperlink. Externally, all that matters when using the Page object is that there is a ‘submitButton’ that can be accessed and interacted with.
The Page class not only serves as a place to define what content is on the page, but how that content is represented in the DOM. In the simple case, each piece of content defined in the Page class is mapped to a selector that is used to actually retrieve that content from the web page. In this way, the implementation details of the content are encapsulated entirely within the Page class and hidden from the outside world.
Listing 3 – LoginPage and UserHomePage
import geb.Page class LoginPage extends Page{ static url = "login.html" static at = { header.text() == "Login" } static content = { header { $(".page-header") } loginForm { $("#login-form") } username { loginForm.username } password { loginForm.password } submitButton(to: UserHomePage) { loginForm.find("input", type: "submit") } errors(required:false) { $(".errors li") } invalidUsernameOrPasswordError(required:false) { errors.filter(text:contains("Invalid username or password")) } } } class UserHomePage extends Page{ static url = "userHome.html" static at = { header.text() == "User Home Page" } static content = { header { $(".page-header") } } }
Listing 3 contains the source code to Page object representations of our login page and user home page. After examining how everything is wired together, we will revisit our original functional test using these Page objects to drive our test code.
Every Page subclass can define several static properties that describe the page’s attributes and behavior. The url property defines the actual URL to associate with this page, and is used by the Browser object to navigate to the page when you invoke the to() method. As before, we can define relative URLs that are used in conjunction with the baseUrl config property to construct the full URL. The at property is a closure that is called whenever the Page’s verifyAt() method is invoked, and should return a boolean value indicating whether the browser is actually at the page or not (though it’s often actually smarter to place an explicit assert statement in the closure rather than returning true—this allows you to levy Groovy’s power assert feature to get more information about a test failure). The content static property is a closure that defines content on the page using Geb’s content DSL. The basic structure to this DSL is:
«name» { «definition» } or: «name»(«options map») { «definition» }
Each named piece of content is followed by a closure that returns whatever data is associated with that content. Optionally, you can provide a map of options that are used to aid Geb in interacting with the content appropriately. The submitButton in our LoginPage, for instance, specifies that it has a “to” value of UserHomePage. This tells Geb that clicking on that content can result in navigating to the specified Page so that you do not have to explicitly tell the Browser that the page has changed. The various errors content all specify a “required” value of false. This tells Geb that the content will not always be on the page. If you fail to specify that a particular piece of content is not required, a RequiredPageContentNotPresentException will be thrown when that content is accessed.
Content defined in the content closure can be accessed as though it were a property on the Page class itself. It can even be referenced by other content definitions or from within the at closure, such as how the username, password, and submitButton content all reference the loginForm content and the at closure references the header content. The username and password content definitions also illustrate Geb’s “form control shortcuts” feature: a Navigator instance referencing a form element treats the form’s inputs as implicit properties on the Navigator object with the same names as their “name” attribute. Hence, the following two selectors are identical:
$("form").find("input", name:"username") $("form").username.
Comparing the Tests
Consider the differences between SimpleLoginSpec and LoginSpec. By using page objects and content definition in LoginSpec, we were able to test the same functionality as SimpleLoginSpec in an incredibly more readable and robust fashion. LoginSpec’s test code almost reads as a plain English description of what the test is doing and expecting. SimpleLoginSpec may test the same behavior, but making sense of the raw selectors is at best difficult and at worst can lead to heaps of wasted development time spent simply trying to figure out what the test is attempting to do. Using a page object model to expose the semantics of the page rather than the implementation details results in tests that can actually serve as application documentation. In many cases, such tests can be more valuable as clear documentation of expected application behavior than they are as regression tests!
I discussed earlier how selectors in SimpleLoginSpec were extremely sensitive to change, resulting in brittle tests that require large amounts of refactoring to correct failures caused by minor disturbances to the markup. Consider the impact of changing the id of the login form from “login-form” to “user-sign-in-form”. LoginSpec would fail, just as SimpleLoginSpec would. Unlike SimpleLoginSpec, however, LoginSpec can be fixed without changing a single line of test code. In fact, the only change needed would be the line in LoginPage that defines the loginForm content:
loginForm { $(“#user-sign-in-form”) }
Because all of the other content definitions reference loginForm rather than include the form’s id explicitly, no other changes are needed. Certainly, there may be some breaking markup changes that can be a pain to refactor. The page object model, however, ensures that most of that work is encapsulated entirely within the Page class and not within the test code.
The functional tests I’ve presented here only scratch the surface of the gains that are possible using Geb. A well-defined, extensible page object model contributes more than just robust and readable functional tests. Other developers can write their own tests more quickly and concisely by using existing Page classes. Refactoring commonly used page content into Module classes that can be reused across multiple Pages can reduce code duplication. Furthermore, Page objects are still traditional GroovyObjects and can make use of all of the standard object oriented features—inheritance, method definition and so on—to become extremely powerful and reusable testing components. When you start to think of modelling your web pages in the same way you would any other data, your functional tests can reap the same object-oriented programming benefits that you’re used to in your application code.
Conclusions
Any serious web application developer should heed the benefits of automated functional testing. Geb is the most elegant and powerful browser automation tool available today. Proper use of Geb’s Page Object implementation can yield extremely readable and robust functional tests that can both detect regressions and serve as documentation of application behavior. Geb’s expressive content definition DSL and jQuery-ish Navigator API make writing such tests a breeze. If you’re interested in getting started with Geb yourself (and how could you not be?), The Book Of Geb contains all of the information you could hope for and more.
If you still get stuck, fear not—the Geb mailing list is extremely active, responsive and friendly. You owe it to yourself to explore everything Geb has to offer, because it’s the tool your web application deserves. With Geb, writing clear, powerful and maintainable functional tests has never been easier.
2 Comments on "Tutorial: Groovy Functional Testing with Geb"
Article was helpful. but want detail how will i select the javascript popup. actually i am doing automation for a company app.
Article was helpful. but want detail how will i select the javascript popup. actually i am doing automation for a company app. | https://jaxenter.com/tutorial-groovy-functional-testing-with-geb-104382.html?replytocom=10753 | CC-MAIN-2019-30 | refinedweb | 3,201 | 52.19 |
I didn't get any further. In the end, I just used the python docker image
and mounted my working directory so I can test the code.
I changed it to ANONYMOUS and I ran into the same error, though to be
frank, I don't really understand the difference. Do I need to make any more
changes to test this properly? I'm still trying to read from
amqps://<policy>:<key>@<namespace>.servicebus.windows.net/
<topic>/Subscriptions/<subscription>
On Tue, Jan 30, 2018 at 7:20 PM Roddie Kieley <rkieley@gmail.com> wrote:
> Did you have any further luck after? I had suspected that if you were to
> utilize ANONYMOUS sasl instead of PLAIN that that might avoid the issue
> mentioned above, but wanted to take a look at the specific container image
> you were using to confirm what was inside that would cause it to work ok
> vs. the macports install to not.
>
> On Fri, Jan 26, 2018 at 2:20 PM, George David <george@miamisalsa.com>
> wrote:
>
> > I'm not exactly sure what you're asking. I used the official python
> 2.7.14
> > docker image and installed python-qpid-proton using pip. You can see
> what's
> > available here:
> > `
> >
> > On Thu, Jan 25, 2018 at 7:01 PM Roddie Kieley <rkieley@gmail.com> wrote:
> >
> > > Hi, while I have not run into your specific problem I had previously
> run
> > > into some issues when utilizing the cyrus sasl implementation on OSX
> > which
> > > I documented via PROTON-1695 - '[OSX] Cyrus SASL plugins do not load
> > > leading to missing mechanisms' [1]. It appears that you are utilizing
> the
> > > sasl plain mechanism which I did have issues with.
> > >
> > > Looking at the python docs for running on OSX I'm not seeing the link
> for
> > > the dockerfile, can you point me in the right direction?
> > >
> > >
> > > Roddie
> > > ---
> > > [1] -
> > > [2] -
> > >
> > > On Thu, Jan 25, 2018 at 1:52 PM, George David <gsalsero@gmail.com>
> > wrote:
> > >
> > > > macOs High Sierra 10.13.12
> > > > python: 2.7.14 (macports)
> > > > qpid-proton: 0.19.0 (macports)
> > > > python-qpid-proton: 0.19.0 (pipenv)
> > > >
> > > > I created two python scripts, 1 to write to a service bus topic and 1
> > to
> > > > read. When I try to either read or write, Here is a snippet of code I
> > use
> > > > to send a message:
> > > >
> > > > conn = BlockingConnection(broker, allowed_mechs="PLAIN")
> > > > receiver = conn.create_receiver(entity_name)
> > > > msg = receiver.receive()
> > > > receiver.accept()
> > > > conn.close()
> > > >
> > > > I get the following error:
> > > >
> > > > Traceback (most recent call last):
> > > > File "send.py", line 30, in <module>
> > > > conn = BlockingConnection(broker, allowed_mechs="PLAIN")
> > > > File
> > > > "[redacted]/.local/share/virtualenvs/amqp-service-bus-
> > > > lvrxBNPB/lib/python2.7/site-packages/proton/utils.py",
> > > > line 226, in __init__
> > > > msg="Opening connection")
> > > > File
> > > > "[redacted]d/.local/share/virtualenvs/amqp-service-bus-
> > > > lvrxBNPB/lib/python2.7/site-packages/proton/utils.py",
> > > > line 300, in wait
> > > > "Connection %s disconnected: %s" % (self.url, self.disconnected))
> > > > proton.ConnectionException: Connection amqps://send:[redacted]@[
> > namespace
> > > > redacted].servicebus.windows.net:amqps/ disconnected:
> > > > Condition('proton:io', 'getaddrinfo([namespace-redacted].
> > > > servicebus.windows.net, amqps): nodename nor servname provided, or
> not
> > > > known')
> > > >
> > > >
> > > > I have tested this code successfully on my ubuntu machine and using
> the
> > > > official python:2.7.14 docker image running on my mac.
> > > >
> > > >
> > > > My initial attempts to use qpid proton was to install
> > python-qpid-proton.
> > > > That failed because clang couldn't find "sys/timerfd.h". Then I found
> > > that
> > > > macports offered a qpid-proton. I installed that then installed the
> > > python
> > > > package with no issues.
> > > >
> > > > Has anyone else has run into this?
> > > > Any tips to help me debug this issue?
> > > >
> > > > Thanks!
> > > >
> > >
> >
> | http://mail-archives.apache.org/mod_mbox/qpid-users/201801.mbox/%3CCANggrQvc-Cz8++Dx9YWXR0xjRNFCcbAzeY4hm_73Bos7=WTMQg@mail.gmail.com%3E | CC-MAIN-2019-26 | refinedweb | 587 | 58.99 |
The C++ needs logical operators to control the flow of the program. The expressions involving logical operators evaluate to boolean value – true or false. Then the program makes decisions based on the outcome.
There are three types of logical operators.
Note that there is difference between bitwise AND and logical AND. The same applied to bitwise OR, complement.
Logical AND
The two operands of logical expression evaluates to true if both the operands are true, otherwise it is false.
A logical expression with two or more logical expression as operands is called a compound expression. When the two operands are individual expressions then each of them must evaluate to true so that the compound logical expression is true.
(expression1) && (expression2)
The data type for both the expression must be same. If expression1 is integer, then expression must be an integer. In case, one of the expression is character data type, then it is automatically converted to integer equivalent by the compiler.
(a && 34)
becomes
(97 && 34)
The ASCII value for ‘a’ is 97.
The all possible outcome of logical AND operation are:
Logical OR
The logical expression with logical OR operator evaluates to true if at least one operand is true. Otherwise, it is false.
A compound logical expression with two or more expression gives a boolean output – true if one of the expression evaluates to true. The data type of each operand must be same except char type.
(expression1) || (expression2)
All possible outcome of the expression is given below.
Not Operator
The
Not operation is a special operation which negates the boolean value of any expression or variable.
expression1 = true
then
!(expression1) = false
The
Not operator can be use anywhere to negate the existing value of a variable or expression. The table below gives all combination of output for the
Not operator.
Example Program: Logical Operators
#include <cstdlib> #include <iostream> using namespace std; int main() { //Variable Declarations int a,b,c,d; //Variable Initialization a = 100; b = 90; c = 30; d = 20; // Logical AND if( (a > b) && (c > d)) { cout << "This logical AND statement has value = True" << "\n"; } else { cout << "This logical AND statement has value = False" << "\n"; } // Logical OR if( (a < b) || (c > d)) { cout << "This logical OR statement has value = True" << "\n"; } else { cout << "This logical OR statement has value = False" << "\n"; } // NOT operation if( !(a > b)) { cout << "This NOT statement has value = True" << "\n"; } else { cout << "This NOT statement has value = False" << "\n"; } system("PAUSE"); return EXIT_SUCCESS; }
Output:
This logical AND statement has value = True This logical OR statement has value = True This NOT statement has value = False | https://notesformsc.org/c-plus-plus-logical-operators/ | CC-MAIN-2021-04 | refinedweb | 432 | 53.41 |
how to compile web service in command line
Discussion in 'ASP .Net Web Services' started by mei xiao, compile sources VB.NET & C# with command line?Alfonso Melchionna, Nov 19, 2004, in forum: ASP .Net
- Replies:
- 3
- Views:
- 5,914
- John Azzolina
- Dec 3, 2004
Compile asp.net VBNet using vbc.exe command line errorDave Willock, Jan 10, 2004, in forum: ASP .Net
- Replies:
- 1
- Views:
- 4,058
- John Timney \(Microsoft MVP\)
- Jan 10, 2004
Compile ASP.NET via command line=?Utf-8?B?S2V2aW4=?=, Jun 14, 2004, in forum: ASP .Net
- Replies:
- 1
- Views:
- 6,291
- Martin Marinov
- Jun 14, 2004
command line compile doesn't see import namespacesJimO, Jun 13, 2006, in forum: ASP .Net
- Replies:
- 1
- Views:
- 405
- David Hogue
- Jun 13, 2006
cant compile on linux system.cant compile on cant compile onlinux system.Nagaraj, Mar 1, 2007, in forum: C++
- Replies:
- 1
- Views:
- 912
- Lionel B
- Mar 1, 2007 | http://www.thecodingforums.com/threads/how-to-compile-web-service-in-command-line.783477/ | CC-MAIN-2014-52 | refinedweb | 154 | 78.35 |
Want more? Here are some additional resources on this topic:
Namespaces are heavily used in C# programming in two ways. First, the .NET Framework uses namespaces to organize its many classes, as follows:
System.Console.WriteLine("Hello World!");
System is a namespace and Console is a class contained within that namespace. The using keyword can be used so that the entire name is not required, like this:
using System;
Console.WriteLine("Hello");
Console.WriteLine("World!");
For more information, see the topic");
}
}
}.
See the following topics for more information on namespaces:
Using Namespaces (C# Programming Guide)
How to: Use the Namespace Alias Qualifier (C# Programming Guide)
How to: Use the My Namespace (C# Programming Guide)
For more information, see the following sections in the C# Language Specification:
9 Namespaces | http://msdn.microsoft.com/en-us/library/0d941h9d(VS.80).aspx | crawl-002 | refinedweb | 128 | 50.97 |
Old Berkeley DB size > 11GB : Not able to retrieve data929330 Apr 9, 2012 12:38 PM
Hi,
This content has been marked as final. Show 9 replies
1. Re: Old Berkeley DB size > 11GB : Not able to retrieve data928990 Apr 9, 2012 10:48 PM (in response to 929330)Below is some additional informatin to enable you to respond to my question.
This is for Berkeley DB version that is at least 5 years old. I do not know the exact verion and do not know how to find one. This is not for the Java Edition or the XML edition.
Below is what I am doing in Ruby:
db = nil
options = { "set_pagesize" => 8 * 1024,
"set_cachesize" => [0, 8024 * 1024, 0]}
puts "starting to open db"
db = BDB::Btree.open(ARGV[0], nil, 0, options)
if(db.size < 1)
puts "\nNothing to dump; #{ARGV[0]} is empty."
end
puts "progressing with the db"
myoutput = ARGV[1]
puts "allocating the output file #{myoutput}"
f = File.open(myoutput,"w")
i = 0
iteration = 0
puts "starting to iterate the db"
db.each do |k, v|
a = k.inspect
b = v.inspect
f.puts "#{a}|#{b}"
i = i+1
if (i>1000000)
iteration = iteration + 1
puts "iteration #{iteration}"
i = 0
end
end
This only outputs about 26.xx million records. I am sures there are more than 50 million entries in the database.
I also tried some other approaches but nothing seems to work. I end up getting only 26.xx million entries in the output.
In some case, I managed to get it to output more records, but after 26.xx million, everything is output as duplicate entries so they are of no use to me.
The Ruby is 32 bit version. I tried this on Windows 7 (64 bit) and also on RedHat Linux 5 (64 bit version).
Thanks
Harsh
2. Re: Old Berkeley DB size > 11GB : Not able to retrieve data929722 Apr 10, 2012 6:14 PM (in response to 929330)Some more information about our problem:
The BDB is an old version, as noted previously. We are definitely not using the Java or XML version. Our BDB is large, over 11GB. The behavior is that any attempt with a script (in either Ruby or Perl) simply stops processing after about 26.xxM records or so. There are no errors thrown, it just stops processing and exits. It behaves as if it can't iterate past this special point for some reason. We have tried a multitude of different approaches using Ruby and Perl scripts, but nothing seems to get past the output of only 26.xxM or so rows. We've tried different iteration methods, such as db.each, db.cursor, etc etc. Still, consistently output 26M and then stop processing. We know this is not the full set of data inside the BDB, there is much much more.
3. Re: Old Berkeley DB size > 11GB : Not able to retrieve dataAshok.Ora-Oracle Apr 10, 2012 6:23 PM (in response to 929722)Can you reproduce the behavior with a C program?
Ashok Joshi
4. Re: Old Berkeley DB size > 11GB : Not able to retrieve data"Oracle, Sandra Whitman-Oracle" Apr 10, 2012 8:21 PM (in response to Ashok.Ora-Oracle)Hello,
In addition to the request to reproduce the behavior with C,
lets try to find the version you are working with. You should
have access to the Berkeley DB utilities. Do you know where
they are located? Please find the location of the utilities like
db_stat and let me know what is available. There is also a
method you can invoke in C to get the version, but maybe we can
try from the utilities first.
Thanks,
Sandra
5. Re: Old Berkeley DB size > 11GB : Not able to retrieve data929722 Apr 16, 2012 5:58 PM (in response to "Oracle, Sandra Whitman-Oracle")Here are our results from db_stat. The BDB name is "ExpId"
53162 Btree magic number
8 Btree version number
Big-endian Byte order
Flags
2 Minimum keys per-page
8192 Underlying database page size
2031 Overflow key/data size
4 Number of levels in the tree
151M Number of unique keys in the tree (151263387)
151M Number of data items in the tree (151263387)
9014 Number of tree internal pages
24M Number of bytes free in tree internal pages (68% ff)
1304102 Number of tree leaf pages
3805M Number of bytes free in tree leaf pages (64% ff)
0 Number of tree duplicate pages
0 Number of bytes free in tree duplicate pages (0% ff)
0 Number of tree overflow pages
0 Number of bytes free in tree overflow pages (0% ff)
0 Number of empty pages
0 Number of pages on the free list
6. Re: Old Berkeley DB size > 11GB : Not able to retrieve data929722 Apr 16, 2012 5:59 PM (in response to "Oracle, Sandra Whitman-Oracle")Also, we wrote a C program and it fails also. 32 bit.
7. Re: Old Berkeley DB size > 11GB : Not able to retrieve data"Oracle, Sandra Whitman-Oracle" Apr 16, 2012 6:34 PM (in response to 929722)Hello,
Since you can find the utilities please do the following:
db_verify -V
That will identify the BDB version which is the first thing we need
to know. I ran db_verify -V on an older release and got;
db_verify -V
Sleepycat Software: Berkeley DB 4.3.29: (September 6, 2005)
Thanks,
Sandra
8. Re: Old Berkeley DB size > 11GB : Not able to retrieve data931679 Apr 19, 2012 5:21 AM (in response to 929330)Hi Sandar,
Our bdb version is Sleepycat Software: Berkeley DB 4.3.29: (February 19, 2009)
I tried to trace through the c code when doing db_dump and
I noticed that an exception "DB_NOTFOUND error code (-30988)" was thrown in bt_cursor.c, method __bamc_next(), when the 24million-th record was fetched.
for (;;) {
/*
* If at the end of the page, move to a subsequent page.
*
* !!!
* Check for >= NUM_ENT. If the original search landed us on
* NUM_ENT, we may have incremented indx before the test.
*/
if (cp->indx >= NUM_ENT(cp->page)) {
if ((pgno = NEXT_PGNO(cp->page)) == PGNO_INVALID)
return (DB_NOTFOUND);
ACQUIRE_CUR(dbc, lock_mode, pgno, 0, ret);
if (ret != 0)
return (ret);
cp->indx = 0;
continue;
}
break;
}
return (0);
}
I hope this piece of information helps in your investigation.
Thanks and regards,
Gary
9. Re: Old Berkeley DB size > 11GB : Not able to retrieve data931679 Apr 20, 2012 1:47 AM (in response to "Oracle, Sandra Whitman-Oracle")Hi Sandra,
One more note to my last post.
I was using 5.3.1.5 db_dump to do data dumping. So, both 5.3.1.5 and 4.3.29 fail on accessing the big BDB.
Regards,
Gary | https://community.oracle.com/message/10288228 | CC-MAIN-2017-17 | refinedweb | 1,120 | 72.87 |
Mokka mit Schlag � Chameleon schemas considered harmful.
If you believe there is a single, more important, absolute requirement in the land of XML than that of the proper usage of XML namespaces: You obviously don’t understand XML.
That said, there’s got to be at least some sort of reasonable explanation to the mentioned madness quoted above, doesn’t there?
Somebody care to clue me in, cuz’ I have to admit, at first sight it very much seems like a few loose screws have rattled off the W3C XHTML2/XForms working groups wagon wheel, placing some serious doubt upon the ability of these same mentioned groups to produce a quality specification that even closely resembles any of their 1.0 counterparts.
Anybody?
The problem is with the original assumption: that XML namespaces itself isn't horribly broken, resulting in attempts to route around the damage. -m
The learning curve for SGML was cited as the reason for making HTML so flexible and easy that to this day, even it's inventors can't control the mess that it makes. When I asked on the XML-Dev list for the five biggest problems with XML, Namespaces consistently ranked number one. Pick any two:
2. Namespaces are too hard.
2. Namespaces don't work reliably.
2. Namespaces are badly designed.
2. Namespaces should be replaced with something better.
2. Namespaces are as good as it will get in a polyglot document.
2. The accretion of markup designs for web systems that began by discarding SGML in favor of simpler systems to achieve the most rapid expansion possible on networks of W3C-controlled systems was witless and made by people who technically did not understand what they were proposing nor did the companies that supported it.
2. The problem of doing the simplest thing possible is that it is the simplest thing possible at the moment and once it is fielded you can't take it back even after the moment is past.
I'd like to say I feel badly about that. I don't. I accept that simpleminded solutions to complex problems get the same results in technology and politics. Slow and steady design with incremental small changes are the best approach to both but that wise approach seldom wins one a Nobel Prize, a MacArthur Grant, or a world of praise from the companies that profit selling the solutions for the discards of the last simple solution.
Get used to it.
len
I'm surprised there hasn't been more activity on this issue yet. I can understand a rationale for a reply like this, but I'm not entirely sure that lowering the bar for the adoption of XForms by what I would consider to be a trivial amount merits the implementation of chameleon schemas - especially since other vocabularies that could be embedded in a composite XHTML document (like RDF, SVG, XLink, XInclude and RDDL, just off the top of my head) require the acknowledgment of namespaces as well.
@Micah,
>> that XML namespaces itself isn't horribly broken, resulting in attempts to route around the damage.
@dorian,
>> especially since other vocabularies that could be embedded in a composite XHTML document (like RDF, SVG, XLink, XInclude and RDDL, just off the top of my head) require the acknowledgment of namespaces as well.
@len,
I'm guessing one of the O'Reilly internal folks must have snatched it from the bitter clutches of the junk comment system, as this is the first time I am seeing this. My apologies for the late reply!
So I am beginning to get a sense of what you have suggested in various other comments over the past year. I must say that it seems I have simply taken too much for granted as to who the heroes are and who the villains are in all of this.
It's eye opening to say the least.
Thanks for your help in bringing this to the surface!
A junk comment system? When the machines get to rate opinions we are hobbled in ways too numerous to consider. I understand the reasons for them but I'll never learn to like them. It is why I prefer email technical lists over annotated blogs. If sharing in a conversation is the goal, there can be no 'more equal pigs'. The web meritocracy quickly comes to resemble the American two party system that way.
@len,
Oh how we think alike > :D
I have a function that takes the argument NBins. I want to make a call to this function with a scalar 50 or an array [0, 10, 20, 30]. How can I identify, within the function, what the length of NBins is? Or, said differently, whether it is a scalar or a vector?
I tried this:
>>> N=[2,3,5]
>>> P = 5
>>> len(N)
3
>>> len(P)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'int' has no len()
As you see, I can't apply len to P, since it's not an array... Is there something like isarray or isscalar in Python?
thanks
>>> isinstance([0, 10, 20, 30], list)
True
>>> isinstance(50, list)
False
To support any type of sequence, check collections.Sequence instead of list.

Note: isinstance also supports a tuple of classes; a check like type(x) in (..., ...) should be avoided and is unnecessary. You may also want to check not isinstance(x, (str, unicode)).
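A small, self-contained sketch along those lines (the helper name is mine, not from the answers): it treats strings as scalars, and anything else that is a sequence as a vector.

```python
from collections.abc import Sequence  # on very old Pythons this lived in collections

def is_scalar(value):
    """Return True for scalar-like values, False for sequences."""
    # str is itself a Sequence, so test it first if strings should count as scalars.
    if isinstance(value, str):
        return True
    return not isinstance(value, Sequence)

print(is_scalar(5))            # True
print(is_scalar("abc"))        # True
print(is_scalar([0, 10, 20]))  # False
print(is_scalar((1, 2)))       # False
```

Note this deliberately does not catch numpy arrays; the numpy-aware answers below handle that case.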
Previous answers assume that the array is a python standard list. As someone who uses numpy often, I’d recommend a very pythonic test of:
if hasattr(N, "__len__")
Combining @jamylak and @jpaddison3’s answers together, if you need to be robust against numpy arrays as the input and handle them in the same way as lists, you should use
import numpy as np

isinstance(P, (list, tuple, np.ndarray))
This is robust against subclasses of list, tuple and numpy arrays.
And if you want to be robust against all other subclasses of sequence as well (not just list and tuple), use
import collections
import numpy as np

isinstance(P, (collections.Sequence, np.ndarray))
Why should you do things this way with isinstance and not compare type(P) with a target value? Here is an example, where we make and study the behaviour of NewList, a trivial subclass of list.
>>> class NewList(list):
...     pass
...
>>> x = NewList([1, 2, 3])
>>> y = list([1, 2, 3])
>>> type(x) is list
False
>>> type(y) is list
True
>>> type(x).__name__
'NewList'
>>> isinstance(x, list)
True
Despite x and y comparing as equal, handling them by type would result in different behaviour. However, since x is an instance of a subclass of list, using isinstance(x, list) gives the desired behaviour and treats x and y in the same manner.
Is there an equivalent to isscalar() in numpy? Yes.
>>> np.isscalar(3.1)
True
>>> np.isscalar([3.1])
False
>>> np.isscalar(False)
True
While @jamylak's approach is the better one, here is an alternative approach:
>>> N=[2,3,5]
>>> P = 5
>>> type(P) in (tuple, list)
False
>>> type(N) in (tuple, list)
True
Another alternative approach (use of class name property):
N = [2,3,5]
P = 5

type(N).__name__ == 'list'
True
type(P).__name__ == 'int'
True
type(N).__name__ in ('list', 'tuple')
True
No need to import anything.
You can check the data type of a variable.

N = [2,3,5]
P = 5
type(P)

It will give you the data type of P as output:

<type 'int'>

So you can differentiate whether it is an integer or an array.
>>> N=[2,3,5]
>>> P = 5
>>> type(P)==type(0)
True
>>> type([1,2])==type(N)
True
>>> type(P)==type([1,2])
False
I am surprised that such a basic question doesn’t seem to have an immediate answer in python.
It seems to me that nearly all proposed answers use some kind of type checking, which is usually not advised in Python, and they seem restricted to a specific case (they fail with different numerical types or generic iterable objects that are not tuples or lists).
For me, what works better is importing numpy and using array.size, for example:
>>> a = 1
>>> np.array(a)
array(1)
>>> np.array(a).size
1
>>> np.array([1,2]).size
2
>>> np.array('125').size
1
Note also:
>>> len(np.array([1,2]))
2
but:
>>> len(np.array(a))
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-40-f5055b93f729> in <module>()
----> 1 len(np.array(a))

TypeError: len() of unsized object
Simply use size instead of len!
>>> from numpy import size, array
>>> N = [2, 3, 5]
>>> size(N)
3
>>> N = array([2, 3, 5])
>>> size(N)
3
>>> P = 5
>>> size(P)
1
From: Rozental, Gennadiy (Gennadiy_at_[hidden])
Date: 2002-09-03 12:46:18
> I agree with you concerning the last point; named template
> parameters can
> be interesting. But I'm not sure it's what the users will principally
> used. I think they will use things like:
>
> typedef change_compare_policy
> <my_old_interval_type, my_new_compare_policy>
> my_new_interval_type;
No. I thought more like
interval<T, rounding_is<some_rounding> >
> The reason it is in the public section is
> only an artifact of the way the library interface is designed and the
> compilers are conforming.
It should be in private section at least for conforming compilers. Use some
kind of ifdef switch and add private and friend declarations optionally.
> > 4. Why do you need set_empty and set_whole. Why could not you use "=
> > interval::empty()" and "= interval whole()"
>
> As for set before, if you refer to the documentation, you
> will see that
> set_empty and set_whole are not part of the published interface of the
> class.
It does not change my point. Why would you need an excessive function in your (private) interface?
>
> > 5. interval free function interface/implementation IMO
> should be split up in
> > several independent headers. For example In many cases I
> may not need
> > trigonometric, transcendental and so forth functions over
> intervals. It may
> > also allow to minimize dependency on STL headers cmath and algorithm
>
> Yes, it is a possibility.
Look for example at MPL. Almost every function is in a separate header. It could be overkill for you, but one solid header is even worse.
> I don't think it is possible to write the same class if the
> methods are
> static. There is a lot of situation where it is necessary to
> access local
> data.
Ok. I just thought that rounding policies are always stateless.
> > 2. Boost recommend to use T const instead of const T
>
> Ok. (if you remember where it is written, could you give me a
> pointer? thank you)
See coding guideline on boost site.
> > 4. using std::log and myriads of other symbols is all over
> rounding headers.
> > What about the compilers that does not put the into std namespace?
>
> It's a big problem. It's the reason why
> interval/detail/bug.hpp tests if
> BOOST_NO_STDC_NAMESPACE is defined.
Isn't that too limiting? How many compilers are you cutting off by this? The interval library implementation does not use very advanced C++ tricks, so it could work on a wide range of compilers.
> > 5. Why interval implementation is located in utility.hpp
> header. It may be
> > misleading.
>
> If it is really disturbing, it can easily be moved elsewhere.
It would be more clear IMO.
>
> > 6. There are some commented lines. Since we are using cvs
> may be it worth to
> > clean the code?
>
> Yes it is worth to clean the code of the few remaining commented
> lines. But what do you mean by "Since we are using cvs" ?
People usually put in #if 0 or comment out code because they do not want to lose the information. Since we are using CVS, there is no need to bother - CVS will take care of this.
>
> > Docs:
> > 1. Heading: May be it's worth to name the page Interval library (or
> > something like this) and put reference to the definition separately
> > somewhere.
> > 2. On top of the page right under he header there is TOC.
> IMO it is aligned
> > a bit strangely and it would look better if you delete <center> tag
>
> Nothing much to say.
Does it mean you agree?
>
> > 3. Introduction, statement 2: "It consists of a single
> header". It is not
> > true.
>
> Yes. The reason of this error is that only one header is needed to be
> included in order to access the major part of the library.
It still would be better to rephrase it.
> > 4. Introduction, statement 3: "Traits is a policy".
> Yes, you are right. And it's reason why we speak of
> "policies" and not of
> "traits" in the documentation. However, as you may know, this
> library was
> originally the work of Jens Maurer, and by respect to that
> work, we tried
> to limit the number of naming changes to a minimum.
IMO it is *very* important to use proper terminology. When I see a template parameter named traits, I am confused.
> Some concept checking would be nice. But I'm not sure it would be that
> useful (am I underestimating the strength of concept checking
> in the case
> of interval arithmetic?)
I think it could be useful.
Gennadiy.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
I just started watching that:
..and it got me thinking, couldn't we just get rid of dynamic
polymorphism and classes altogether? Doesn't static polymorphism
through the use of duck typing and member function delegates
provide all that we need? This way we wouldn't have the nasty
coupling of types which inheritance causes. Here's a C++ example:
(I'm sure it would look nicer with D's syntax)
// Canvas.h
#pragma once

#include <functional>
#include <type_traits>
#include <utility>
#include <vector>

class Canvas
{
private:
    struct Shape // Represents an interface
    {
        std::function<void (int x, int y)> resize;
        std::function<void (int x, int y)> moveTo;
        std::function<bool (int r, int g, int b)> draw;
    };

public:
    template <typename S>
    auto addShape(S& s)
        -> typename std::enable_if<
            std::is_same<decltype(s.resize(1, 1)), void>::value &&
            std::is_same<decltype(s.moveTo(1, 1)), void>::value &&
            std::is_same<decltype(s.draw(1, 1, 1)), bool>::value
        >::type
    {
        Shape shape;
        shape.resize = [&](int x, int y) { return s.resize(x, y); };
        shape.moveTo = [&](int x, int y) { return s.moveTo(x, y); };
        shape.draw   = [&](int r, int g, int b) { return s.draw(r, g, b); };
        _shapes.emplace_back(std::move(shape));
    }

    Shape& getShape(size_t idx)
    {
        return _shapes[idx];
    }

private:
    std::vector<Shape> _shapes;
};

// Circle.h
#pragma once

#include <conio.h>

class Circle
{
public:
    void resize(int x, int y)
    {
        _cprintf("Circle resized to %d %d\n", x, y);
    }

    void moveTo(int x, int y)
    {
        _cprintf("Circle moved to %d %d\n", x, y);
    }

    bool draw(int r, int g, int b)
    {
        _cprintf("Circle drawn with color %d %d %d\n", r, g, b);
        return true;
    }
};

// Rectangle.h
#pragma once

#include <conio.h>

class Rectangle
{
public:
    void resize(int x, int y)
    {
        _cprintf("Rectangle resized to %d %d\n", x, y);
    }

    void moveTo(int x, int y)
    {
        _cprintf("Rectangle moved to %d %d\n", x, y);
    }

    bool draw(int r, int g, int b)
    {
        _cprintf("Rectangle drawn with color %d %d %d\n", r, g, b);
        return true;
    }
};

// main.cpp
int main()
{
    Canvas canvas;
    Rectangle rectangle;
    Circle circle;

    canvas.addShape(rectangle);
    canvas.addShape(circle);

    canvas.getShape(0).resize(5, 5);
    canvas.getShape(0).moveTo(2, 3);
    canvas.getShape(0).draw(1, 12, 123);

    canvas.getShape(1).resize(10, 10);
    canvas.getShape(1).moveTo(4, 5);
    canvas.getShape(1).draw(50, 0, 50);

    _getch();
    return 0;
}
// Prints:
Rectangle resized to 5 5
Rectangle moved to 2 3
Rectangle drawn with color 1 12 123
Circle resized to 10 10
Circle moved to 4 5
Circle drawn with color 50 0 50
On Thursday, 8 November 2012 at 17:27:42 UTC, Tommi wrote:
> ..and it got me thinking, couldn't we just get rid of dynamic
> polymorphism and classes altogether? Doesn't static
> polymorphism through the use of duck typing and member function
> delegates provide all that we need?
For a lot of programs (or parts of programs) that currently use
runtime polymorphism, the answer seems to be yes, and Phobos is
very good at helping D programmers do their polymorphism at
compile-time.
But dynamic polymorphism is special in that it is just that -
dynamic.
You can decide which implementation to use at runtime rather than
having to do it at compile-time. When this runtime component is
necessary, there is no replacement for runtime polymorphism.
As for function pointers and delegates, class-based polymorphism
provides a couple of additional niceties: for one, vtables are
created at compile-time. Secondly, it provides a lot of syntax
and structure to the system that you don't have with arbitrary
function pointers or delegates.
Emulating OOP (no, not Object *Based* Programming) with function
pointers is a real pain. Without classes, we'd only be marginally
better off than C in this area, thanks to delegates.
I also don't really like inheritance based design but..
That example would crash hard if those stack allocated shapes
were not in scope...
Making it work safely would probably require std::shared_ptr usage
So it uses way more memory per object (BAD).
Just use a dynamic language like Lua: it doesn't have classes, and
your example would be dead simple in Lua.
That's essentially how Go is designed:
type Shape interface {
    draw()
}

type Circle struct { ... }
type Square struct { ... }

func (c *Circle) draw() { ... }
func (s *Square) draw() { ... }

func main() {
    var shape Shape
    var circle Circle
    var square Square

    shape = &circle // pointer, since draw() has a pointer receiver
    shape.draw()    // circle.draw()

    shape = &square
    shape.draw()    // square.draw()
}
On Thursday, 8 November 2012 at 17:50:48 UTC,
DypthroposTheImposter wrote:
> That example would crash hard if those stack allocated shapes
> were not in scope...
>
> Making it work safely would probably require std::shared_ptr
> usage
But the correct implementation depends on the required ownership
semantics. I guess with Canvas and Shapes, you'd expect the
canvas to own the shapes that are passed to it. But imagine if,
instead of Canvas and Shape, you have Game and Player. The game
needs to pass messages to all kinds of different types of
players, but game doesn't *own* the players. In that case, if a
game passes a message to a player who's not in scope anymore,
then that's a bug in the code that *uses* game, and not in the
implementation of game. So, if Canvas isn't supposed to own those
Shapes, then the above implementation of Canvas is *not* buggy.
On 2012-11-08 17:27:40 +0000, Tommi said:
> ..and it got me thinking, couldn't we just get rid of dynamic
> polymorphism and classes altogether?
The compiler can do a lot of optimizations with knowledge about classes.
It also automates a lot of things that would become boilerplate with the
proposed manual setup of delegates for each object.
> struct Shape // Represents an interface
> {
> std::function<void (int x, int y)> resize;
> std::function<void (int x, int y)> moveTo;
> std::function<bool (int r, int g, int b)> draw;
> };
Dynamic polymorphism isn't gone anywhere; it was just shifted to delegates.
This approach complicates things too much and produces template bloat
with no real benefit.
On Thursday, 8 November 2012 at 21:43:32 UTC, Max Klyga wrote:
> Dynamic polymorphism isn't gone anywhere, it was just shifted
> to delegates.
But there's no restrictive type hierarchy that causes unnecessary
coupling. Also, compared to virtual functions, there's no
overhead from the vtable lookup. Shape doesn't need to search for
the correct member function pointer, it already has it.
It's either that, or else I've misunderstood how virtual
functions work.
On Thursday, 8 November 2012 at 17:27:42 UTC, Tommi wrote:
> I just started watching that:
>
I've ripped the audio and done some processing to make it a
little more understandable (without cranking the audio up to god
awful levels); however I seem to have a little trouble uploading
it somewhere accessible. I'll post a link when I get it uploaded
(assuming anyone wants to make use of the audio rip...)
I have had the same thoughts for quite some time now - a library-based
runtime polymorphism implementation. You can already do something like
this. However, it is inefficient, as you have to store a thisptr for each
delegate. One has to think about how to implement it more efficiently -
some metaprogramming will be required.
On Thu, Nov 8, 2012 at 10:36 AM, F i L <witte2008@gmail.com> wrote:
> That's essentially how Go is designed:
>
> type Shape interface {
> draw()
> }
...
Method dispatch is still done at runtime though.
--
Ziad
import com.sleepycat.db.*;
import java.io.FileNotFoundException;
public void remove(String file, String database, int flags) throws DbException, FileNotFoundException;
The Db.remove interface removes the database specified by the file and database arguments. If no database is specified, the underlying file represented by file is removed, incidentally removing all databases that it contained.
Applications should not remove databases that are currently in use. If an underlying file is being removed and logging is currently enabled in the database environment, no database in the file may be open when the Db.remove method is called. In particular, some architectures do not permit the removal of files with open handles. On these architectures, attempts to remove databases that are currently in use by any thread of control in the system will fail.
The flags parameter is currently unused, and must be set to 0.
After Db.remove has been called, regardless of its return, the Db handle may not be accessed again.
The Db.remove method throws an exception that encapsulates a non-zero error value on failure.
The Db.remove method may fail and throw an exception encapsulating a non-zero error for the following conditions:
If the file or directory does not exist, the Db.remove method will fail and throw a FileNotFoundException exception.
The Db.remove method may fail and throw an exception for errors specified for other Berkeley DB and C library or system methods. If a catastrophic error has occurred, the Db.remove method may fail and throw a DbRunRecoveryException, in which case all subsequent Berkeley DB calls will fail in the same way.
let is a fundamental part of Clojure. Whereas def creates a global variable, let creates a local variable.
(def x 5)
(println x)
; => 5
; nil

(let [x 2]
  (println x))
; => 2
; nil

(println x)
; => 5
; nil
x in this example never actually gets changed. x just refers to something different inside of our let binding. This can be a useful way to avoid repetition inside a function.
This is incredibly useful. Having too many global variables can lead to nasty bugs and unintended behaviour.
(def x 5)

(defn add-5 [y]
  (+ x y))

(add-5 5)
; => 10

(defn change-x []
  (def x 6))

(change-x)
; => nil

(add-5 5)
; => 11
Uh oh! That’s not adding 5 anymore! Of course, this example is a bit silly, but using too many global variables can lead to bugs that are just as scary as this one.
Note: We aren't really reassigning x here, like you would in a C-like language. We're just creating a new variable that happens to also be called x. This is a very, very, very bad idea.
Multiple Bindings
let can also define multiple variables at once, and can assign variables to expressions.
(let [spam "foo" ham (str "b" "ar")] ; str is a function that concatenates strings (println spam ham)) ; or converts variables into strings. ; => foo bar ; nil
Spring Boot: The Right Boot For You
Still configuring Spring manually? You've got plenty of options to set up your libraries, annotate where necessary, then jump right into your work with Spring Boot.
Need a little spring in your step? Tired of all those heavy web servers and deploying WAR files? Well, you’re in luck. Spring Boot takes an opinionated view of building production-ready Spring applications. Spring Boot favors convention over configuration and is designed to get you up and running as quickly as possible.
In this blog, I will walk you through the step-by-step process for getting Spring Boot going on your machine.
Just Put Them on and Lace Them Up…
Spring Boot makes it easy to create stand-alone, production-grade Spring-based applications that you can “just run.” You can get started with minimum fuss due to it taking an opinionated view of the Spring platform and third-party libraries. Most Spring Boot applications need very little Spring configuration.
These Boots Are Made for Walking… Maybe Running!
So the greatest thing about Spring Boot is the ability to be up and running in very little time. You don’t have to install a web server like JBoss, Websphere, or even Tomcat for that matter. All you need to do is pull in the proper libraries, annotate, and fire away. If you are going to do a lot of Spring Boot projects, I would highly suggest using the Spring Tool Suite that is available. It has some great features for making Boot projects really easy to manage.
You can, of course, choose between Maven or Gradle to manage dependencies and builds. My examples will be in Maven as it is what I am familiar with. It’s all about your configuration preference.
Many Different Styles to Choose From
One of the things that make Spring Boot great is that it works really well with other Spring offerings. Wow, go figure? You can use Spring MVC, Jetty, or Thymeleaf just by adding them to your dependencies and Spring Boot automatically adds them in.
Every Day Boots
Spring Boot wants to make things easy for you. You can do a whole host of things with it. Here is a list of some of the highlights.
- Spring Boot lets you package up an application in a standalone JAR file, with a full Tomcat server embedded
- Spring Boot lets you package up an application as a WAR still.
- Configuration is based on what is in the classpath (MySQL DB in the path, it’ll set it up for you)
- It has defaults set (so you don’t have to configure them)
- Easily overridden by adding to the classpath (add H2 dependency and it’ll switch)
- Let’s new devs learn the ropes in a hurry and make changes later as they learn more.
Baby Boots
But remember, the aim of this blog is just to get you familiar with how to get Spring Boot going on your machine. It is going to be fairly straightforward and vanilla. The goal is to get you started. We’re not trying to code a new Uber app or something here. Baby steps folks! We just want to get your feet warm. We all know those tutorials that throw tons of stuff at us and just gloss over things. Not here.
So to get started the easiest way is to pull down the tutorial code from Spring itself. It has a great getting-started point. It is a good for you to see what is happening without throwing the whole Spring library at you.
Clone Boots… Watch Your Aim!
First off, let’s clone the Spring example found here.
git clone
Construction Boots
We won’t go into the steps of setting it up in an IDE as everyone will have their own preference.
Let’s break things down a bit. What are these annotations about?
Spring Boot adds @EnableWebMvc for you automatically when it sees spring-webmvc on the classpath. This flags the application as a web application and activates key behaviors such as setting up a DispatcherServlet.
@ComponentScan tells Spring to look for other components, configurations, and services in the hello package, allowing it to find the controllers.
Wow, I have always liked quality built-ins when looking for a new home! But what’s really happening behind these shiny new items?
The
main() method calls out Spring Boot’s
SpringApplication.run() method to launch.
Did we mention (or did you notice) that you didn’t have to mess around with XML? What a bonus! No more
web.xml file nonsense. No more wondering whether you put the right tag in the file, then puzzling over a paragraph of unreadable error messages that tell you just about nothing. This is 100% pure Java. No configuration or plumbing needed. They have done it for you. How nice of them!
Once it is set up and ready for you to begin editing, let’s take a quick look at the
Application.java file. Here you will find a runnable
main class. It has an annotation of
@SpringBootApplication. This is the key annotation that makes this application a Boot app.
package hello;

import java.util.Arrays;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
        return args -> {
            System.out.println("Let's inspect the beans provided by Spring Boot:");
            String[] beanNames = ctx.getBeanDefinitionNames();
            Arrays.sort(beanNames);
            for (String beanName : beanNames) {
                System.out.println(beanName);
            }
        };
    }

}
Now to run it! If you are using the STS suite (and properly built it), you will see it in your Boot Dashboard. For everyone else, either right click in the IDE and Run As => Java Application, or head to your favorite command line tool. Use the following commands.
Maven
mvn package && java -jar target/gs-spring-boot-0.1.0.jar
Gradle
./gradlew build && java -jar build/libs/gs-spring-boot-0.1.0.jar
You did it! You tied your first pair of Spring Boots.
The output will show the normal Spring startup of the embedded server and then it will loop over all the beans and write them out for you!
Boots on Display
To make the sale or just to get your eyes on the prize, this example throws in a
CommandLineRunner method marked as a
@Bean and this runs on startup. It retrieves all the beans that were created either by your app or were automatically added thanks to Spring Boot. It sorts them and prints them out. You can put other startup information or do some other little bit of work if you would like.
Boots Online
While shopping for the right boot, we want the nice ones that will go with our favorite pair of jeans or for the ladies a nice skirt, right? Well, Boot provides a simple way to get your boots out to the world for others to see. Well, we need to employ a
Controller to do so. How convenient: the Spring code we downloaded has one already for us.
package hello;

import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;

@RestController
public class HelloController {

    @RequestMapping("/")
    public String index() {
        return "Greetings from Spring Boot!";
    }

}
The two things that are most important here are the
@RestController and the
@RequestMapping annotations you see.
The
@RestController is a subliminal message that it is nap time. Errr, wait sorry, I was getting sleepy. No, it means we have a RESTful controller waiting, watching, listening to our application’s call to it.
The
@RequestMapping is the url designation that calls the particular method. So in the case of the given example, it is the “index” of the application. The example here is simply returning text. Here’s the cool thing; we can return just about anything here that you want to return.
Did JSON Have Nice Boots on the Argo?
Finally, what I think most adventurers into Spring Boot are doing now is using it as an endpoint to their applications. There are a whole host of different options as to how you can accomplish this. Either by JSON provided data or XML solutions. We’ll just focus on one for now. Jackson is a nice lightweight tool for accomplishing JSON output to the calling scenario.
Jackson is conveniently found on the classpath of Spring Boot by default. Check it out for yourself:
mvn dependency:tree
or:
./gradlew dependencies
Let’s add some pizazz to these boots, already! Add a new class wherever you would like to in your source. Just a POJO.
public class Greeting {

    private final long id;
    private final String content;

    public Greeting(long id, String content) {
        this.id = id;
        this.content = content;
    }

    public long getId() {
        return id;
    }

    public String getContent() {
        return content;
    }
}
Now, head back to your Controller and paste this in:
private static final String template = "Ahoy, %s!";
private final AtomicLong counter = new AtomicLong();

@RequestMapping(method=RequestMethod.GET)
public @ResponseBody Greeting sayHello(
        @RequestParam(value="name", required=false, defaultValue="Argonaut") String name) {
    return new Greeting(counter.incrementAndGet(), String.format(template, name));
}
Now restart your Boot app. Go back to a browser and instead of
/, go to
hello-world. You should see some awesome JSON output. If you did, then you are well on your way to creating endpoints in Spring Boot and Jackson.
The Argo Needs Another Port
Since a lot of folks are writing endpoints and have multiple sites going on, you’ll probably want to change the default port of 8080 to something else. So the easiest and most straightforward way is to add an
application.properties file to
src/main/resources.
All that is need is this:
server.port = 8090
Easy peasy. Weigh anchor and set sail!
Boot Camp Conclusion
So you can see how easy it is to get things going with Spring Boot. We didn’t have to do much in the way of configuration to actually get up and running in a hurry. We avoided the dreaded XML files and only added a small properties file. The built-ins are extremely nice to already have in the stack. Jackson provides an easy to use JSON conversion for those of us wanting to provide endpoints for our shiny frontends.
Again, Spring seems to find a way to make life simpler for the developer. This blog was kept simple on purpose. There are many different avenues to venture down in our new boots. Whether you want to leverage microservices, build a traditional monolith, or some other twist that may be out there, you can see how Spring Boot can get you started in a hurry.
Published at DZone with permission of Matt McCandless, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
On Saturday July 22 2006 23:20, Don Allingham wrote:
> You can add your filters to the ~/.gramps/plugins
> directory, and gramps will find them and install them.
> I've attached the modified version of your rule (3
> additional lines) that allows you to do this.
>
> Eventually, we want to provide an online-download of user
> contributed rules.
This is a good idea of which I was unaware.
-- WB
--
Scorched Earth Party: You wanna' riot, we'll give
you something to riot about.
Alex and Don:
I needed another filter, to calculate descendants of
bookmarked people, with a depth limit.
The changes required are in the attached diff file.
Could you guys put this in the code base in the appropriate
branch(es)? I would appreciate it.
-- Wayne Bergeron
--
Furious activity is no substitute for understanding.
-- H. H. Williams
On Sat, 2006-07-22 at 22:20 -0600, Don Allingham wrote:
> You can add your filters to the ~/.gramps/plugins directory, and gramps
> will find them and install them. I've attached the modified version of
> your rule (3 additional lines) that allows you to do this.
I made a small addition to the code so that the plugins can import
the Rule class less awkwardly.
Instead of
from Filters.Rules._Rule import Rule
now one can use:
from Filters.Rules import Rule
Alex
--
Alexander Roitman | http://sourceforge.net/p/gramps/mailman/message/11953300/ | CC-MAIN-2014-23 | refinedweb | 227 | 67.96 |
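For readers unfamiliar with how that tidier import works: the usual mechanism is a one-line re-export in the package's `__init__.py`. The runnable sketch below builds a throwaway package with the same shape — the `Filters.Rules` names mirror the thread, but the `Rule` body is invented for illustration and is not actual GRAMPS source:

```python
import os
import sys
import tempfile

# Build a throwaway package that mirrors the Filters.Rules layout:
# the "private" module _Rule defines Rule, and the package __init__
# re-exports it so callers can write `from Filters.Rules import Rule`.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "Filters", "Rules")
os.makedirs(pkg)

open(os.path.join(root, "Filters", "__init__.py"), "w").close()
with open(os.path.join(pkg, "_Rule.py"), "w") as f:
    f.write("class Rule:\n    name = 'base rule'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    # the one-line addition that makes the shorter import possible
    f.write("from Filters.Rules._Rule import Rule\n")

sys.path.insert(0, root)
from Filters.Rules import Rule  # instead of Filters.Rules._Rule

print(Rule.name)
```

The callers never touch the underscore-prefixed module, so its internals stay free to move.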
This page was last modified 07:00, 9 April 2008.
CS000890 - Random value generation in Open C
From Forum Nokia Wiki
Overview
Random value generation in Open C applications can be done with functions called srand() and rand(). Function srand() sets its argument seed as the seed for a new sequence of pseudo-random numbers. Sequences are repeatable by calling srand() with the same seed value. Function srand() needs to be called only once in a program and this should be done before the first call to rand().
Note: In order to use this code, you need to install the Open C plug-in.
This snippet can be self-signed.
MMP file
The following libraries are required:
LIBRARY libc.lib
Source file
#include <stdio.h>  // printf
#include <stdlib.h> // srand, rand
#include <time.h>   // time

int LOWER_BOUND = 1;
int UPPER_BOUND = 6;

int main(void)
{
    int index = 0;
    int random_number = 0;
    char random_letter = ' ';
    time_t now;

    /* get current time from the system clock */
    time(&now);

    /* set seed for a new sequence of pseudo-random numbers */
    srand((unsigned int) now);

    /* loop 10 rounds to generate different numbers/letters */
    for (index = 1; index <= 10; index++)
    {
        printf("round:%d\n", index);

        /* get random number between 1 and 6 */
        random_number = rand() % (UPPER_BOUND - LOWER_BOUND + 1) + LOWER_BOUND;
        printf("random number:[%d]\n", random_number);

        /* get random character (lowercase letters in the ASCII) */
        random_letter = 'a' + rand() % 26;
        printf("random letter:[%c]\n", random_letter);

        printf("- - -\n");
    }
    return 0;
}
Postconditions
10 random numbers (1-6) and 10 lowercase ASCII characters are displayed to standard output. | http://wiki.forum.nokia.com/index.php/CS000890_-_Random_value_generation_in_Open_C | crawl-001 | refinedweb | 253 | 60.45 |
Would the project import feature resolve this issue? i.e.
import the required maven build.xml into mybuild.xml and
thus have access to the required properties?
-John K
On Thu, 2002-05-23 at 14:27, Peter Donald wrote:
> On Thu, 23 May 2002 23:11, James Strachan wrote:
> > From: "Peter Donald" <peter@apache.org>
> >
> > > Be warned that there is virtually no chance of it actually getting into
> >
> > core.
> >
> > Any reason why?
>
> Mainly as Ant is not intended to be a scripting language and most people don't
> want to see it become one (better to use python/javascript/perl/whatever if
> you want a fully scripted environment). Putting a return statement in <ant/>
> effectively makes <ant/> a method/function call, which has been vetoed several
> dozen times throughout Ant's history and my guess is it would be vetoed again
> ;)
>
> --
> Cheers,
>
> Peter Donald
>
>
> --
davejames wrote: »
Whit wrote: »
Any legitimate short-cuts, tips, or suggestions would be appreciated! Till then I will keep hacking away at it...
Regards,
DJ
Whit wrote: »
Any legitimate short-cuts, tips, or suggestions would be appreciated! Till then I will keep hacking away at it...
// Start the motor
motorPort = TRUE
while (1) {
// Check we are at the end stop position
if (endStopPort == TRUE) {
// Stop the motor
motorPort = FALSE
}
}
startMotor () {
motorPort = TRUE
}
stopMotor () {
motorPort = FALSE
}
isAtEndStop () {
return (endStopPort == TRUE)
}
startMotor()
while (1) {
if (isAtEndStop) {
stopMotor()
}
}
really_long_variable_name := another_really_long_variable_name * yet_another_really_long_variable_name + 50 * moderately_long_name
really_long_variable_name := another_really_long_variable_name
* yet_another_really_long_variable_name
+ 50 * moderately_long_name
potatohead wrote: »
I like to make it work, then make it work better. Often, I'll then rewrite it with all the things I learned.
As for "the right way."
Good luck with that. Seriously.
Make it work, then make it work better, then make it work faster and/or smaller. Refactor, rewrite, all apply here.
Whit wrote: »
they may say, "what was he thinking!?!?!?"
pub Motor ( on/off -- )
IF motorPort HIGH
BEGIN endStopPort PIN@ UNTIL
THEN
motorPort LOW
;
0 := stop
1 := start
Peter Jakacki wrote: »
Thanks - really helpful for me.
CON
Version = 12
PUB main
debug.str(string("Firmware Ver: "))
debug.dec(Version)
Moskog wrote: »
Whit, thanks for the most interesting thread in the forum for years.
I am a big fan of marking the version/revision of my code - Just to keep myself straight! Once I have the best working version, I go back and delete the old versions and rename the keeper!
I like tab insert spaces, and like editors that show spaces. Just a hint is fine. With those on, I don't have trouble very often. SPIN made me like white space sensitive languages. Generally, I just sort this, leaving spaces, or complying with the tabs, whichever is the smallest meta task.
Names are a thing I hate coming up with too. Reading all this makes me realize I often use little short names often, when I just shouldn't. @Whit there you go.
How to be better? Communicate. Lots of good stuff in this thread. Nice topic.
Hey, I often have trouble with scope and compartmentalization. What should be its own routine, procedure, function, and why?
After a few revisions, I get to a place I can maybe feel good about. Is it always that way, or?
Thoughts?
I remember that book; or perhaps a whole series of Programming Proverbs.
To me, meaningful names and useful comments are most important.
Crappy example but it makes a point. The former example is what people write. The latter abstracts things away and nicely documents itself. Basically the comments have moved into the actual code.
"Real" programmers will reject the latter: it's too verbose, too much typing; besides, unless your compiler does a good job, the executable is much bigger and slower.
I have done something like that in my Sodor HDL example above. Luckily in that case it does not result in a lot more generated hardware.
It should be possible to get the sense of the expression in one eyeful, rather than having to scan it repeatedly, wading through the verbiage, just to find the operators.
-Phil
The best way to improve the efficiency of your code may be decided before you start: how you plan and define what you want it to do. The better the road map, the fewer wrong turns. Sort of a measure-twice, cut-once thing. As to the rest (commenting, naming conventions), the internet is full of opinions and flame wars regarding "best" and "elegance". That it must meet its purpose and run within certain expectations are hard givens. All else seems to be finding a balance between how and where documentation is done, how long the code has to be maintained, and by whom. So, back to planning the project before actually writing the first line of code. It may have been Tolkien who wrote "shortcuts make long delays".
?
Mike
I work with lots of coders that would prefer your latter example. Or something similar. I guess we are all not "real" programmers?
Also, pretty much all modern compilers would compile both of those examples to a similar number of instructions. Short functions almost always inline unless you set options to not do that.
Phil, I use editors with completion features built in so typing out long stuff is trivial, and within reason it's easier to read and comprehend. However, I agree with you that it can go overboard.
The Software Developers Sourcebook (1985) - It's on the right side of the first row.
I also strive to be diligent with versioning as I develop. I will save multiple copies of code as I build things up, especially when my project is simply the merging of several other already done projects.
Lastly, as other mentioned, I do flow charts to help me in developing code.
My Reverse Geocache project is a good example of my programming methods.
I guess since I am self-taught, I always wonder if someone else looks at my code, they may say, "what was he thinking!?!?!?"
But I digress...
Yes, think it through. Eric wrote some code, which I ran to test before he got a Propeller. This amazed me. I have always done this for small bits, but not without a machine, some interactions.
The lesson here is simple: take the time to internalize things. I believe doing this is hardest at first, and after being away, which I often am.
When you do that, you get to a place where you can simulate things, and once you do that once, those skills endure. Things solidify.
Hand assemble, write core bits, key loops, data structures, other things using pencil and paper. Look up all you need to do it. The outcome can or only needs to be a few lines. Won't matter. Just do it.
Pays off.
I used to do that on 6502 long ago. Would be in class figuring something out, hoping I could get it assembled so I could type hex codes in and see it run. I still remember all of that vividly. Where I haven't done the internalizing, the good stuff fades. I can tell, and you may ask why, and for me it's time, or importance.
Maybe you also won't always have time or care. But when you do, invest that time.
Oh yeah. They can, will, do. Its OK.
@potatohead - these comments are some of my favorite of yours! They put a finger right on some of my sore and insecure spots. Thanks - really helpful for me.
Blast me if you must. Part of what got me thinking about this is programming in BlocklyProp. Since BlocklyProp programming is free of syntax and formatting issues - it has made examine more closely my code logic, order, tightness, and efficiency.
Since much of what I code in BlocklyProp is for the purpose of tutorials - I don't want to "teach" in a way that forms bad habits.
Like many of you have talked about planning code flow and logic - BlocklyProp lets you work on that part of the code in such a pure way - it makes goofy code easy to spot. So, I a working on that aspect of coding particularly.
When I look at the code in "C" - good BlocklyProp code appears more neat and orderly than does sloppy BlocklyProp code. That is kind of amazing when you think about it.
Two questions that could be applied to the same situation, where one tries to understand the work of other(s), fitting both for the good and for the evil.
- What the hell was (s)he thinking about, while (s)he was doing that?
- How did (s)he come to that idea, to do this the way (s)he did?
Henrique
P.S. (s) added, to don't be blamed, for not being as inclusive as I need to (wise wife's advice).
Our automotive project's schedules were typically around six to nine months, start to release. Two weeks of that time was slated for manual development/completion. The set of documentation usually filled two or three, three-inch binders; test diagrams, schematics, source code, descriptions of each test, device-to-device correlation data, etc.
This was serious stuff. One of the assumptions being that sometime in the future, someone may have to revisit the work. And, true to form, someone had to and did.
Me.
One year after the release of a product, I returned to the doc package to resolve some discrepancy that showed up over time.
So I'm perusing the docs that I had so painstakingly assembled, going "WTH were you thinking, Dave?!?"
The point is, no matter how well you think your comments or extraneous documentation may be, you can always do better.
The above is not targeted at the code development aspect. It's meant for maintenance of said code.
Regards,
DJ
Love this! Great advice...
Reminds of the 5 Ds of Dodgeball - Dodge, Duck, Dip, Dive, and Dodge!
That was my intent. Good! I face some of those insecurities each time I am away from this stuff. Sometimes it's a battle, just life, things, but I love this stuff. Always have. If you do too, let that guide you. It's pure intent, nothing to be worried about at all.
But... you may, as some of us others do, have to remind yourself of that. We are in the midst of some really sharp people. Much wisdom here. And a great culture.
Comments help a lot, but I can't tell you how many times I've seen this:
LET X = 5 ' Make X equal to 5
Avoid this at all costs. Do not document WHAT this the code is doing. Document WHY this code is doing it.
LET X = 5 ' X holds the Port Value, prepare to read from Port 5
Bean
Might as well write:
LET X = 5 'Let X = 5
But why X? If it's a port why not say so:
LET port = 5
If it's a port you read from, one of many, why not:
LET inputPort = 5
Which pretty much makes the comment redundant.
Self taught me too, started with ZX81 back in early eighties.
My first Spin program (from 2010 and still running, 24 hrs a day) was a single-cog program for controlling pre-heated water from the heat pump into the water heater. No idea about using several cogs because I was too BS2-inspired. Several temperature sensors were read and several switches were activated in addition to the computing itself.
Now I have to rewrite spin-files I have written because I'm often running out of cogs -there are still new ideas for the projects. And during this rewriting I often find so many strange solutions I've done.
And that's very interesting. You realize you have made some progress as a programmer when you compare the things you did a long time ago with the way you would do them today. Especially if your new, more powerful version also fits in a smaller spin-file.
If I should come with an advice in this thread, always include Version numbers to your application if you often do upgrades to your projects.
Like this:
This always helps me a lot to avoid version confusions!
Thanks, Moskog! I am a big fan of marking the version/revision of my code - Just to keep myself straight! Once I have the best working version, I go back and delete the old versions and rename the keeper!
I've gotten such great feedback and from so many of programming heroes on the Forums!
@Bean and @Heater - those examples are really helpful.
Geo_Leeman, interesting Zen. I too like to have sources cited, for example, when someone pulls a tricky algorithm or concept or snippet out of a forum thread, book, etc., I like to see it cited.
I'm self-taught too, except for a few classes back as an electrical engineering and then biophysics student, never computer science per se. I'm amazed by what "real" programmers can do today but at the same time completely daunted by the complexity of the tools. Embedded, contained, that's my thing.
My variable names except for idx++ and such are getting longer. Short variable names should be unique sequences of letters and be for the most part very local and obvious in function.
The most amazing aid to writing software, even the most simple project, and keeping things straight is using a source code management tool. No more copying files and directories here and there with different version numbers appended to their names. No more wondering which was the last "good" version and "where did I break things".
Of course I don't mean any old source code management tool. No, I mean git.
Now a days I start every new project with:
$ mkdir projectName
$ cd projectName
$ git init
Then start hacking code in whatever files. Then...
$ git add thisFile thatFile
$ git commit -m "I changed this and that"
Then hack some more...and repeat...
At the end of days and weeks of that, the whole history of everything you ever did is in the git repository. You can time-warp backwards and forwards through versions.
Then, for peace of mind, keep the whole git repository on github.com. Then you can share your efforts with anyone who might be interested. And if your house burns down your code is safe in github.
Best part of it all is that you can totally mess with your code, try experiments, break things, if it does not work out then so what? Just "git checkout" the last version and continue.
I have loathed and hated source code management systems for decades, those complex, clunky, slow tools that companies always insist we use on their projects. I have had to use many.
Git, on the other hand, actually helps you as you hack on code.
There was also a great sub-disussion on version control and bug tracking as part of best practices on this week's embedded.fm
@Heater - I never thought of working git/github the way you describe - I ONLY thought of it as a way of working with others. Duh! Great tip!
django.contrib.formtools¶

A set of high-level abstractions for Django forms (django.forms).

Historically, Django shipped with django.contrib.formtools – a collection of assorted utilities that are useful for specific form use cases. This code is now distributed separately from Django, for easier maintenance and to trim the size of Django's codebase. In Django 1.8, importing from django.contrib.formtools will no longer work.

The new formtools package is named django-formtools, with a main module called formtools. Version 1.0 includes the same two primary features that the code included when it shipped with Django: a helper for form previews and a form wizard view.
See the official documentation for more information.
How to migrate¶
If you've used the old django.contrib.formtools package, follow these two easy steps to update your code:

1. Install version 1.0 of the third-party django-formtools package.
2. Change your app's import statements to reference the new packages.
Change your app’s import statements to reference the new packages.
For example, change:
from django.contrib.formtools.wizard.views import WizardView
to:
from formtools.wizard.views import WizardView
The code in version 1.0 of the new package is the same (it was copied directly from Django), so you don’t have to worry about backwards compatibility in terms of functionality. Only the imports have changed. | https://docs.djangoproject.com/en/1.8/ref/contrib/formtools/ | CC-MAIN-2017-39 | refinedweb | 215 | 52.97 |
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of C++11 status.
Section: 27.5.5.3 [basic.ios.members] Status: C++11 Submitter: Martin Sebor Opened: 2008-05-17 Last modified: 2016-02-10
Priority: Not Prioritized
View all other issues in [basic.ios.members].
View all issues with C++11 status.
Discussion:
The fix for issue 581, now integrated into the working paper, overlooks a couple of minor problems.
First, being an unformatted function once again,
flush()
is required to create a sentry object whose constructor must, among
other things, flush the tied stream. When two streams are tied
together, either directly or through another intermediate stream
object, flushing one will also cause a call to
flush() on
the other tied stream(s) and vice versa, ad infinitum. The program
below demonstrates the problem.
Second, as Bo Persson notes in his
comp.lang.c++.moderated post,
for streams with the
unitbuf flag set such
as
std::stderr, the destructor of the sentry object will
again call
flush(). This seems to create an infinite
recursion for
std::cerr << std::flush;
#include <iostream>

int main ()
{
    std::cout.tie (&std::cerr);
    std::cerr.tie (&std::cout);
    std::cout << "cout\n";
    std::cerr << "cerr\n";
}
[ Batavia (2009-05): ]
We agree with the proposed resolution. Move to Review.
[ 2009-05-26 Daniel adds: ]
I think that the most recently suggested change in 27.7.5.1.3 [ostream::sentry] needs some further word-smithing. As written, it would make the behavior undefined under conditions when pubsync() should be called but os.rdbuf() returns 0.
If ((os.flags() & ios_base::unitbuf) && !uncaught_exception()) is true, calls os.flush().
Two secondary questions are:
- Should pubsync() be invoked in any case or shouldn't a base requirement for this trial be that os.good() == true as required in the original flush() case?
- Since uncaught_exception() is explicitly tested, shouldn't a return value of -1 of pubsync() produce setstate(badbit) (which may throw ios_base::failure)?
[ 2009-07 Frankfurt: ]
Daniel volunteered to modify the proposed resolution to address his two questions.
Move back to Open.
[ 2009-07-26 Daniel provided wording. Moved to Review. ]
[ 2009-10-13 Daniel adds: ]
This proposed wording is written to match the outcome of 397.
[ 2009 Santa Cruz: ]
Move to Open. Martin to propose updated wording that will also resolve issue 397 consistently.
[ 2010-02-15 Martin provided wording. ]
[ 2010 Pittsburgh: ]
Moved to Ready for Pittsburgh.
Proposed resolution:
Just before 27.5.5.3 [basic.ios.members] p. 2 insert a new paragraph:
Change 27.7.5.1.3 [ostream::sentry] p. 4 as indicated:
If ((os.flags() & ios_base::unitbuf) && !uncaught_exception()) is true, calls os.flush().
Add after 27.7.5.1.3 [ostream::sentry] p17, the following paragraph: | https://cplusplus.github.io/LWG/issue835 | CC-MAIN-2018-30 | refinedweb | 491 | 60.61 |
BC6H Format
The BC6H format is a texture compression format designed to support high-dynamic range (HDR) color spaces in source data.
- About BC6H/DXGI_FORMAT_BC6H
- BC6H Implementation
- Decoding the BC6H Format
- BC6H Compressed Endpoint Format
- Sign Extension for Endpoint Values
- Transform Inversion for Endpoint Values
- Unquantization of Color Endpoints
- Related topics
About BC6H/DXGI_FORMAT_BC6H
The BC6H format provides high-quality compression for images that use three HDR color channels, with a 16-bit value for each color channel of the value (16:16:16). There is no support for an alpha channel.
BC6H is specified by the following DXGI_FORMAT enumeration values:
- DXGI_FORMAT_BC6H_TYPELESS.
- DXGI_FORMAT_BC6H_UF16. This BC6H format does not use a sign bit in the 16-bit floating point color channel values.
- DXGI_FORMAT_BC6H_SF16.This BC6H format uses a sign bit in the 16-bit floating point color channel values.
Note
The 16 bit floating point format for color channels is often referred to as a "half" floating point format. This format has the following bit layout:
The BC6H format can be used for Texture2D (including arrays), Texture3D, or TextureCube (including arrays) texture resources. Similarly, this format applies to any MIP-map surfaces associated with these resources.
BC6H uses a fixed block size of 16 bytes (128 bits) and a fixed tile size of 4x4 texels. As with previous BC formats, texture images larger than the supported tile size (4x4) are compressed by using multiple blocks. This addressing identity applies also to three-dimensional images, MIP-maps, cubemaps, and texture arrays. All image tiles must be of the same format.
Some important notes about the BC6H format:
- BC6H supports floating point denormalization, but does not support INF (infinity) and NaN (not a number). The exception is the signed mode of BC6H (DXGI_FORMAT_BC6H_SF16), which supports -INF (negative infinity). Note that this support for -INF is merely an artifact of the format itself, and is not specifically supported by encoders for this format. In general, when encoders encounter INF (positive or negative) or NaN input data, they should convert that data to the maximum allowable non-INF representation value, and map NaN to 0 prior to compression.
- BC6H does not support an alpha channel.
- The BC6H decoder performs decompression before it performs texture filtering.
- BC6H decompression must be bit accurate; that is, the hardware must return results that are identical to the decoder described in this documentation.
BC6H Implementation
A BC6H block consists of mode bits, compressed endpoints, compressed indices, and an optional partition index. This format specifies 14 different modes.
An endpoint color is stored as an RGB triplet. BC6H defines a palette of colors on an approximate line across a number of defined color endpoints. Also, depending on the mode, a tile can be divided into two regions or treated as a single region, where a two-region tile has a separate set of color endpoints for each region. BC6H stores one palette index per texel.
In the two-region case, there are 32 possible partitions.
Decoding the BC6H Format
The pseudocode below shows the steps to decompress the pixel at (x,y) given the 16 byte BC6H block.
decompress_bc6h(x, y, block)
{
    mode = extract_mode(block);
    endpoints;
    index;
    if (mode.type == ONE)
    {
        endpoints = extract_compressed_endpoints(mode, block);
        index = extract_index_ONE(x, y, block);
    }
    else // mode.type == TWO
    {
        partition = extract_partition(block);
        region = get_region(partition, x, y);
        endpoints = extract_compressed_endpoints(mode, region, block);
        index = extract_index_TWO(x, y, partition, block);
    }
    unquantize(endpoints);
    color = interpolate(index, endpoints);
    finish_unquantize(color);
}
The following table contains the bit count and values for each of the 14 possible formats for BC6H blocks.
Each format in this table can be uniquely identified by the mode bits. The first ten modes are used for two-region tiles, and the mode bit field can be either two or five bits long. These blocks also have fields for the compressed color endpoints (72 or 75 bits), the partition (5 bits), and the partition indices (46 bits).
For the compressed color endpoints, the values in the preceding table note the precision of the stored RGB endpoints, and the number of bits used for each color value. For example, mode 3 specifies a color endpoint precision level of 11, and the number of bits used to store the delta values of the transformed endpoints for the red, blue and green colors (5, 4, and 4 respectively). Mode 10 does not use delta compression, and instead stores all four color endpoints explicitly.
The last four block modes are used for one-region tiles, where the mode field is 5 bits. These blocks have fields for the endpoints (60 bits) and the compressed indices (63 bits). Mode 11 (like Mode 10) does not use delta compression, and instead stores both color endpoints explicitly.
Modes 10011, 10111, 11011, and 11111 (not shown) are reserved. Do not use these in your encoder. If the hardware is passed blocks with one of these modes specified, the resulting decompressed block must contain all zeroes in all channels except for the alpha channel.
For BC6H, the alpha channel must always return 1.0 regardless of the mode.
BC6H Partition Set
There are 32 possible partition sets for a two-region tile, and which are defined in the table below. Each 4x4 block represents a single shape.
In this table of partition sets, the bolded and underlined entry is the location of the fix-up index for subset 1 (which is specified with one less bit). The fix-up index for subset 0 is always index 0, as the partitioning is always arranged such that index 0 is always in subset 0. Partition order goes from top-left to bottom-right, moving left to right and then top to bottom.
BC6H Compressed Endpoint Format
This table shows the bit fields for the compressed endpoints as a function of the endpoint format, with each column specifying an encoding and each row specifying a bit field. This approach takes up 82 bits for two-region tiles and 65 bits for one-region tiles. As an example, the first 5 bits for the one-region [16 4] encoding above (specifically the right-most column) are bits m[4:0], the next 10 bits are bits rw[9:0], and so on with the last 6 bits containing bw[10:15].
The field names in the table above are defined as follows:
Endpt[i], where i is either 0 or 1, refers to the 0th or 1st set of endpoints respectively.
Sign Extension for Endpoint Values
For two-region tiles, there are four endpoint values that can be sign extended. Endpt[0].A is signed only if the format is a signed format; the other endpoints are signed only if the endpoint was transformed, or if the format is a signed format. The code below demonstrates the algorithm for extending the sign of two-region endpoint values.
static void sign_extend_two_region(Pattern &p, IntEndpts endpts[NREGIONS]); endpts[1].A[i] = SIGN_EXTEND(endpts[1].A[i], p.chan[i].delta[1]); endpts[1].B[i] = SIGN_EXTEND(endpts[1].B[i], p.chan[i].delta[2]); } } }
For one-region tiles, the behavior is the same, only with endpt[1] removed.
static void sign_extend_one_region(Pattern &p, IntEndpts endpts[NREGIONS]); } }
Transform Inversion for Endpoint Values
For two-region tiles, the transform applies the inverse of the difference encoding, adding the base value at endpt[0].A to the three other entries for a total of 9 add operations. In the image below, the base value is represented as "A0" and has the highest floating point precision. "A1," "B0," and "B1" are all deltas calculated from the anchor value, and these delta values are represented with lower precision. (A0 corresponds to endpt[0].A, B0 corresponds to endpt[0].B, A1 corresponds to endpt[1].A, and B1 corresponds to endpt[1].B.)
For one-region tiles there is only one delta offset, and therefore only 3 add operations.
The decompressor must ensure that that the results of the inverse transform will not overflow the precision of endpt[0].a. In the case of an overflow, the values resulting from the inverse transform must wrap within the same number of bits. If the precision of A0 is "p" bits, then the transform algorithm is:
B0 = (B0 + A0) & ((1 << p) - 1)
For signed formats, the results of the delta calculation must be sign extended as well. If the sign extension operation considers extending both signs, where 0 is positive and 1 is negative, then the sign extension of 0 takes care of the clamp above. Equivalently, after the clamp above, only a value of 1 (negative) needs to be sign extended.
Unquantization of Color Endpoints
Given the uncompressed endpoints, the next step is to perform an initial unquantization of the color endpoints. This involves three steps:
- An unquantization of the color palettes
- Interpolation of the palettes
- Unquantization finalization
Separating the unquantization process into two parts (color palette unquantization before interpolation and final unquantization after interpolation) reduces the number of multiplication operations required when compared to a full unquantization process before palette interpolation.
The code below illustrates the process for retrieving estimates of the original 16-bit color values, and then using the supplied weight values to add 6 additional color values to the palette. The same operation is performed on each channel.
int aWeight3[] = {0, 9, 18, 27, 37, 46, 55, 64};
int aWeight4[] = {0, 4, 9, 13, 17, 21, 26, 30, 34, 38, 43, 47, 51, 55, 60, 64};

// c1, c2: endpoints of a component
void generate_palette_unquantized(UINT8 uNumIndices, int c1, int c2, int prec, UINT16 palette[NINDICES])
{
    int* aWeights;
    if (uNumIndices == 8)
        aWeights = aWeight3;
    else // uNumIndices == 16
        aWeights = aWeight4;

    int a = unquantize(c1, prec);
    int b = unquantize(c2, prec);

    // interpolate
    for (int i = 0; i < uNumIndices; ++i)
        palette[i] = finish_unquantize((a * (64 - aWeights[i]) + b * aWeights[i] + 32) >> 6);
}
The next code sample demonstrates the interpolation process, with the following observations:
- Since the full range of color values for the unquantize function (below) are from -32768 to 65535, the interpolator is implemented using 17-bit signed arithmetic.
- After interpolation, the values are passed to the finish_unquantize function (described in the third sample in this section), which applies the final scaling.
- All hardware decompressors are required to return bit-accurate results with these functions.
int unquantize(int comp, int uBitsPerComp)
{
    int unq, s = 0;
    switch (BC6H::FORMAT)
    {
    case UNSIGNED_F16:
        if (uBitsPerComp >= 15)
            unq = comp;
        else if (comp == 0)
            unq = 0;
        else if (comp == ((1 << uBitsPerComp) - 1))
            unq = 0xFFFF;
        else
            unq = ((comp << 16) + 0x8000) >> uBitsPerComp;
        break;

    case SIGNED_F16:
        if (uBitsPerComp >= 16)
            unq = comp;
        else
        {
            if (comp < 0)
            {
                s = 1;
                comp = -comp;
            }
            if (comp == 0)
                unq = 0;
            else if (comp >= ((1 << (uBitsPerComp - 1)) - 1))
                unq = 0x7FFF;
            else
                unq = ((comp << 15) + 0x4000) >> (uBitsPerComp - 1);
            if (s)
                unq = -unq;
        }
        break;
    }
    return unq;
}
finish_unquantize is called after palette interpolation. The unquantize function postpones the scaling by 31/32 for the signed format and by 31/64 for the unsigned format; deferring that scaling until after palette interpolation keeps the final value in the valid half range (-0x7BFF to 0x7BFF) while reducing the number of multiplications required. finish_unquantize applies this final scaling and returns an unsigned short value that is reinterpreted as a half-precision float.
unsigned short finish_unquantize(int comp)
{
    if (BC6H::FORMAT == UNSIGNED_F16)
    {
        comp = (comp * 31) >> 6; // scale the magnitude by 31/64
        return (unsigned short) comp;
    }
    else // (BC6H::FORMAT == SIGNED_F16)
    {
        comp = (comp < 0) ? -(((-comp) * 31) >> 5) : (comp * 31) >> 5; // scale the magnitude by 31/32
        int s = 0;
        if (comp < 0)
        {
            s = 0x8000;
            comp = -comp;
        }
        return (unsigned short) (s | comp);
    }
}
Related topics | https://docs.microsoft.com/en-us/windows/win32/direct3d11/bc6h-format | CC-MAIN-2020-29 | refinedweb | 1,917 | 50.36 |
Hi everybody,
I'm trying to write a function that takes in 2 values, a min and a max, that prompts the user to enter a value within that range, and uses 0 as a sentinel to quit the loop.
My issue is, I also want the loop to continue prompting the user when the user enters nothing (i.e. hits enter) until a value between the range is entered. How do I go about doing this with the code I have already written? Thanks!
def getValidNum(min, max):
    QUESTION = "Enter a value, 0 to quit: "
    num = input(QUESTION)
    while num != 0:
        if num in range(min, max + 1):
            return num
        else:
            print("Sorry %d - %d or 0 only" % (min, max))
            print("Try again.")
            num = input(QUESTION)
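Not from the original thread — one hedged way to handle the empty-Enter case (the function and variable names below are my own) is to read the input as a string first, so an empty entry can be detected before converting it to an int:

```python
def parse_entry(text, lo, hi):
    """Return the entered number if it is 0 or within lo..hi, else None."""
    text = text.strip()
    if not text:              # user just hit Enter
        return None
    try:
        num = int(text)
    except ValueError:        # not a number at all
        return None
    if num == 0 or lo <= num <= hi:
        return num
    return None

def get_valid_num(lo, hi):
    prompt = "Enter a value, 0 to quit: "
    while True:
        num = parse_entry(input(prompt), lo, hi)   # use raw_input on Python 2
        if num is not None:
            return num
        print("Sorry, %d - %d or 0 only. Try again." % (lo, hi))
```

Separating the validation into its own function also makes it easy to test without prompting.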
I want the splash screen to only show when the app has been completely destroyed, not when it is running in the background and resumed.
A very simple method:
Main Activity is only a splash screen. This Activity is shown while a timer starts that elapses for say 4 seconds.
When 4 seconds hits, the splash screen activity is destroyed and the Main Application Activity is started.
Voila, you now have a splash screen that will never be shown, except when you first start the application.
public class SplashScreen extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.splash_screen);

        Thread t = new Thread() {
            public void run() {
                try {
                    int time = 0;
                    while (time < 4000) {
                        sleep(100);
                        time += 100;
                    }
                } catch (InterruptedException e) {
                    // do nothing
                } finally {
                    finish();
                    Intent i = new Intent(SplashScreen.this, MainApplication.class);
                    startActivity(i);
                }
            }
        };
        t.start();
    }
}
Download the TestDrive solution files (ZIP, 14 MB)
Code
FlexWebTestDrive.mxml
<?xml version="1.0" encoding="utf-8"?>
<s:Application ...>
    <fx:Style (...)
    <s:Label (...)
    <s:Button id="addBtn" styleName="actionButton" .../>
    (...)
    <s:ToggleButton id="toggleBtn" styleName="actionButton" .../>
    (...)
</s:Application>
assets/FlexWebTestDrive.css
@namespace s "library://ns.adobe.com/flex/spark";
@namespace mx "library://ns.adobe.com/flex/mx";

global {
    font-family: Arial;
    font-size: 12;
    chrome-color: #494949;
    symbolColor: #FFFFFF;
}
s|Application {
    backgroundColor: #E6E6E6;
}
s|Button, s|ToggleButton {
    color: #FFFFFF;
    cornerRadius: 5;
    fontWeight: bold;
}
s|Button:disabled {
    color: #000000;
}
.actionButton {
    chromeColor: #7091B9;
}
#xyzLabel {
    color: #FFFFFF;
    fontSize: 24;
    fontWeight: bold;
}
Tutorial
In the previous four modules, you learned to create, debug, and deploy a Flex application. In this module, you learn how to change the appearance of your application using styling and skinning.
With styling, you set component styles inline in MXML (as you already have) or in CSS style sheets. The tags that start with s, like <s:Button>, are Spark components introduced in Flex 4 or later. The tags that start with mx, like <mx:Button> and <mx:PieChart> (which you will use in the next module), are the older Flex components. You set the appearance of MX components primarily using styling. The Spark components have been rearchitected to primarily use a skinning (rather than styling) model in which each component's associated skin file manages everything related to a component's appearance, including its graphics, its layout, and its states. Many but not all of the MX components have been rearchitected as Spark components. When both Spark and MX versions of a component exist, use the newer Spark version of the component.
In this tutorial, you learn to create a style sheet and define style rules that you apply to your application. In the following tutorials, you create and use component skins.
Step 1: Create a style sheet and a CSS global selector.
Return to FlexWebTestDrive.mxml in Design mode. Navigate to the Appearance view and change font and color styles (see Figure 1).
If you don't want to choose your own values, here are some you can use:
font-family: Verdana,
font-size: 10,
chrome-color: #494949.
Figure 1. Set global styles in the Appearance view.
Your components in the design area will reflect these new styles. Even though the global font-size is set to 10 pixels, the XYZ label is still large because it has a font-size style set inline in its MXML tag. Closer, or more specific, style values take precedence.
Switch to Source mode. You will see a new Style tag below the Application tag.
<fx:Style source="FlexWebTestDrive.css"/>
In the Package Explorer, you should see a new file, FlexWebTestDrive.css. Open it. You will see a global CSS selector whose styles will be applied to all components.
/* CSS file */
@namespace s "library://ns.adobe.com/flex/spark";
@namespace mx "library://ns.adobe.com/flex/mx";

global {
    font-family: Verdana;
    font-size: 10;
    chrome-color: #494949;
}
Run the application and see the different fonts and colors.
Figure 2. View the new font and color styles.
Step 2: Move the style sheet into an assets folder.
In the Package Explorer, create a new assets package (if it does not already exist). Drag FlexWebTestDrive.css and drop it in the assets folder. In the Move dialog box that appears, click OK. Return to FlexWebTestDrive.mxml and change the Style tag to use the new location.
Your code should appear as shown here:
<fx:Style source="assets/FlexWebTestDrive.css"/>
Step 3: Modify a CSS selector.
In FlexWebTestDrive.css, change the font-family to Arial, the font-size to 12, and add a symbol-color of white (#FFFFFF).
When typing styles, press Ctrl+spacebar to force Content Assist to pop up so you can select styles from this list (see Figure 3).
Figure 3. Use Content Assist when editing style sheets.
Save the file and then run the application. The font has changed again and the scroll arrows are now white.
Step 4: Create a CSS type selector.
In Design mode for FlexWebTestDrive.mxml, select one of the buttons and in the Properties view set its text to bold and white and change its corner radius to 5. Look at the generated code. Return to Design mode and click the Convert to CSS button (see Figure 4). In the New Style Rule dialog box, select Specific component (see Figure 5).
When you set styles in the Properties view, styles are set inline for the selected state.
<s:Button id="empBtn" color.Employees="#FFFFFF" cornerRadius.Employees="5" fontWeight.Employees="bold" .../>
To move the styles from the component tag to a style sheet, make sure the button component is selected and then click the Convert to CSS button in the Properties view (see Figure 4).
Figure 4. Convert inline styles to CSS.
When you choose to create a selector for a specific component, a CSS type selector is created whose styles will automatically be applied to all component instances of this type—in this case, all Button controls.
Figure 5. Create a CSS type selector.
Return to FlexWebTestDrive.css. You will see a new CSS type selector:
s|Button {
    color: #FFFFFF;
    cornerRadius: 5;
    fontWeight: bold;
}
The s in front of Button is to specify that this is the style for Spark Buttons, not MX Buttons.
Return to FlexWebTestDrive.mxml in Design mode or run the application. All the buttons (except the Bigger Text ToggleButton) should now have rounded corners and bold, white text, but the disabled buttons are now difficult to read (see Figure 6).
Figure 6. See that disabled buttons are unreadable.
Step 5: Create a CSS pseudo selector.
In FlexWebTestDrive.css, create a pseudo selector for the disabled state of Button components and set its color to black, #000000.

s|Button:disabled {
    color: #000000;
}
Look at a component's API to find its defined states that can be used in the pseudo selectors (see Figure 7). Remember you can open a component's API by selecting Help > Dynamic Help, clicking a tag in MXML, and then clicking the API link in the Help view.
Figure 7. Locate the states of a Button component in its API.
Return to FlexWebTestDrive.mxml in Design mode or run the application. You should now be able to read disabled buttons (see Figure 8).
Step 6: Style the ToggleButton controls.
In FlexWebTestDrive.css, use a comma-delimited list to use the same style rule for Button and ToggleButton instances.
The selector should appear as shown here:
s|Button, s|ToggleButton {
    color: #FFFFFF;
    cornerRadius: 5;
    fontWeight: bold;
}
The BiggerText ToggleButton should now also have bold, white text and rounded corners (see Figure 9).
Figure 9. Style the ToggleButton controls.
Step 7: Create a CSS class selector.
In Design mode, select the Add button and in the Properties view change its chrome-color to a blue color, like #7091B9 (see Figure 10). Click the Convert to CSS button. In the New Style Rule dialog box, select All components with style name and name the style actionButton (see Figure 11). Make this the style used by the Add button in all the application states.
Figure 10. Use a class selector to style the Add button.
Figure 11. Create a CSS class selector called actionButton.
The new selector in FlexWebTestDrive.css should appear as shown here:
.actionButton {
    chromeColor: #7091B9;
}
This class selector can be selectively applied to any component.
Return to Source mode in FlexWebTestDrive.mxml and locate the addBtn button. It now has a styleName property (for one state) set equal to the name of the class selector you just defined.
<s:Button id="addBtn" styleName.Employees="actionButton" .../>
To set it for all states, modify the source code to styleName="actionButton".
Step 8: Assign a CSS class selector to another component.
In Design mode, select the Bigger Text button in the Departments state. Select the actionButton style in the Properties view (see Figure 12).
Any styles that can be applied to that component will appear in the drop-down list.
Figure 12. Assign a CSS class selector to the Bigger Text button.
The ToggleButton tag should appear as shown here:
<s:ToggleButton id="toggleBtn" label="Bigger Text" styleName="actionButton" .../>
The actionButton style was selectively applied to this second button, which should now also be blue (see Figure 13).
Figure 13. Use a class selector to style the Bigger Text button.
Step 9: Set the Application background color.
In FlexWebTestDrive.css, create a Spark Application selector and set the backgroundColor to light gray, #E6E6E6.
The selector should appear as shown here:
s|Application {
    backgroundColor: #E6E6E6;
}
Save the file and run the application. The background color should now be light gray when the application loads and when it is displayed.
Step 10: Create a CSS ID selector.
In Source mode of FlexWebTestDrive.mxml, give the XYZ Label an id of xyzLabel and remove its color, fontWeight, and fontSize styles. In FlexWebTestDrive.css, create an ID selector for the Label using #xyzLabel and set the color to #FFFFFF, the font-weight to bold, and the font-size to 24.
Your Label should appear as shown here:
<s:Label id="xyzLabel" .../>
Your CSS ID selector should appear as shown here:
#xyzLabel {
    color: #FFFFFF;
    fontSize: 24;
    fontWeight: bold;
}
Return to FlexWebTestDrive.mxml in Design mode or run the application and make sure the XYZ Label is large, white, and bold (see Figure 14).
Figure 14. View the styled application.
In this tutorial, you learned to create a style sheet and define CSS global, type, class, pseudo, and ID selectors. In addition to these selectors, you can also create component specific class selectors and descendant selectors. In the next tutorial, you will learn to change the appearance of a component more drastically by creating and using skins.
Learn more
- Styles and themes
- Using Cascading Style Sheets
- About style value formats
- Applying styles
- About style inheritance
- Using the StyleManager class
- Applying themes with Flash Builder | https://www.adobe.com/devnet/flex/testdrive/articles/5_customize_app.html | CC-MAIN-2018-47 | refinedweb | 1,619 | 58.08 |
PID Tutorials for Line Following
UPDATE : (07/10/2014) Added a PDF version of this tutorial and a sample video of a PID based Line Follower.
UPDATE : ( 01/16/2014 ) A few updates on making things clearer about P, I, D and Kp, Ki, Kd.
UPDATE : ( 01/18/2014 ) A few minor additions and got rid of some technical errors, misplaced values, etc..
In my PID based line follower, to remove any sort of difficulties and complexities, I chose the Pololu QTR-8RC array sensor. In this tutorial as well, I will hover around this specific line sensor and how its readings can be integrated into your own PID line following code.
The Line Following Sensor : Pololu QTR-8RC Array Sensor
The Pololu QTR-8RC sensor trims the fat when it comes to PID based line following. It not only signals you as to where the lines are located, but in turn, it also outputs the position of the robot while following a track.
KEY TERMS IN PID BASED LINE FOLLOWING:
PID stands for Proportional, Integral and Derivative. In this tutorial, I am taking a few peeks at Pololu's documents. As stated there :
· The proportional value is approximately proportional to your robot’s position with respect to the line. That is, if your robot is precisely centered on the line, we expect a proportional value of exactly 0.
· The integral value records the history of your robot’s motion: it is a sum of all of the values of the proportional term that were recorded since the robot started running.
· The derivative is the rate of change of the proportional value.
(Note that the proportional term (P) , the integral term (I) and the derivative term (D) are the measures of the errors encountered while the robot follows a line. Kp , Ki and Kd are the proportional , integral and derivative constants that are then multiplied to the errors to adjust the robot's position. Kp, Ki and Kd are the PID parameters.)
In this tutorial, we will talk about just the proportional and derivative terms and in turn, just Kp and Kd. However, results can be accomplished using the Ki term as well. Perfectly tuning the Kp and Kd terms should be just enough though.
There are two important sensor readings to look into when implementing PID. However, these readings are not simply the classic "analogRead(leftFarSensor)" type of readings. These readings are not only the analog readings, but also the positional readings of the robot. ( Let's not get into the technicalities of the reading methods of the Pololu Array Sensor. If you are interested, it is well documented on the Pololu website. The basic thing is : This sensor provides values from 0 to 2500 ranging from maximum reflectance to minimum reflectance, but at the same time, also provides additional readings on the robot's position i.e. how far the robot has strayed from the line. )
Setpoint Value:
This is our goal. It is what we are hunting for. The setpoint value is the reading that corresponds to the "perfect" placement of sensors on top of the lines. By perfect, I am referring to the moment when the line is exactly at the center of our sensor. I will add on this later. If the robot is perfectly tuned, then it is capable of positioning itself throughout the course with the setpoint value.
Current Value:
The current value is obviously the instantaneous readings of the sensor. For eg : If you are using this array sensor and are making use of 6 sensors, you will receive a positional reading of 2500 if you are spot on, around 0 if you are far too left from the line and around 5000 if you are far too right.
Error:
This is the difference of the two values. Our goal is to make the error zero. Then only can the robot smoothly follow the line. Hopefully, this term will get more clear as we move forward.
THE PID EQUATION:
Wikipedia provides a number equations that all refer to the applications of PID. They might look complex and hard-to-understand. However, Line following is one of the simple applications of PID, thus, the equations used are not very daunting after all.
1) The first task is to calculate the error. This can also be called the proportional term i.e. it is proportional to the robot's position with respect to the line.
Error = Setpoint Value - Current Value = 2500 - position
As you can see, I have made use of 6 sensors. Our goal is to achieve a state of zero-error in the PID equation. Just imagine, as I stated earlier, the sensor gives a positional reading of 2500 when the robot is perfectly placed. Substituting 2500 as the position in the above equation we get,
Error = 2500 - position = 2500 - 2500 = 0
Clearly, 2500 is the position that zeros out the equation and is thus, our Setpoint Value.
(Note : You may notice, what if the sensor is to the far left? The positional reading would then be 0. Would that result in an error of 2500? Correct. And that error of 2500 is taken into account, and necessary changes are made in the next equation )
2) The second task is to determine the adjusted speeds of the motors.
MotorSpeed = Kp * Error + Kd * ( Error - LastError );
LastError = Error;
RightMotorSpeed = RightBaseSpeed + MotorSpeed;
LeftMotorSpeed = LeftBaseSpeed - MotorSpeed;
Note the equation -------------- MotorSpeed = Kp * Error + Kd * ( Error - LastError );
The proportional constant is multiplied to the proportional term i.e. the error.
The derivative constant is multiplied to the derivative term. ( the derivative term, as the name suggests, measures the rate of change in the proportional term)
Now that we have calculated our error, the margin by which our robot drifts across the track, it is time for us to scrutinize the error and adjust the motor speeds accordingly. Logically speaking, an error of 2500 ( in other words a positional reading of 0 ) means our robot is out to the left, which means that our robot needs to go a bit right, which in turn means the right motor needs to slow down and the left motor needs to speed up (Differential Drive). THIS IS PID! It is basically just an implementation of the general rules of line following in real-time, in high-definition, and in split-seconds!
The MotorSpeed value is determined from the equation itself. RightBaseSpeed and LeftBaseSpeed are the speeds at which the robot runs when the error is zero. Any PWM value from 0-255 should do. ( If you are using an 8-bit PWM microcontroller that is :P )
So this is what we have so far :
( I have become a fan of this beautifier. Thank you Ladvien Sir! )
Troubles you may run into:
As I stated earlier, PID is a super hero and can be a mega-villain as well. It can deceive you with erroneous responses but at the same time, its accurate results will WOW you, if it gets implemented correctly. These are a few troubles I personally ran into while building my line follower.
1) Errors in Sign :
This is the most dangerous of all the errors you may encounter. A small sign error in any part of the PID equation could spell trouble. In this example, I have used the equation error = 2500 - position . This may not work for you. This works for a line follower following a white line on a black background. If you have a black line on a white background, you might need to set it to error = position - 2500 and might also need to make other changes in the signs in the PID equations.
If you are having trouble adjusting your robot, another option is to carry your robot above a certain height, just to determine if the wheels are spinning in the correct directions. For eg : If the robot is at a position of 5000, the robot is to the far right. However, if the wheels are still spinning in such a way that the robot is still tending to move to the right, there is a sign error in your second equation.
RightMotorSpeed = RightBaseSpeed + MotorSpeed; ( + might need to be changed to - )
LeftMotorSpeed = LeftBaseSpeed - MotorSpeed; ( - might need to be changed to + )
2) Troubles with finding the PID parameter values :
I have read many comments on many PID line following projects and the most popular question that stands out is : "Please provide me the Kp, Ki and Kd values!!!"
Really? It's a great experience, if not the best, in finding them. You falter and falter and falter, and finally, you will have something running at the end. So give it a try yourself!
Whatever you do and however you forge ahead with tuning the PID parameters, I suggest to start small. Maybe a PWM value of around 100 should do for the base motor speeds. Then plug in the Kp and Kd terms. As I mentioned way above, the P term is the Proportional and the D term is the Derivative. The derivative of an error is smaller than the error itself. Thus, to bring about meaningful corrections, it needs to be multiplied by a bigger constant, and thus the Kp term is very small in comparison to the Kd term. You could start just with the Kp term, and then when you have something fine, you can add the Kd term and experiment as you move towards your goal of a smooth line follower.
Thanks for reading this tutorial. It's not that well managed and systematic, I guess. I may have made a few errors along the way, so constructive suggestions are highly appreciated. I, myself, am still learning to make my programs look more matured. I will try to make some more updates and hopefully add some more ideas, if I learn them in the days to come. Do find the attached code if you are having troubles. ( Still, there are no values for the constants :P . I'll give you a hint though. Kd is at least 20 times bigger than Kp in my case )
Happy Tuning!
code not working
I built this robot but no matter what I do, the robot only goes straight and doesn't follow the line at all. Here is my code. Any ideas?
#include <QTRSensors.h>
#define Kp 20 // experiment to determine this, start by something small that just makes your bot follow the line at a slow speed
#define Ki 1
#define Kd 400 // experiment to determine this, slowly increase the speeds and adjust this value. ( Note: Kp < Kd)
#define rightMaxSpeed 75 // max speed of the robot
#define leftMaxSpeed 75 // max speed of the robot
#define rightBaseSpeed 50 // this is the speed at which the motors should spin when the robot is perfectly on the line
#define leftBaseSpeed 50 //
#define rightMotor1 3
#define rightMotor2 4
#define rightMotorPWM 5
#define leftMotor1 12
#define leftMotor2 13
#define leftMotorPWM 11
#define motorPower 8
QTRSensorsRC qtrrc((unsigned char[]) {14, 15, 16, 17, 18, 19} ,NUM_SENSORS, TIMEOUT, EMITTER_PIN); // sensor connected through analog pins A0 - A5 i.e. digital pins 14-19
unsigned int sensorValues[NUM_SENSORS];
void setup()
{
pinMode(rightMotor1, OUTPUT);
pinMode(rightMotor2, OUTPUT);
pinMode(rightMotorPWM, OUTPUT);
pinMode(leftMotor1, OUTPUT);
pinMode(leftMotor2, OUTPUT);
pinMode(leftMotorPWM, OUTPUT);
pinMode(motorPower, OUTPUT);
for (int i = 0; i < 100; i++) // calibrate for sometime by sliding the sensors across the line
qtrrc.calibrate();
delay(20);
delay(2000); // wait for 2s to position the bot before entering the main loop
}
int lastError = 0;
int I = 0;
void loop()
{
unsigned int sensors[NUM_SENSORS];
int position = qtrrc.readLine(sensors); // get calibrated readings along with the line position, refer to the QTR Sensors Arduino Library for more details on line position.
int error = position - 2500;
int P = error * Kp;
int I = (I + error) * Ki;
int D = (error - lastError) * Kd;
int motorSpeed = P + I + D;
digitalWrite(motorPower, HIGH); // move forward with appropriate speeds
digitalWrite(rightMotor1, HIGH);
digitalWrite(rightMotor2, LOW);
analogWrite(rightMotorPWM, rightMotorSpeed);
digitalWrite(motorPower, HIGH);
digitalWrite(leftMotor1, HIGH);
digitalWrite(leftMotor2, LOW);
analogWrite(leftMotorPWM, leftMotorSpeed);
}
This is a great
This is a great tutorial...the explanation for the PID parameters is really nice. It'll be really useful since I'm building a line follower now. Thanks for this tutorial!
Also your line follower is really nice. :)
Thanks for the compliment
Thanks for the compliment robodude. Great to know it's going to assist you!
Thanks for the compliment
Sorry - Double post.
Quick edit please.
Early on, you say:
Can you maybe place the Kp term with your description of Proportional and Kd term with Derivative, and show the equation MotorSpeed = Kp * Error + Kd * ( Error - LastError );
Otherwise, I found this article very articulate and informative.
Great work!
p.s. maybe place the associated code as an attachement instead of inline. The site does not handle inline code well.
Cheers.
Thanks for spotting that
Thanks for spotting that unix_guru. I appreciate it. :-)
I too am still learning more and more about PID, so this tutorial is basically just an idea about how the line following code works.
I guess I made it sound like the Kp term and the proportional term are the same thing! ( they are definitely not! )
I'll try and make things a bit clearer in that part.
Thanks again
Oh and yeah, I hope we have a [code]......[/code] thing in the future for inserting code on LMR posts.
Ashim | http://letsmakerobots.com/node/39972 | CC-MAIN-2015-40 | refinedweb | 2,238 | 61.87 |
I'm working on exception handling. Here is my code, I need some help..
Requirements:
=> If the user push a value to stack when stack is full,it must throw an exception of type "cInvalidPush".
=> If the user pop a value from stack when stack is empty , it must throw an exception of type "cInvalidPop" .
=> The class "cInvalidPush" is inherited from the class "cInvalidStack" .
=> The class "cInvalidPop"is inherited from the class "cInvalidStack" .
=> In the exception classes mentioned above, their objects must be able to store error message in them.
=> Create an object of CStack in main and show how you would catch the exceptions if any of above occurs.
class that holds the prototype of all functions:
#include <iostream>
#include <stdexcept>
using namespace std;

class CStack // holds the prototype of functions
{
private:
    int size;             // holds size of stack
    int top;              // contains the index value of array from where the value will be pushed and poped
    int *ptr_stk;         // points to an array of integer
    int isFull() const;   // returns 1 if stack is full otherwise 0
    int isEmpty() const;  // returns 1 if stack is empty otherwise 0
public:
    CStack(int = 10);     // default size for the stack is 10
    ~CStack();
    void push(const int); // push the value in array of integers pointed by int *ptr_stk
    int pop();            // pop the value array of integers pointed by int *ptr_stk from the top of stack
}; // end class CStack
and now here is the defintion of all prototypes:
int CStack::isEmpty() const
{
    if (top == 0)
        return 1;
    else
        return 0;
}

int CStack::isFull() const
{
    if (top == size)
        return 1;
    else
        return 0;
}

CStack::CStack(int stackSize)
{
    size = stackSize;
    ptr_stk = new int[size]; // allocates the pointer-based space
    for (int i = 1; i <= size; i++)
        ptr_stk[i] = 0;
}

void CStack::~CStack()
{
    delete [] ptr_stk; // de-allocates the space
}

void CStack::push(const int val)
{
    int decision1;
    decision1 = isFull();
    if (decision1 == 1)
        throw cInvalidPush;
    else
        for (int j = 1; j <= top; j++)
        {
            ptr_stk[i] = val;
        }
}

void CStack::pop()
{
    int decision2;
    decision2 = isEmpty();
    if (decision2 == 1)
        throw cInvalidPop;
    else
        delete [] ptr_stk;
}
I made two classes for exception:
class cInvalidPush : public cInvalidStack // exception if stack is full
{
public:
    cInvalidPush() : cInvalidStack("Stack is full.")
    {
    }
}; // end class

class cInvalidPop : public cInvalidStack // exception if stack is empty
{
public:
    cInvalidPop() : cInvalidStack("Stack is empty.")
    {
    }
}; // end class
but i don't know what to write in class cInvalidStack? it was mentioned in the question that these two classes are inherited from class cInvalidStack...
My second question is can anyone tell me what to write in it's main() ?
because I'm throwing two exceptions in my program. Rather one of them occurs at a time. So how to handle two exceptions in one catch? and how to make its TRY block?
Oh, i tried and here it is:
int main()
{
    CStack obj; // instantiating object
    int value;
    while (cin >> value)
    {
        try
        {
            cout << "enter the value in stack: " << endl;
            obj.push(value);
        }
        catch
        {
            /* no idea? */
        }
    }
    return 0;
}
Can anyone complete it please? and one other thing how to handle the data member top ? | https://www.daniweb.com/programming/software-development/threads/498347/exception-handling | CC-MAIN-2017-17 | refinedweb | 505 | 58.32 |
The screen prints below are taken from the sample Oracle HR Management project. This project is intended for learning purposes.
This tutorial is intended to be a beginner's guide to Oracle XE. Oracle XE is an entry level database from Oracle. It has many of the features of the standard Oracle database yet it is easy to install with very little configuring needed and also easy to administer. There are some limitations as you would expect from a free product. The first is that there is a four gigabyte user data restriction. That is you can only store up to four gigabytes of user data. This does not include table namespaces or database data. The other restriction is that the database will only use one CPU from the host machine even if the host machine has more than one CPU.
Oracle XE contains a sample account named HR. This account is locked, and before you can use it you need to unlock it. This is a very simple task: because the sample HR account is locked by default, you log in using the system account details, which gives you administrator privileges, and then unlock the sample HR account.
If you haven't already done so, download a copy of Oracle XE and install it on your machine. Take note of the password you supply when installing Oracle XE. This password is your system login password.
After installing Oracle XE, your Program Files menu will contain a new entry. Navigate to Oracle Database 10g Express Edition. Once the submenus appear, you should notice a menu titled 'Go To Database Home Page'. Click on this menu. Your default Web browser will load the Oracle Application Express (Oracle APEX). Oracle APEX is a Web application which in short is an application to help you manage your database.
To login to the system account, type SYS for the username. Enter the password you supplied when installing Oracle XE. Once logged in, you will be presented with the following screen.
Notice the four main menus. At this point, your main task is to unlock the sample HR account. The reason for this is we will be developing a simple C# application which will make use of this account.
To unlock this account click Administration. From the menu, click Database Users, then from the submenu, click Manage Users. You should now see the sample HR account with a Lock symbol. Click the HR icon. Now from the Manage Database User panel, select Unlock from Account Status. After selecting Unlocked, click the Alter User button. You should now see the lock symbol removed from the HR account.
We will be using the Oracle Data Provider to establish a connection to Oracle XE. ODP.NET is by default supplied with Oracle XE. That is, when you install Oracle XE, you also get ODP.NET. To make sure that you actually do have ODP.NET, you can perform a very simple test.
Run SQL*Plus, this is actually titled 'Run SQL Command Line'. Once the console is loaded, type the following:
connect hr/hr
Above, we are simply connecting to the HR account. Using hr as the username and hr as the password. If you receive Connected, this means that ODP.NET is installed.
It's about time we actually started doing some programming. Let's start with some very simple code. The code below in listing 1.1 shows how to establish a connection. Before you can use the code, you need to add the
Oracle.DataAccess reference.
using Oracle.DataAccess.Client; using Oracle.DataAccess.Types; static class ConnectionClass { private static OracleConnection conn; public static void Connection() { string oradb = "Data Source=XE;User Id=hr;Password=hr;"; conn = new OracleConnection(oradb); conn.Open(); } }
Every time I write a database application, I always like to create a separate class which consists of the database connection code. Using this approach, I can easily reuse my connection code. I also make the class a
static class. This allows me to use the class without having to create an instance of the class.
The above code is fairly easy to understand. We simply create an instance of the
OracleConnection class. The
OracleConnection class takes one argument, which is the connection string. The connection string simply consists of the Data Source name, which is by default XE, the user id, which is the username and the password. Although the above code is all that is needed to establish a connection to the Oracle database, we can add a
try,
catch block to catch any errors that might occur. The code in Listing 1.1 can be changed to the following code in Listing 1.2 below:
public static string Connection() { try { string oradb = "Data Source=XE;User Id=hr;Password=hr;"; conn = new OracleConnection(oradb); conn.Open(); } catch (OracleException e) { return e.Message; } return conn.State.ToString(); }
The first thing to note about this code is that the
Connection method will return a
string. Whereas in Listing 1.1, the
Connection method was declared as
void which, did not return a value to the calling method. This approach is a very simple approach which allows us to catch any errors that might occur and send them back to the calling method, where you can process the message either by showing the message to the user or based on the error code you can perform another task. If there are no errors,
conn.State.ToString() will return the
string "Open" indicating that the connection is open.
The HR database contains a table named
Employees. This table holds details about each employee such as first name, last name, e-mail and so on. For our next task, we will create a
GetEmployees() method, which will return a
DataTable. The code in listing 1.3 demonstrates how to retrieve all the columns from the
Employees table and return the data as a
DataTable.
using System.Data; private static string SQL; private static OracleConnection conn; private static OracleCommand cmd; private static OracleDataAdapter da; private static DataSet ds; ..................................................... public static DataTable GetEmployees() { SQL = "SELECT * FROM Employees"; cmd = new OracleCommand(SQL, conn); cmd.CommandType = CommandType.Text; da = new OracleDataAdapter(cmd); ds = new DataSet(); da.Fill(ds); return ds.Tables[0]; }
The first thing to note is that we have added a new directive, this is the
System.Data namespace. The
GetEmployees() method, creates an instance of the
OracleCommand class. This object is responsible for formulating the request and passing it to the database. It takes an SQL statement and the connection object as arguments. However, it can take just the SQL as an argument and the connection object can be set in the
OracleCommand objects property such as
cmd.Connection = conn.
Next we create an instance of the
OracleDataAdapter. We use the
OracleDataAdapter to fill a
Dataset, which will be used to return a table from the
Dataset. We supply the
OracleCommand object as an argument to the
OracleDataAdapter. We then use the
Fill method of the
OracleDataAdapter to fill a
DataSet. Finally, we return the table using
return ds.Tables[0].
Our
Connection class is almost complete, however there is one more method we need to implement. This is the
Terminate() method. The
terminate method will be responsible for closing the database connection. Listing 1.4 below shows the code for the
Terminate() method:
public static void Terminate() { conn.Close(); }
Let's take a look at
ConnectionClass code. Listing 1.5 below shows the entire code:
using System; using System.Collections.Generic; using System.Text; using Oracle.DataAccess.Client; using Oracle.DataAccess.Types; using System.Data; static class ConnectionClass { private static string SQL; private static OracleConnection conn; private static OracleCommand cmd; private static OracleDataAdapter da; private static DataSet ds; public static string Connection() { try { string oradb = "Data Source=XE;User Id=hr;Password=hr;"; conn = new OracleConnection(oradb); conn.Open(); } catch (OracleException e) { return e.Message; } return conn.State.ToString(); } public static DataTable GetEmployees() { SQL = "SELECT * FROM Employees"; cmd = new OracleCommand(SQL, conn); cmd.CommandType = CommandType.Text; da = new OracleDataAdapter(cmd); ds = new DataSet(); da.Fill(ds); return ds.Tables[0]; } public static void Terminate() { conn.Close(); } }
Finally we need to create a simple program to use our
ConnectionClass. Listing 1.6 below shows a complete Program.cs file which uses the
ConnectionClass to display employees' first name and last name.
using System; using System.Collections.Generic; using System.Text; using System.Data; class Program { static void Main(string[] args) { string strConn = ConnectionClass.Connection(); DataTable emp = ConnectionClass.GetEmployees(); for (int i = 0; i < emp.Rows.Count; i++) { //Print first name and last name Console.WriteLine(emp.Rows[i][1].ToString() + "\t\t" + emp.Rows[i][2].ToString()); } ConnectionClass.Terminate(); Console.Read(); } }
You can download the sample HR GUI application, which uses the supplied HR database or you can download the sample
ConnectionClass project files if you want to work with the basics.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/Oracle_XE_with_C_.aspx | crawl-002 | refinedweb | 1,510 | 59.9 |
CodePlexProject Hosting for Open Source Software
I'm trying to create a feature that would allow me to know when a page's url has been updated
From what I read (and I read ...), I've got a few ideas about how to do this :
- I think I need a contenthandler to catch the update phase of the item
- a contentpart that would need to be added on the contentitem - a page in this case
but then, I have absolutely no idea how to wire all this together nor how to actually write this ...
(full disclosure : I've never developed a module for orchard before ... oh well I did code a redirection module, but frankly nothing to be very proud of :) )
Can someone give me a hand ? Thanks ...
You probably don't even need a content part. A ContentHandler can be applied to all content items, or you can target a specific part. Rather than PagePart, I'd target RoutePart, so then anything with routing gets the new behaviour automatically.
All you need to do is:
- Create a module (well, you already have one created, so you can just add this code there)
- Add a class like this:
public class RedirectContentHandler : ContentHandler {
public RedirectContentHandler() {
OnPublishing((publishing,part)=>{
// Here you have access to the context (publishing) and the RoutePart (part)
});
}
}
There are a number of events that you can attach to as well as Publishing. Perhaps Versioning would be most appropriate since it gets called when a new version of an item is saved. If you need more pointers let me know!
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://orchard.codeplex.com/discussions/282115 | CC-MAIN-2017-09 | refinedweb | 297 | 75.95 |
01 March 2012 11:59 [Source: ICIS news]
LONDON (ICIS)--?xml:namespace>
Frankfurt-based Verband der Chemischen Industrie (VCI) had previously forecast 1.0% year-on-year growth in
The revision came after VCI reported that Germany's 2011 fourth-quarter chemical production had fallen 2.0% from the third quarter and was down 4.3% year on year from the 2010 fourth quarter.
However, the trade group said that the longer-term prospects of
Production growth should resume in 2013, rising by about 2.0-3.0% from 2012, it said. Through 2020, VCI expects Germany's annual chemical production growth to average 2.0-2 | http://www.icis.com/Articles/2012/03/01/9537167/trade-group-cuts-germanys-2012-chem-output-growth-forecast-to-zero.html | CC-MAIN-2014-49 | refinedweb | 107 | 68.67 |
Why upgrade?
See Pants 1.x vs. 2.0.
Step 1: Upgrade to Pants 1.30
We recommend upgrading to 1.30.1, which is the last minor release in the 1.x series. See Upgrade tips for some tips with upgrading.
Upgrade your
./pants script
./pantsscript
We've made changes recently to the script to facilitate the upgrade to Pants 2.0. Run this:
curl -L -o ./pants
Check in any changes to your version control.
Step 2: Setup JVM, Node, and Go support (if relevant)
Pants 2.0 initially only supports Python, with support for other languages coming soon.
If you are using Pants's JVM, Node, or Go support and would like to continue using it, you will want two different Pants scripts: one to run Python with Pants 2.x, and one to run the other languages with Pants 1.x.
Step 2a: Setup
pants.v1.toml
pants.v1.toml
Copy over your original
pants.toml into a new TOML config file. You can call it whatever you'd like, such as
pants.v1.toml or
pants.jvm.toml.
We recommend deactivating the Python backend package and any Python plugins like
pantsbuild.pants.contrib.mypy so that you do not accidentally use your v1 script to run Python code.
[GLOBAL] pants_version = "1.30.1" # Instead of using backend_packages.remove, you may alternatively # explicitly enumerate every backend_package you want activated. backend_packages.remove = ["pants.backend.python"] plugins = [ "pantsbuild.pants.contrib.go==%(pants_version)s", "pantsbuild.pants.contrib.node==%(pants_version)s", ]
Finally, tell Pants to ignore the BUILD files for your Python code. This is important because Pants 1.x may not understand BUILD files from Pants 2.x, such as if Pants 2.x adds a new target type.
[GLOBAL] # Replace this with the paths to your Python code. build_ignore = ["src/python", "tests/python"]
Step 2b: Create
./v1
./v1
Rather than using the script
./pants, you will use this script to run Pants for other languages. You can name this script whatever you'd like, such as
./v1,
./pants_jvm, or
./jvm.
export PANTS_TOML=pants.v1.toml export PANTS_CONFIG_FILES="[\"$PANTS_TOML}\"]" export PANTS_BIN_NAME=./v1 ./pants "[email protected]"
Then, run
chmod +x ./v1.
Step 2c: Remove the other languages from
pants.toml
pants.toml
Now update your main
pants.toml to remove from
backend_packages and
plugins to deactivate non-Python implementations.
Set
build_ignore to tell Pants to ignore the BUILD files with any non-Python target types.
[GLOBAL] pants_version = "1.30.1" # Instead of using backend_packages.remove, you may alternatively # explicitly enumerate every backend_package you want activated. backend_packages.remove = ["pants.backend.jvm"] plugins = [ "pantsbuild.pants.contrib.mypy==%(pants_version)s", ] # Replace this with the paths to your non-Python code. build_ignore = ["src/java", "src/go"]
Step 3: Rewrite your plugins (if relevant)
Refer to Plugins overview to learn more about the new Rules and Target APIs.
We know that rewriting a plugin can be time-consuming, and we want to help. Please message us on Slack in the
#plugins channel. We can actively help you rewrite your code, including pair programming if needed. We don't want having to port your plugins to block you from upgrading to 2.0.
Step 4: Upgrade to 2.0
Now that the v1 engine is removed, some of the option names changed.
--v1and
--v2are both deprecated and will no-op.
backend_packages2and
plugins2are deprecated in favor of
backend_packagesand
plugins. Also, there are no more published Plugins, so the
pluginskey can be removed.
--enable-pantsdis now
--pantsd. (Deprecated in 1.30.0)
--quietwas removed because Pants output is now much less verbose thanks to the dynamic UI. Instead, to disable all Pants output, use
--no-dynamic-uiand
--level=error.
Before:
[GLOBAL] pants_version = "1.30.1" v1 = false v2 = true backend_packages = [] backend_packages2 = [ "pants.backend.python", "pants.backend.python.lint.isort", "pants.backend.python.lint.mypy", ] plugins = [] plugins2 = [] enable_pantsd = true
After:
[GLOBAL] pants_version = "2.0.1rc0" backend_packages = [ "pants.backend.python", "pants.backend.python.lint.isort", "pants.backend.python.typecheck.mypy", ] pantsd = true
Run
./pants to validate that your
pants.toml is valid. You may need to remove some options that are no longer used in Pants 2.0.
Once
./pants works, run
./pants list :: to make sure that Pants can parse all of your BUILD files.
Reminder: use
ignore_pants_warningsto ignore deprecations
For example:
[GLOBAL] ignore_pants_warnings = [ "DEPRECATED: Use the target type `pex_binary`", ]
Step 5: Tweak your CI's caching (recommended)
With the v1 engine, most caching was saved to the folder
<build root>/.pants.d. With the v2 engine, caching is saved to
~/.cache/pants/lmdb_store and
~/.cache/pants/named_caches.
Refer to Using Pants in CI for more information, including a script to nuke your cache when it has gotten too large.
Step 6: Set up a constraints file (strongly recommended)
A constraints file (aka lockfile) is important for reproducible builds.
Setting up a constraints file also allows Pants to optimize to avoid resolving requirements more than one time for your project. This greatly speeds up the performance of goals like
test,
run, and
repl.
See Third-party dependencies for a guide to set this up.
Key differences to be aware of
Missing features? Let us know!
Please message us on Slack if you are having issues upgrading or find something missing. We would love to help.
Files are now the "atomic unit", rather than targets (1.30 vs. 2.0)
Previously, with both the v1 and v2 engines, targets were the "atomic unit". For example, you could only add dependencies on an entire target, rather than specific files within the target.
This model created a tension. It's convenient to only specify a few targets to avoid boilerplate in BUILD files, but this would result in much coarser invalidation. If you wanted fine-grained invalidation, you would need one target for every single file in your project.
Now, files are the "atomic unit". You can now depend on a specific file, which will copy the metadata from its original target. Depending on a target is now sugar for depending on each file belonging to that target.
This should have little impact on your day-to-day usage, other than:
- Tests run per-file, rather than per-target.
- The project introspection goals like
list,
filter,
dependencies, and
dependeeswill have different output.
--changed-sinceis more precise. It used to return all sibling files to files that were changed, even if those siblings were untouched. Now, it only returns files that were actually changed.
- If you use dependency inference, Pants will infer dependencies on specific files, rather than the entire target. For one Pants user with ~70k lines of Python code, this finer-grained precision reduced the size of dependencies by 30%!
run,
package (formerly
binary,
setup-py, and
awslambda),
fmt, and
lint will all behave the same as before. You can still use both files and address specs like
:: on the command line. You can keep your
dependencies fields the same as before.
Previously, we did not recommend using recursive globs with the
sources field, like
**/*.py, because it resulted in too coarse of invalidation. If you are using dependency inference—or use explicit file dependencies—you no longer need to be concerned with how the granularity of your targets will impact the granularity of your dependencies. (However, using too coarse of targets may still be difficult to reason about.)
Dependency inference is enabled by default (1.30 vs. 2.0)
Pants now understands your Python import statements and knows how to map those imports back to the owning targets, e.g.
python_library and
python_requirement_library targets. This means that most of the time, you can leave off the
dependencies field in your BUILD file.
While we strongly recommend using this feature—because it makes Pants much more ergonomic and is likely to result in finer-grained invalidation—you probably do not want to use it when first upgrading. You can turn off this feature by setting
imports = false in the
[python-infers] section.
Once you have successfully upgraded to 2.0, you can re-enable dependency inference. You'll first want to teach Pants about your third-party dependencies, see Third-party dependencies. Then set
imports = true, but don't yet delete anything from your BUILD files.
Simply turning on dependency inference should be safe; Pants will only infer dependencies when there is no ambiguity, and it's optimized to avoid false positives. All of your original explicit
dependencies will still be respected. While dependency inference will likely result in some dependencies being added that were unintentionally left off, these dependencies were likely already being used thanks to transitive dependencies (otherwise, your code would have been broken).
Now, with dependency inference enabled, you can start deleting
dependencies from your BUILD files. This can be an incremental process; you do not need to update all your BUILD files in one pull request. For each target, run
./pants dependencies before, then delete the entire
dependencies field, run
./pants dependencies again to compare, and check if anything removed should have been included. You will want to check your import statements to see what is actually in use; it's common for the
dependencies field to become stale and drift from your code.
Less magical handling of
__init__.py files (1.30 vs. 2.0)
__init__.pyfiles (1.30 vs. 2.0)
Previously, with both the v1 and v2 engines, Pants would detect any missing
__init__.py files and automatically inject them by generating empty files. This is surprising and also breaks PEP 420 style namespace packages.
Instead, now Pants will use any
__init__.py files it discovers for a file or its ancestor directories.
__init__.py files will be used even if they are left off of the
sources field in a target. Pants will no longer auto-generate any missing
__init__.py files.
If you set the option
--python-infer-inits on (disabled by default), then Pants will infer a proper dependency on those files, rather than simply copying the file. You can run
./pants dependencies to see this behavior. This is because it's possible for an
__init__.py file to have content in it. This behavior may result, though, in you having more dependencies than you desire. If your
__init__.py files are all empty, it is safe to keep this option off.
If you encounter issues with imports, you may need to manually add any missing
__init__.py files.
Dependency inference for
conftest.py (1.30 vs. 2.0)
conftest.py(1.30 vs. 2.0)
Pytest uses any
conftest.py files found in the current directory and any ancestor directories.
Previously, with both the v1 and v2 engine, you would have to be careful to make sure that each
conftest.py file had an owning target, and then to add an explicit dependency on each
conftest.py file that you wanted to use.
Instead, now Pants will infer dependencies on any sibling and ancestor
confest.py files, which will ensure that they are always used. You can turn this feature off by setting
conftests = false in the
[python-infer] section of your
pants.toml.
Use the
package goal instead of
binary,
awslambda, and
setup-py (1.30 vs. 2.0)
packagegoal instead of
binary,
awslambda, and
setup-py(1.30 vs. 2.0)
We consolidated all three of those goals into the new
package goal. (The old goals still exist, but are deprecated).
package behaves identically for
binary and
awslambda. For
setup-py, rather than using command line arguments like
./pants setup-py path/to:tgt -- bdist_wheel, set the field
setup_py_commands = ['bdist_wheel'], for example.
Use
python_distribution target type for
provides=setup_py() (1.30 vs. 2.0)
python_distributiontarget type for
provides=setup_py()(1.30 vs. 2.0)
Previously, the
provides field was used on
python_library targets for the
setup-py goal. Now, use a dedicated
python_distribution target.
Typically, you can keep the original
python_library you had the same as before, e.g. keep the same
dependencies field the same. Then, create a new
python_distribution target and move the
provides field from the
python_library to the
python_distribution.
Finally, add to the
python_distribution's
dependencies field the address to the original
python_library; this will cause the
python_distribution to include everything that was originally included. You can verify everything shows up correctly by running
./pants dependencies --transitive path/to:my-dist.
python_library( name="lib", dependencies=["dep1", "dep2"], ) python_distribution( name="my-dist", dependencies=[":lib"], provides=setup_py( name="my-dist", ... ) )
python_binary is deprecated in favor of
pex_binary (1.30 vs. 2.0)
python_binaryis deprecated in favor of
pex_binary(1.30 vs. 2.0)
The target type behaves identically. It was renamed for clarity and to accommodate possible future binary formats like PyInstaller.
Use this command to automatically update your BUILD files:
- macOS:
for f in $(find . -name BUILD); do sed -i '' -Ee 's#^python_binary\\(#pex_binary(#g' $f ; done
- Linux:
for f in $(find . -name BUILD); do sed -i -Ee 's#^python_binary\\(#pex_binary(#g' $f ; done
Tests run in the background by default (v1 vs. v2 engine)
In v1, tests run in the foreground. That is, you can see tests run in real-time in your terminal, and you can use breakpoints and debuggers.
In v2, tests instead run in the background. Why? So that Pants can run multiple test files in parallel.
To instead run a test interactively, run
./pants test --debug. (See test).
Tests always run in a chroot (v1 vs. v2 engine)
Before Pants 1.25.0, tests would run in the build root, rather than in a temporary directory (chroot). We changed the default for the v1 engine in 1.25.0 to default to running in a chroot, but you might have still set
chroot = false in the
[test.pytest] scope.
The
chroot option was removed because the v2 engine always runs tests in a chroot. To fix any issues, usually, you will need to declare dependencies that you previously left off, such as declaring a
files() or
resource() target for resource files.
(Why do we now run in a chroot? This allows us to safely run multiple tests in parallel.)
Coverage.py support was redesigned (v1 engine vs. v2 engine)
Previously, Pants would try to figure out which modules you wanted coverage for by either using the
coverage option in the
[test.pytest] scope, or by trying to automatically figure out the value by looking at the
coverage field in
python_tests targets and using the package names of your test files. This did not work very well and was clunky to get the correct results.
Now, Pants will default to reporting on every file in the transitive closure of your tests, meaning any file that is touched during your tests' run. The
coverage field was removed from
python_tests. If you want more precise coverage data, you can use the
--coverage-py-filter option, e.g.
./pants --coverage-py-filter=helloworld.util.lang.
We also added new options to specify which type of report(s) you'd like. See test for instructions on how to use coverage.
Linter config files must be specified (v1 vs. v2 engine)
Because linters and formatters now run in a chroot (temporary directory), you must explicitly specify your config files to make sure that they get copied into the directory. See Linters and formatters.
(Running in a chroot means that Pants can run all of your linters in parallel.)
MyPy is activated differently (1.30 vs. 2.0)
Before:
[GLOBAL] plugins = ["pantsbuild.pants.contrib.mypy"]
After:
[GLOBAL] backend_packages = [ "pants.backend.python", "pants.backend.python.typecheck.mypy", ]
MyPy no longer runs under the
lint goal, and instead uses a new
typecheck goal.
See typecheck for more information.
IPython is activated differently (v1 engine vs. v2 engine)
Before:
[repl.py] ipython = true
After:
[repl] shell = "ipython"
python_app is now
archive (1.30 vs. 2.0)
python_appis now
archive(1.30 vs. 2.0)
Rather than using a
python_app target and the
bundle goal, use the
archive target type and the
package goal. We redesigned this feature to be simpler, such as using dependencies on
files() targets rather than using the
bundle type. If you still need to relocate where files are located, you can use the
relocated_files() target. See Resources and archives.
--cache-ignore is removed (v1 vs. v2 engine)
--cache-ignoreis removed (v1 vs. v2 engine)
The v2 engine handles caching very differently than the v1 engine. Rather than having monolithic tasks like
test.pytest, each build step is broken up into several composable "rules". Because of this design, options like
--cache-ignore and
--test-pytest-cache-ignore would no longer do anything.
Instead, you can use
--no-process-execution-use-local-cache. This will avoid reading from the global cache in
~/.cache/pants/lmbd_store. Warning: this means that you will re-resolve Python requirements, which is slow.
If you're trying to rerun a test, you can instead run
./pants test --force to force the test to rerun, but still use the cache for everything else (See test).
targets is replaced by
help (1.30 vs. 2.0)
targetsis replaced by
help(1.30 vs. 2.0)
Run
./pants help targets for a list of target types, and
./pants help $target_type for details on a specific target. The new
help mechanism gives much more detail than the previous
targets goal.
option is removed in favor of a better
help (1.30 vs. 2.0)
optionis removed in favor of a better
help(1.30 vs. 2.0)
./pants help will now show you what the current value is and how the value was derived.
You can use
./pants help-all to get information on every option in JSON format.
alias is removed (1.30 vs. 2.0)
aliasis removed (1.30 vs. 2.0)
If you found this feature useful, we'd be happy to add it back in a more powerful way. Please message us on Slack or open a GitHub issue.
prep_command is removed (1.30 vs. 2.0)
prep_commandis removed (1.30 vs. 2.0)
prep_command does not fit well with the v2 engine's execution model. If you needed this functionality, please message us on Slack and we will help to figure out how to recreate your setup.
minimize,
filemap,
path,
paths, and
sort are removed (1.30 vs. 2.0)
minimize,
filemap,
path,
paths, and
sortare removed (1.30 vs. 2.0)
We are happy to add these back if you found them useful. Please message us on Slack or open a GitHub issue.
Updated 7 months ago | https://www.pantsbuild.org/docs/how-to-upgrade-pants-2-0 | CC-MAIN-2021-25 | refinedweb | 3,076 | 68.57 |
If you're using a C++11 compiler, consider using enum classes (scoped enumerations) instead of standard enumerated types: their enumerators are strongly scoped (you must qualify them with the enumeration's name) and strongly typed (they won't implicitly convert to integers).
Hi Alex,
I have a lot of problems with enums, and I think it's because of my compiler, but I don't really know. When I make an enum I get a warning that says "[Warning] non-static data member initializers only available with -std=c++11 or -std=gnu++11", and when I use an enum class I get an additional warning that says "[Warning] scoped enums only available with -std=c++11 or -std=gnu++11", and I don't really know what either of them means. Besides, when I try to create a variable and initialize it with an enumerator from the enum class, I get an error that says "'Imaginary' is not a class or a namespace" (Imaginary is how I named my enum class), and I don't know what I'm doing wrong, because it's almost copied from your code, with the scope qualifier and everything. Btw, while I was writing this message I tried to change my code a few times and the first warning disappeared (I don't know why), so don't bother about it if it's unclear what I meant.
Thanks for the attention.
It sounds like maybe your compiler was confusing an enum class with a normal class. We talk about non-static data member initializers in the upcoming lesson in this chapter on structs.
Ok, so I’ve read that, and I guess that I can’t use non-static initialization with my compiler right? But about the enum class part, does it mean that my compiler can’t compile enum classes?
It sounds like your compiler isn’t C++11 compatible, or you haven’t turned C++11 compatibility on. Both enum classes and non-static initialization are part of C++11.
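For anyone else hitting this: a hedged example of what "turning on C++11 compatibility" usually means for GCC/Clang (the menu path below is an assumption based on a typical Code::Blocks install):

```shell
# Pass the language-standard flag when compiling from the command line:
g++ -std=c++11 main.cpp -o main

# In Code::Blocks, the equivalent setting is a checkbox under
# Settings -> Compiler -> Compiler settings:
#   "Have g++ follow the C++11 ISO C++ language standard [-std=c++11]"
```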
It was this, thanks. I searched on google how to make my compiler c++11 compatible and it worked, thanks.
Hi Alex,
I am confused about "type(s)" in both enum classes & normal enumerations.
Example 1: enum classes
enum class Color
{
RED,
BLUE
};
enum class Fruit
{
BANANA,
APPLE
};
In this example 1, I’ve learned that Color and Fruit are different types.
===============================
Example 2: normal enumerations
enum Color
{
RED,
BLUE
};
enum Fruit
{
BANANA,
APPLE
};
Are Color & Fruit considered the same type?
Best regards,
Nguyen
No, Color and Fruit are not the same type in example 2. You can see this by trying the following:
The compiler will complain you can’t assign a Fruit enumerator to a Color.
Hi, Alex! I'm having trouble understanding this.
This code works.
But this doesn’t.
Even though FIRST is an integer, I can't print it with cout <<. If I don't use enum classes, it simply works with cout << FIRST; I know enum class variables can't get implicitly converted, but I assign them explicitly. Can you see why this doesn't work?
Enum classes will not implicitly convert to an int. So if you want enum class Foo to print as an integer, you’ll need to explicitly cast it.
The fact that you can assign them integer values when defining them is irrelevant.
Thank you.
I’m having a heck of a time trying to forward declare an enum. I downloaded the community version of Visual Studio (from 2015) to get a more up-to-date compiler, and despite what the book "C++ Primer" says about being able to forward declare an enum, I can’t get it to work for me. According to that book, it should be as simple as typing:
enum class Color : unsigned char;
Into a header file and defining the enum somewhere else, like Color.cpp:
enum class Color : unsigned char
{
BLACK,
RED,
GREEN,
BLUE,
WHITE
};
But it simply won’t work. You may be able to declare an object of type Color, but you cannot initialize it or do much of anything, actually. I could figure out how to forward declare a class type, but not an enum? What madness is this?
Forward declaring enums works like forward declaring any other type: the opaque declaration tells the compiler that the type (and, because you fixed the underlying type, its size) exists, but you can't name its enumerators until the full definition is visible in that translation unit.
I think that another important advantage of enum classes is that you can specify the underlying integer type:
Thanks for teaching me something new! 🙂
Ciccio/Alex,
Are you talking about the integer assigned to the enumerators???
Makes sense: when you have more than 2^32 enumerators, an int will no longer work.
Another validation needed: when we talk about the global namespace, it's just a concept, right? Kind of like the universal set, where an enum class is a subset, like namespace std is. And there isn't any specific namespace actually named global (no namespace global { ... }), of course there isn't one, else we would access a global variable in it as "global::identifier" and not as "::identifier"; the blank prefix signifies it being global.
Thanks.
Yes, though I’ve never seen an enumeration with more than 255 enumerators, let alone 2^32.
The global namespace is essentially the “outermost” namespace. It can contain inner namespaces (defined using enum classes, the namespace keyword, etc…). As you say, using the scope resolution operator with no prefix means the global namespace.
Can the enum items be accessed by index?
Ex. cout << Color(1) << endl;
What you’re actually doing here is a C-style cast on integer 1, and casting it to an object of type Color. So you’re not accessing it, you’re converting it.
@Alex
It is written "When C++ compares color and fruit, it implicitly converts color to fruit to integers, and compares the integers."
Is color converted to type fruit or both color and fruit are converted to integers?
If the latter case is true, it should be
"… it implicitly converts color and fruit to integers, …"
Yes, it should be “color and fruit to integers”. Stupid typos. Thanks for noticing.
There seems to be a problem which i cannot understand while trying to compile this code with the codeblocks(13.12). It says Fruit is not a class or a namespace.
Have you turned on C++11 functionality for Code::Blocks? See lesson 0.5 -- Installing an integrated development environment for information on how to do this.
I am not able to understand that why we are not using .h with the iostream ?
Covered in lesson 1.9 -- Header files.
Alex,
For the third time (see above) I apologize if my earlier post was in error. I guess I’m a bit confused about namespace
in the first code above your comment said enum Color and RED // RED is placed in the same namespace as Color.
Then in the second your comment said enum class Color and RED, // RED is considered part of Color, not as part of the namespace that Color is in.
I think of Color as a(custom)type,and Red as one of Color’s elements. I can’t imagine how they can be in the same namespace in the enum Color and not in the enum Class Color.
Please clarify Thank You
This is a hard concept to explain, so let me try again.
Consider the following snippet of code:
You’ll agree that enum b is inside of namespace a, right? So we’d access it as a::b. With a normal enum, c and d are both also inside of a, so we’d access them as a::c and a::d. This is what I mean by the enumerators are in the same namespace as the enumeration.
Now change the enum to an enum class. In this case, the enumeration is accessed via a::b. However, the enumerators are accessed as a::b::c or a::b::d.
Note that the enumerators and the enumerations are no longer at the same level. The enumerators are inside the namespace of the enumeration.
Now remove namespace a, and that’s your normal case. In the normal enum case, b and c and d can be accessed directly. In the enum class case, b can be accessed directly, but c and d are still accessed as b::c and b::d.
Make sense?
Alex,
Very good explanation Thanks
Alex,
I’ve forgotten to provide an answer to the math above when I post quite a few comments. Then I loose all of the time and wording I’ve written. Do you think someone can fix this to send us back to Leave a comment?
Alex,
Wait, how do the two enumerations Color and Fruit in the first example above get a value? In the last lesson you said values were assigned auto. to the elements in each enumeration but not the enumerations? I see that there is a error in the code… if color == fruit) that may have something to do with it.
Even if you could do this comparison don’t the enum’s., have to start with capital letter? (Color & Fruit)
When you say "placed in the same namespace" in the first sentence just after the second code it’s easy to confuse this with the keyword. Area might be better word there.
LOL, no I’m not another Todd, it’s that using keywords in descriptions can lead us new-bee’s astray.
Color and Fruit are enums declarations, so they don’t have values themselves. They just specify what the enum looks like. The enumerators within those enumerations get mapped to a value. Since no explicit values have been assigned, the C++ compiler assigns values to the enumerators starting from 0 in sequential order (RED = 0, BLUE = 1, BANANA = 0, APPLE = 1).
color and fruit are variables of the enum type Color and Fruit respectively. These work just like normal variables, and they can be assigned to, compared, etc… You can’t compare Color and Fruit because they’re just declarations (it would be like comparing int and float -- what does that even mean?).
> When you say “placed in the same namespace” in the first sentence just after the second code it’s easy to confuse this with the keyword.
I used the word “namespace” deliberately, but I’ve updated the article to talk about scopes instead of namespaces. With an enum class, the enumerator is defined inside the scope of the enumeration itself (which is why you have to use a scoping prefix to access it).
Do enum classes support using directives like namespaces do?
For example, is there code similar to this that would work:
Nope. Enum classes aren’t namespaces.
Hi,
Can you suggest me a good IDE that supports c++11 ? (except visual studio)
Code::Blocks is one.
Razer,
If you download the current version of Code::Blocks (13-12)you have to use the main menu settings > compiler > categories and check the box Have g++ use the C++ 11 ISO.
How I can find out that my Compiler supports C++11 or not?
cppreference has a compiler support page that lists out which versions of which major compilers support which C++11 or C++14 commands.
Or you could just try it and see if it works. 🙂
My code is not working. I’m wondering if you can help me? I’ve deleted out some portions of code that I don’t think pertain to the problem.
main.cpp:
FruitFunctions.cpp:
FruitFunctions.h:
Compiler error: error C2664: ‘void printFruit(Fruit)’ : cannot convert argument 1 from ‘main::Fruit’ to ‘Fruit’
Compiler error: IntelliSense: argument of type "Fruit" is incompatible with parameter of type "Fruit"
Try moving enum class Fruit and Vegetable outside of function main().
Actually just entirely deleting the enum class Fruit from main fixed it. What was wrong? Was it that I declared the enum class twice because I had one in main, and one that i #included?
I think because you declared Fruit inside of main, the version inside main hid the other version. Then when you tried to pass the version inside main to printFruit(), it tried to do a conversion from the Fruit inside of main to the other Fruit, and didn’t know how (despite the fact that they’re identical).
Dang, that’s complex. I’ve really been enjoying your support & the tutorial overall, though. Thanks. (although I probably won’t be using enums in my simple code! structs seem cool though..)
Typo.
"(eg. Color::RED)" (you’re missing a period between ‘e’ and ‘g’)
Updated. Thanks!
Hello Alex.
First - thank you so much for this tutorial. It is absolutely EPIC!
WRT Josephs comment, I to was experiencing the same error on code::blocks. However, when I changed the compiler flags to “Have G++ follow C++11 ISO standard" it still won’t compile but changes the error:
cannot bind ‘std::basic_ostream<char>’ lvalue to ‘std::basic_ostream<char>&&’.
do you know if thee is a solution to this?
Thanks again.
Sorry Connor, I’m afraid this is beyond my knowledge. I’d definitely turn to the almighty Google for help on this one.
Try searching for “code::blocks cannot bind ‘std::basic_ostream’ lvalue to ‘std::basic_ostream&&’.” and see if you get any hits. Surely someone else has run into (and resolved) this issue before.
I’ve been getting this as well. It seems to happen when I try to output an enum member(correct terminology?) directly as an integer. EG:
Btw, thanks a million for this tutorial. It’s pretty much the new gold standard haha.
Oh, I see what’s happening now that you’ve provided some context. With an enum class, the compiler won’t do an implicit conversion from an enum class to an integer.
All you need to do is use static_cast to cast your enum class variable (color) to an integer:
Hey Conner,
Your post is pretty old but if you clicked the box to use C++ 11 ISO it should cover all the new changes.
Where did you come up with this code cannot bind ‘std::basic_ostream<char>’ lvalue to ‘std::basic_ostream<char>&&
did you check it for typos? What is it supposed to do?
There seems to be another difference in using enum and enum class (at least in C++11):
And also this:
And furthermore this:
Why? => Because strongly typed enums won’t convert to integers implicitly !
So you need to use static_cast. I thought of that in the first place but I seemed to have made it incorrectly … so I thought, there must be another problem/difference. Now that’s sorted. However, this seems to make enum classes less convenient in a way.
[quote]
If you’re using a C++11 compiler, there’s really no reason to use normal enumerated types instead of enum classes.
[/quote]
Well, one reason would be, if you’re tired of type casting.
Hi, I’m using the CodeBlocks that I downloaded a couple weeks ago, and it’s saying that the class ‘color’ doesnt exist. I tried to make my own and when it failed, i copy pasted your code to see if it was me or the compiler, and yours failed too. How can I fix this?
I’m guessing your compiler either isn’t C++11 compliant, or has C++11 functionality turned off.
I found this bit of advice on the web: “In code::blocks, go to project->build options->compiler settings->compiler flags and check “Have G++ follow C++11 ISO standard”
Try that, and let me know if it works so I can update the tutorials.
It worked, thank you.
Name (required)
Website | http://www.learncpp.com/cpp-tutorial/4-5a-enum-classes/ | CC-MAIN-2017-26 | refinedweb | 2,612 | 72.87 |
StrictYAML
StrictYAML is a type-safe YAML parser that parses and validates a restricted subset of the YAML specification.
Priorities:
- Beautiful API
- Refusing to parse the ugly, hard to read and insecure features of YAML like the Norway problem.
- Strict validation of markup and straightforward type casting.
- Clear, readable exceptions with code snippets and line numbers.
- Acting as a near-drop in replacement for pyyaml, ruamel.yaml or poyo.
- Ability to read in YAML, make changes and write it out again with comments preserved.
- Not speed, currently.
Simple example:
# All about the character name: Ford Prefect age: 42 possessions: - Towel
from strictyaml import load, Map, Str, Int, Seq, YAMLError
Default parse result:
>>> load(yaml_snippet) YAML(OrderedDict([('name', 'Ford Prefect'), ('age', '42'), ('possessions', ['Towel'])]))
All data is string, list or OrderedDict:
>>> load(yaml_snippet).data OrderedDict([('name', 'Ford Prefect'), ('age', '42'), ('possessions', ['Towel'])])
Quickstart with schema:
from strictyaml import load, Map, Str, Int, Seq, YAMLError schema = Map({"name": Str(), "age": Int(), "possessions": Seq(Str())})
42 is now parsed as an integer:
>>> person = load(yaml_snippet, schema) >>> person.data OrderedDict([('name', 'Ford Prefect'), ('age', 42), ('possessions', ['Towel'])])
A YAMLError will be raised if there are syntactic problems, violations of your schema or use of disallowed YAML features:
# All about the character name: Ford Prefect age: 42
For example, a schema violation:
try: person = load(yaml_snippet, schema) except YAMLError as error: print(error)
while parsing a mapping in "<unicode string>", line 1, column 1: # All about the character ^ (line: 1) required key(s) 'possessions' not found in "<unicode string>", line 3, column 1: age: '42' ^ (line: 3)
If parsed correctly:
from strictyaml import load, Map, Str, Int, Seq, YAMLError schema = Map({"name": Str(), "age": Int(), "possessions": Seq(Str())})
You can modify values and write out the YAML with comments preserved:
person = load(yaml_snippet, schema) person['age'] = 43 print(person.as_yaml())
# All about the character name: Ford Prefect age: 43 possessions: - Towel
As well as look up line numbers:
>>> person = load(yaml_snippet, schema) >>> person['possessions'][0].start_line 5
Install
$ pip install strictyaml
Why StrictYAML?
There are a number of formats and approaches that can achieve more or less the same purpose as StrictYAML. I’ve tried to make it the best one. Below is a series of documented justifications:
-?
Using:
Design justifications
There are some design decisions in StrictYAML which are controversial and/or not obvious. Those are documented?
Contributors
- @gvx
- @AlexandreDecan
- @lots0logs
- @tobbez
Contributing
Before writing any code, please read the tutorial on contributing to hitchdev libraries.
Before writing any code, if you’re proposing a new feature, please raise it on github. If it’s an existing feature / bug, please comment and briefly describe how you’re going to implement it.
All code needs to come accompanied with a story that exercises it or a modification to an existing story. This is used both to test the code and build the documentation. | https://hitchdev.com/strictyaml/ | CC-MAIN-2019-09 | refinedweb | 478 | 51.78 |
I'm considering making the switch from MATLAB to Python. The application is quantitative trading and cost is not really an issue. There are a few things I love about MATLAB and am wondering how Python stacks up (could not find any answers in the reviews I've read).
Is there an IDE for Python that is as good as MATLAB's (variable editor, debugger, profiler)? I've read good things about Spyder, but does it have a profiler?
When you change a function on the path in MATLAB, it is automatically reloaded. Do you have to manually re-import libraries when you change them, or can this been done automatically? This is a minor thing, but actually greatly improves my productivity.
IDE: No. Python IDEs are nowhere near as good or mature as MATLAB's, though I've heard good things about Wing IDE. Generally, I find IDEs to be total overkill for Python development, and find that I'm more productive with a well-setup text editor (vim in my case) and a separate visual debugger (WinPDB).
Changing functions: Modules must be reloaded after changes using the
reload() built-in function.
import foo #now you've changed foo.py and want to reload it foo = reload(foo)
I've switched over myself from MATLAB to Python, because I find that Python deals much better with complexity, i.e., I find it easier to write, debug and maintain complex code in Python. One of the reasons for this is that Python is a general purpose language rather than a specialist matrix-manipulation language. Because of this, entities like strings, non-numerical arrays and (crucially) associative arrays (or maps or dictionaries) are first-class constructs in Python, as are classes.
With regards to capabilities, with NumPy, SciPy and Matplotlib, you pretty much have the whole set of functionality that MATLAB provides out of the box, and quite a lot of stuff that you would have to buy separate toolboxes for.
I've been getting on very well with the Spyder IDE in the Python(x,y) distribution. I'm a long term user of Matlab and have known of the existence of Python for 10 years or so but it's only since I installed Python(x,y) that I've started using Python regularly. | https://pythonpedia.com/en/knowledge-base/5214369/python-vs-matlab | CC-MAIN-2020-34 | refinedweb | 385 | 60.75 |
#include <StelObjectMgr.hpp>
Manage the selection and queries on one or more StelObjects.
Each module is then free to manage object selection as it wants.
Execute all the drawing functions for this module.
Reimplemented from StelModule.
Find and select an object near given equatorial J2000 position.
Find and select an object near given screen position.
Find and select an object from its standard program name.
Find and select an object from its translated name.
Return the list objects of type "withType" which was recently selected by the user.
Initialize itself.
If the initialization takes significant time, the progress should be displayed on the loading bar.
Implements StelModule.
Find and return the list of at most maxNbItem objects auto-completing the passed object I18n name.
Add a new StelObject manager into the list of supported modules.
Registered modules can have selected objects
Indicate that the selected StelObjects has changed.
Set the weight of the distance factor when choosing the best object to select.
Default to 1.
Notify that we want to select the given object.
Notify that we want to select the given objects.
Update the module with respect to the time.
Implements StelModule. | http://www.stellarium.org/doc/0.11.1/classStelObjectMgr.html | CC-MAIN-2014-15 | refinedweb | 193 | 60.72 |
Jan 26, 2013 04:12 PM|LINK
Error 1:-<authentication mode="Windows"/>
Then changed to mode="none", it did not worked. so, i commented the above line, the problem is solve.
pls give explaions abount error 1
Error 2:- <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="UserRegistration.aspx.cs" Inherits="UILayer._Default" %>
Then i changed the codebehind to codefile . the problem is solved. after sometime publishing the same webpage giving the error in this manner
Error 3:-<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="UserRegistration.aspx.cs" Inherits="UILayer._Default" %>
now what is should to overcome this error and pls explaination why this error are arising.
Thanks
Jan 27, 2013 01:23 AM|LINK
Look at this bro, this is the error am getting
Description: An
error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
Parser Error Message: The file '/iis/hcc/UserRegistration.aspx.cs' does not exist.
Source Error:
Source File: /iis/hcc/userregistration.aspx Line: 1
Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927
Jan 28, 2013 04:55 AM|LINK
dahla ji, good morning
yeah i added the code behind file userregistration.aspx.cs page
now giving error
Compiler Error Message: CS0246: The type or namespace name 'PropertiesLayer' could not be found (are you missing a using directive or an assembly reference?)
Star
13984 Points
Jan 28, 2013 06:05 AM|LINK
madduriaravindError 3:-<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="UserRegistration.aspx.cs" Inherits="UILayer._Default" %>
Still you are using CodeBehind.. Change it to Codefile in the page directive and then include the name of the codebehind file...
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="UserRegistration.aspx.cs" Inherits="UILayer._Default" %>
and
Jan 30, 2013 11:27 AM|LINK
Thnaks you vishal.
the mistake is did is i did not converted the folder in iss into web application so all error rised. Now its working good.
6 replies
Last post Jan 30, 2013 11:27 AM by madduriaravind | http://forums.asp.net/p/1878090/5282898.aspx/1?Re+Regrading+parser+error | CC-MAIN-2013-20 | refinedweb | 345 | 50.94 |
[2.0.2][CLOSED] Date.parseDate() error with no day specified
Hi there, creators of ExtJS.
I've got a strange error. Today, is the 31 of March.
And if i'll try to parse such a date:
Code:
Date.parseDate('2-08', 'n-y').
i got the same error
to recur, you must set your system date to Mar,31th,2008
try to parse such a date:
Code:
Date.parseDate('2008-02', 'Y-m'); Date.parseDate('2008-04', 'Y-m'); Date.parseDate('2008-06', 'Y-m'); Date.parseDate('2008-09', 'Y-m'); Date.parseDate('2008-11', 'Y-m');
Why?
That's not a bug.
From the docs for the Date.parseDate() method (emphasis added):
Code:
Date.parseDate( String input, String format ) : Date ... ... Note that this function expects dates in normal calendar format, meaning that months are 1-based (1 = January) and not zero-based like in JavaScript dates. Any part of the date format that is not specified will default to the current date value for that part. ... ...
i.e. if you don't specify a day, it will default to the current day.
(likewise, the current month will be used if none is specified etc etc)
as to why you're observing the "rollover" date effect:
the Ext.Date methods are simply additions to the native ecmascript (i.e. javascript) Date object. The native javascript Date object does some smart(?) internal calculations to avoid invalid dates -- this means you'll never be able to create an invalid Date object (i.e. one that holds an invalid date). The rollover you're seeing is due to these smart(?) calculations by the javascript Date object to prevent the creation of invalid Date objects.
If you have firebug installed, fire up a plain old webpage (that doesn't have any javascript libraries loaded) and try to create an invalid Date object. I'm dead sure you won't succeed, and i won't be eating my words.
[edit 1]
some examples of the rollover effect (try these in FireBug on a plain old webpage):
Code:
new Date(2008, 0, 32); // 32nd January 2008 new Date(2008, 1, 30); // 30th Feb 2008 new Date(2008, 5, 60); // 60th June 2008
@cjqcjq2008, in your case, since no day was specified, your current day (i.e. 31) will be used, giving the equivalent of
Code:
new Date(2008, 1, 31); // 31st Feb 2008 new Date(2008, 3, 31); // 31st Apr 2008 new Date(2008, 5, 31); // 31st Jun 2008 new Date(2008, 8, 31); // 31st Sep 2008 new Date(2008, 10, 31); // 31st Nov 2008
- Join Date
- Mar 2007
- Location
- The Netherlands
- 24,246
- Vote Rating
- 119
The date parser will use the current year, month and day if corresponding values are not found in the string.
So, parsing 2008-02 on 2008-03-31 will result in 2008-02-31, which becomes 2008-03-02.
I can't see a simple solution, because the parser also needs to support parsing time only.
There still should be a way to pass an alternate method to do date validation that will disallow the rollover. I have tried passing another validation function as an argument, and it DOES NOT override the rollover effect. Please advise.
[edit 1]
this might come in handy while you're busy posting your code
[edit 2]
or this:
But how to address this problem?
or how to modify "Date.js" , if my application only needs to support month parse correctly?
- Join Date
- Mar 2007
- Location
- The Netherlands
- 24,246
- Vote Rating
- 119
Or you could use the following code change:
Code:
Date.createParser = function(format) { var funcName = "parse" + Date.parseFunctions.count++; var regexNum = Date.parseRegexes.length; var currentGroup = 1; Date.parseFunctions[format] = funcName; var code = "Date." + funcName + " = function(input){\n" + "var y = -1, m = -1, d = -1, h = -1, i = -1, s = -1, ms = -1, o, z, u, v;\n" + "input = String(input);var d = new Date();\n" + "y = d.getFullYear();\n" + "m = d.getMonth();\n" + "d = 1;\n" + "var results = input.match(Date.parseRegexes[" + regexNum + "]);\n" + "if (results && results.length > 0) {"; var regex = ""; var special = false; var ch = ''; for (var i = 0; i < format.length; ++i) { ch = format.charAt(i); if (!special && ch == "\\") { special = true; } else if (special) { special = false; regex += String.escape(ch); } else { var obj = Date.formatCodeToRegex(ch, currentGroup); currentGroup += obj.g; regex += obj.s; if (obj.g && obj.c) { code += obj.c; } } } code += "if (u)\n" + "{v = new Date(u * 1000);}" + "else if (y >= 0 && m >= 0 && d > 0 && h >= 0 && i >= 0 && s >= 0 && ms >= 0)\n" + "{v = new Date(y, m, d, h, i, s, ms);}\n" + "else if (y >= 0 && m >= 0 && d > 0 && h >= 0 && i >= 0 && s >= 0)\n" + "{v = new Date(y, m, d, h, i, s);}\n" + "else if (y >= 0 && m >= 0 && d > 0 && h >= 0 && i >= 0)\n" + "{v = new Date(y, m, d, h, i);}\n" + "else if (y >= 0 && m >= 0 && d > 0 && h >= 0)\n" + "{v = new Date(y, m, d, h);}\n" + "else if (y >= 0 && m >= 0 && d > 0)\n" + "{v = new Date(y, m, d);}\n" + "else if (y >= 0 && m >= 0)\n" + "{v = new Date(y, m);}\n" + "else if (y >= 0)\n" + "{v = new Date(y);}\n" + "}return (v && (z || o))?\n" + " (z ? v.add(Date.SECOND, (v.getTimezoneOffset() * 60) + (z*1)) :\n" + " v.add(Date.HOUR, (v.getGMTOffset() / 100) + (o / -100))) : v\n" + ";}"; Date.parseRegexes[regexNum] = new RegExp("^" + regex + "$", "i"); eval(code); };
I used the Ext.override patch, and it works pretty well. However, I still need help with the following problem. Suppose I submit the form with an invalid date using a Strtus application. When the form re-displays, rather than display the incorrect values that were originally entered, the date values will come up blank. Is there anyway to display the erroneous values on the screen? | https://www.sencha.com/forum/showthread.php?30914-2.0.2-CLOSED-Date.parseDate()-error-with-no-day-specified&p=145730&viewfull=1 | CC-MAIN-2016-18 | refinedweb | 979 | 83.15 |
It looks like a host of solutions are out there (BST-based, BIT-based, merge-sort-based). Here I'd like to focus on the general principles behind these solutions and their possible application to a number of similar problems.
The fundamental idea is very simple: break down the array and solve for the subproblems.
A breakdown of an array naturally reminds us of subarrays. To smoothen our following discussion, let's assume the input array is `nums`, with a total of `n` elements. Let `nums[i, j]` denote the subarray starting from index `i` to index `j` (both inclusive), and `T(i, j)` the same problem applied to this subarray (for example, for Reverse Pairs, `T(i, j)` will represent the total number of important reverse pairs for subarray `nums[i, j]`).
With the definition above, it's straightforward to identify our original problem as `T(0, n - 1)`. Now the key point is how to construct solutions to the original problem from its subproblems. This is essentially equivalent to building recurrence relations for `T(i, j)`, since if we can find solutions to `T(i, j)` from its subproblems, we surely can build solutions to larger subarrays until eventually the whole array is spanned.

While there may be many ways of establishing recurrence relations for `T(i, j)`, here I will only introduce the following two common ones:
1. `T(i, j) = T(i, j - 1) + C`, i.e., elements will be processed sequentially and `C` denotes the subproblem for processing the last element of subarray `nums[i, j]`. We will call this the sequential recurrence relation.
2. `T(i, j) = T(i, m) + T(m + 1, j) + C` where `m = (i+j)/2`, i.e., subarray `nums[i, j]` will be further partitioned into two parts and `C` denotes the subproblem for combining the two parts. We will call this the partition recurrence relation.
For either case, the nature of the subproblem `C` will depend on the problem under consideration, and it will determine the overall time complexity of the original problem. So usually it's crucial to find an efficient algorithm for solving this subproblem in order to have better time performance. Also pay attention to possibilities of overlapping subproblems, in which case a dynamic programming (DP) approach would be preferred.
Next, I will apply these two recurrence relations to this problem "Reverse Pairs" and list some solutions for your reference.
I -- Sequential recurrence relation
Again we assume the input array is `nums` with `n` elements and `T(i, j)` denotes the total number of important reverse pairs for subarray `nums[i, j]`. For the sequential recurrence relation, we can set `i = 0`, i.e., the subarray always starts from the beginning. Therefore we end up with:

`T(0, j) = T(0, j - 1) + C`

where the subproblem `C` now becomes "find the number of important reverse pairs with the first element of the pair coming from subarray `nums[0, j - 1]` while the second element of the pair being `nums[j]`".
Note that for a pair `(p, q)` to be an important reverse pair, it has to satisfy the following two conditions:

1. `p < q`: the first element must come before the second element;
2. `nums[p] > 2 * nums[q]`: the first element has to be greater than twice of the second element.
For subproblem `C`, the first condition is met automatically; so we only need to consider the second condition, which is equivalent to searching for all elements within subarray `nums[0, j - 1]` that are greater than twice of `nums[j]`.
The straightforward way of searching would be a linear scan of the subarray, which runs at the order of `O(j)`. From the sequential recurrence relation, this leads to the naive `O(n^2)` solution.
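To make the baseline concrete, here is a minimal, self-contained sketch of that naive approach (the class name and sample inputs are just for illustration):

```java
public class NaiveReversePairs {
    // O(n^2) baseline: for each position j, linearly scan nums[0, j - 1]
    // for elements greater than twice nums[j] (cast to long to avoid overflow).
    public static int reversePairs(int[] nums) {
        int res = 0;
        for (int j = 1; j < nums.length; j++) {
            for (int p = 0; p < j; p++) {
                if (nums[p] > 2L * nums[j]) {
                    res++;
                }
            }
        }
        return res;
    }

    public static void main(String[] args) {
        System.out.println(reversePairs(new int[]{1, 3, 2, 3, 1})); // 2
        System.out.println(reversePairs(new int[]{2, 4, 3, 5, 1})); // 3
    }
}
```

This runs the sequential recurrence literally: processing element `j` costs `O(j)`, which sums to `O(n^2)` overall.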
To improve the searching efficiency, a key observation is that the order of elements in the subarray does not matter, since we are only interested in the total number of important reverse pairs. This suggests we may sort those elements and do a binary search instead of a plain linear scan.
If the searching space (formed by elements over which the search will be done) is "static" (it does not vary from run to run), placing the elements into an array would be perfect for us to do the binary search. However, this is not the case here. After the `j`-th element is processed, we need to add it to the searching space so that it becomes searchable for later elements, which renders the searching space expanding as more and more elements are processed.
Therefore we'd like to strike a balance between searching and insertion operations. This is where data structures like the binary search tree (BST) or binary indexed tree (BIT) prevail, as they offer relatively fast performance for both operations.
1. BST-based solution

We will define the tree node as follows, where `val` is the node value and `cnt` is the total number of elements in the subtree rooted at the current node that are greater than or equal to `val`:
```java
class Node {
    int val, cnt;
    Node left, right;

    Node(int val) {
        this.val = val;
        this.cnt = 1;
    }
}
```
The searching and insertion operations can be done as follows:
```java
private int search(Node root, long val) {
    if (root == null) {
        return 0;
    } else if (val == root.val) {
        return root.cnt;
    } else if (val < root.val) {
        return root.cnt + search(root.left, val);
    } else {
        return search(root.right, val);
    }
}

private Node insert(Node root, int val) {
    if (root == null) {
        root = new Node(val);
    } else if (val == root.val) {
        root.cnt++;
    } else if (val < root.val) {
        root.left = insert(root.left, val);
    } else {
        root.cnt++;
        root.right = insert(root.right, val);
    }
    return root;
}
```
And finally the main program, in which we will search for all elements no less than twice of the current element plus `1` (converted to `long` type to avoid overflow) while inserting the element itself into the BST.
Note: this homemade BST is not self-balanced and the time complexity can go as bad as `O(n^2)` (in fact you will get TLE if you copy and paste the solution here). To guarantee `O(n log n)` performance, use one of the self-balanced BSTs (e.g. red-black tree, AVL tree, etc.).
```java
public int reversePairs(int[] nums) {
    int res = 0;
    Node root = null;

    for (int ele : nums) {
        res += search(root, 2L * ele + 1);
        root = insert(root, ele);
    }

    return res;
}
```
2. BIT-based solution

For BIT, the searching and insertion operations are:
```java
private int search(int[] bit, int i) {
    int sum = 0;
    while (i < bit.length) {
        sum += bit[i];
        i += i & -i;
    }
    return sum;
}

private void insert(int[] bit, int i) {
    while (i > 0) {
        bit[i] += 1;
        i -= i & -i;
    }
}
```

And the main program, which ties them together with a sorted copy of the input:

```java
public int reversePairs(int[] nums) {
    int res = 0;
    int[] copy = Arrays.copyOf(nums, nums.length);
    int[] bit = new int[copy.length + 1];
    Arrays.sort(copy);

    for (int ele : nums) {
        res += search(bit, index(copy, 2L * ele + 1));
        insert(bit, index(copy, ele));
    }

    return res;
}

private int index(int[] arr, long val) {
    int l = 0, r = arr.length - 1, m = 0;
    while (l <= r) {
        m = l + ((r - l) >> 1);
        if (arr[m] >= val) {
            r = m - 1;
        } else {
            l = m + 1;
        }
    }
    return l + 1;
}
```
More explanation for the BIT-based solution:

1. We want the elements to be sorted, so there is a sorted version of the input array, which is `copy`.
2. The `bit` is built upon this sorted array. Its length is one greater than that of the `copy` array to account for the root.
3. Initially the `bit` is empty and we start doing a sequential scan of the input array. For each element being scanned, we first search the `bit` to find all elements greater than twice of it and add the result to `res`. We then insert the element itself into the `bit` for future search.
4. Note that conventionally searching of the `bit` involves traversing towards the root from some index of the `bit`, which will yield a predefined running total of the `copy` array up to the corresponding index. For insertion, the traversing direction will be opposite and go from some index towards the end of the `bit` array.
5. For each scanned element of the input array, its searching index will be given by the index of the first element in the `copy` array that is greater than twice of it (shifted up by `1` to account for the root), while its insertion index will be the index of the first element in the `copy` array that is no less than itself (again shifted up by `1`). This is what the `index` function is for.
6. For our case, the running total is simply the number of elements encountered during the traversal process. If we stick to the convention above, the running total will be the number of elements smaller than the one at the given index, since the `copy` array is sorted in ascending order. However, we'd actually like to find the number of elements greater than some value (i.e., twice of the element being scanned), therefore we need to flip the convention. This is what you see inside the `search` and `insert` functions: the former traverses towards the end of the `bit` while the latter towards the root.
II -- Partition recurrence relation
For partition recurrence relation, setting
i = 0, j = n - 1, m = (n-1)/2, we have:
T(0, n - 1) = T(0, m) + T(m + 1, n - 1) + C
where the subproblem
C now reads "find the number of important reverse pairs with the first element of the pair coming from the left subarray
nums[0, m] while the second element of the pair coming from the right subarray
nums[m + 1, n - 1]".
Again for this subproblem, the first of the two aforementioned conditions is met automatically. As for the second condition, we have as usual this plain linear scan algorithm, applied for each element in the left (or right) subarray. This, to no surprise, leads to the
O(n^2) naive solution.
Fortunately the observation holds true here that the order of elements in the left or right subarray does not matter, which prompts sorting of elements in both subarrays. With both subarrays sorted, the number of important reverse pairs can be found in linear time by employing the so-called two-pointer technique: one pointing to elements in the left subarray while the other to those in the right subarray and both pointers will go only in one direction due to the ordering of the elements.
The last question is which algorithm is best here to sort the subarrays. Since we need to partition the array into halves anyway, it is most natural to adapt it into a
Merge-sort. Another point in favor of
Merge-sort is that the searching process above can be embedded seamlessly into its merging stage.
So here is the
Merge-sort-based solution, where the function
"reversePairsSub" will return the total number of important reverse pairs within subarray
nums[l, r]. The two-pointer searching process is represented by the nested
while loop involving variable
p, while the rest is the standard merging algorithm.
public int reversePairs(int[] nums) { return reversePairsSub(nums, 0, nums.length - 1); } private int reversePairsSub(int[] nums, int l, int r) { if (l >= r) return 0; int m = l + ((r - l) >> 1); int res = reversePairsSub(nums, l, m) + reversePairsSub(nums, m + 1, r); int i = l, j = m + 1, k = 0, p = m + 1; int[] merge = new int[r - l + 1]; while (i <= m) { while (p <= r && nums[i] > 2L * nums[p]) p++; res += p - (m + 1); while (j <= r && nums[i] >= nums[j]) merge[k++] = nums[j++]; merge[k++] = nums[i++]; } while (j <= r) merge[k++] = nums[j++]; System.arraycopy(merge, 0, nums, l, merge.length); return res; }
III -- Summary
Many problems involving arrays can be solved by breaking down the problem into subproblems applied on subarrays and then link the solution to the original problem with those of the subproblems, to which we have sequential recurrence relation and partition recurrence relation. For either case, it's crucial to identify the subproblem
C and find efficient algorithm for approaching it.
If the subproblem
C involves searching on "dynamic searching space", try to consider data structures that support relatively fast operations on both searching and updating (such as
self-balanced BST,
BIT,
Segment tree,
...).
If the subproblem
C of partition recurrence relation involves sorting,
Merge-sort would be a nice sorting algorithm to use. Also, the code could be made more elegant if the solution to the subproblem can be embedded into the merging process.
If there are overlapping among the subproblems
T(i, j), it's preferable to cache the intermediate results for future lookup.
Lastly let me name a few leetcode problems that fall into the patterns described above and thus can be solved with similar ideas.
315. Count of Smaller Numbers After Self
327. Count of Range Sum
For
leetcode 315, applying the sequential recurrence relation (with
j fixed), the subproblem
C reads: find the number of elements out of visited ones that are smaller than current element, which involves searching on "dynamic searching space"; applying the partition recurrence relation, we have a subproblem
C: for each element in the left half, find the number of elements in the right half that are smaller than it, which can be embedded into the merging process by noting that these elements are exactly those swapped to its left during the merging process.
For
leetcode 327, applying the sequential recurrence relation (with
j fixed) on the pre-sum array, the subproblem
C reads: find the number of elements out of visited ones that are within the given range, which again involves searching on "dynamic searching space"; applying the partition recurrence relation, we have a subproblem
C: for each element in the left half, find the number of elements in the right half that are within the given range, which can be embedded into the merging process using the two-pointer technique.
Anyway, hope these ideas can sharpen your skills for solving array-related problems.
@fun4LeetCode I really appreciate your effort presenting a good encyclopedia !! This really of great help. Though I solved it using BST, BIT approach is really encouraging
This post is simply amazing! It deserves many more upvote. And thanks again for sharing.
Nice explanation of the logic behind these three problems. I had something similar in mind but you elaborate it much better.
Great solution with BIT!
Let me add more explanations for why the BIT approach is
"i += i&(-i)" for search, and
"i -= i &(-i)" for insert.
which is contrary to the "commonly" used way for BIT, where
"i += (-i)" for insert, and
"i -= i&(-i)" for search
First, the concept of "search(i)" here should be explained as "getSum(i)", which is to get the accumulative frequency from the starting index to i (inclusively), where i is an index in BIT.
In this problem, we want to get how many elements that are greater than 2 * nums[j], (num[j] is the current value that we are visiting). Therefore, instead of "searching down", here we need to "searching up".
Based on the classical BIT format, where:
"i += (-i)" for insert
"i -= i&(-i)" for search
One possible way is to use the getRange approach,
so we can use getSum(MaxNumberOfValue) - getSum(index(2*nums[j])) to get the number of elements that are greater than 2 * nums[j].
Another possible way is just like @fun4LeetCode implements,
we can reverse the direction for insert and search in BIT
so what we get is always the number greater than query values by using single search method.
@iaming Hi iaming. Sorry for the confusion. I have added some additional explanation of the BIT solution. Take a look and let me know if it helps.
Share my AVL Tree Implementation:
This is a somewhat long solution, but a good practice to use an AVL Tree, LOL...
Ref:
(1)
(2)
public class Solution {
public int reversePairs(int[] nums) { // Algo thinking: building a BST, go left when node.val <= 2 * root.val, right otherwise // But need to keep it balanced -> AVL Tree or Red-Black Tree // time = O(NlgN), space = O(N) if (nums == null || nums.length == 0) return 0; int n = nums.length; TreeNode root = new TreeNode(nums[0]); int ans = 0; for (int i = 1; i < nums.length; i++) { ans += search(root, (long) nums[i] * 2); root = insert(root, (long) nums[i]); // preOrder(root); // System.out.println(); } return ans; } private int search(TreeNode root, long key) { if (root == null) return 0; if (key < root.val) { // key < root.val: go left return root.rightCount + search(root.left, key); } else { // key >= root.val: go right return search(root.right, key); } } private TreeNode insert(TreeNode root, long key) { if (root == null) return new TreeNode(key); if (key < root.val) { // key < root.val: go left root.left = insert(root.left, key); } else if (key == root.val){ root.rightCount++; return root; } else { root.rightCount++; root.right = insert(root.right, key); } root.height = Math.max(getHeight(root.left), getHeight(root.right)) + 1; int balance = getBalance(root); // System.out.println(root.val + " balance " + balance); // case 1 left left if (balance > 1 && getHeight(root.left.left) > getHeight(root.left.right)) { return rightRotate(root); } // case 2 left right if (balance > 1 && getHeight(root.left.left) < getHeight(root.left.right)) { root.left = leftRotate(root.left); return rightRotate(root); } // case 3 right right if (balance < -1 && getHeight(root.right.left) < getHeight(root.right.right)) { return leftRotate(root); } // case 4 right left if (balance < -1 && getHeight(root.right.left) > getHeight(root.right.right)) { root.right = rightRotate(root.right); return leftRotate(root); } return root; } private TreeNode leftRotate(TreeNode root) { // setp 1: take care of nodes TreeNode newRoot = root.right; TreeNode b = newRoot.left; newRoot.left = root; root.right = 
b; // step 2: take care of height root.height = Math.max(getHeight(root.left), getHeight(root.right)) + 1; newRoot.height = Math.max(getHeight(newRoot.left), getHeight(newRoot.right)) + 1; // step 3: take care of rightCount root.rightCount -= getRightCount(newRoot); return newRoot; } private TreeNode rightRotate(TreeNode root) { // setp 1: take care of nodes TreeNode newRoot = root.left; TreeNode b = newRoot.right; newRoot.right = root; root.left = b; // step 2: take care of height root.height = Math.max(getHeight(root.left), getHeight(root.right)) + 1; newRoot.height = Math.max(getHeight(newRoot.left), getHeight(newRoot.right)) + 1; // step 3: take care of rightCount newRoot.rightCount += getRightCount(root); return newRoot; } private int getHeight(TreeNode node) { return node == null ? 0 : node.height; } private int getBalance(TreeNode node) { return node == null ? 0 : getHeight(node.left) - getHeight(node.right); } private int getRightCount(TreeNode node) { return node == null ? 0 : node.rightCount; } private void preOrder(TreeNode root) { if (root == null) { System.out.print("NIL "); return; } System.out.print(root.val + " "); preOrder(root.left); preOrder(root.right); } class TreeNode { long val; int rightCount; int height; TreeNode left; TreeNode right; public TreeNode(long val) { this.val = val; height = 1; rightCount = 1; } }
}
A slightly more verbose modification of OP's Binary Index Tree solution and add some comments to (maybe?) make it more understandable.
public class Solution { // Binary Index Tree // O(logn) // @bit : binary index tree array // @i : from start index to index i in bit (inclusively) // return : the prefix sum from start index to index i private int getSum(int[] bit, int i) { int sum = 0; while (i < bit.length) { sum += bit[i]; i = getNext(i); } return sum; } // O(logn) // update the value in index i by diff private void update(int[] bit, int i, int diff) { // need to update all of its affected nodes as well while (i > 0) { bit[i] += diff; i = getParent(i); } } // get to the parent of index i in bit private int getParent(int i){ return i - (i & -i); } // get to next index whose represented range is next to the represented range of i private int getNext(int i){ return i + (i & -i); } public int reversePairs(int[] nums) { int res = 0; int[] copy = Arrays.copyOf(nums, nums.length); // initialization as all 0 int[] bit = new int[copy.length + 1]; Arrays.sort(copy); for (int ele : nums) { // the number of element that > 2 * ele before ele res += getSum(bit, index(copy, 2L * ele + 1)); // insert current element into bit update(bit, index(copy, ele), 1); } return res; } // helper function // binary search to find the first element that >= val in input array // input arr should be sorted private int index(int[] arr, long val) { int l = 0, r = arr.length - 1, m = 0; while (l <= r) { m = l + ((r - l) >> 1); if (arr[m] >= val) { r = m - 1; } else { l = m + 1; } } return l + 1; } }
Brilliant post. Thanks @fun4LeetCode !
@fun4LeetCode what's the correct way to analyze the complexity?
n x (log n + log n) = O(nlogn)?; }
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/79227/general-principles-behind-problems-similar-to-reverse-pairs | CC-MAIN-2017-39 | refinedweb | 3,413 | 60.65 |
On Thu, Jul 22, 2010 at 5:55 AM, Nathan Schneider <nathan at cmu.edu> wrote: > I think a better alternative to allow for safe localization of > variables to a block would be to adapt the 'with' statement to behave > as a 'let' (similar to suggestions earlier in the thread). For > instance: > > with fname = sys.argv[1], open(fname) as f, contents = f.read(): > do_stuff1(fname, contents) > do_stuff2(contents) > do_stuff3(fname) # error: out of scope > > This makes it clear to the reader that the assignments to 'fname' and > 'contents', like 'f', only pertain to the contents of the 'with' > block. It allows the reader to focus their eye on the 'important' > part—the part inside the block—even though it doesn't come first. It > helps avoid bugs that might arise if 'fname' were used later on. And > it leaves no question as to where control flow statements are > permitted/desirable. > > I'm +0.5 on this alternative: my hesitation is because we'd need to > explain to newcomers why 'f = open(fname)' would be legal but bad, > owing to the subtleties of context managers. Hmm, an intriguing idea. I agree that the subtleties of "=" vs "as" could lead to problems though. There's also the fact that existing semantics mean that 'f' has to remain bound after the block, so it would be surprising if 'fname' and 'contents' were unavailable. It also suffers the same issue as any of the other in-order proposals: without out-of-order execution, the only gain is a reduction in the chance for namespace collisions, and that's usually only a problem for accidental collisions with loop variable names in long functions or scripts. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia | https://mail.python.org/pipermail/python-ideas/2010-July/007662.html | CC-MAIN-2021-49 | refinedweb | 291 | 70.23 |
vb net system timers timer example is a vb net system timers timer document that shows the process of designing vb net system timers timer format. A well designed vb net system timers timer example can help design vb net system timers timer example with unified style and layout.
vb net system timers timer example basics
When designing vb net system timers timer document, it is important to use style settings and tools. Microsoft Office provide a powerful style tool to help you manage your vb net system timers timerb net system timers timer styles may help you quickly set vb net system timers timer titles, vb net system timers timer subheadings, vb net system timers timer section headings apart from one another by giving them unique fonts, font characteristics, and sizes. By grouping these characteristics into styles, you can create vb net system timers timer documents that have a consistent look without having to manually format each section header. Instead you set the style and you can control every heading set as that style from central location. you also need to consider different variations: visual basic timer example, visual basic timer example word, system timers timer background thread, system timers timer background thread word, timer msdn, timer msdn word, system timers timer ui thread, system timers timer ui thread word
Microsoft Office also has many predefined styles you can use. you can apply Microsoft Word styles to any text in the vb net system timers timerb net system timers timer documents, You can also make the styles your own by changing how they look in Microsoft Word. During the process of vb net system timers timer style design, it is important to consider different variations, for example, c system timers timer example, c system timers timer example word, java timer example, java timer example word, vba timer example, vba timer example word, system timers timer fire immediately, system timers timer fire immediately word.
vb net system timers timer example
timer class system timers net framework source code for this type, see the reference source. system. componentmodel.component system.timers.timer to dispose of it indirectly, use a language construct such as using in c or using in visual basic . timers class example static void main timer timer new timer timer. timer elapsed event system timers timer.elapsed event .net framework current version . other versions vb. copy. use this code inside a project created with the visual c gt windows timers. public class example private static system.timers.timer atimer public timer class system timers net framework class library system.timers. system.timers timer class timers.timer. namespace system.timers assembly system in system.dll . syntax. c . c f vb. copy. hostprotectionattribute securityaction. linkdemand for example, suppose you have a critical server that must be kept running timers in vb net in this article i will explain you about timers in vb.net. the sample example given below shows how to use a timer object that is triggered on dim mytimer as new system.timers.timer addhandler mytimer.elapsed vb net timer example this vb.net article covers the timer type from the system.timers namespace. timers monitor processes. vb net i was reading about system threading system.timers.timer and it for example in the code example button click event is writing data to a vb net public class service dim mytimer as new system.timers.timer protected could you elaborate with an example of implementation, cheers browse other questions tagged vb.net multithreading service timer smtp or net this article offers a fairly comprehensive explanation comparing the timer classes in the .net framework class library also available as a .chm how to use system threading timer i have used system.timers.timer to watch several folders and move files in my windows service. heres a simple example of how to use the threading.timer vb.net code imports system.threading. public class form . 
all about net timers net right now, and later on discuss about dispatchertimer in wpf. system. timers.timer. it is a special timer that is designed for running in | http://www.slipbay.com/vb-net-system-timers-timer-example/ | CC-MAIN-2017-17 | refinedweb | 684 | 54.42 |
renderExtraColumn seems to fail sometimes after reconfigure is called on the grid 4.2 now. It creates the header fine but the first column is missing so all the columns are shifted one over. If I scroll up and down it fixes itself.
This is with a buffered grid.
Anyone else get this?
There is a little bug with date filter.
If you have null/empty date value in the store, during the application of the filter an error will occur:
"Uncaught TypeError: Cannot call method 'getTime' of null"
This can be easily corrected adding a if control before the applyFilters method (FilterBar.js, 666 row) return the item.
Code:
switch(operator) { case 'eq': filterFn = function(item) { if (column.filter.type == 'date') { var valore = item.get(column.dataIndex); if(valore && valore !== null && valore != "") return Ext.Date.clearTime(item.get(column.dataIndex), true).getTime() == Ext.Date.clearTime(newVal, true).getTime(); } else { return (Ext.isEmpty(item.get(column.dataIndex)) ? me.autoStoresNullValue : item.get(column.dataIndex)) == (Ext.isEmpty(newVal) ? me.autoStoresNullValue : newVal); } }; break;
Steve
Using filterbar Ext 4.2.0663 problem
Hello,
I'm using FilterBar successfully with Ext 4.1.3 but after upgrading to 4.2.0663 I get an error when loading the grid.
This is a part of the code im using:
PHP Code:
Ext.define('sdmDataGridPanel', {
extend: 'Ext.grid.Panel',
forceFit: true,
store: Ext.create('sdmDataStore'),
plugins: [{
ptype: 'filterbar',
showShowHideButton: false,
showClearAllButton: false
}],
id: 'grid',
....
TypeError: result.push is not a function
xt-all-debug.js (row 107050)
Hope somebody can help me..
That's probably the same problem reported in the message #134 in this thread (look at the previous page): you probably need to tweak a few lines in the source where there is a call to the function getGridColumns(true): simply remove the argument and it should work.
Thanks lelit,
I have been blind i think...
Going to tweak the files...
It works...
Thanks for this plugin, it's just what i was looking for.
I have a similar problem as mpete, i can´t populate my comboboxes when the grid store is set to "remoteFilter: true". Should i create a separate store for each combo? I'm just getting started in extjs 4, any example would be appreciated.
Thanks in advance,
Agustin
Initial filters on non-visible columns ignored
I had an issue where there was a filter applied to my grid store on a column that is not displayed in the grid. The FilterBar plugin was ignoring this initial filter because of this line in the parseInitialFilters() function:
if (filter.property && !Ext4.isEmpty(filter.value) && me.columns.get(filter.property)) {
My fix was to add this inital filter to the me.filterArray even though it didn't satisfy the condition:
me.columns.get(filter.property)
Regards,
Scott Langley
slangley@scharp.org
Thanks for the great pluging!
I am currently migrating from Ext 3.4 to 4.2.x and have to reimplement my CustomFilterBar needs.
I took this as the new foundation of my GridFilter.
Here are some comments on Bugs / things i needed to implement!
1) Render bug on using "renderHidden: false"
You have to replace afterender event by viewready
//grid.on('afterrender', me.renderFilterBar, me, { single: true });
grid.on('viewready', me.renderFilterBar, me, { single: true });
this should solve the problem with hidden columns and filters initially shown
2) applyFilters
i disabled delay for filters triggered by RETURN Key (also for remote loading).
Formerly i do not used change event, but this way it works quite well.
I think delaying during typing is ok, but you do not want to wait using RETURN KEY
//applyFilters: function(field, newVal) {
applyFilters: function(field, newVal, oldValue, eOpts) {
...
var delay = (!eOpts) ? 10 : me.updateBuffer;
//me.task.delay(me.updateBuffer, function() {
me.task.delay(delay, function() {
3) Info's / debud output on things that corrupts MixedCollections on duplicate keys:
parseFiltersConfig:...
generally i changed the default behavior that each column will have a autoFilter.
If i do not want a column filtered, i have to add filter: false.
Using Sencha Architect this is more comfortabel, because i do not have to add a custom config property for each column!
Code:
... Ext.each(columns, function(column) { if (column.filter || column.dataIndex && column.filter !== false) { if (!column.filter || column.filter === true || column.filter === 'auto') { // automatic types configuration (store based) var modelField = me.grid.store.model.prototype.fields.get(column.dataIndex); //var type = me.grid.store.model.prototype.fields.get(column.dataIndex).type.type; if (modelField) { var type = modelField.type.type; if (type == 'auto') type = 'string'; column.filter = type; } else { column.filter = 'string'; console.log('Grid FilterBar warning! invalid dataIndex "' + column.dataIndex + '" for column "' + column.text + '"'); } }
At the end i added code for checking on duplicate keys and log a warning
Code:
if (!me.columns.containsKey(column.dataIndex)) me.columns.add(column.dataIndex, column); else console.log('Grid FilterBar warning! Duplicate dataIndex "' + column.dataIndex + '" for column "' + column.text + '"');
I also added lot's of things around using glyphs / pictos from FontAwesome and so on to have a nice UI, but the interesting things for the common are mentioned above!
You also have add the things mentioned formerly in this thread like
- use getGridColumns() instead getGridColumns(true)
Maybe this is a little timesave for you.
Cheers Holger
Leonardo, any chance of putting a version of the plugin on some public versionated site, such as github or bitbucket, where contributions like Holger's, and in general simple fixes, would be simpler to track?
Issue with Locked columns & Grid Reconfigure
Hi,
First of all, absolutely great plugin. Great job !
I am trying to use this plugin - works well for me in normal scenario, but I run into some issues with certain scenarios.
I am using Ext JS version 4.2.1 (already changed getGridColumns(true) to getGridColumns(). Now, here is the scenario:
1. It works well if I try to reconfigure the grid without any locked columns
2. To handle columns, I have defined following in my grid config:
Code:
normalGridConfig: { plugins: [{ ptype: 'filterbar', renderHidden: false, showShowHideButton: true, showClearAllButton: true }] }, lockedGridConfig: { plugins: [{ ptype: 'filterbar', renderHidden: false, showShowHideButton: true, showClearAllButton: true }] },
Once I reconfigure the grid, I get the following error:
this.grid.headerCt.getColumns is undefined
I just tried to update all references ofPHP Code:
headerCt.getColumns
PHP Code:
var headerCt = this.grid.headerCt?this.grid.headerCt:this.grid.normalGrid.headerCt?this.grid.normalGrid.headerCt:this.grid.lockedGrid.headerCt; var columns = headerCt.getGridColumns();
Will really appreciate your help on this one.
Thanks,
Ani
Thread Participants: 79
-)
- Matt Bittner (1 Post)
-)
- sencha2@freightgate.com (1 Post)
-)
- jaykravetz (3 Posts)
- lelit (3 Posts)
- Druid33 (2 Posts)
- BillySao (3 Posts)
- cow_boy (3 Posts)
- sanjaykpanjagutta (1 Post)
- landoni (3 Posts)
- m0r14rty (1 Post)
- bramsreddy )
- eh-sv (1 Post) | https://www.sencha.com/forum/showthread.php?152923-Ext.ux.grid.FilterBar-plugin&p=957201&viewfull=1 | CC-MAIN-2016-18 | refinedweb | 1,120 | 51.24 |
NAME
SYNOPSIS
DESCRIPTION
RETURN VALUE
ERRORS
NOTES
SEE ALSO
pmempool_sync(), pmempool_transform() - pool set synchronization and transformation
#include <libpmempool.h> int pmempool_sync(const char *poolset_file, unsigned flags); (EXPERIMENTAL) int pmempool_transform(const char *poolset_file_src, const char *poolset_file_dst, unsigned flags); (EXPERIMENTAL)
The pmempool_sync() function synchronizes data between replicas within a pool set.
pmempool_sync() accepts two arguments:
poolset_file - a path to a pool set file,
flags - a combination of flags (ORed) which modify how synchronization is performed.
NOTE: Only the pool set file used to create the pool should be used for syncing the pool.
NOTE: The pmempool_sync() cannot do anything useful if there are no replicas in the pool set. In such case, it fails with an error.
NOTE: At the moment, replication is only supported for libpmemobj(7) pools, so pmempool_sync() cannot be used with other pool types (libpmemlog(7), libpmemblk(7)).
The following flags are available:
pmempool_sync() checks that the metadata of all replicas in a pool set is consistent, i.e. all parts are healthy, and if any of them is not, the corrupted or missing parts are recreated and filled with data from one of the healthy replicas.
If a pool set has the option SINGLEHDR (see poolset(5)), the internal metadata of each replica is limited to the beginning of the first part in the replica. If the option NOHDRS is used, replicas contain no internal metadata. In both cases, only the missing parts or the ones which cannot be opened are recreated with the pmempool_sync() function.
pmempool_transform() modifies the internal structure of a pool set. It supports the following operations:
adding one or more replicas,
removing one or more replicas ,
adding or removing pool set options.
Only one of the above operations can be performed at a time.
pmempool_transform() accepts three arguments:
poolset_file_src - pathname of the pool set file for the source pool set to be changed,
poolset_file_dst - pathname of the pool set file that defines the new structure of the pool set,
flags - a combination of flags (ORed) which modify how synchronization is performed.
The following flags are available:
When adding or deleting replicas, the two pool set files can differ only in the definitions of replicas which are to be added or deleted. When adding or removing pool set options (see poolset(5)), the rest of both pool set files have to be of the same structure. The operation of adding/removing a pool set option can be performed on a pool set with local replicas only. To add/remove a pool set option to/from a pool set with remote replicas, one has to remove the remote replicas first, then add/remove the option, and finally recreate the remote replicas having added/removed the pool set option to/from the remote replicas' poolset files. To add a replica it is necessary for its effective size to match or exceed the pool size. Otherwise the whole operation fails and no changes are applied. If none of the pool set options is used, the effective size of a replica is the sum of sizes of all its part files decreased by 4096 bytes per each part file. The 4096 bytes of each part file is utilized for storing internal metadata of the pool part files. If the option SINGLEHDR is used, the effective size of a replica is the sum of sizes of all its part files decreased once by 4096 bytes. In this case only the first part contains internal metadata. If the option NOHDRS is used, the effective size of a replica is the sum of sizes of all its part files. In this case none of the parts contains internal metadata.
NOTE: At the moment, transform operation is only supported for libpmemobj(7) pools, so pmempool_transform() cannot be used with other pool types (libpmemlog(7), libpmemblk(7)).
pmempool_sync() and pmempool_transform() return 0 on success. Otherwise, they return -1 and set errno appropriately.
EINVAL Invalid format of the input/output pool set file.
EINVAL Unsupported flags value.
EINVAL There is only master replica defined in the input pool set passed to pmempool_sync().
EINVAL The source pool set passed to pmempool_transform() is not a libpmemobj pool.
EINVAL The input and output pool sets passed to pmempool_transform() are identical.
EINVAL Attempt to perform more than one transform operation at a time.
ENOTSUP The pool set contains a remote replica, but remote replication is not supported (librpmem(7) is not available).
The pmempool_sync() API is experimental and it may change in future versions of the library.
The pmempool_transform() API is experimental and it may change in future versions of the library.
libpmemlog(7), libpmemobj(7) and
The contents of this web site and the associated GitHub repositories are BSD-licensed open source. | https://pmem.io/pmdk/manpages/linux/v1.10/libpmempool/pmempool_sync.3/ | CC-MAIN-2022-05 | refinedweb | 785 | 61.06 |
I was recently working on a tool that will help us automate the process of numbering Windows and Doors in Revit based on the Room that they are in. Below is a detailed explanation of how it works so far. Please feel free to comment and help me make it better.
1. First I collected all of the doors/windows in the project. “Get Family Instances by Category” is a custom node that can be downloaded from the Package Manager.
2. Once you have all the doors in the project, it's time to extract their room assignment information. Each door in Revit can contain information about what room it swings into. That information is generated by enabling the Room Calculation Point in the Door Family Editor. I am using the Revit API to extract that information:
import clr
clr.AddReference("RevitServices")
clr.AddReference("RevitAPI")
from RevitServices.Persistence import DocumentManager
from Autodesk.Revit.DB import *

# Standard Dynamo boilerplate for getting the current document.
doc = DocumentManager.Instance.CurrentDBDocument

room_number = []
room_name = []
doors = []
room = []
filter_1 = IN1
fam_inst = IN0

# Find the phase that the doors/rooms are evaluated in.
collector = FilteredElementCollector(doc)
phase_collector = collector.OfClass(Phase)
phase = None
for i in phase_collector:
    if i.Name == "New Construction":
        phase = i
if phase is None:
    print("no phase w/ specified name exists")

for i in fam_inst:
    to_room = i.ToRoom[phase]
    from_room = i.FromRoom[phase]
Once we have the information about what rooms doors swing into and from, it’s time to put in place some filters. My general rule was that if door swings into a Circulation space I would use the room that it swings from instead. Also, if door is exterior and swings into space that contains no room (exterior), the i would also use the room that it swings from. Here’s the filter part of code:
if to_room is None or to_room.get_Parameter(“Name”).AsString() == filter_1:
if from_room is None:
room_number.append(“No To or From Room”)
room_name.append(“No To or From Room”)
doors.append(“No To or From Room”)
else:
room_number.append(from_room.get_Parameter(“Number”).AsString())
room_name.append(from_room.get_Parameter(“Name”).AsString())
doors.append(i)
room.append(from_room)
else:
room_number.append(to_room.get_Parameter(“Number”).AsString())
room_name.append(to_room.get_Parameter(“Name”).AsString())
doors.append(i)
room.append(to_room)
#Assign your output to the OUT variable
OUT = [[room_number], [room_name], [doors], [room]]
3. Now we have four (4) outputs that we can use to assign Mark values to Doors/Windows (Room Number that we decided to use based on conditions, room name, door family to assign mark value to, and room family that will become useful a little later). Since Python node in Dynamo only allows single output I had to group all this information into lists of lists. Let’s separate them with a little bit of list management:
4. Next step is to build a numbering sequence for our doors. I decided that I wanted them numbered in a clockwise fashion with each consecutive door/window getting a roomnumber + letter suffix. Ex. 100A, 100B, 100C and so on. First let’s knock out the clockwise order for multiple doors/windows per room.
In order to do that I measured an angle between the door location and room location(room placement point). I used a custom node called “LunchBox XYZ Angle” for that. However, since Revit like most applications by default returns the inside angle (the smaller angle), I had to figure out a way to figure out which angles are actually larger than 180 in order to get the full 360 range. For that I grabbed the location points (both door and room) and decomposed them to extract the X value only. Now, with a little bit of Python I was able to check which door location X value is smaller than the X value of room location point. If it was smaller it meant that door/window is located on the left side of the room and angles measured are actually larger than 180. For all angles larger than 180 i used a simple math function to get the correct value: angle = 180 + (180 – angle inputed). Here’s a Python code to accomplish that:
import math
door_x = IN0
room_x = IN1
angle = IN2
result = []
for i, j, k in zip(door_x, room_x, angle):
if i <= j:
result.append(math.pi+((math.pi)-k))
else:
result.append(k)
#Assign your output to the OUT variable
OUT = result
5. Now that we have the proper angles, doors and room numbers all we need is to sort them before we start assigning parameters. I used a Python node to sort all three of them simultaneously (maintaining synchronized order of all families was crucial). Here’s the code:
from itertools import groupby
room_number = IN0
angle = IN1
door = IN2
grps = sorted(zip(room_number, angle, door), key=lambda x: (x[0], x[1]))
room_number, angle, door = [], [] ,[]
for i, grp in groupby(grps, lambda x: x[0]):
sub_rm_number, sub_angle, sub_door = [], [] ,[]
for j in grp:
sub_rm_number.append(j[0])
sub_angle.append(j[1])
sub_door.append(j[2])
room_number.append(sub_rm_number)
angle.append(sub_angle)
door.append(sub_door)
OUT = [[room_number], [angle], [door]]
This code takes three input lists: room_number, angle and door and sorts it. First it sorts by room number from smallest to largest, then it sorts the lists by angle from smallest to largest. The outputs are three synchronized and sorted lists:
Next I used the sorted room number list and doors to assign proper suffix values to them and finally write them to Door’s Instance Mark Parameter.
sequence = IN
uniq_seq = []
def increment_item(item = ‘A’):
next_char = [ord(char) for char in item]
next_char[-1] += 1
for index in xrange(len(next_char)-1, -1, -1):
if next_char[index] > ord(‘Z’):
next_char[index] = ord(‘A’)
if index > 0:
next_char[index-1] += 1
else:
next_char.append(ord(‘A’))
return ”.join((chr(char) for char in next_char))
def char_generator(start = ‘A’):
current = start
yield start
while True:
current = increment_item(current)
yield current
def build_unique_sequence(sequence):
key_set = dict([item, char_generator()] for item in set(sequence))
return map(lambda item:'{}{}’.format(item, key_set[item].next()), sequence)
OUT = build_unique_sequence(sequence)
This code generates Mark values that will be assigned using “Set List Instance Parameters” node. Since we are modifying Revit elements while doing that the “Transaction” node has to follow it.
The result is doors and windows numbered based on Room that they are in/swing into.
*Thanks to David Mans and Stack Overflow for all the help with Python coding.
** I still need to test this on a larger sample size (real project), but feel free to download the example file and play with it.
*** For those brave enough that run this script on a real project: I DO NOT TAKE RESPONSIBILITY FOR POSSIBLE CATASTROPHIC RESULTS. These components make irreversible changes to families (Door/Window Mark parameters) so detach from Central before testing.
Download link:
-Konrad
LibGNet error in script
I don’t know what’s inside of your Python node. I would have to see all of your code because I am not sure what is throwing that error. Again, this was written for Dynamo 0.6.3 and I didn’t have time to translate. I am sorry, but I have no plans on translating that workflow to 0.7.1 anytime soon. If/when i do i will have it posted. Good luck!
thx a lot | http://archi-lab.net/?p=4 | CC-MAIN-2014-42 | refinedweb | 1,179 | 54.32 |
I'm creating a program that reads text files, each with a priority and message, and then prints them in the correct order. Files with higher priority are printed first, and if they have the same priority, the ones that were entered in first are printed first. My code compiles fine, but if more than 3 text files are entered, I get a Segmentation fault. It also has a problem sorting by priority. Is there a way to fix this without significantly changing my code?
Here is an example text file: (the first character is the priority, 1 being highest)Here is an example text file: (the first character is the priority, 1 being highest)Code:#include <iostream> #include <fstream> #include <string> #include <cassert> using namespace std; /* ----------------------- Package Class ----------------------- */ class package { public: int priority; int order; string message; }; /* ----------------------- Main Program ----------------------- */ int main() { /* ----------------------- The Variables ----------------------- */ int amount; package temp; package *packages; string fileName; ifstream fin; /* ----------------------- Opening the Files ----------------------- */ cout << "\nEnter the amount of packages you want entered: " << endl; cin >> amount; packages = new package[amount]; if (packages == 0) cout << "Memory Cannot Be Allocated"; cout << "\nEnter the names of the packages one by one:" << endl; for (int i=0; i<amount; i++) { cout << "Package " << i+1 << ": "; cin >> fileName; fin.open(fileName.data(), ios::in); assert( fin.is_open() ); fin >> packages[i].priority; getline(fin, packages[i].message); packages[i].order = i+1; fin.close(); } /* ----------------------- Sorting the Packages ----------------------- */ for (int i=1; i<amount-1; i++) { for (int j=0; j=amount-i; j++) { if (packages[j].priority > packages[j+1].priority) { packages[j] = temp; packages[j] = packages[j+1]; packages[j+1] = temp; } if (packages[j].priority == packages[j+1].priority) { if (packages[j].order > packages[j+1].order) { packages[j] = temp; packages[j] = packages[j+1]; packages[j+1] = temp; } } } } /* ----------------------- Printing the Files ----------------------- */ cout << "The Packages:"; for (int i=0; i<amount; i++) { cout << "\n\n" << "Priority: "<< packages[i].priority << "\nOrder: " << packages[i].order << "\n" << packages[i].message; } cout << endl << endl; delete[] packages; return 0; }
Thanks so much for the help!Thanks so much for the help!Code:1This program is awesome | http://cboard.cprogramming.com/cplusplus-programming/126602-segmentation-fault.html | CC-MAIN-2014-15 | refinedweb | 349 | 56.45 |
This chapter describes how to use the modeling tools and technologies to create Unified Modeling Language (UML) class, profile, activity, sequence, and use case diagrams, as well as database, EJB, and business component diagrams to model your various business services and database structures.
This chapter includes the following sections:
Section 5.1, "About Modeling with Diagrams"
Section 5.2, "Getting to Know the Diagram Types"
Section 5.3, "Creating, Using, and Managing Diagrams"
Section 5.5, "Importing and Exporting UML"
Section 5.6, "Modeling with UML Class Diagrams"
Section 5.7, "Modeling EJB/JPA Components"
Section 5.8, "Modeling with Database Diagrams"
Section 5.9, "Modeling with Activity Diagrams"
Section 5.10, "Modeling with Sequence Diagrams"
Section 5.11, "Modeling with Use Case Diagrams"
Section 5.3.7, "Storing Diagrams Locally"
Section 5.12, "Storing UML Elements Locally"
JDeveloper supports six standard UML diagram types, and four additional diagram types, to model software and systems development for your applications.
To create your diagrams, use the New Gallery wizards. Each wizard lets you choose the diagram type and the package, and select the components you want available in the Components window. You can also create a UML application, which allows you to quickly create diagrams and related components. Figure 5-28 shows an example create dialog for a class diagram.
The UML diagrams available are UML-compliant. Once your diagram is created, you will find many UML elements available to drag and drop onto your diagrams from the Components window.
All of the diagram types can be created using the New Gallery wizards and are supported by the JDeveloper diagram editor, the Components window, and the Properties window. There are also transformation options for database objects.
JDeveloper offers six standard UML diagram types:
Activity Diagram. Model system behavior as coordinated actions. You can use activity objects to model business processes, such as tasks for business goals, like shipping, or order processing.
Activity Diagram with Partitions. Create visual dividing lines, or modeling constructs, to group or divide activities and actions that share similar characteristics.
Class Diagram. Model the structure of your system. Use to inspect the architecture of existing classes, interfaces, attributes, operations, associations, generalizations and interface realizations.
Sequence Diagram. Model the traces of system behavior as a sequence of events. Sequence diagrams primarily show this as messages between objects ordered chronologically.
Use Case Diagram. Model what a system is supposed to do. A use case diagram is a collection of actors, use cases, and their communications.
Profile Diagram. Define extensions to UML using profiles and stereotypes.
There are four diagrams to choose from to model your business services:
Business Components Diagram. Model entity objects, view objects, application modules and the relationships between them.
Database Diagram. Model your online and offline database tables and their relationships as well as views, materialized views, sequences, public and private synonyms.
EJB Diagram. Model the entity beans, session and message-driven beans inside a system, and the relationships between them.
Java Class Diagram. Model the relationships and the dependencies between Java classes, interfaces, enums, fields, methods, references, inheritance relationships, and implementation relationships.
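For example, a Java Class Diagram of the hypothetical types sketched below would render an interface realization edge from Item to Shippable, a generalization edge from Book to Item, and the fields and methods of each type in its attribute and operation compartments (all names here are illustrative, not part of JDeveloper):

```java
// Hypothetical domain types; names and structure are illustrative only.
interface Shippable {
    double weightKg();
}

// Abstract class: shown with its name in italics on the diagram, with an
// interface realization edge to Shippable.
abstract class Item implements Shippable {
    protected final String sku;   // appears in the attributes compartment

    Item(String sku) { this.sku = sku; }

    String sku() { return sku; }  // appears in the operations compartment
}

// Generalization (inheritance) edge on the diagram: Book ----|> Item
class Book extends Item {
    private final double weightKg;

    Book(String sku, double weightKg) {
        super(sku);
        this.weightKg = weightKg;
    }

    @Override
    public double weightKg() { return weightKg; }
}
```

Dropping such classes onto a Java Class Diagram surfaces exactly these inheritance and implementation relationships visually.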
Oracle JDeveloper provides you with a wide range of tools and diagram choices to model your application systems. There are handy wizards to walk you through creating your diagrams and elements, as well as a Components window and Properties window to make it easy to drag and drop, and to edit a variety of elements without leaving your editing window.
Figure 5-1 shows the diagram editor window, with a class diagram, as well as the Applications window and Components window. Open diagrams by double-clicking them in the Applications window, and once open, drag-and-drop components onto the diagram editor from the Components window.
Once you have created your diagram, add components to your diagram from the Components window. Zoom in and out of your diagrams with keystroke commands or view them at original size, or a percentage of the original size. When you are finished, you can publish your diagram as an image or print it using the right-click context menu or the main menu commands.
The New Gallery wizard creates your diagrams ready for you to start adding your components.
In the Applications window, select your project, then choose File > New From Gallery > General > Diagrams.
Select a diagram type, click OK.
The default package for a diagram is the default package specified in the project settings. An empty diagram is created in the specified package in the current project, and opened in the content area. Click OK.
Renaming your diagram renames that diagram without leaving a copy under the original name.
In the Applications window, select the diagram to rename.
Choose File > Rename.
Use the right-click context option to publish your diagram as a graphic image. You can preview and print your diagram once it is published as an image.
To publish a diagram as a image:
Right-click your diagram, then choose Publish Diagram.
Or
Click on the surface of the diagram, then choose Diagram > Publish Diagram.
Select the destination folder from the table for the image file.
From the File type drop-down list, select the file type for the image file (SVG, SVGZ, JPEG, or PNG).
In the File name box, enter a name for the image file, including the appropriate file extension (.svg, .svgz, .jpg, or .png).
Click Save.
Use the context menu or keystrokes to copy elements across different diagrams.
To copy elements from a diagram and paste them into another diagram:
Select the diagram elements, then choose Copy on the context menu, or choose the Copy icon on the toolbar, or press Ctrl-C.
Open the destination diagram.
Place the pointer where you want the diagram elements to be added, then choose Paste from the context menu (or choose the Paste icon on the toolbar, or press Ctrl-V).
Change from portrait to landscape or your margins for printing using page setup.
To setup the page before printing:
Click on the surface of the diagram you want to print, then choose File > Page Setup.
Make changes to the settings on the tabs of the Page Setup dialog.
Set a specific area of your diagram to print by choosing File > Print Area > Set Print Area. To clear the print area, choose File > Print Area > Clear Print Area.
To see a preview of your page, go to File > Print Preview. You can also set print options from this page by choosing Print Options. On the Print Options page you can add header and footer content, as well as text formatting.
Use Ctrl+scroll to zoom in and out of diagrams. When you are using the thumbnail view, use scroll to zoom. There are also zoom options on the diagram toolbar.
In the zoom drop-down list, located on the diagram toolbar, choose 100%, or click the diagram, then choose Diagram > Zoom > 100%.
In the zoom drop-down list, located on diagram toolbar, choose Fit to Window, or click the diagram, then choose Diagram > Zoom > Fit to Window.
In the zoom drop-down list, located on the diagram toolbar, choose Zoom to Selected, or click the diagram, then choose Diagram > Zoom > Zoom to Selected.
Delete the diagram and related diagram elements using the menu bar.
In the Applications window, select the diagram to remove.
Choose Edit > Delete.
Most of your diagram elements are available in the Components window. There are a variety of tools to help you manage your elements visually, as well as managing the properties of your elements.
Click on the element name in the Structure window. The element is selected in the diagram. You can also use the thumbnail window of the diagram to find an element. To display a thumbnail view of a diagram, select the diagram either in the applications window or by clicking on the background of the diagram, then choose Window > Thumbnail. You can grab the view area box and move it over elements on the thumbnail view of the diagram. The corresponding elements are brought into view on the main diagram.
Press and hold down the Ctrl key, then click the element on the diagram.
Select all elements on a diagram to perform actions on all elements at the same time, such as align, copy, delete, or move. Click on the diagram surface, and then select any element, and choose Edit > Select All. You can also drag out an area on the diagram surface to select all or multiple elements.
If you want to edit or manage many elements of the same type, at the same time, use the select all option.
To select all elements of the same type:
Select an object of the type you want.
From the context menu, choose Select All This Type.
If you select a group of elements, and you want to exclude particular elements, use the deselect option. This might be quicker than selecting the entire group one element at a time.
To deselect a selected element in a group of selected elements:
Press and hold down the Ctrl key.
Click the element(s) on the diagram to deselect.
Grouping elements locks two or more elements into a single group that serves as a container for the aggregate.
To group elements on a diagram:
In the Components window, expand the Diagram Annotations accordion, if necessary, then click Group.
Position the pointer at the corner of the area on the diagram to group the elements, then press and hold down the mouse button.
Drag the mouse pointer over the area.
Release the mouse button when the objects are entirely enclosed.
Use the Manage Group feature to move elements in and out of groups, move elements to other groups, or move groups in and out of other groups.
To manage grouped elements on a diagram:
Select the group to manage.
Right-click and select Manage Group.
You can also move elements in and out of groups by Shift+dragging the element to the desired position.
Open the Properties dialog in one of the following ways:
On the diagram, double-click the element
Or
Select the element on the diagram, then, from its context menu, choose Properties.
Open the Properties window by selecting Window > Properties. The Properties window displays both visual and semantic properties.
Select the diagram element.
Select the property to change.
On the right of the Properties window, select the control and change the value. (The control may be an edit box, a drop-down list, a checkbox, etc.).
Note:
These properties are not all valid for all elements.
Select the element or elements on the diagram.
Then, in the Properties window (Window > Properties), change the color or font properties as required.
Define the default font and color of any elements you add to your diagrams using the Preferences dialog.
To change the color or font of diagram elements to be added to a diagram:
Choose Tools > Preferences, select Diagrams, select the diagram type, and then (from the Edit Preferences For drop-down box), select the element type to change, as shown in Figure 5-3.
On the preference pages for that element type, make the required changes.
Use the Preferences dialog to copy preferences between elements.
To copy and paste visual properties to elements:
Select the element with the properties you want to copy.
Right-click and select Visual Properties.
Select the element or elements to apply the properties to.
Right-click and select Paste Graphical Options.
To resize a diagram element:
Select the element to resize.
Position the pointer on any grab bar on the element and hold down the mouse button. The pointer is displayed as a double-headed arrow when it is over a grab bar.
Drag the grab bar until the element is resized, then release the mouse button.
Dragging elements on the diagram surface is the easiest way of moving elements. To move elements over larger areas, cut and paste. Whenever a diagram element is moved towards the visible edge of the diagram, the diagram is automatically scrolled. New diagram pages are added where an element is moved off the diagram surface.
To move diagram elements:
Select the element, or elements to move.
Position the pointer on the elements, then press and hold down the mouse button.
Drag the selected elements to their new position.
Release the mouse button. If an element overlaps another element, they are displayed on top of one another. Right-click the element and choose Bring to Front to view it.
You can undo and redo your most recent graphical actions by choosing Edit >Undo [...] or clicking the undo icon. Graphical actions change the appearance of elements on the diagram surface and include the following:
Cutting and pasting elements on a diagram. To redo an action, choose Edit > Redo [...] or click the redo icon.
Diagrams can be laid out in hierarchical, symmetrical, grid, and row styles. Elements within your diagrams can also have customized layout styles. There are many preferences available to customize the way your diagram looks. Most preferences can be set using the various diagram preferences dialogs at Tools > Preferences > Diagrams (diagram type), as shown in Figure 5-4. From the general preferences dialog you can choose Edit Preferences For to set specific preferences for new diagrams. The general preferences dialog sets preferences for all diagrams of that type. Right-click and choose Visual Properties to edit preferences for the diagram currently in your editing window.
Hierarchical layout puts diagram elements in hierarchies based on generalization structures and other edges with a defined direction, as shown in Figure 5-5.
Symmetrical layout aligns diagram elements symmetrically based on the edges between the nodes, as shown in Figure 5-6. For class diagrams, a layout is created that represents each generalization hierarchy in an aligned fashion, as shown in Figure 5-7.
Grid layout puts the diagram elements in a grid pattern with nodes laid out in straight lines either in rows from left to right, or in columns from top to bottom, as shown in Figure 5-8. Nodes are laid out starting with the top left node.
Layout styles are available by opening the context menu for a diagram and choosing Lay Out Shapes, or by using the Diagram Layout Options dropdown, as shown in Figure 5-9.
After the selected elements have been laid out they remain selected to be moved together to any position on the diagram.
To set the layout for new elements on a diagram, use the View > Properties window.
To set the default layout for elements on a diagram, go to Tools > Preferences > Diagrams.
Diagram elements can be automatically snapped to the nearest grid lines, even if the grid is not displayed on the diagram. Grid cells on the diagram are square and only one value is required to change both the height and width of the grid cells. By default, elements are not snapped to the grid on activity diagrams. To set the default diagram grid display and behavior go to Tools > Preferences, select Diagrams. From there you can select the Show Grid checkbox to display the grid or Snap to Grid checkbox to snap elements to the grid. The grid does not have to be displayed for elements to be snapped to it.
To define diagram grid display and behavior for the current diagram:
Click the surface of the diagram.
In the View > Properties window, select Show Grid or Snap to Grid.
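Because grid cells are square, a single value controls snapping on both axes. The snapping behavior can be sketched as rounding each coordinate to the nearest multiple of the grid cell size; this is an illustrative sketch of the assumed behavior, not JDeveloper's actual implementation:

```java
// Illustrative sketch of snap-to-grid: each coordinate is rounded to the
// nearest grid line. One cell-size value covers both axes, because grid
// cells on the diagram are square.
final class GridSnap {
    static double snap(double coordinate, double cellSize) {
        // Round to the nearest multiple of the cell size.
        return Math.round(coordinate / cellSize) * cellSize;
    }

    static double[] snapPoint(double x, double y, double cellSize) {
        return new double[] { snap(x, cellSize), snap(y, cellSize) };
    }
}
```

The grid does not have to be displayed for this snapping to take effect.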
Distributing diagram elements spatially arranges them relative to a specific edge, such as the top or bottom. When you are distributing elements, the outermost selected elements on the vertical and horizontal axes are used as the boundaries. To fine-tune the distribution, move the outermost elements in the selection, then redistribute the elements.
To distribute diagram elements:
Select three or more diagram elements and choose Diagram > Distribute.
Select the distribution for the elements.
Select the horizontal distribution: None, Left, Center, Spacing, or Right.
Select the vertical distribution: None, Top, Center, Spacing, or Bottom.
Click OK.
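As an illustrative sketch of the behavior described above (not JDeveloper's actual code), an even Center distribution along one axis can be thought of as keeping the outermost element centers fixed as boundaries and spacing the remaining centers evenly between them:

```java
// Illustrative sketch: distribute element centers evenly along one axis.
// The outermost selected elements stay put and act as the boundaries.
final class Distribute {
    static double[] centers(double[] centers) {
        double[] sorted = centers.clone();
        java.util.Arrays.sort(sorted);
        double min = sorted[0];
        double max = sorted[sorted.length - 1];
        // Equal gap between consecutive element centers.
        double step = (max - min) / (sorted.length - 1);
        double[] out = new double[sorted.length];
        for (int i = 0; i < sorted.length; i++) {
            out[i] = min + i * step;
        }
        return out;
    }
}
```

This is why moving the outermost elements first and redistributing changes the result: they define the min and max of the distribution.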
Elements can be aligned vertically and horizontally. You can also change the location of elements to have equal vertical and horizontal spacing.
To align and size elements:
Select two or more elements. Choose Diagram > Align.
Choose from the following:
Select the horizontal alignment.
Select the vertical alignment.
Use the Size Adjustments checkboxes to set the size of the selected elements:
Select the Same Width for all the selected elements to have the same width. The new element width is the average width of the selected element.
Select the Same Height for all the selected elements to have the same height. The new element height is the average height of the selected elements.
Click OK.
Hide a single edge or any number of edges on your diagrams. Edges that are hidden on a diagram continue to show in the Structure window, with "hidden" appended. If there are any hidden edges, you can bring them back into view individually or all at once.
To hide one or more edges:
Select the diagram edge to hide. (To select all edges of a particular type, right-click an edge, then choose Select All This Type.)
Right-click and choose Hide Selected Shapes.
You can also go to the Structure window, select the edge or edges to hide, right-click and choose Hide Shapes.
To show hidden edges, in the Structure window, select the edge or edges to show, right-click and choose Show Hidden Shapes.
Right-click an object listed in the Structure window and choose Order By Visibility.
Classes and interfaces related to those currently displayed on the diagram can be brought onto the diagram. This includes classes or interfaces that are extended, implemented, or referenced by the selected class or interface.
Choose from the following options to display related classes on a diagram:
Select the class or interface, on the diagram, then choose Model > Show > Related Elements.
Right-click the class or interface, on the diagram, then choose Show > Related Elements.
Diagram edges can be laid out in either oblique or rectilinear line styles. Oblique lines can be repositioned at any angle. Rectilinear lines are always shown as a series of right angles.
You can set the default line style for each diagram edge using the Line Style preference under Tools > Preferences > Diagrams > Class, and edit preferences for Association, as shown in Figure 5-10.
You can also set the line style for all instances of that diagram type, or select individual diagram edges and change their line style. If you change an individual line from oblique to rectilinear, the line will be redrawn using right angles. If you change an individual line from rectilinear to oblique, no change will be made to the line, but you can reposition it (or portions of it) at any angle.
You can also choose no crossing style, which is the default setting.
Nodes can be hidden or shown. You can create nodes inside or outside elements.
Create a node using the Components window.
To create a node on a diagram:
Select the node type you want to create from those listed in the Components window for your diagram.
Click the diagram where you want to create the node, or drag it from the Components window.
Elements can be represented on a diagram as internal nodes on other diagram elements.
Internal nodes can be used to create the following:
Inner classes and inner interfaces.
Relation usages. Drag an element from the Applications window or the diagram, and drop it in the expanded node, to create an inner node.
Select the diagram element(s) and choose one of the following:
Diagram > View As > Compact.
Diagram > View As > Symbolic.
Diagram > View As > Expanded.
Use the optimize feature of the right-click context menu to optimize your nodes. Optimizing will adjust the size of the nodes so that all attributes show.
To optimize the size of nodes on a diagram:
Select the nodes to resize.
Right-click the selected nodes, then choose Optimize Shape Size > Height and Width (or choose Height or Width to adjust each separately).
Notes are used for adding comments to a diagram or the elements on a diagram. A note can be attached to one or more elements. A note is stored as part of the current diagram, not as a separate file system element. Note options are available in the Components window, as shown in Figure 5-13.
Use the Diagram Annotations feature in the Components window to add a note to your diagram.
To add a note to a diagram:
Click the Note icon in the Diagram Annotations section of the Components window. Click the diagram where you want to place the note, then type the note text.
The Components window Diagram Annotations feature provides an Attachment component to attach notes to your diagram elements.
To attach a note to an element on a diagram:
Click the Attachment icon in the Diagram Annotations section of the Components window.
Click the note.
Click the element that you want to attach the note to.
JDeveloper supports UML transformations on your database tables, Java classes, and interfaces. You can use these transformation features to produce a platform-specific model like Java from a platform-independent model like UML. You can perform transformations multiple times using the same source and target models. There are also some reverse transforms to UML from Java classes.
Transformations can be done in the following ways:
Transformation on the same diagram as the original.
Transformation on a new diagram created for the transformed element.
Transformation only in the current project, and not visually on a diagram.
You can use the UML modeling tools to create a UML Class model, and then transform it to an offline database or vice-versa.
To transform UML, Java classes, or interfaces:
Select the elements to transform, right-click, and choose Transform, as shown in Figure 5-14 and Figure 5-15.
Note that if you want to transform all of the elements on a diagram, use File > New > From Gallery > Database Diagram > Offline Database Objects > Offline Database Objects from UML Class Model.
To transform a UML class diagram to an offline database:
Open the diagram to transform.
Select the elements to transform. Right-click and choose Transform.
Choose from one of the following:
Model Only. The offline database model based on the UML Class model is created in the Applications window in the current project.
Same Diagram. The offline database model based on the UML Class diagram is created. The offline database model can be viewed in:
The Applications window.
The UML Class diagram, as modeled tables.
New Diagram. The offline database model based on the UML Class diagram is created. The offline database model can be viewed in both:
The Applications window.
A new database diagram, as modeled tables and constraints.
Choose UML to Offline Database.
Click OK.
To transform an offline database diagram to UML
Select the offline database object or objects you want to transform.
Right-click and choose Transform. Select from one of the following options:
Model Only. A UML class model based on the database schema is created in the Applications window in the current project.
Same Diagram. A UML class diagram based on the offline schema is created. The new classes can be viewed in both:
The Applications window.
The UML Class diagram, as classes for each transformed database table.
New Diagram. A UML class diagram based on the offline database schema is created. The classes can be viewed in both:
The Applications window, as new UML classes.
A new class diagram, as classes for each transformed database table.
The Transform dialog appears as shown in Figure 5-16. Select Offline Database Objects to UML.
Click OK.
To create offline database objects from UML:
In the Applications window, select the project containing the UML classes to transform.
Choose File > New > From Gallery.
In the Database Tier, in the Items list, choose Offline Database Objects from UML Class Model. When you invoke the wizard from the New Gallery, the offline database objects created during the transform process are only available in the Applications window.
The Offline Database Objects from UML Class Model wizard opens, as shown in Figure 5-17. Select the UML classes and associations to transform.
Click Finish. The offline database objects are created in the Applications window, as shown in Figure 5-18.
If you select the option Transform Root Classes, root classes are transformed into offline tables, and all the attributes and foreign keys from their descendant classes in the hierarchy are also transformed, as shown in Figure 5-19 and Figure 5-20.
If you select the option Transform all classes, inheriting from generalized classes, an offline table is created for every class in the transform set. Each table inherits columns and foreign keys from its ancestor tables but is otherwise independent, as shown in Figure 5-21.
If you select the option Transform all classes, creating foreign keys to generalized classes, an offline table is created for every class in the transform set. No columns or foreign keys are inherited; instead, a foreign key is created to the parent table or tables, as shown in Figure 5-22.
The UML elements that you create in the New Gallery are created independently of any diagram. They are listed in the Applications window and can be dropped onto your diagrams afterward.
Use the New Gallery to create UML elements without a pre-existing diagram.
To create UML elements off a diagram:
Select the project in the Applications window.
UML models created using other modeling software can be imported into JDeveloper using XML Metadata Interchange (XMI) if the models are UML 2.1.1 to 2.4.1 compliant.
The XMI specification describes how to use the metamodel to transform UML models as XML documents.
The following are restrictions that apply to importing:
Diagrams cannot be imported.
XMI must be contained in a single file. Any profiles referenced by the XMI must be registered with JDeveloper through Tools > Preferences > UML > Profiles before importing. The profiles must be in separate files. For more information, see Section 5.5.3, "Using UML Profiles".
To import a UML model from XMI:
With an empty project selected in the Applications window, choose File > Import.
Select UML from XMI, then click OK. Diagrams can be created automatically from the imported model; the dialog offers you this option during the process, as shown in Figure 5-24. Any problems encountered during the import are detailed in Table 5-1. Double-clicking an item in the log navigates to the problem element. Issues often arise because of incorrect namespaces and standard object references.
As with other XML, the structure of a valid file is specified by XML schemas, which are referenced by xmlns namespaces. The XML consists of elements, which represent objects or the values of their parent element object, and attributes, which are values. Sometimes the values are references to other objects, which may be represented as an href, as in HTML.

Register custom profiles by going to Tools > Preferences > UML > Profiles and clicking Add. The Profiles page shows all of the profiles currently available in the application, as shown in Figure 5-27. Once you have added a custom profile, edit the URI (Uniform Resource Identifier) for the profile by selecting the profile and clicking Edit. The URI tells the application where to look for the profile. For more information on UML profiles, see the sample profiles in the OMG catalog.
When you are doing UML transformations, you select a profile that applies the stereotypes for the UML translation to the target format. When you are transforming a UML package, you need to tell the system which profile to use, and what to call the resulting object (for example, a class or package).
To apply a UML profile to a package:
Open the Package Properties dialog of your UML package by right-clicking on it in the Applications window and choosing Properties.
In the Package Properties dialog, select Profile Application. Click Add and choose a UML profile.
In the Package Properties dialog, select Packaged Element.
Expand the Class node, and choose Applied Stereotype and click Add. The properties available depend on the UML profile you are using. Figure 5-28 shows the Name after Transform property from the UML profile DatabaseProfile.
JDeveloper comes with a UML profile called DatabaseProfile which determines how class models are transformed to offline database models. For more information about UML profiles, see Section 5.5.3, "Using UML Profiles."
DatabaseProfile contains stereotype properties that control how elements are transformed. The stereotypes and their properties in this profile are described in Table 5-2.
The attributes, or properties, are described in Table 5-3.
To use DatabaseProfile to transform a class model:
Open the Package Properties dialog of a UML package (for example, package.uml_pck) by right-clicking it in the Applications window and choosing Properties, as shown in Figure 5-29.
Specify the name to use after the package has been transformed into an offline database schema: select the Applied Stereotype node and click Add. Under Properties, a new property, Name after transform, is listed. Enter a name and click OK.
A new file, DatabaseProfile.uml_pa, is now listed in the Applications window, as shown in Figure 5-30.
To examine the Database Profile file, right-click it in the Applications window and choose Properties. The dialog shows the profile now used in the transform. Click OK to close the dialog.
Now you can apply stereotypes to the various elements in the project, as shown in the example in Figure 5-30 and Figure 5-31.
Once you have set the stereotypes to apply, proceed to transform the UML Class model following the steps in Section 5.3.6.2, "How to Transform UML and Offline Databases." The stereotypes and properties in DatabaseProfile you set are applied during transformation.
You can apply stereotypes to other elements as well. For example, you can specify datatypes and a primary key for attributes owned by a particular class. In the same Class Properties dialog, expand Owned Attribute and select an existing attribute, or create one by clicking Add and entering a name for it. Expand the node for the owned attribute, select Applied Stereotype, and click Add. Figure 5-31 shows that at this level you can specify a number of datatypes, whether the attribute should be transformed to a primary key, and the name after transform.
For information about the stereotypes and properties covered by DatabaseProfile, see Table 5-2, "Stereotypes and Properties in DatabaseProfile" and Table 5-3, "Properties of Stereotypes in DatabaseProfile".
The profile diagram is a structure diagram that describes the lightweight extension mechanism to UML, which defines custom stereotypes, tagged values, and constraints. Profiles allow adaptation of the UML metamodel for different platforms and domains, or for your modeled business processes. Metamodel customizations are defined in a profile, which is then applied to a package. Stereotypes are specific metaclasses, tagged values are standard meta-attributes, and profiles are specific kinds of packages.
Semantics of standard UML metamodel elements are defined in a profile. For example, in a Java model profile, generalization of classes can be restricted to single inheritance without having to explicitly assign a Java class stereotype to each and every class instance. Profiles can be dynamically applied to or retracted from a model. They can also be dynamically combined, so that several profiles are applied at the same time on the same model.

The profiles mechanism is not a first-class extension mechanism: it does not allow you to modify existing metamodels or to create a new metamodel, as MOF does. Profiles only allow adaptation or customization of an existing metamodel with constructs that are specific to a particular domain, platform, or method. You cannot take away any of the constraints that apply to a metamodel, but using profiles you can add new constraints.

Use the New Gallery to create your profile diagram. Choose File > New > From Gallery > Diagrams > Profile Diagram.
Create MOF (Meta-Object Facility) Model Library .jar files to enable your UML objects from one project to be reused by another.
To create a MOF Model JAR file:
Select the model node in your application project.
Go to New > From Gallery > Deployment Profiles > MOF Model Library. Click OK.
Add the name of your deployment profile. Click OK. The next dialog appears.
In the MOF Model Library dialog, select the location for your JAR file and complete the dialog. Click OK.
Deploy your changes to the JAR you have just created by right-clicking the project and choosing Deploy > MyMOFLibrary. To add the library to another project, go to Tools > Manage Libraries > Libraries > New and add the path of your MOF Model Library JAR. If you redeploy, the JAR is updated with any new changes you make.
The definitions of the classes on a diagram, their members, inheritance, and composition relationships are all derived directly from the Java source code for those classes. These are all created as Java code, as well as being displayed on the diagram. If you change, add to, or delete from, the source code of any class displayed on the diagram, those changes will be reflected on those classes and interfaces on the diagram. Conversely, any changes to the modeled classes are also made to the underlying source code. Some information relating to composition relationships, or references, captured on a Java class diagram is stored as Javadoc tags in the source code.
A Java class diagram can contain shapes from other diagram types (Oracle ADF Business Components, UML elements, Enterprise JavaBeans, and database objects). Figure 5-33 shows an example of a typical class diagram layout.
All attributes and operations display symbols to represent their visibility. The visibility symbols are: + Public, - Private, # Protected, and ~ Package.
Use the Properties window to set properties for your elements. Each of the elements is represented by a unique icon and description, as shown in Figure 5-34.
Class properties are added to modeled classes and interfaces on a diagram by doing one of the following:
Double-click the modeled class or interface to access the properties dialog.
Right-click the class or interface and choose Properties.
Use the right-click context menu to hide or show attributes and operations elements on your diagram.
To hide one or more attributes or operations:
Select the attributes or operations to hide.
Right-click the selected items and choose Hide > Selected Shapes. To show attributes or operations choose Show All Hidden Members.
Generalized structures are created on a diagram of classes by using the Generalization icon on the Class Components window.
Where an interface is realized by a class, model it using the Realization icon on the Class Components window for the diagram.
A variety of associations can be created between modeled classes and interfaces using the association icons. Associations are modified by double-clicking the modeled association and changing its properties.
Java classes, interfaces, or enums are created on a diagram by clicking on the Java Class icon, Java Interface icon or Java Enum icon on the Java Components window for the diagram, and then clicking on the diagram where you want to create the class. The Java source file for the modeled class or interface is created in the location specified by your project settings.
Java Class, Java Interface, and Java Enum icons are represented on a diagram as rectangles containing the name and details of the Java class. Java classes and interfaces are divided into compartments, with each compartment containing only one type of information.
An ellipsis (...) is displayed in each compartment that is not large enough to display its entire contents. To view a modeled class so that all the fields and methods are displayed, right-click the class and choose Optimize Shape Size, then Height and Width.
Each type of class on a diagram is identified by a stereotype in the name compartment. This is not displayed by default.
Members (fields and methods) display symbols to represent their visibility. The visibility symbols are: + Public, - Private, # Protected. If no visibility symbol is used, the field or method has package visibility.
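As a quick illustration, a hypothetical Java class whose members exercise each visibility level might look like this (all names are invented for the example):

```java
// Hypothetical class showing how Java modifiers map to the
// diagram's visibility symbols.
class Account {
    public String owner;       // shown with + (public)
    private double balance;    // shown with - (private)
    protected int branchCode;  // shown with # (protected)
    long accountNumber;        // no symbol: package visibility

    static int openAccounts;   // static members are underlined on the diagram
}
```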
A diagram can include primary or inner classes from different packages, the current application, or from libraries. Inner Java classes and inner interfaces are defined as members of their 'owning' class. Hence, they are also referred to as member classes.
Inner classes and inner interfaces are displayed in the inner classes compartment of the modeled Java class or interface on the diagram. Inner classes are prefixed with the term Class, and inner interfaces are prefixed with the term Interface, between the visibility symbol and the class or interface name.
To create an inner class or inner interface on a modeled Java class or interface, either add the inner class to the implementing Java code, or create a new Java class or interface as an internal node on an existing modeled class.
Inner Java classes and inner Java interfaces cannot have the same name as any containing Java class, Java interface or package or contain any static fields or static methods.
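A minimal sketch of what such member classes look like in source, using invented names:

```java
// Hypothetical outer class with a member (inner) class and a member
// interface, as they would appear in the inner classes compartment.
class Order {
    // Displayed as "Class LineItem" in the inner classes compartment.
    class LineItem {
        String product;
        int quantity;
    }

    // Displayed as "Interface Discountable". A member interface is
    // implicitly static, unlike the inner class above.
    interface Discountable {
        double discountRate();
    }
}
```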
A variety of references (previously referred to as associations) can be created quickly between classes and interfaces on a diagram using the various reference icons on the Java Class Components window for the diagram. References created between modeled Java classes are represented as fields in the source code of the classes that implement the references. Compositional relationships are represented on the diagram as a solid line with an open arrowhead in the direction of the reference. Table 5-5 displays the references that can be modeled on a diagram.
Note:
If you want to quickly change the properties of a reference on a diagram, double-click it to display the Code Editor and change the details of the reference.
Labels are not displayed on references by default. To display the label for a reference, right-click the reference and choose Visual Properties, then select Show Label. The default label name is the field name that represents the reference. If you select this label name on the diagram and change it, an @label <label_name> Javadoc tag is added before the field representing the reference in the code.
You can change the aggregation symbol used on a reference on a diagram by right-clicking the reference, choosing Reference Aggregation Type, then choosing None, Weak (which adds an @aggregation shared Javadoc tag to the code representing the reference), or Strong (which adds an @aggregation composite Javadoc tag). Aggregation symbols are for documentary purposes only.
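Putting the two tags together, a hypothetical field implementing a modeled reference might carry Javadoc like the following (class and label names are invented):

```java
// Hypothetical classes: the Department-to-Employee reference is a
// field, and the diagram's label and aggregation choices are stored
// as Javadoc tags on that field.
class Employee {
    String name;
}

class Department {
    /**
     * @label staff
     * @aggregation composite
     */
    private Employee manager;

    void setManager(Employee e) { manager = e; }
    Employee getManager() { return manager; }
}
```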
Inheritance structures, which are represented in the Java source as extends statements, can be created on a diagram of Java classes using the Extends icon on the Java Class Components window for the diagram. Extends relationships are represented on the diagram as a solid line with an empty arrowhead pointing towards the extended class or interface.
Where an interface is implemented by a class, this can be created using the Implements icon on the Java Components window for the diagram. Creating an implements relationship adds an implements statement to the source code for the implementing class. Implements relationships are represented on the diagram as a dashed line with an empty arrowhead pointing towards the implemented Java interface.
Extends relationships model inheritance between elements in a class model. Extends relationships can be created between Java classes and between Java interfaces, creating an extends statement in the class definition. Enums cannot extend other classes, or be extended by other classes.
Note:
As multiple class inheritance is not supported by Java, only one extends relationship can be modeled from a Java class on a diagram. Multiple extends relationships can be modeled from a Java interface.
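The single-versus-multiple extends rule can be seen directly in source; the types below are invented for illustration:

```java
// Hypothetical types: a class gets at most one extends relationship,
// while an interface may extend several interfaces at once.
class Vehicle { }
class Car extends Vehicle { }            // single class inheritance only

interface Source { int read(); }
interface Sink { void close(); }
interface Pipe extends Source, Sink { }  // multiple extends allowed
```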
Implements relationships specify where a modeled Java class is used to implement a modeled Java interface. This is represented as an implements keyword in the source for the Java class. Implements relationships are represented on class diagrams as dashed lines with an empty arrowhead pointing towards the interface to be implemented. Enums cannot implement interfaces.
If the implemented interface is an extension (using an extends relationship) of other modeled interfaces, this is reflected in the Java source code for the interface.
A class that implements an interface can provide an implementation for some, or all, of the abstract methods of the interface. If an interface's methods are only partially implemented by a class, that class is then defined as abstract.
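A small, hypothetical example of a partial implementation and the abstract declaration it forces:

```java
// Hypothetical types: PartialShape implements only one of the
// interface's methods, so it must be declared abstract; Square
// supplies the rest and is concrete.
interface Shape {
    double area();
    double perimeter();
}

abstract class PartialShape implements Shape {
    public double perimeter() { return 0.0; }  // partial implementation
}

class Square extends PartialShape {
    double side = 2.0;
    public double area() { return side * side; }
    public double perimeter() { return 4 * side; }
}
```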
You can create members (fields and methods) of a Java class or interface on a diagram. The fields and methods are added to modeled Java classes and interfaces on a diagram by double-clicking the modeled Java class or interface then adding the field or method using the Java Source Editor.
Fields are used to encapsulate the characteristics of a modeled Java class or Java interface. All modeled fields have a name, a datatype and a specified visibility.
When a field or method is displayed on a class on a diagram, it is prefixed with + (if declared as public), - (if declared as private) or # (if declared as protected). Static fields are underlined on the diagram.
Methods are defined on a class to specify the behavior of the class. Methods may have return types, which may be either a scalar type or a type defined by another class.
To invoke a refactoring operation:
Select a program element in a source editor window, databases window, or structure pane.
Right-click on the program element.
Choose an operation from the context menu.
You can also choose Refactor from the toolbar and select a refactoring operation from the drop-down list.
Rename
Move
Make Static
Pull Members Up
Push Members Down
Change Method (Java methods only)
Enterprise JavaBeans (EJBs) modeling helps you visualize your EJB entity relationships and architecture, quickly create a set of beans to populate with properties and methods, and create a graphical representation of those beans and the relationships and references between them. Whenever a bean is modeled, the underlying implementation files are also created.
To model EJBs start by creating an EJB diagram. For more information, see Section 5.3.1.1, "How to Create a New Diagram." You can later add other elements like UML classes, Java classes, business components, offline database tables, UML use cases and web services to the same diagram. For more information, see Section 5.3.2, "Working with Diagram Elements."
The following are the modeling options available:
Entity beans can be either Container-Managed Persistence (CMP) or Bean-Managed Persistence (BMP). Before creating entity beans with bean-managed persistence, you may want to first consider whether you will need to create relationships between those entity beans. Relationships can only be created between entity beans with container-managed persistence.
Session beans can have their session type changed on a class diagram by right-clicking the session bean and choosing Session Type > Stateful or Session Type > Stateless.
Message-driven beans are most often used to interact (using EJB References) with session and entity beans.
Enterprise JavaBeans are created on a diagram by using the Entity Bean, Session Bean, or Message-Driven Bean component on the Components window. Select the component and then click the diagram in the desired spot. The implementation files for the modeled elements are created in the location specified by your project settings.
Tip:
If you want to model the implementing Java classes for a modeled bean on a diagram, right-click the modeled bean and choose Show Implementation Files.
Properties and methods are added by either double-clicking the bean and adding the property or method using the EJB Module Editor or by creating the new property or method 'in-place' on the modeled bean itself.
Modeled session and entity beans are made up of several compartments. Message-driven beans, by contrast, have only a name compartment containing the «message-driven bean» stereotype and the name of the bean. EJB 3.0 beans look different again, because there are no compartments for interfaces.
Notice the relationships and edges between the beans. References can be created from any bean to another bean with a remote or local interface. References can only be modeled between beans that are inside the current deployment descriptor.
To create a diagram of EJB/JPA classes:
Create a new EJB diagram in a project or application in the New Gallery.
Create the elements for the diagram using the EJB Components window. Table 5-6 shows the EJB Components window.
You can model a relationship between any two entities on a class diagram by dragging the relationship component from the Components window. You can also show the inheritance edge between the root and child entity.
To model a relationship between two entities on a diagram:
Click the icon for the relationship to create.
Notes:
The navigability and multiplicity of a relationship end can be changed after it has been created. If EJB component icons are not displayed, select EJB Components from the dropdown list on the Components window.
Click the entity at the 'owning', or 'from', end of the relationship.
Click the entity bean at the 'to' end of the relationship.
Click the relationship line on the diagram, then click the text fields adjacent to the association to enter the relationship name.
Note:
To change the multiplicity of a relationship end on the diagram, right-click on the relationship end and choose either Multiplicity > 1 or Multiplicity > *.
References can be created from any bean to any other bean with a remote interface using the EJB Reference icon and local references can be created from any bean to any other bean with a local interface using the EJB Local Reference icon on the EJB Components window for the diagram.
A variety of relationships can be created quickly between modeled entity beans using the 1 to * Relationship, Directed 1 to 1 Relationship, Directed 1 to * Relationship, and Directed Strong Aggregation icons.
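As a rough sketch of the shape a directed 1 to * relationship ultimately takes in entity source, the '1' side holds a collection-valued field while the '*' side holds no reference back (plain Java here; in a JPA-style entity the field would carry a relationship annotation, which is omitted to keep the sketch self-contained):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical entities: Customer navigates to its orders, but
// PurchaseOrder has no field back to Customer, which is what makes
// the relationship directed.
class PurchaseOrder {
    int orderId;
}

class Customer {
    // The '*' end of the relationship, realized as a collection field.
    private List<PurchaseOrder> orders = new ArrayList<>();

    void addOrder(PurchaseOrder o) { orders.add(o); }
    int orderCount() { return orders.size(); }
}
```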
Properties can be added to modeled EJBs by either double-clicking the bean and adding the property or method using the EJB Module Editor or by creating the new property or method directly on the modeled bean.
When creating a property directly on a modeled bean, enter the name and datatype of the property. For example:
name : java.lang.String
A public (+) visibility symbol is automatically added to the start of the property.
Note:
If a property type from the java.lang package is entered without a package prefix, for example, String or Long, a property type prefix of java.lang. is automatically added. If no type is given for a property, a default type of String (java.lang.String) is used.
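So a property entered in place as name : String would end up in the bean source roughly like this (the bean name and accessors are invented for the example):

```java
// Hypothetical bean class: the property becomes a java.lang.String
// field, shown on the diagram as +name : java.lang.String.
class CustomerBean {
    private java.lang.String name;

    public java.lang.String getName() { return name; }
    public void setName(java.lang.String name) { this.name = name; }
}
```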
Both local/remote and local/local home methods can be created on modeled beans on a class diagram.
When creating a method in-place on a modeled bean, enter the name, and optionally the parameter types and names, and return type of the method. The method return type must be preceded by a colon (:). For example:
getName(String CustNumber) : java.lang.String
A public (+) visibility symbol is automatically added to the start of the method.
Notes:
If a return type from the java.lang package is entered without a package prefix, for example, String or Long, a return type prefix of java.lang. is automatically added in the method's Java source. If no parameter types are provided, the method is defined with no parameters. If no return type is specified, a default return type of void is used. To change a property of the method, double-click the class on the diagram or in the Applications window, then change the details of the method using the EJB Editor.
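For instance, the in-place entry getName(String CustNumber) : java.lang.String would produce a method along these lines (the class name and stub body are invented):

```java
// Hypothetical bean class: the entered signature becomes a public
// method; the body here is only a stub for illustration.
class AccountBean {
    public java.lang.String getName(java.lang.String custNumber) {
        return "account-" + custNumber;  // stub body
    }

    // With no parameters and no return type given, the defaults apply:
    public void refresh() { }
}
```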
References can be created between modeled beans on a class diagram.
EJB References can be created from any bean to any other bean with a remote interface.
EJB Local References can be created from any bean to any other bean with a local interface.
Note:
References can only be made to beans that are inside the current deployment descriptor.
To model a reference between modeled beans:
Click the icon from those listed on the EJB Components window:
EJB Reference
EJB Local Reference
Click the bean at the 'owning', or 'from', end of the reference.
Click the bean at the 'to' end of the reference.
Each modeled bean has underlying Java source files that contain the implementation code for that element. These implementation files can be displayed on the diagram as modeled Java classes.
To display a modeled implementing Java class for a modeled bean:
Select the bean whose Java implementation you want to model on the diagram, then choose Model > Show > Implementation Files.
Or, right-click the bean and choose Show Implementation Files.
The Java source code for a modeled bean can be displayed in the source editor with simple commands on the diagram.
To display the Java source code for a model element:
Right-click the element on the diagram. Choose Go to Source, then choose the source file you want to view.
Or
Select the element and choose Model > Go to Source.
You can change the accessibility of a property or method using right-click.
To change the accessibility of a property or method:
Right-click the property or method you want to change.
Choose the required accessibility option from the Accessible from option.
The accessibility options are:
Local Interface
Remote Interface
Local and Remote Interfaces
Modeled entity beans can be reverse-engineered on a diagram of EJBs from table definitions in your application database connection.
To reverse-engineer a table definition to an entity bean:
Open or create a diagram.
Expand the node in the Connections window for your database connection.
Expand the user node, then the Tables nodes.
Click the table whose definition you want to use to create an entity bean, and drag it to the current diagram.
To reverse-engineer several tables to entity beans, hold down the Ctrl key, select the tables in the Databases window and drag them to the diagram, then release the Ctrl key.
Select the EJB version and click OK.
Modeling your database structures gives you a visual view of your database schema and the relationships between the online or offline tables. You can also transform database tables to UML classes and interfaces, and vice-versa, using the transformation features. For more information on database transformation, see Section 5.3.6.2, "How to Transform UML and Offline Databases."
With JDeveloper, you can model offline database objects as well as database objects from a live database connection. You can also create database objects such as tables and foreign key relationships right on your diagram and integrate them with an online or offline database. Offline objects appear in the Projects accordion under the offline database heading, and online objects appear under the database connection in the Application Resources accordion of the Databases window.
Use database diagrams to view the structure of your database objects and relationships, and to create components such as tables and foreign key relationships, views and join objects, materialized views, synonyms, and sequences directly on your diagram.
All of the database objects from online or offline databases, as well as the new objects you create are displayed in the Applications window.
Create your database diagram using the New Gallery. See Section 5.3.1.1, "How to Create a New Diagram."
Once your database diagram is created, you can choose from the components in the Components window, as shown in Figure 5-36.
Create an offline database object on the diagram by clicking its icon on the Database Objects Components window, and then clicking on the diagram where you want to create the object.
You can also drag objects from a database connection in the Databases window, or from an offline schema in the Applications window, onto a diagram. Dragging does not create new objects, but adds the existing objects to the database diagram.
Foreign keys can be created by clicking Foreign Key on the Database Components window, then clicking the table you want to originate the foreign key, and then clicking the destination table for the foreign key. The Create Foreign Key dialog allows you to select an existing column in the target table, or create a new column.
You can create join objects between two table usages in a view by clicking Join Objects, then clicking on the two table usages to be joined. The Edit Join dialog allows you to specify the join.
All templates are defined in the Offline Database Properties dialog. If the object being created has a template defined in the offline database in which it is being created, then using the <ObjectType> component creates objects based on that template. For example, if the offline database Database1 has a template table MyTab, then when you create a new table in Database1 using the Table option in the Components window, the new table is based on the template MyTab. The existing objects from the offline database are added to the diagram.
Synonyms are created on a diagram by clicking Synonym on the Database Components window, and then clicking on the diagram where you want to create the synonym. You can also drag objects from an online database connection onto the diagram, which imports them from the database connection into the offline database.
Public synonyms are created in the PUBLIC schema.
Define a base relation for a view by clicking on Relation Usage on the Database Components window, and then clicking on the view.
Sequences are created on a diagram by clicking Sequence on the Database Components window for the diagram, and then clicking on the diagram where you want to create the sequence. You can also drag sequences from a database connection, or from an offline database in the Applications window, and drop them on the diagram. Dragging objects from an online database connection imports them into the offline database and adds them to the diagram.
Sequences based on templates are created on a diagram by clicking Sequence from Template on the Database Components window for the diagram, and then clicking on the diagram where you want to create the sequence. The Choose Template Object dialog is displayed, which allows you to choose the template you want to base the sequence on. You can also create a sequence by dragging one from an online database connection; the existing object is imported into the offline database and added to the diagram. Similarly, existing objects from the offline database can be added to the diagram.
Synonyms based on templates are created on a diagram by clicking Synonym from Template on the Database Components window for the diagram, and then clicking on the diagram where you want to create the synonym. The Choose Template Object dialog is displayed, which allows you to choose the template you want to base the synonym on. You can also create a synonym by dragging one from an online database connection; the existing object is imported into the offline database and added to the diagram.
Tables are created on a diagram by clicking on Table on the Database Components window, and then clicking on the diagram. You can also drag tables from a database connection, or from an offline database in the Applications window, and drop them on the diagram. Similarly, existing objects from offline DB can be added to the diagram the same way.
You can choose to view table column icons on a modeled table which indicates which columns are primary keys, or foreign keys, or unique keys.
The first column in the modeled table indicates whether the column is in a primary, unique, or foreign key:
Column is in a primary key
Column is in a foreign key
Column is in a unique key
The second column indicates whether the table column is mandatory.
Note:
If a table column is in a primary key it will only display the primary key icon even though it may also be in a unique key or foreign key.
Tables are created on a diagram by clicking on Table from template on the Database Components window, and then clicking on the diagram. The Choose Template Object dialog is displayed, which allows you to choose the template to base the materialized view on. You can also create a table by drag dropping from the online database connection. Objects can be imported from a database connection to offline database by adding the existing objects from database connection to the diagram.
Views are created on a diagram by clicking on View, and then clicking on the diagram. Views can also be created by drag dropping objects from online database connection. Objects can also be imported from database connection to offline database by adding existing objects from the database connection to the diagram.You can also create a a database view by drag dropping from the online database connection, or objects can be imported from a database connection to offline database by just add the existing objects from database connection to the diagram.
Define the view by adding tables and views, or table columns or elements of other views to the newly defined view. You can also drag views from a database connection, or from an offline database in the applications window, and drop them on the diagram.
Views are created on a diagram by clicking on View from template, and then clicking on the diagram. The Choose Template Object dialog is displayed to choose the template you want to base the materialized view on.
On the database diagram, right-click and choose Create Database Objects In > Database or Schema.
Complete the Specify Location dialog or Select Offline Schema dialog.
Note:
All subsequent database objects will be created in the database or schema you have chosen. Existing objects are unchanged.
Use activity diagrams to model your business processes. Your business process are coordinated tasks that achieve your business goals such as order processing, shipping, checkout and payment processing flows.
Activity diagrams capture the behavior of a system, showing the coordinated execution of actions, as shown in Figure 5-37.
The Components window contains the elements available for your activity diagram. An Activity is the only element that you can place directly on the diagram. You can place the other elements inside an Activity. Each of the elements is represented by unique icons as well as descriptive labels, as shown in Figure 5-38 and Table 5-7.
use the New Gallery wizard to create your activity diagram following the steps in Chapter 5, "How to Create a New Diagram".
To create nodes, click the Initial Node icon, the Activity Final Node icon, or Final Flow Node icon in the Component window, then click on the diagram where you want to place the node.
You create partitions on a diagram by selecting an action, then selecting Show Activity Partition under Display Options in the Properties window.
To show a partition on an activity diagram:
In the activity diagram, select an action.
In the Properties window, expand the Display Options node.
Select Show Activity Partition. The action on the diagram displays the text, (No Partition).
Click on the text. An editing box appears where you can enter a name for the partition.
The sequence diagram describes the interactions among class instances. These interactions are modeled as exchanges of messages. At the core of a sequence diagram are class instances and the messages exchanged between them to show a behavior pattern, as shown in Figure 5-39.
The elements you add from the Components window 5-40 displays the elements in the Components window available to use in your sequence diagram. Each element is represented by a unique icon as well as a descriptive label.
use the New Gallery wizard to create your activity diagram following the steps in Chapter 5, "How to Create a New Diagram". databases window 5-41 5-42 shows the combined fragments that display in the Components window when your diagram is open in the diagramming window.
Use case diagrams overview the usage requirements for a system. For development purposes, use case diagrams describe the essentials of the actual requirements or workflow of a system or project, as shown in Figure 5-43.
Use case diagrams show how the actors interact with the system by connecting actors with their actions. If an actor supplies information, initiates the use case, or receives information as a result of the use case, then there is an association between them.
Figure 5-44 displays the Components window with the elements available to add to your use case diagram. Each element is represented by a unique icon and descriptive label.
An Interaction is the only element you can add directly to the diagram. You put all of the other elements within an Interaction.
You can determine the appearance and other attributes for subject, actor and other objects of these types by modifying the properties in the Properties window, or by right-clicking the object and modifying the properties, or by creating and customizing an underlying template file.
You can use templates to add the supporting objects to the Components window. For more information, see Chapter 5, "How to Create Use Case Components Templates".
You can show the system being modeled by enclosing all its actors and use cases within a subject. Show development pieces by enclosing groups of use cases within subject lines. Add a subject to a diagram by clicking on Subject in the Components window, then drag the pointer to cover the area that you want the subject to occupy. Figure 5-45 Components window, Components window. Components window under Diagram Annotations.
You can represent interactions between use cases and subjects using the Communication icon on the Components window..
Select the Editor tab and the text that you want to change, then choose one or more of the formatting options from the toolbar. Components window there must be one use case template that supports it. The use case template that you specify is not copied or moved: it remains in its original location.
To create a new Components page and add a new use case component:
Go to Tools > Configure Palette.
In the Configure Components dialog, select use case for Page Type.
Click Add.
Enter a name for the new Components page.
From the drop-down list, choose Use Case, then click OK. The new page appears in the Components window. The Pointer item is included automatically.
Open the context menu for the component area of the Components window. | http://docs.oracle.com/middleware/1212/jdev/OJDUG/dev_apps_modeling.htm | CC-MAIN-2016-40 | refinedweb | 10,444 | 55.34 |
Use this How-To to set WMI root level security for your Spiceworks scan account.
Credit to Steve Patrick of Spat's WebLog for provideing the info to get this to work.
Complete steps 1 - 5 from your workstation or a test workstation. I recommend you run step 7 from your sysvol folder (assuming your in a domain environment).
Start - Run - compmgmt.msc
Expand 'Services and Applications'
Right click 'WMI Control"
Select 'Properties'
Click 'Security' tab
Click to highlight 'Root'
Click 'Security' button in lower left corner.
In this case I added spiceworks@mydomain.com and gave it full permission (you may want to give it read only).
Once you have done this, click OK to all dialog boxes and close the snapin.
Open a command window with administrative credentials (start - run - cmd - [right click on cmd.exe] click "Run as Administrator"), at the c:\> prompt, type the following then press enter:
wmic /namespace:\\root /output:c:\sd.txt path __systemsecurity call getSD
note: The picture is incorrect, please use the above syntax, use the picture for reference.
From this document, you are interested in the highlighted portion. I have blurred a portion of mine. We only want the numbers and the commas, no brackets or anything else.
If you are using notepad, don't forget to turn on wordwrap.
Create a new txt document and rename it WMISecurity.vbs.
Edit the document and past the following:
strSD = array(*****PLACE YOUR DATA FROM STEP 5 HERE*******)
set namespace = createobject("wbemscripting.swbemlocator").connectserver(,"root")
set security = namespace.get("__systemsecurity=@")
nStatus = security.setsd(strSD)
At this point, you can use your favorite scripting utility to deploy your script (please test before putting into production).
You could also add the line:
cscript (path to your script)\WMISecurity.vbs
to a .bat file and call it from Active Directory, GPO login script, or just run it manually from each machine having WMI security issues.
Take a look at Adam1818 excellent How-To on 'Spiceworks Computer Scan Error Repair' located here:
Sometimes there are more than just WMI Security issues to deal with and this is a great tool to fix multiple issues.
on step 4, please note you need to run cmd as admin on vista and 7 computers, just thought id note that :D
Get Security Descriptor from computer
ESR 1, you are correct, it does need to be run from an elevated command prompt. It has been changed. | https://community.spiceworks.com/how_to/2447-set-wmi-security-via-login-script-gpo | CC-MAIN-2017-13 | refinedweb | 406 | 65.62 |
react-query v1.0 was released on 26 February, which brought about a change in the react-query API and all-new dedicated devtools.
In this post, I will be discussing the following changes on:
- Query keys and query functions
useQueryHook
- The new queries operation handler,
queryCache
- react-query-devtools
A comprehensive list of the updates (mostly minor changes) can be found on the changelog.
Moving on, I’ll be discussing these changes in the following sections but it is essential that you check this article where I talked about react-query and built a sample first.
Updating react-query
In your existing application, update your react-query package with either of these commands, depending on the package manager you’ve chosen:
npm install react-query // or yarn add react-query
Query keys and query functions
Query keys
The new query keys in react-query can now entertain more serializable members in the array constructor as opposed to the previous limitation of only a
[String, Object] member, giving more insight and detail to your queries.
Example:
//old const { data } = useQuery(["recipes", { id: recipe.id }]) // new const { data } = useQuery(["recipes", {id: recipe.id}, status])
Query functions
The query functions in the older versions of react-query accepted only one argument, which is the query object pointing to the data to be retrieved. However, the new version of react-query requires that all query key items are passed into query functions retrieving data from a source.
In the old version, the query functions were written as:
export async function fetchRecipe({ id }) { return (await fetch( ` )).json(); }
But, in the new version, the above query is rewritten as:
export async function fetchRecipe(key, { id }) { return (await fetch( ` )).json(); }
In the above, the
key argument there is the query name from the
useQuery Hook where this query function will be used. This new addition is very important as it enables the query function to act on a specific query where it is called from.
This is a breaking change; in newer versions, the old method of writing query functions will not work.
useQuery Hook
In the
useQuery Hook, the
paginated optional argument has been removed due to the introduction of two new Hooks:
usePaginatedQuery and
useInfiniteQuery. This includes the following options and methods as well:
isFetchingMore
canFetchMore
fetchMore
The
useQuery Hook still maintains its mode of operation.
queryCache
import { queryCache } from "react-query";
The
queryCache instance is responsible for managing all state activities that a query undergoes in react-query. It manages all of the state, caching, lifecycle, and magic of every query. It has a number of methods, such as the
prefetchQuery, which was previously an independent Hook. The methods under the
queryCache instance are:
1.
queryCache.prefetchQuery([, query], function, …)
Originally an independent Hook in react-query before the release of version 1.0.0, the
queryCache.prefetchQuery() method prefetches data and stores it in cache before the data is required by the application.
The old
prefetchQuery Hook is now discontinued and is no longer available. As such, if your application uses this Hook, you’ll have to replace
prefetchQuery() with
queryCache.prefetchQuery(arg) to avoid breaking your app upon updating the react-query package.
In older versions:
import { useQuery, prefetchQuery } from "react-query"; <Button onClick={() => { // Prefetch the Recipe query prefetchQuery(["Recipe", { id: Recipe.id }], fetchRecipe); setActiveRecipe(Recipe.id); }} >
In the new version:
import { useQuery, queryCache } from "react-query"; <Button onClick={() => { // Prefetch the Recipe query queryCache.prefetchQuery(["Recipe", { id: Recipe.id }], fetchRecipe); setActiveRecipe(Recipe.id); }} >
2.
queryCache.getQueryData(querykey)
This is a synchronous method that returns the data corresponding to the query key passed into it from the cache. If the query doesn’t exist or cannot be found,
undefined is returned.
Example:
import { queryCache } from "react-query"; const data = queryCache.getQueryData("Recipes") // Returns the list of recipes present else undefined.
3.
queryCache.setQueryData(querykey, updater)
This method updates a query whose identifier has been passed into the method with new data passed as the
updater value. The
updater value can either be the value to be updated or a function to update the query.
Example:
import { queryCache } from "react-query"; queryCache.setQueryData("Recipes", ["Toast Sandwich", "Brocolli"]); queryCache.setQueryData(queryKey, oldData => newData);
setQueryData is a synchronous method that updates the passed query immediately and creates a new query if the passed query doesn’t exist.
4.
queryCache.refetchQueries(querykey)
This method refetches a single or multiple queries, depending on which is passed into it. This method is particularly useful where you want to refresh you app to get new data but do not want to reload the whole page to avoid re-rendering all the components.
Here is an example where
refetchQueries is used in an
onClick function to reload the list of recipes on a page:
import { queryCache } from "react-query"; <Button onClick={() => { queryCache.refetchQueries("Recipes"); }}> Refesh Recipes </Button>
In the above code, once the button is clicked, the
Recipes query is refetched and the page updated with new recipes if the query has been updated.
5.
queryCache.removeQueries(queryKeyorFn, { exact })
This method removes queries from the cache based on the query key passed into it. Queries can also be removed by passing a function instead of a query key.
Example:
import { queryCache } from "react-query"; queryCache.removeQueries("Recipes") // Removes all cached data with query key `Recipes`.
6.
queryCache.getQuery(queryKey)
This method returns complete information on a query: instances, state, query identifier, and query data from the cache. This is the query method utilized in react-query-devtools, which we’ll discuss later in this post.
It tends to be unnecessary in most scenarios but comes in handy when debugging. You’d use it like this:
import { queryCache } from "react-query"; queryCache.getQuery("Recipes"); // Returns complete information about the "Recipes" query
7.
queryCache.isfetching
This method returns an integer of the queries running in your application. It is also used to confirm whether there are running queries.
import { queryCache } from "react-query"; if (queryCache.isFetching) { console.log('At least one query is fetching!') }
Note that this isn’t a Boolean method.
8.
queryCache.subscribe(callbackFn)
The
subscribe method is used to subscribe to the query cache as a whole to inform you of safe/known updates to the cache, like query states changing or queries being updated, added, or removed. This method also comes in handy when debugging.
It is used like this:
import { queryCache } from "react-query"; const callback = cache => {} const unsubscribe = queryCache.subscribe(callback)
9.
queryCache.clear()
This method clears every query presently stored in cache. This method can be used when unmounting components.
import { queryCache } from "react-query"; queryCache.clear();
This marks the end of the new
queryCache features. Let’s move on to the new react-query-devtools.
react-query-devtools
Like other devtools, react-query-devtools enables you to keep track of the query operations in your application. It can either be embedded on your app or kept afloat, giving you the option to keep it open or closed.
You can install react-query-devtools through Yarn or npm:
npm install react-query-devtools // or yarn add react-query-devtools
Operations
react-query-devtools allows you to monitor the state of your queries, view data retrieved from queries, remove queries from cache and refetch queries. In the devtools console, there are four indicators of state of a running query:
- Fresh: This indicates that the query is a new one and transits into the next state almost immediately
- Fetching: This indicates that the query is being fetched from its fetcher function
- Stale: This indicates that the query has been fetched and is on standby. Queries in this state rerun when there’s a window focus on them (except when turned off from the
ReactQueryConfigProvider)
- Inactive: This indicates that the query operation has been completed
Attached below is a short clip of react-query-devtools in action, demonstrating the query operation processes:
Conclusion
The new updates to react-query are pretty excellent. The addition of the devtools makes it easy to build apps and debug with react-query.
Check here to reference the code snippets used in the new features explanations above. Keep building amazing things, and be sure to keep checking the blog for crispy new posts ❤ . “What’s new in react-query v1.0”
Is SWR still your preference for data fetching after this update to react-query? | https://blog.logrocket.com/whats-new-in-react-query-v1-0/ | CC-MAIN-2022-21 | refinedweb | 1,391 | 54.22 |
Windows Server 2012 Essentials Build document
<in progress - placeholder>
The assumption for Windows Server 2012 Essentials (hereinafter called WSE12) is that it will be the first domain controller in the network. Note this does not mean it has to be the only DC, just that like the products it inherited it's legacy from, that
it has to hold the FSMO roles. The assumption is that the external router will perform the role of DHCP and provide WSE12 with a dynamic IP address. Whilst you can install the role later after the server is installed and assign the server a static IP, it
is assumed that DHCP will be enabled and running on the router as you build the WSE12 server. Please review the router setup document
here.
When installing the DHCP role, ensure that the other DHCP servers are turned OFF before you install the role.
A proper How-To is here:
Official SBS Blog: Running DHCP Server on SBS 2011 Essentials With a Static IP.
As a suggestion, you may wish to shrink that last volume and divide the space into two volumes, one for data storage and the other for client backup files. Doing so will give you the flexibility to not backup the volume containing the client backup files.
On the Dashboard - the devices tab - under Device Tasks click Implement Group Policy, which will setup policies for Folder Redirection and Security Settings for Windows Update, Windows Defender and firewall.
Follow the OEM instructions here: Add Branding to the Dashboard, Remote Web Access, and Launchpad:
Tip from
Essentials can support an on premises Exchange server on a member server, hosted Exchange or an email deployment with Office 365. You can even look into using a third party mail solution such as Kerio.
For an on premises Exchange server - follow Robert Pearman's Exchange script that will automatically install the Exchange server and allow it to be configured. Consider it a wizard without a GUI. Full details here: This works for sure with Server 2008 r2
and Exchange 2010 - On Prem Exchange Windows Server 2012 Essentials: The Script! « Title (Required):
To add network printers follow Robert Pearman's post on how to add printers:
You want to follow Robert Pearman's post on this topic:
If you added users via ADUC or migrated to Essentials and now the user isn't in the console - follow this:
Type *cd “\Program Files\Windows Server\Bin”*, and press ENTER.
Type *WssPowerShell.exe*, and then press ENTER.
Type *Import-WssUser –SamAccountName <username>*, and then press ENTER.
Repeat the previous step for each user name that you want to import into the Dashboard. domain, you want them to remain totally independent for many a reason.
Here is a nugget of information that you will love then. When you run the Connect computer wizard () you can avoid the domain join bit by simply running this command in an elevated command prompt beforehand:
reg add "HKLM\SOFTWARE\Microsoft\Windows Server\ClientDeployment" /v SkipDomainJoin /t REG_DWORD /d 1
....then, run the wizard and you will get all the good backup and home features without the domain join.
(taken from
see link
If you still want to use RWA with this non domain workstation
1. When prompted for the RD Gateway Credentials
- Use a Standard User defined on the WSE server. That user must be given permission to access the client machine-- done by double-clicking the user and selecting the Computer Access tab. Give it a few seconds to populate as it can be as slow as molasses.
- Use the format domain\username. The domain is the domain name created when the WSE server was first built. For example: [wse domain name]\[wse standard username]
2. If that works, it prompts with the Windows Security dialog from the client computer.
- Click on "Use another account" and once again use the domain\username format. But this time the domain should be the name of the client computer. The user should be a user of the client machine that has been given remote access. For example: [client machine
name]\[client username]
See this
wiki post.
If you see the following error in Health Reports...
--------------------------
DFSR Event ID: 2147485861: 4C7D4FA6-61AF-11E2-93ED-806E6F6E6963
---------------------------
Steps to prevent the error...
1. Change the registry key:StopReplicationOnAutoRecovery value to 1
Open Command Prompt | Regedit
Go to key: HKLM\System\CurrentControlSet\Services\DFSR\Parameters\StopReplicationOnAutoRecovery
Make sure StopReplicationOnAutoRecovery key is set to: 1
2. From an elevated command prompt, type the following command:
wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="4C7D4FA6-61AF-11E2-93ED-806E6F6E6963" call ResumeReplication
(all on one line)
3. After successfully running the WMIC command:
Open Command Prompt | Regedit
Go to key: HKLM\System\CurrentControlSet\Services\DFSR\Parameters\StopReplicationOnAutoRecovery
Change StopReplicationOnAutoRecovery from 1 to 0
Quit Regedit
For more information, see. | https://social.technet.microsoft.com/wiki/contents/articles/13620.windows-server-2012-essentials-build-document.aspx | CC-MAIN-2018-13 | refinedweb | 803 | 52.19 |
My boss asked me to write
a program that accepts an array of integers and prints the minimum of the numbers
What does he mean by that ??
My boss asked me to write
a program that accepts an array of integers and prints the minimum of the numbers
What does he mean by that ??
You'd probably have to ask him or her to be sure.
Normally that would mean reading integers from a file or standard input and then outputting to a file or standard output the smallest number you read in.
Can you help me with the Code please ?
:-) Invisible ?
We generally help with existing code. You try and if you have a specific question you ask. It's hard to help with the code when there is no code to help with.
When you have code to show, make sure to post it in [code][/code] tags.
Code:#include <iostream> using namespace std; char DoMenu(); void find_largest(); void find_smallest(); int main() { char choice; do { choice = DoMenu(); switch (choice) { case 'a': // fall through case 'A': find_largest(); break; case 'b': case 'B': find_smallest(); break; } } while ((choice != 'C') && (choice != 'c')); return 0; } char DoMenu() { char MenuChoice; cout << "A - Find the largest # with a known quantity of numbers\n"; cout << "B - Find the smallest # with an unknown quantity of numbers\n"; cout << "C - Quit\n"; cout << "Please enter your choice: "; cin >> MenuChoice; return MenuChoice; } void find_largest() { int largest = 0; int numbers=0, input, i=0; cout << endl; cout << "How many numbers to enter: "; cin >> numbers; for (i=0; i<numbers; i++) { cout << "Enter number " << i+1 << ": "; cin >> input; if (input > largest) largest = input; } cout << endl; cout << "Largest: " << largest << "\n\n"; } void find_smallest() { int smallest = 9999; int input=0, i=1; cout << endl; do { cout << "Enter number " << i << ": "; cin >> input; i++; if ((input < smallest) && (input != -99)) smallest = input; } while (input != -99); cout << endl; cout << "Smallest: " << smallest << "\n\n"; }
That's nice code that you apparently found on the internet:
But if you aren't going to do any work yourself it makes no sense for us to do it for you.
Now, as I have Code, I can study it and learn from it...
I encourage you to do so. I just wanted others to be aware, in case it wasn't obvious, that your posted code wasn't something you came up with and needed help with.
Your boss asked you to write this eh? Seems very simplistic for 'work stuff'. | https://cboard.cprogramming.com/cplusplus-programming/99692-program-accepts-array-integers-prints-minimum-numbers.html?s=71e7b8cd332e50b0ef0bf72985a79274 | CC-MAIN-2019-51 | refinedweb | 410 | 74.83 |
In this tutorial, we’ll learn how to use React Dropzone to create an awesome file uploader. Keep reading to learn more about react-dropzone.
Read part two in the React file upload series: Upload a File from a React Component
A Basic React Dropzone File Picker
We’re going to start by building a very simple file picker using React Dropzone.
As an extra challenge, try to follow this tutorial using React Hooks. If you’re new to Hooks, I’ve written a simple introduction to React Hooks.
Jump into App.js and start by removing the boilerplate code in the render function. After you’ve stripped out this unwanted code, import the React Dropzone library.
To keep things simple, we’ll name the method the same as the prop: onDrop.
Our onDrop method has a single parameter, acceptedFiles, which for the time-being we log out to the console. This is only for testing purposes.
Save the component, open your browser and go to your running React app. Click the text label and a file picker window will open up! Great! We’ve got a basic file picker working. 🎉 you’ve seen. This is because it uses the Render Props technique.
The Render Prop Function is used to change the HTML inside of the Dropzone component, based on the current state of Dropzone.
To demonstrate, let’s add a variable to the Render Prop function called isDragActive. This gives us access to Dropzone’s current drag state.
Now that we have access to the isDragActive state, we can change the text value} accept="image/png" minSize={0} maxSize={maxSize} > {({getRootProps, getInputProps, isDragActive, isDragReject, rejectedFiles}) => { const isFileTooLarge = rejectedFiles.length > 0 && rejectedFiles[0].size > maxSize; return ( > )} } < multiple prop to the React Dropzone component declaration, like so:
<Dropzone onDrop={this.onDrop} accept="image/png" minSize={0} maxSize={maxSize} multiple > ... </Dropzone>
React Dropzone using Hooks
Since writing this tutorial React Hooks have been officially released, and the react-dropzone library has been updated to include a custom useDropzone Hook.
Therefore, I’ve re-written the whole App.js as a functional component using the useDropzone custom hook provided by
import React, { useCallback } from 'react'; import { useDropzone } from 'react-dropzone' const App = () => { const maxSize = 1048576; const onDrop = useCallback(acceptedFiles => { console.log(acceptedFiles); }, []); const { isDragActive, getRootProps, getInputProps, isDragReject, acceptedFiles, rejectedFiles } = useDropzone({ onDrop, accept: 'image/png', minSize: 0, maxSize, }); const isFileTooLarge = rejectedFiles.length > 0 && rejectedFiles[0].size > maxSize; return ( <div className="container text-center mt-5"> > </div> ); }; export default App;
Showing a List of Accepted Files
One nice touch we could add to our react dropzone component is to see a list of accepted files before we upload them. It’s a nice bit of UX that goes a long way to adding to the experience.
The useDropzone Hook provides us a variable, acceptedFiles, which is an array of File objects. We could map through that array and display a list of all the files that are ready to be uploaded:
... <ul className="list-group mt-2"> {acceptedFiles.length > 0 && acceptedFiles.map(acceptedFile => ( <li className="list-group-item list-group-item-success"> {acceptedFile.name} </li> ))} </ul> ...
Wrapping Up
There you have it! A simple file picker component built using React Dropzone. If you enjoyed the tutorial or ran into any issues, don’t forget to leave a comment below. 👍
💻 More React Tutorials
Is part 2 “Adding file to server” available yet?
Can you also explain how to upload only 1 file?
Thanks a lot!!!
How to reduce image size while uploading
How can i save this files on a folder?
I don’t understand where do I put the code of showing the list of files.
Also, can i limit it to one file and don’t accept more?
I have two dropzone. I want the content of my 1st dropzones to also put it in 2nd dropzone. It’s like the 2nd dropzone will mirror the 1st dropzone. How can i do it?
How can I add preview thumbnail?
How to clear files array after drop, I set a limit to 4, but on every upload, this value should be clear. I have created thumbs using acceptedFiles pushing into another array, while deleting those thumbs my dropzone input files array also need to be clear, please support me.
Cool! It was really easy to understand) Thanks 🙂
Best Wishes!
Hello, Can you tell me how I can render a list of uploaded files?
how can i delete(Remove) added document.?? eg:- Artboard.png
And
How to add one document as multiple time
Have you run into issues with Dropzone only uploading one file on Samsung devices shen multiple files have been selected?
can’t wait to read that blog post | https://upmostly.com/tutorials/react-dropzone-file-uploads-react?utm_campaign=Fullstack%2BReact&utm_medium=web&utm_source=Fullstack_React_125 | CC-MAIN-2020-29 | refinedweb | 785 | 58.38 |
Fix typo.
Adjust for recent devd(8) import.
Adjust for dhcpd/dhcrelay removal.
MFC: Update rc.conf.5: * Drop superfluous 'This variable contains' * Drop note that dhclient is ISC * Break network_interfaces item into ifconfig_interface etc. parts * Add Xref for rtadvd.8 * Add /etc/startif.interface to FILES section * Improve markup
Update hammer.8: * Drop superfluous 'This variable contains' * Drop note that dhclient is ISC * Break network_interfaces item into ifconfig_interface etc. parts * Add Xref for rtadvd.8 * Add /etc/startif.interface to FILES section * Improve markup
MFC: Remove more (x)ntpd remains.
Remove more (x)ntpd remains.
Adjust date for bthcid(8) variables.
Nuke the ntpd(8). Approved-by: dillon@
Document bthcid(8) related variables in rc.conf(5).
Add dot (.) forgotten in last commit. Noticed-by: swildner@
Mention wpa_supplicant(8) and wpa_supplicant.conf(5) if a user adds WPA keyword to an ifconfig_$foo line.
Remove some unused variables.
Add etc/rc.d/newsyslog to the build (seems to have been forgotten).
Add some more rc vars for Bluetooth to rc.conf and document them.
* Document the valid values for variables of type bool. They is already documented in rc.subr(8) but it seems appropriate to add this info here, too. * Mention that '_enable' is not needed.
Remove references to kldxref(8) which we don't have.
Hardware sensors framework originally developed in OpenBSD and ported to FreeBSD by Constantine A. Murenin <mureninc at gmail.com>. Obtained-from: OpenBSD via FreeBSD GSoC 2007 project
Add forgotten section number.
Bring some changes from FreeBSD into the jail rc script. Submitted-by: Pawel Biernacki <kaktus@acn.pl> With some adjustments by me.
Add some words about rtsold_{enable,flags}.
Add a hostapd rc script. Obtained-from: FreeBSD
Remove some non-existent variables and clean up the manpage a bit.
Nuke some obsolete named(8) related variables which were removed from the system when we upgraded to BIND9 in 2004.
Document pf(4) related variables.
Oops, add variable types.
Document some variables which were added with Andreas Hauser's rc.firewall rewrite of 2004.
Kill documentation for harvest_{interrupt,ethernet,p_to_p} which were removed in 2004.
Remove etc/rc.d/archdep which didn't serve any real purpose.
Fix typo.
Miscellaneous mdoc fixes.
Next round of fixing all kinds of spelling mistakes.
Another round of spelling fixes in manpages, messages, readmes etc.
My round of spelling corrections in /share/man.
Add reference to dntpd(8) in SEE ALSO.
* Add descriptions for: dntpd_enable, dntpd_flags, dntpd_program, dhcpd_enable, dhcrelay_enable, ftpd_enable, ftpd_flags, mixer_enable, nfs_client_flags, rand_irq, resident_enable, varsym_enable, vidhistory * Add references to the associated manual pages. * Move sshd_program below sshd_enable. * Minor cleanup. There's still some work left.
Remove ldconfig_paths_aout which was removed from the system.
Remove more DEC Alpha support.
Remove devd specific documentation.
fix case
Ports -> pkgsrc
Mop up remains of the ibcs2/streams/svr4 removal: * Remove streams(4) and svr4(4) manual pages. * Add associated modules and their manual pages to the list of files to be removed upon 'make upgrade'. * Remove IBCS2 and SPX_HACK options. * Change M_ZOMBIE definition back to static. * Fix miscellaneous references & comments.
Misc mdoc(7) cleanup: * Fix section numbers. * Fix .Xr abuse. * Remove reference to obsolete plot(1) manual page. * Fix typo.
Remove the (unmaintained for 10+ years) svr4 and ibcs2 emulation code. Poof, gone.
Remove sysinstall(8) reference and fix wording.
Fix xref order.
Remove trailing blank space characters. mdoc(7) explicitly recommends doing so lest troff might get confused.
Document cleanvar_enable in rc.conf.5 and document the purge code in cleanvar. Submitted-by: "George Georgalis" <george@galis.org>
Mop up OLDCARD remains.
Handle renaming of battd.1 to battd.8
Remove PCVT kernel part and mop up.
Add RCNG support for setting the negative attribute cache timeout, make the boot-time display of nfs client parameters more readable, and properly document the nfs_access_cache and nfs_neg_cache RCNG configuration variables. Increase the default NFS attribute cache timeout from 2 to 5 seconds.
- Add battd to rc.conf(5) Submitted by: Devon H. O'Dell <dodell@sitetronics.com>
Add some documentation for ntpd_flags.
ntpdate(8) is gone and has been replaced by rdate(8).
Replace the Perl scripts makewhatis(1), makewhatis.local(8) and catman(1) by C programs. Submitted by: Dheeraj Reddy <dheerajs@comcast.net> Taken from: FreeBSD In contrast to FreeBSD, put makewhatis.local under src/libexec and put makewhatis into src/usr.sbin. Update man pages accordingly.
update manpage to reflect changes in RCng
Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections.
import from FreeBSD RELENG_4 1.64.2.52 | http://www.dragonflybsd.org/cvsweb/src/share/man/man5/rc.conf.5?f=h | CC-MAIN-2015-27 | refinedweb | 790 | 55.2 |
This article concerns matrix-bot-sdk, a TypeScript client SDK for Matrix. We'll build a simple "echo bot", meaning a bot which replies to messages with the text it has just read.
Note that although the SDK is written in TypeScript, we'll use JavaScript in our examples. If you'd prefer to use TypeScript, then do!
Let's make a new folder, and import our only npm dependency. The following examples are all meant to be run in a bash terminal.
mkdir matrix-js-echo-bot
cd matrix-js-echo-bot
npm install matrix-bot-sdk
Create a new file named "index.js", and let's get started.
In our js file, start by importing the minimum we'll need for this example:
const sdk = require("matrix-bot-sdk");
const MatrixClient = sdk.MatrixClient;
const SimpleFsStorageProvider = sdk.SimpleFsStorageProvider;
const AutojoinRoomsMixin = sdk.AutojoinRoomsMixin;
Create a new account for your bot on a homeserver, then get the access_token. The simplest way to do this is using Element - take a look at these instructions. Set some variables to store the homeserver and access_token. This is all the authentication you need!
const homeserverUrl = ""; // make sure to update this with your url
const accessToken = "YourSecretAccessToken";
Now we'll configure a storage provider - matrix-bot-sdk provides the SimpleFsStorageProvider, which is ideal for most cases:
const storage = new SimpleFsStorageProvider("bot.json");
When the bot starts, the SDK will create a new file called "bot.json" to store the data it needs.
Now we're ready to create the client! As you'd expect, we'll use the variables we've already specified.
const client = new MatrixClient(homeserverUrl, accessToken, storage);
There is one more thing we need to do. We'll include a mixin which instructs the bot to auto-accept any room invite it receives. This makes testing much more convenient.
AutojoinRoomsMixin.setupOnClient(client);
Finally, let's start the Client:
client.start().then(() => console.log("Client started!"));
If you're keeping up, your code should look something like:
const sdk = require("matrix-bot-sdk");
const MatrixClient = sdk.MatrixClient;
const SimpleFsStorageProvider = sdk.SimpleFsStorageProvider;
const AutojoinRoomsMixin = sdk.AutojoinRoomsMixin;

const homeserverUrl = ""; // make sure to update this with your url
const accessToken = "YourSecretAccessToken";

const storage = new SimpleFsStorageProvider("bot.json");
const client = new MatrixClient(homeserverUrl, accessToken, storage);
AutojoinRoomsMixin.setupOnClient(client);

client.start().then(() => console.log("Client started!"));
Let's run it:
node index.js
The bot should now sit idle, but it will join any room you invite it to.
Right now, while it's just listening to invites and nothing else, what is the bot actually doing? It's calling the /sync endpoint in a loop. Calling this endpoint returns all new events since some previous point.
Leave the script running and open bot.json, which is the file we specified for storage. This file contains a field syncToken, which is occasionally updated - the SDK uses this field to give a token to the homeserver, which uses it to know which events to send back.
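To make that token mechanism concrete, here is a toy model of incremental sync - no network and invented names, purely to illustrate the bookmark idea (the real /sync protocol is implemented by the homeserver and the SDK):

```javascript
// Toy illustration only - NOT the real Matrix /sync API.
// The token is simply a bookmark: each call returns the events that
// arrived after it, plus a new token to use next time.
function syncSince(allEvents, token) {
    const start = token === null ? 0 : token;   // first call: return everything
    return {
        events: allEvents.slice(start),         // only the new events
        nextToken: allEvents.length,            // bookmark for the next call
    };
}

const timeline = ["event A", "event B"];
const first = syncSince(timeline, null);
// first.events → ["event A", "event B"], first.nextToken → 2

timeline.push("event C");
const second = syncSince(timeline, first.nextToken);
// second.events → ["event C"] - only what arrived since the last sync
```

This also explains why losing the stored syncToken generally means the next sync starts without a bookmark, so old events may be delivered again.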
In order to echo messages, our bot must first be able to read them. The client.on() method of our MatrixClient takes two arguments: one for the event type, and one for a callback to handle the event:
client.on("room.message", (roomId, event) => {
    if (!event["content"]) return;
    const sender = event["sender"];
    const body = event["content"]["body"];
    console.log(`${roomId}: ${sender} says '${body}'`);
});
In this way we can inspect the contents of an event and render them. We choose to exit early in the case that event["content"] is empty, because this will usually mean the message was redacted.
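That guard can be pulled out into a tiny helper so the same check is reusable across handlers. isProcessable is a name invented for this sketch - it is not part of matrix-bot-sdk:

```javascript
// Returns true only for events that still have content (i.e. were not
// redacted) and that carry a text body we can work with. Hypothetical helper.
function isProcessable(event) {
    if (!event || !event["content"]) return false;        // redacted or malformed
    return typeof event["content"]["body"] === "string";  // must have message text
}

isProcessable({ content: { body: "hello" } });      // → true
isProcessable({ sender: "@alice:example.org" });    // → false (no content: redacted)
isProcessable({ content: { msgtype: "m.image" } }); // → false (no text body here)
```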
To send a message, we use the client.sendMessage() method. This takes two arguments: the roomId, and a JSON object containing the contents of the message to send, for example:
client.sendMessage(roomId, {
    "msgtype": "m.text",
    "body": "This is message text.",
});
Note, it's also possible to use client.sendText() to achieve the same result, as in client.sendText(roomId, "This is message text."). The reason for showing client.sendMessage() is to make it clear that the message format is just the same as you'd find in the spec.
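Because the content object is plain spec-shaped JSON, richer payloads are built the same way. The sketch below assembles an HTML-formatted text message using the org.matrix.custom.html format from the Matrix spec; makeFormatted is a helper name invented here:

```javascript
// Builds an m.room.message content object carrying both a plain-text
// fallback and an HTML body, per the Matrix client-server spec.
function makeFormatted(plain, html) {
    return {
        "msgtype": "m.text",
        "body": plain,                          // shown by clients without HTML support
        "format": "org.matrix.custom.html",
        "formatted_body": html,
    };
}

const content = makeFormatted("hello world", "<b>hello</b> world");
// client.sendMessage(roomId, content);   // same call as above
```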
To work, an echobot needs only to listen for incoming messages, read the message text, and use it to reply. Let's demonstrate that now.
client.on("room.message", (roomId, event) => {
    if (!event["content"]) return;
    const sender = event["sender"];
    const body = event["content"]["body"];
    console.log(`${roomId}: ${sender} says '${body}'`);

    if (body.startsWith("!echo")) {
        const replyText = body.substring("!echo".length).trim();
        client.sendMessage(roomId, {
            "msgtype": "m.notice",
            "body": replyText,
        });
    }
});
It's extremely simple to listen to messages with matrix-bot-sdk and create an echobot! There are many more features; you can see that the MatrixClient class is very well documented. Next in this series, we'll explore Rich Replies, and take a look at the kick and ban functions for room administration.
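As a small taste of that room-administration material, here is how a !kick command line might be parsed before handing the pieces to the SDK. The parsing helper is our own sketch; the SDK's moderation methods and their exact signatures should be checked against its documentation rather than assumed from this example:

```javascript
// Sketch: split "!kick @spammer:example.org flooding" into its parts.
// Invented helper - returns null when the line is not a kick command.
function parseKick(body) {
    if (!body.startsWith("!kick ")) return null;
    const rest = body.substring("!kick ".length).trim();
    const [userId, ...reasonWords] = rest.split(/\s+/);
    if (!userId || !userId.startsWith("@")) return null;  // expect a full Matrix user ID
    return { userId, reason: reasonWords.join(" ") || "no reason given" };
}

parseKick("!kick @spammer:example.org flooding");
// → { userId: "@spammer:example.org", reason: "flooding" }
parseKick("hello there");  // → null (not a command)
```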
This SDK uses TypeScript, which provides a lot of benefits. In this example, we used JavaScript, but it's just as easy to use TypeScript and maybe preferable, since it is the language matrix-bot-sdk is written in.
First let's install tsc, which compiles from TypeScript to JavaScript:
npm install typescript
Now, start tsc in watch mode (-w), and leave it to compile our code:
npx tsc --watch *.ts
Now, whenever we create a new TypeScript (.ts) file, it will be automatically watched and compiled to JavaScript.
When you have your .js file(s), you can run them with node <filename> as normal.
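If you'd like npm to drive both steps, a couple of scripts in package.json can do it. The file names below are assumptions matching the examples in this guide:

```json
{
  "scripts": {
    "build": "tsc index.ts",
    "start": "node index.js"
  },
  "dependencies": {
    "matrix-bot-sdk": "*"
  }
}
```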
After IPv4, How Will the Internet Function?
An anonymous reader writes:
Dual stack failed? (Score:5, Interesting)
Re: (Score:3, Insightful)
What about variations on that theme we're all hearing about the Premium Internet - can they hook that stuff up to nice new IP6 addresses, with not a titty to be found, leaving the "ghetto" kids in IP4?
Re:Dual stack failed? (Score:4, Insightful)
in reality, if someone does anything even remotely competent, it should be a 1 day process, maximum - after all, using NAT or IPv6 internally should make it even less of an issue.
I think if you were to estimate the time it takes to change the company fleet of cars from summer to winter tires, you'd budget about ten seconds per car - that's how long it takes in Formula One, right? Companies don't plan to redo their network structure, ever. They do as little as possible as rarely as possible because it's pure cost. What you're looking at is an endless amount of cruft with IPs hard coded all over PCs, routers, configuration files, scripts, scheduled jobs, firewall configurations, stored server information or URLs, documentation, the works. Sure you could blow away millions of dollars on optimizing the "network reorganization" process, making the company a world leader in that until someone with the money asks "Why the f*ck are we spending all this money on THIS? What in heaven's name do we get for it?" and you'd better have a better answer than "So we can give some IP addresses back to ICANN for free." Otherwise cleaning up all that cruft will be on your project time and project cost, and if you still think you can do it in a day you're a monkey on crack.
Re: (Score:2)
This is simply wrong. Please don't spout this anymore - you are spreading a myth.
There are about 40 /8 blocks allocated to organizations.
Since 2004 we've used at least an average 10 blocks each year and I'm not including the rush in 2010 when 19 blocks were allocated.
If we could magically reclaim all 40 of these /8 blocks today it would buy us no more than 4 years. And remember - those organizations that lost their space would be eligible to immediately regain some percentage of their space based on their a
Re: (Score:3)
Only that it would buy more time during the transition
The IPv6 transition doesn't need time, it already had tons of that. What IPv6 needs pressure to force people to actually start doing it and for that a shortage of IPv4 is actually a good thing.
Re:Dual stack failed? (Score:5, Interesting)
Engineering of application-layer protocols is far easier when everyone is addressable. The deployment of NAT has had a cascading effect on many application layer protocols that would have had a simple, obvious implementation were every node equally addressable. Instead, every new application protocol has to consider and work around NAT.
So sure, as we stand today that ship has sailed and NAT has created a hierarchy of nodes that is unavoidable in today's network engineering, but I wonder how much innovation has been stifled by time spent working around NAT.
Re: (Score:2)
Dual stack WAS fine. The people who put off implementing a perfectly reasonable solution until it would no longer work have doomed us to increasingly ridiculous schemes to clean up their mess.
Re: (Score:3)
IPv6 is not at all complicated. It does have a few more complicated OPTIONS that 90% or more of users can completely disregard without a problem.
The addresses are longer, but that simply cannot be helped. How would you vastly expand the global namespace without having a longer name?
Even without ISP support, there's 6to4. That would have been at least good enough to gain some experience with it. Now, when your ISP does get up to speed you'll have even more to catch up on.
Complicated is when you request a new
Re:Dual stack failed? (Score:5, Insightful)
Dual stack works but is has failed in the sense that it can't be the singular solution during the transition from IPv4 to IPv6.
Re: (Score:2)
My D-Link router
I found your problem!
Re: (Score:2)
If someone can't afford to buy a router that's hundreds of dollars, at least look at MikroTik (routerboard) hardware. Similar price range without the brain dead functionality of the typical D-Link or Linksys.
Re: (Score:2)
If someone can't afford to buy a router that's hundreds of dollars, at least look at MikroTik (routerboard) hardware. Similar price range without the brain dead functionality of the typical D-Link or Linksys.
Or if they get (or already have) a Linksys, installing dd-wrt [dd-wrt.com] turns it in to a pretty decent little box. That's been my personal preference over the last couple of years.
Re: (Score:3)
I use openwrt myself, but dd-wrt was my 'gateway drug'
:)
Re: (Score:2)
The solution is probably carrier-grade NAT for IPv4 (so you only get a private IPv4 address) with dual-stack. But that has it's own problems.
IPv6 of course (Score:2)
Re: (Score:2)
IPv6 of course.
The issue will be new sites needing IP addresses will be IPv6 only.
"Not an issue, it's a feature!"
This almost out nonsense needs to stop (Score:5, Interesting)
If you are a company and want some static IPs and your ISP says "Sure, you can have IPv4 addresses at $30/month each, or as many IPv6 addresses as you want for free,"
That won't work. Problem is, if you are a company without an IPv4 address, you are not reachable by 99% of Internet users, i.e. you don't exist.
Companies will pay whatever price, though. They have to. But to suggest that the company can solve this by migrating to IPv6 is short-sighted. The company can only solve this by migrating all of its intended customers to IPv6, in other words: they can't.
You have made me realize an interesting point, though: as long as ISPs do not migrate their users to IPv6, they can charge extortionary prices for the remaining IPv4 addresses; ISPs have an incentive to create this artificial scarcity. Time to call for government regulation?
;)
Re: (Score:3)
it is standing in the way of a real and workable solution to the problem.
So no, IPv6 is not the solution. IPv6 has simply become part of the problem.
So let me guess the solution
... "AOL keywords"?
What a load of nonsense (Score:2)
Re: (Score:2)
After having worked for an ISP I can tell you that most of the customers will want to log in at the same time so you really do need an ip for everyone.
Large-scale NAT in Qatar (Score:2)
many countries provide a NATed private IP anyway.
Err... you mean company, right?
Countries too [wikipedia.org].
Mobile, home and small office equipment? (Score:2)
Re: (Score:3)
Explain to me why all that stuff needs to be upgraded, but other stuff - your stuff and my stuff - doesn't?
The short answer: (Score:2)
lots of IP4 only cable / dsl modems and routers (Score:2)
lots of IP4 only cable / dsl modems and routers are out there. Do any of E-mta (that the cable force you to rent (if you have cable phone) do IPV6?)
Re: (Score:2)
Do any of E-mta (that the cable force you to rent (if you have cable phone) do IPV6?)
Can you find one that does not? I believe a DOCSIS 3.0 certification requirement is ipv6.
As with cellphones, the question is rarely what the manufacturer made possible, but what your provider felt like allowing you to do.
Wrong problem (Score:2)
Easy.. (Score:5, Interesting)
Re: (Score:3)
Sure, it will break all the P2P traffic that relies on IPv4-only, but that will quickly force those services to support IPv6.
It will prety much suck for quite some time. (Score:5, Insightful)
Why not four UTF-32/UCS-4 characters instead of four decimal numbers? [wikipedia.org]
Re: (Score:3).
What's your plan for delegated reverse DNS for a /48 allocation? (This should be interesting)
Re: (Score:2) peopl
Re:It will prety much suck for quite some time. (Score:5, Informative)
... but ONLY once!
You can only omit one run of zeros, because otherwise the length of each run would be ambiguous.
Re:It will prety much suck for quite some time. (Score:5, Informative).
Re: (Score:3)
> IPv6 is a potential privacy nightmare.
You have a loony definition of "privacy". I'm sure you will be able hide your IPv6 address behind a proxy, just you now conceal your street address by having all your snail-mail delivered to a PO box (after all, you wouln't want anyone to know where you live) and never give out your unlisted phone number.
Re: (Score:2)
You don't have to make long addresses if you don't want to. You can drop leading zeros and the
:: compression replaces any range of zeros, not only one set. So a prefix you might get from your ISP becomes:
2001:DB8:A::/48
I can remember that easily and then make up a plan such as "/64 corresponds to VLAN". Say you have VLAN 5 and a statically assigned host 9 on that VLAN.
2001:DB8:A:5::9/64
Although it still has scary A-F in the number. Or you can stick with the crazy long addresses if that's easier.
Re:It will prety much suck for quite some time. (Score:5, Informative).
You don't know what you are talking about. Of course '2600000.35.1254.1785' is easier to remember, you aren't using all the bits. If you used the full 64 bits, it's going to be longer no matter what base you are using. Your hex example, if you converted it to decimal, would look just as bad: 536939960.2242052096.35374.57701172. It's not actually easier to remember.
There is also a shortcut built in for IPv6 addresses. For example, if you had an IPv4 LAN with addresses in the 192.168.0.1 range, you could represent them in IPv6 with ::FFFF:192.168.0.1. Not particularly harder to remember than an IPv4 address now. IPv4 was designed by people who thought before talking. Unlike you, apparently. Work on that: try to figure stuff out before blathering.
Re: (Score:3)
2600000.35.1254.1785
Here's a subnet mask for that. FFFF:FFFF. Now, in your head, quickly apply that to your base10 IP.
Who uses IPs anymore anyway except in a few corner cases for debugging? Use DNS or add an address to your fav list. Post its also work great for doing general network work where you need to know an IP.
Why assign IPv4 to phones? (Score:2)
Movistar - Argentina - Mobile - 10.x.x.x network (Score:2)
Internet is a series of peaches (Score:2)
ZOMG THE SKY [isn't] FALLING! (Score:2)
Guys, look at This list of Class A [wikipedia.org].
Re: (Score:2)
Assuming that the chunks will be released to the public, then yes, you are right.
Re: (Score:2)
You listed 17 /8 blocks in your post. If you managed to reclaim every single one of those, you'd almost make up for IANA's 19 allocations in 2010.
And let us know how it goes when you try to take those addresses from the US military.
Re: (Score:3)
Foreigners sometimes have service, too. I've got prepaid AT&T and T-Mobile accounts for my US trips (each has different advantages), so I count toward 2 Americans having service, even though I'm not American and don't live in the country. 100% saturation merely means there are as many active lines as there are people, but it says nothing of how those lines are distributed.
Re: (Score:2)
Why does it bother you what other people choose for a personal phone? If it's truly a personal phone, you can refuse to support it, given that they have a company phone as well.
He probably works at an android software development shop (just kidding...)
Re: (Score:2)
You mean like...
"One phone for family, one phone for work, one phone for the girlfriend, one for the wife, one for the other girlfriend...", and one evil company that binds them all in darkness.
Re: (Score:2)
From The Wire
Re: (Score:2)
Don't forget one phone with DNS, so in darkness bind() them.
Re: (Score:2)
Re: (Score:2)
> That already exists, it's called "using both".
Seems like that would be IPv10.
Re: (Score:2)
What happened to IPv5?
Re: (Score:2)
Re:IPv7 (Score:4, Interesting) [oreillynet.com]
It was assigned to an interesting, but ultimately not implemented, protocol.
Re: (Score:2)
In the real world ( read as 'unix world' ) odd numbers are always "experimental"
.. There is a 5 .. but it was never meant for mass consumption
Which is why OSPFv3 is used with IPv6.
Re: (Score:2)
Re: (Score:2)
Maybe you should Google what IPv5 was for. Here, I'll help. Read this [oreillynet.com].
Re: (Score:3)
Because a lot of services don't work well through NAT. VPN and voice services are good examples.
Re: (Score:2)
Re: (Score:2)
it doesnt scale up well. NAT is also somewhat heavy on routers and massive NAT tables would require investment in equipment.
essentially all modern computing platforms are IP6 capable, the best choice is to make the switch and not mess with large scale NAT.
Re: (Score:3)
Skype relies on other people running Skype with a public IP address acting as a proxy. When everybody goes NAT, Skype breaks down as well.
Each region could have its own /8 (Score:2)
Run a server and get TOS'd (Score:2)
Re: (Score:2)
And as reiterated in various publications and on this very site over and over again....
Reclaiming IPv4 addresses is a waste of time as the reclaimed addresses will be used faster than one can reclaim more.
Band-aids wont work.
Re: (Score:2)
You clearly have not heard of our solution in the lab: use complex numbers for each octet.
Oh you also oppose the tyrrany of the powers-of-two addressing space? Excellent. Personally I've been working on an Egyptian fractions representation. Imagine the entire IP addressing space between the intervals of zero and one. And we'll never need a larger space, merely subdivide more aggressively. Regular expressions and routing tables are a bit tedious of course. But, string handling technology has been neglected for years by the tyrrany of floating point accelerators, its time for a new paradigm. | https://tech.slashdot.org/story/10/12/27/148258/after-ipv4-how-will-the-internet-function?sdsrc=prev | CC-MAIN-2017-09 | refinedweb | 2,619 | 73.68 |
The purpose to to handle delimited text, such as a CSV or TSV.
The way the algorithm is implemented takes full advantage of the
single character nature of the delimiter, which is why it is so
fast.
Maybe stephen can answer the whitespace question.
As for escaping, i'm following the scheme that i've seen
in most programs (particularly excel), which is to quote
an entire string if it contains a delimiter, and within that,
you can use the quote character twice to escape it.
If someone is willing to implement another version, that's
fine, but this one is optimized for single character delimiters.
And i'm using the CSV approach that excel uses which is to quote
the value only if the delimiter appears in the value. Inside
a quoted value, you can put the quote character twice to escape
it. If the quote character is not the first parsed character in
the field, then it's treated as plain text. I've verified this
behavior in the past with excel.
-----Original Message-----
From: Arun Thomas [mailto:arun.thomas@paybytouch.com]
Sent: Friday, November 14, 2003 6:31 PM
To: Jakarta Commons Developers List
Subject: RE: [lang] [Bug 22692] - StringUtils.split ignores empty items
I'm a bit confused (should have expressed this in earlier comments) on the
importance of whitespace. Why is it so important? As far as I can see,
what's needs to be identified for the tokenization is:
token separator
separates tokens in the string
non-separated region delimiter
begins/ends a portion of the string to be treated as one
token
escape
used in such a delimited region to remove the "special"
nature of a delimiter or escape
Also, is there a reason that any of these should be constrained to
characters. (I speak from an interface perspective.) There might be good
reasons (performance or otherwise) for particular implementations to handle
only characters, etc.
-AMT
-----Original Message-----
From: Inger, Matthew [mailto:inger@Synygy.com]
Sent: Friday, November 14, 2003 2:46 PM
To: 'Jakarta Commons Developers List'
Subject: RE: [lang] [Bug 22692] - StringUtils.split ignores empty items
I see what you mean. It appears, as robust as CharSet it, is does way too
much, and is slow for what we need it for.
I'm going back to DelimiterSet, but rather than an interface, it will be an
inner class with several constructors:
public DelimiterSet(char[]);
public DelimiterSet(String);
public DelimiterSet(char);
and two useful methods:
public boolean contains(char);
public char[] getChars();
This will be an immutable object. The
constructor sorts the character array
using Arrays.sort, and the contains method
uses Arrays.binarySearch. This should give
us a pretty efficient algorithm for the
contains method. There's also a predefined
whitespace delimiter set "WHITESPACE_DELIMITERSET"
so people don't have to construct their own
all the time.
-----Original Message-----
From: Stephen Colebourne [mailto:scolebourne@btopenworld.com]
Sent: Friday, November 14, 2003 5:26 PM
To: Jakarta Commons Developers List
Subject: Re: [lang] [Bug 22692] - StringUtils.split ignores empty items
An interesting idea, although the performance would be very poor without
some effort in the CharSet class. Stephen
From: "Todd V. Jonker" <todd@consciouscode.com>
> Or just use lang.CharSet
>
>
> On Fri, 14 Nov 2003 16:58:45 -0500, "Inger, Matthew"
> <inger@Synygy.com>
> said:
> > What about an interface:
> >
> > public class DelimitedTokenizer {
> >
> > public static interface DelimiterSet {
> > public boolean isDelimiter(char c);
> > }
> > }
> >
> > and having the ability to pass in this
> > interface. Of course, we'd still have a
> > single char version as well, so someone
> > might pass either a single char or an implementation
> > of this interface as the delimiter. I suppose I could
> > do the same thing for quotes, but i find that less useful.
>
Asked by:
Is there a list of standard C++ library headers supported for Metro apps?
Question
I found the following list for the C runtime library:
But can't find anything regarding the standard C++ library.
I assume <thread> is out. How about <filesystem>? <future>?
Wednesday, May 30, 2012 6:36 PM
All replies
Hello,
There is document about C++ 11 features supported in native C++
But in metro, the way of accessing file system and concurrency programming is different than before.
I would suggest you to use these namespace in metro.
Windows.Storage namespace
And Concurrency Namespace
Best regards,
Jesse
Jesse Jiang [MSFT]
MSDN Community Support | Feedback to us
Thursday, May 31, 2012 7:15 AM
- Thanks for the link - as for the standard C++ lib, I think that all are supported; only stuff that is not marked with the partition app macros is not allowed (see headers). File system is in, because if you look at the C lib, you see fopen is also allowed, so I assume nothing is wrong with <filesystem>
Thursday, May 31, 2012 11:33 AM
- Looks like a duplicate of Are there any restrictions of using C Runtime library in Metro Style apps or WinRT DLL
The following is signature, not part of post
Please mark the post answered your question as the answer, and mark other helpful posts as helpful, so they will appear differently to other users who are visiting your thread for the same problem.
Visual C++ MVP
Thursday, May 31, 2012 2:39 PM
- What do the partition app macros look like? I don't see anything indicating metro/desktop (like the #pragma region XXXX Family stuff in the win32 SDK headers).
Thursday, May 31, 2012 4:35 PM
- It is similar, but the C runtime library and standard C++ library are different (though they are sometimes used interchangeably). I didn't notice that thread because I assumed it was fopen() type stuff.
Thursday, May 31, 2012 4:37 PM
you can search the headers for this define:
_CRT_USE_WINAPI_FAMILY_DESKTOP_APP
Thursday, May 31, 2012 6:50 PM
- Proposed as answer by Dan RuderMicrosoft employee, Moderator Friday, June 1, 2012 6:38 PM
And for the Windows APIs, there is a new header file, winapifamily.h, that contains partition defines WINAPI_PARTITION_DESKTOP (for all Win32 apps) and WINAPI_PARTITION_APP (for Metro style apps). These partitions get set when you set WINAPI_FAMILY to WINAPI_FAMILY_APP or WINAPI_FAMILY_DESKTOP_APP.
Generally, you don't need to include winapifamily.h explicitly in your code because it is included by windows.h. Furthermore, the Visual Studio project templates automatically set the these defines for you; if you go into the project settings -> C++ settings -> Preprocessor -> Preprocessor Definitions, you will see that the Metro style projects set WINAPI_FAMILY = WINAPI_FAMILY_APP for you. Then, VC++ IntelliSense will red-underline stuff that isn't defined in the selected API family.
Sincerely,
Dan Ruder [MSFT]
Friday, June 1, 2012 5:46 PM
Moderator
- Edited by Dan RuderMicrosoft employee, Moderator Friday, June 1, 2012 6:12 PM add details
The CRT partitions are defined in crtdefs.h. Note that crtdefs.h will define/undefine _CRT_USE_WINAPI_FAMILY_DESKTOP_APP according to how WINAPI_FAMILY (from winapifamily.h) is defined. Then, _CRT_BUILD_DESKTOP_APP gets defined based on _CRT_USE_WINAPI_FAMILY_DESKTOP_APP.
Therefore, if you set WINAPI_FAMILY = WINAPI_FAMILY_APP or WINAPI_FAMILY_DESKTOP_APP, you automatically get the CRT defines set correctly. If you don't define WINAPI_FAMILY, then the default is to set the CRT defines for desktop apps builds. So, you shouldn't have to explicitly set the CRT defines yourself.
Sincerely,
Dan Ruder [MSFT]
Friday, June 1, 2012 6:09 PMModerator
- Proposed as answer by Dan RuderMicrosoft employee, Moderator Friday, June 1, 2012 6:38 PM | https://social.msdn.microsoft.com/Forums/en-US/f75d4bfc-6815-48af-9cbe-acf194298a67/is-there-a-list-of-standard-c-library-headers-supported-for-metro-apps?forum=winappswithnativecode | CC-MAIN-2022-21 | refinedweb | 620 | 50.26 |
This question has already been solved: Start a new discussion instead
vijayan121
Posting Virtuoso
1,812 posts since Dec 2006
Reputation Points: 1,152 [?]
Q&As Helped to Solve: 336 [?]
Skill Endorsements: 18 [?]
0
ideally, do not use the (now deprecated) functionality in namespace
__gnu_cxx . they have been superceded by tr1 in c++0x.
use
-std=c++0x (gcc 4.3) ,
std::tr1 (gcc4.2) or
boost::tr1 (earlier versions).
the error is because
__gnu_cxx::hash<> is a struct (function object); not a function. if you have to use it, use it this way
#include <functional> #include <ext/hash_fun.h> using namespace std ; template<class X, class Pred= less<X> > class Hash{ private: Pred comp; public: enum{bucket_size = 4, min_buckets = 8}; Hash() : comp(){ } Hash(Pred p) : comp(p){ } size_t operator()(const X& x) const{ const size_t i= __gnu_cxx::hash<X> [B]()[/B] (x); return 16807U*(i%127773U)+(95329304U-2836U*(i/127773U)); } };
Question Answered as of 7 Years Ago by vijayan121
You | https://www.daniweb.com/software-development/cpp/threads/120612/hash-function-error-when-using-gcc | CC-MAIN-2015-32 | refinedweb | 161 | 67.65 |
Importing Files Using SSIS
A topic that has come up a few times recently is the idea of loading a set of files into a database using SSIS. One of the data flow components in SSIS is the Import Column transform, which allows you to load files into a binary column.
There is a great video by Brian Knight that shows how to use it, and I recommend viewing that (12/01/2013 update: evidently, the video this post originally linked to is no longer available). If you are looking for a quick overview of the component, and a different approach for loading the list of filenames to import, then read on.
The Import Column transform works in the data flow, and imports one file for each row that is passed through it. It expects a column that contains the file name to import as an input. It outputs a column of type DT_TEXT, DT_NTEXT, or DT_IMAGE, that contains the file contents.
I’ve included a sample package with this post that uses the Import Column transform. It has a single data flow that uses a script component to get the list of files to import.
The package has two connection managers, one of which points to a SQL Server database where the files will be stored. The other connection manager is a File connection manager, that is pointed to a folder. This is the folder that we want to import the files from.
The script component was created as a Source. A single output column of type DT_WSTR was added to contain the filenames.
On the connection managers page, the File connection manager is specified so that it can be accessed from the script.
The script uses the Directory class from the System.IO namespace. By calling the GetFiles method, the code can iterate through all of the files in the directory, and output one row for each file.
Imports System Imports System.Data Imports System.Math Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper Imports Microsoft.SqlServer.Dts.Runtime.Wrapper Imports System.IO Public Class ScriptMain Inherits UserComponent Public Overrides Sub CreateNewOutputRows() Dim fileName As String For Each fileName In Directory.GetFiles(Me.Connections.ImportFilesDir.AcquireConnection(Nothing).ToString()) Output0Buffer.AddRow() Output0Buffer.Filename = fileName Next Output0Buffer.SetEndOfRowset() End Sub End Class
The next component is the Import Columns component. Configuring it can be a little difficult. I found the Books Online documentation a little unclear on what had to be done. On the Input Columns tab, the column that contains the filename (including path) to import needs to be selected.
On the Input and Output Properties tab, a new column was added to hold the binary contents of the file. When adding this column, make a note of the LineageID value, as it needs to be used in the next step.
After adding the output column, the input column (that contains the filename, not the file contents), needs to be selected. The LinageID from the previous step needs to be put into the FileDataColumnID property. This tells the component which column to populate with the file contents.
The OLE DB Destination is fairly straightforward, as it just maps the columns from the data flow to the database.
Hopefully this helps if you are working with the Import Column transform. The samples are located on my Skydrive.
hi this is some thing i was looking for but my package is slightly different
it needs file directories(which your package is giving with the file names)
and i want file names for each one in a seperate column.
You can’t do that with this component. You’d have to create a custom script.
Hi,
I hope this make sense,I have one question about the Error handling, I want to track all the bad rows which handle through error redirect to file or table but I want to create some functionality which automatically use the same error output file or table in the same package after correcting the errors apart and process it from the already processed data.
Please help me out.
Regards,
Smith
Not sure if I understand the question, but couldn’t you just have a second data flow (set up the same way) that handles the files after they have been corrected?
I’ve used this and it works great. I was wondering if there is a way of also capturing the file datetime stamp to store with the file?
Glad it works for you. You can capture the Last Modified time for the file by adding a new output column to the Script Source for the timestamp, and retrieve it using this code in the script:
DateTime lastModifiedDate = File.GetLastWriteTime(filename);
Then you can just copy the lastModifiedDate value to the new output column.
Sorry, just realized I did that as C# code. The equivalent VB code is:
Dim lastModifiedDate As DateTime = File.GetLastWriteTime(filename)
I think the video link is a bit out of date, it seems to direct to some sort of badly written english spam page.
Apologies for that – the video site has changed hands and the original video no longer seems to be posted.
No worries, not your fault.
Things get out of date very quickly on the internet
This is a great tutorial, if a little light on detail, making it a bit hard going for an SSIS novice like myself. However, I’m getting an error when I try to execute the package, telling me that the column can’t be inserted because the conversion between types DT_WSTR and DT_IMAGE is not supported. I assumed this was because your tutorial is written on the premise that the files being imported are images, whereas mine are Word documents, so I changed the datatype of the Output Column in the Import Column component to DT_NTEXT. However, I still get the exact same error message (referring to DT_IMAGE, even though I’ve changed the datatype). And actually, my theory about why the package failed can’t have been correct anyway because the message is explicitly telling me that the conversion between DT_WSTR and DT_IMAGE isn’t possible, yet that’s the conversion type your tutorial seems to describe?
Like I said, I’m a total novice at this, so I’m probably just being stupid!
Word docs would still be stored in a DT_IMAGE column – DT_NTEXT would be for Unicode text information. Work documents are stored in a binary format (zipped XML), which SSIS treats as the DT_IMAGE type.
It sounds like something is misconfigured in the component – are you sure you used the right lineage IDs? My example isn’t actually converting any types. It tells SSIS that it should use the content of the filename column to look for a file on your disk, and read the binary content of the file into the file column. There is no data conversion happening.
I changed it to DT_IMAGE but it still didn’t work. However, I seem to have it working simply by putting the code that’s run in the For Each… Next loop inside a Try…Catch statement. I haven’t even put anything in the Catch statement, but this alone seems to have allowed the package to execute successfully and, to my surprise after adding the Try-Catch, every file has been imported into the database.
I’m now trying to figure out how to get the transformation to also include the file name and extension as strings to be inserted into columns associated with file record. Doesn’t seem to be anywhere near as easy as I would have hoped!
Thank for replying, by the way!
This is an old thread, but maybe someone’s still looking at it. I have a flat file with some binary hex data in one of the fields and am trying to get back into an image column in SQL Server. If I import it as DT_TEXT, apparently I can’t convert between DT_TEXT and DT_IMAGE. Any idea how to get the binary data into the image column?
You may need to use a script to read the column as text version of bytes, and then write it out to actual bytes. As a I recall, the Data Conversion and Derived Column transforms don’t really work with large data like DT_TEXT and DT_IMAGE.
[...] this time, I found an excellent blog post by my friend, John Welch (blog | @john_welch), titled Importing Files Using SSIS. John’s idea is straightforward and very “SSIS-y,” as Kevin Hazzard (blog | @KevinHazzard) [...] | http://agilebi.com/jwelch/2008/02/02/importing-files-using-ssis/?replytocom=331228 | CC-MAIN-2018-30 | refinedweb | 1,424 | 62.17 |
csTextProgressMeter Class Reference
The csTextProgressMeter class displays a simple percentage-style textual progress meter. More...
#include <csutil/cspmeter.h>
Inherits scfImplementation1< csTextProgressMeter, iProgressMeter >.
Detailed Description
The csTextProgressMeter class displays a simple percentage-style textual progress meter.
By default, the meter is presented to the user by passing CS_MSG_INITIALIZATION to the system print function. This setting may be changed with the SetMessageType() method. After constructing a progress meter, call SetTotal() to set the total number of steps represented by the meter. The default is 100. To animate the meter, call the Step() method each time a unit of work has been completed. At most Step() should be called 'total' times. Calling Step() more times than this will not break anything, but if you do so, then the meter will not accurately reflect the progress being made. Calling Reset() will reset the meter to zero, but will not update the display. Reset() is provided so that the meter can be re-used, but it is the client's responsibility to ensure that the display is in a meaningful state. For instance, the client should probably ensure that a newline '
' has been printed before re-using a meter which has been reset. The complementary method Restart() both resets the meter and prints the initial tick mark ("0%"). The meter does not print a newline after 100% has been reached, on the assumption that the client may wish to print some text on the same line on which the meter appeared. If the client needs a newline printed after 100% has been reached, then it is the client's responsibility to print it.
Definition at line 55 of file cspmeter.h.
Constructor & Destructor Documentation
Constructs a new progress meter.
Destroys the progress meter.
Member Function Documentation
Abort the meter.
Finalize the meter (i.e. we completed the task sooner than expected).
Get the current value of the meter (<= total).
Definition at line 108 of file cspmeter.h.
Get the refresh granularity.
Definition at line 118 of file cspmeter.h.
Get the tick scale.
Definition at line 80 of file cspmeter.h.
Get the total element count represented by the meter.
Definition at line 106 of file cspmeter.h.
Reset the meter to 0%.
Definition at line 95 of file cspmeter.h.
Reset the meter and print the initial tick mark ("0%").
Set the refresh granularity.
Valid values are 1-100, inclusive. Default is 10. The meter is only refreshed after each "granularity" * number of units have passed. For instance, if granularity is 20, then * the meter will only be updated at most 5 times, or every 20%.
Set the id and description of what we are currently monitoring.
An id can be something like "crystalspace.engine.lighting.calculation".
Definition at line 88 of file cspmeter.h.
Set the tick scale.
Valid values are 1-100, inclusive. Default is 2. A value of 1 means that each printed tick represents one unit, thus a total of 100 ticks will be printed. A value of 2 means that each tick represents two units, thus a total of 50 ticks will be printed, etc.
Set the total element count represented by the meter and perform a reset.
Definition at line 104 of file cspmeter.h.
Increment the meter by n units (default 1) and print a tick mark.
The documentation for this class was generated from the following file:
- csutil/cspmeter.h
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4/classcsTextProgressMeter.html | CC-MAIN-2016-26 | refinedweb | 577 | 60.72 |
seb@frankengul.org a écrit :
On Wed, Jun 13, 2007 at 09:32:24AM -0400, John David Anglin wrote:I could remove this patch altogether. But I still wonder why this patch works ok on other archs and not on hppa*.hppa* does a deferred generation of plabels for indirect function calls. I think this is affected by the change. I don't recall exactly but there may have been a followup patch to fix this. Of course, this change isn't in the 4.1 GCC tree.Ok, thanks for the explanations. I just have to figure out what it means now ;-). Can I find this patch somewhere ? Is there any discussion on the ML ? The patch was discussed near January of this year on the gcc ML.Dave -- J. David Anglin dave.anglin@nrc-cnrc.gc.ca National Research Council of Canada (613) 990-0752 (FAX: 952-6602)
Ok, I nailed it. I removed PR20218 patch and generated a 4.1.2-12+b1 then build glibc. Glibc build is ok now.I think the hppa toolchain can move on and build the glibc-2.5-11 and then rebuild the portmap and all pie executable (pfeww... what a bug chain).
Seb
diff -r -u -b -B -w gcc-4.1-4.1.2-12/debian/changelog gcc-4.1-4.1.2-12+b1/debian/changelog --- gcc-4.1-4.1.2-12/debian/changelog 2007-06-13 22:34:38.000000000 +0200 +++ gcc-4.1-4.1.2-12+b1/debian/changelog 2007-06-13 12:44:48.000000000 +0200 @@ -1,3 +1,9 @@ +gcc-4.1 (4.1.2-12+b1) unstable; urgency=low + + * Revert 20218 patch that breaks gcc + + -- Sebastien Bernard <seb@frankengul.org> Wed, 13 Jun 2007 12:44:07 +0200 + gcc-4.1 (4.1.2-12) unstable; urgency=high * i386-biarch.dpatch: Update for the backport for PR target/31868. diff -r -u -b -B -w gcc-4.1-4.1.2-12/debian/rules.patch gcc-4.1-4.1.2-12+b1/debian/rules.patch --- gcc-4.1-4.1.2-12/debian/rules.patch 2007-06-13 22:34:38.000000000 +0200 +++ gcc-4.1-4.1.2-12+b1/debian/rules.patch 2007-06-13 12:45:06.000000000 +0200 @@ -41,8 +41,6 @@ fastjar-version \ fastjar-doc \ libstdc++-doxygen \ - pr20218 \ - pr20218-mips \ pr31868 \ arm-libffi \ libffi-backport \ @@ -112,10 +110,6 @@ debian_patches += pr25524-doc pr26885-doc gcc-4.1-x86-blended-doc libjava-backport-updates2 endif -ifneq (,$(filter $(DEB_TARGET_ARCH), amd64 i386 powerpc ppc64 sparc s390)) - debian_patches += pr20218 -endif - ifeq ($(with_libffi),yes) debian_patches += \ libffi-configure | https://lists.debian.org/debian-hppa/2007/06/msg00066.html | CC-MAIN-2017-13 | refinedweb | 438 | 63.46 |
Agenda
See also: IRC log
<scribe> Scribe: Peter Linss
<scribe> ScribeNick: plinss
Date: 24 Mar 2011
late arrivals from JT, HST
NM: we usually cancel calls during conferences
... regrets from TBL, anyone else not available?
... regrets from LM
... will check and cancel later
YL: can scribe next week or week after
<noah> Minutes of the 17th
NM: objections to approval of minutes from 17th?
<noah> RESOLUTION: Minutes of 17 March 2011 are approved
NM: additions to the agenda?
AM: comment on Dan's stuff, this works with client side storage. Probably worth speaking about when we talk about offline
NM: do we need to discuss preparation for IETF meeting
LM: I noticed there was a URN-BIS group, does anyone else think this is worth interest?
NM: will add as a another business item
... briefly consider action items relating to IETF meeting
... who best to report back form IETF meeting?
LM: we should work together
<noah> ACTION Larry and Henry to report on IETF Meeting Due 2011-04-07
<trackbot> Created ACTION-542 - And Henry to report on IETF Meeting Due 2011-04-07 [on Larry Masinter - due 2011-03-31].
<noah> ACTION-530?
<trackbot> ACTION-530 -- Henry S. Thompson to draft slides for IETF meeting, with help from Larry Due 2011-02-22 -- due 2011-03-15 -- OPEN
<trackbot>
NM: propose close action 530
<noah> close ACTION-530
<trackbot> ACTION-530 Draft slides for IETF meeting, with help from Larry Due 2011-02-22 closed
<noah> ACTION-519?
<trackbot> ACTION-519 -- Peter Linss to frame architectural opportunities relating to scalability of resource access Due: 2011-03-15 -- due 2011-03-15 -- OPEN
<trackbot>
NM: peter status of action 519
PL: been head down on CSS issues, haven't gotten to it
NM: keep this open
LM: don't think we need this for IETF
<noah> ACTION-519 Due 2011-04-05
<trackbot> ACTION-519 Frame architectural opportunities relating to scalability of resource access Due: 2011-03-15 due date now 2011-04-05
<noah> ACTION-519?
<trackbot> ACTION-519 -- Peter Linss to frame architectural opportunities relating to scalability of resource access -- due 2011-04-05 -- OPEN
<trackbot>
<noah> Ashok's draft:
<noah> Concentrate on sections 4,5 & 6
NM: Ashok you suggested we concentrate on sections 4, 5 & 6
... show of hands who has read this
<noah> I have read the pertinent sections, not necessarily the older ones
NM: Yves and I have read, other's have not
... take comments form me, then Yves
YL: section 4 there is a point about using ? or #
<noah> YL: I think in section 4 we need to put more emphasis on distinction of identifying a document vs. an application
<noah> Reminder of my blog posting on this:
AM: I had a question about what you were asking there, do you want more detail or were you asking for a fundamental difference in message
YL: I think it creates an architectural issue about follow your nose vs having to use javascript to understand the url
<Zakim> noah, you wanted to comment on new APIs
NM: I would like to see some more nuance in the analysis
... if you use ? you have to reload, so we have to use #
... that's true in old browsers, not in new browsers
... lets imagine a world in which apis let you take your choice
... as long as the domain doesn't change the page doesn't reload
... problems with # go away when you use ?
AM: the problem with that is that this state is not quite ready yet
... when the state changes, then we can do things differently
NM: for 6m people who downloaded FireFox 4, they got it
... there is the legacy problem
... but we are the architecture committee, not documenting current state
... given that this is the direction, can we compare pros and cons with both approaches
... with ? you can send to server, with # you cannot
... I think the TAG responsibility is to look to the future and tell that story
<Zakim> jar_, you wanted to warn that fragid FYN is a tar pit, do you really want to take it on?
AM: i can start thinking about that
JAR: I want to warn about 3rd bullet
<Yves> to respond to NM with ? you reload the page (or with a new URI you switch to a new URI), with # in many cases, you trigger retrieval of side data
<noah> JAR is talking about "A related fragment id meaning arises when one considers content-negotiation"
JAR: not sure thats' relevant
... we've been talking about that wrt 3023
<noah> JAR references RFC 3023:
JAR: there's the notion that frag id refers to an element
... my advice is to figure out how to avoid the subtext
AM: i could take out that bullet
<noah> Noah thinks this is yet another reason to avoid fragids...? does the right thing, more or less by definition, I think.
JAR: you refer to webarch or just remove it
NM: there are two ways we can read your comment
... go with encourage and #, but don't discuss conneg
... or use ? and conneg work as always
... we find places were architecture of frad is is fragile, conneg, ability send to server
JAR: there has to be a way to fudge this or you'll rat hole
... you want to be able to write this so RDF doesn't fail
<jar_> ... but you don't want to draw attention to RDF either...
<noah> My ultimate problem is that # is, strangely, architected pretty much to address into a representation, which is an odd thing to do into a URI. We just keep hitting ways in which that causes weakness. So, admitting that I'm belaboring this, ? seems stronger to me.
AM: you're pointing out a question we aren't going to be able to deal with in this document
YL: you pointed out when we change url in bar it triggers a reload, so we use #
NM: first class id for the we is URI without #
AM: you and i are looking at the world differently
<Zakim> noah, you wanted to respond to Ashok
AM; we aren't in a world of documents, but apps, state is document like
NM: we agree there
... what if we had ajax from day 1
<jar_> NM was saying the javascript # ids each feel like they're identifying a document...
NM: thing kid of look like documents, but they're just app states, so you can't email them around and such
... if the things you're navigating look like documents, lets make sure we can link to them
... its a huge loss if we can't do that
... we can't do this with tweet now with hackery with #!
AM: unless there are other comments, i'd like to speak about one of the other things form your blog
... because google apps uses ?, you can use it without javascript
... I did an experiment, I got a map, captured the link, turned script off and pasted the link
... what i got back was significantly different
... it works sometimes, but what you get without javascript is sometimes significantly less functional
NM: webarch tells a story about that
... take a URI and the server gets to decide what to do with it
... the server owns the resource
... they chose how to represent it
... we can warn people that will tend to be true
AM: that's worth speaking about in # vs ?
AM; you may get less function
NM: if we used #, you'd get nothing, because the # would not be sent to the server
... with ? theres a chance the representation you get will be useful
... tell that whole comparison
... you have enough input to think about redrafting?
AM: theres a fair amount to think about and type in
<JeniT> Noah, it's fine... hope to be there soon
<noah> ACTION-533?
<trackbot> ACTION-533 -- Noah Mendelsohn to schedule TAG discussion of #! (check with Yves) [self-assigne] -- due 2011-04-05 -- OPEN
<trackbot>
<noah> close ACTION-533
<trackbot> ACTION-533 Schedule TAG discussion of #! (check with Yves) [self-assigne] closed
<noah> ACTION-481?
<trackbot> ACTION-481 -- Ashok Malhotra to update client-side state document with help from Raman -- due 2011-03-01 -- OPEN
<trackbot>
<noah> ACTION-481 Due: 2011-04-12
<noah> ACTION-481?
<trackbot> ACTION-481 -- Ashok Malhotra to update client-side state document with help from Raman -- due 2011-03-01 -- OPEN
<trackbot>
<noah> ACTION-481 Due 2011-04-12
<trackbot> ACTION-481 Update client-side state document with help from Raman due date now 2011-04-12
<noah> ACTION-508?
<trackbot> ACTION-508 -- Larry Masinter to draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps -- due 2011-02-22 -- OPEN
<trackbot>
<noah> ACTION-538?
<trackbot> ACTION-538 -- Noah Mendelsohn to schedule discussion of Ashok's post F2F client state draft for 24 March [self-assigned] -- due 2011-03-22 -- PENDINGREVIEW
<trackbot>
<noah> close ACTION-538
<trackbot> ACTION-538 Schedule discussion of Ashok's post F2F client state draft for 24 March [self-assigned] closed
NM: we should defer offline apps until Dan is available
<noah> JAR: I think Larry wanted to know if we had input on that.
JAR: I wrote to larry about the sha1 urn namespace
... we have been talking about registries, and the tension between urn and http
LM: I just saw it on their agenda
JAR: been trying to think about something coherent to say about http and namespaces
<jar_> issue-50, jar + ht
LM: i'd appreciate if someone could look more in to background and report on it
JAR: i'll look at it
NM: particularly in the case where they refer to applicable specifications
<noah> My concern became:
NM: they have a proposed resolution
<noah> Proposed resolution of the issue is now available from HTML WG chairs:
NM: if you same something is a conformant html5 document, a UA should be able to conform with only the html5 spec
<noah>
<noah> ACTION-475?
<trackbot> ACTION-475 -- Ashok Malhotra to write finding on client-side storage, DanA to review -- due 2011-03-21 -- OPEN
<trackbot>
AM: lets get client side state stuff finished first
NM: will you have this for the f2f?
AM: yes
<noah> ACTION-475 Due 2011-05-24
<trackbot> ACTION-475 Write finding on client-side storage, DanA to review due date now 2011-05-24
<noah> ACTION-523?
<trackbot> ACTION-523 -- Ashok Malhotra to (with help from Noah) build good product page for client storage finding, identifying top questions to be answered on client side storage -- due 2011-03-01 -- OPEN
<trackbot>
<noah> ACTION-523 Due 2011-04-05
<trackbot> ACTION-523 (with help from Noah) build good product page for client storage finding, identifying top questions to be answered on client side storage due date now 2011-04-05
<noah> ACTION-515?
<trackbot> ACTION-515 -- Larry Masinter to (as trackbot proxy for John) who will publish, slightly cleaned up, with help from Noah and Larry -- due 2011-03-07 -- OPEN
<trackbot>
<noah> ACTION-515 Due 2011-04-12
<trackbot> ACTION-515 (as trackbot proxy for John) who will publish, slightly cleaned up, with help from Noah and Larry due date now 2011-04-12
<noah> LM: Note websec group meeting in Prague next week, might make some progress there. 3:30 local time 30 March 2011
<jar_> IETF websec
<noah> ACTION-508?
<trackbot> ACTION-508 -- Larry Masinter to draft proposed bug report regarding interpretation of fragid in HTML-based AJAX apps -- due 2011-02-22 -- OPEN
<trackbot>
<Larry>
<noah> NM: Bug report against media type registration?
<noah> ACTION-508?
<trackbot> ACTION-508 -- Larry Masinter to draft proposed bug report against HTML5 media type registration regarding interpretation of fragid in HTML-based AJAX apps -- due 2011-02-22 -- OPEN
<trackbot>
<noah> ACTION-481?
<trackbot> ACTION-481 -- Ashok Malhotra to update client-side state document with help from Raman -- due 2011-04-12 -- OPEN
<trackbot>
<noah> ACTION-508 Due 2011-04-19
<trackbot> ACTION-508 Draft proposed bug report against HTML5 media type registration regarding interpretation of fragid in HTML-based AJAX apps due date now 2011-04-19
NM: is there anything relating to client side state you want to re-open?
... summary of previous discussion
JT: I don't have anything to say about that
NM: can you verify that HTML5 pushstate allows to to change uri without reloading?
JT: yes
AM: yes
JT: both the address bar and the history can be manipulated
AM: there was a question that tim asked, are there any restrictions to that?
JT: yes
... you can't change domain or scheme without reload
<noah> ACTION-421?
<trackbot> ACTION-421 -- Henry S. Thompson to frame the discussion of EXI deployment at a future meeting -- due 2011-01-21 -- PENDINGREVIEW
<trackbot>
<noah> ACTION-509?
<trackbot> ACTION-509 -- Jonathan Rees to communicate with RDFa WG regarding documenting the fragid / media type issue -- due 2011-03-07 -- PENDINGREVIEW
<trackbot>
JAR: I sent email to www-tag
... in RDFa they're using frag id's with media type xhtml or html in a way that's not sacntioned by media type registrations
<jar_>
JAR: summarizes email
<Larry> maybe this is also my action item bug report
<noah> ACTION-508?
<trackbot> ACTION-508 -- Larry Masinter to draft proposed bug report against HTML5 media type registration regarding interpretation of fragid in HTML-based AJAX apps -- due 2011-04-19 -- OPEN
<trackbot>
<noah> LM: Related?
LM: we need to reconcile what RDFa is doing with frag ids
JAR: more serious issue is 3023 frag id semantics
<Larry> note that there are discussions at IETF application area about MIME and registrations etc.
NM: you said that 3023 and 2054 have not yet caught up
JAR: I haven't heard anyone object to idea of updating media spec to be consistent with RDFa
... 3020 says xpointer semantics applies, frag id have to be treated as errors if not defined with id=
... same thing we went through with RDFa+xml
... this makes the problem with rdf+xml infect all of xml
... not high on working groups priorities
... don't see it as part of their charter
NM: they're defining a URI space
JAR: the frag is practice they're promoting is not explicit, just used in examples
NM: the examples contravene important specs that no one is changing
... these are not good practice, why are they in the document?
JAR: not sure what to do about it
<Zakim> JeniT, you wanted to talk about cross-format fragment identifiers for RDF
JT: using uris with # is just very common across rdf
... when the uri is requested, you may get a different format back , but you want to keep the uri relevant
<noah> I'm starting to wonder whether we need to just change the claim that # is resolved relative to media type. That's a change that would nail a lot of things, including conneg, no?
JT: its a nasty thing about frag ids when you have multiple formats representing the same uri
... its not about a particular representation, its about identifying an abstract construct
JAR: I agree, that's what webarch says
NM: do we need to tell 3986 to change?
<Yves> noah, but it will raise lots of conflicts, if # was suddenly no longer bound to media type
NM: semantic web has deployed ignoring it, we need another story
<Yves> including needing a # registry
<Larry> there have been calls for URI-scheme relative fragment identifiers
<Larry> i don't understand how to make # *not* depend on media type
JAR: there are two different fixes, make 3023 more like webarch
LM: first step is to write down what the problem is
... my suggestion is look at some way of adding this issue, can we get a coherent story and examples
... i have an action to revise document
<noah> ACTION-472?
<trackbot> ACTION-472 -- Larry Masinter to update the mime-draft based on comments & review -- due 2011-04-09 -- OPEN
<trackbot>
LM: asks if someone else can take action
NM: summarizes action 472
JT: i'm willing to take a look
<noah> ACTION Jeni To propose addition to MIME/Web draft to discuss sem-web use of fragids not grounded in media type Due: 2011-04-05
<trackbot> Created ACTION-543 - Propose addition to MIME/Web draft to discuss sem-web use of fragids not grounded in media type Due: 2011-04-05 [on Jeni Tennison - due 2011-03-31].
<noah> ACTION-543 Due 2011-04-05
<trackbot> ACTION-543 Propose addition to MIME/Web draft to discuss sem-web use of fragids not grounded in media type Due: 2011-04-05 due date now 2011-04-05
<noah> ACTION-529?
<trackbot> ACTION-529 -- Noah Mendelsohn to schedule telcon discussion of a potential TAG product relating to offline applications and packaged Web -- due 2011-02-17 -- PENDINGREVIEW
<trackbot>
<noah> ACTION-534?
<trackbot> ACTION-534 -- Jonathan Rees to create issue page relating to Harry Halpin's concerns about 200/303 responses -- due 2011-03-03 -- PENDINGREVIEW
<noah> close ACTION-534
<trackbot> ACTION-534 Create issue page relating to Harry Halpin's concerns about 200/303 responses closed
NM: jeni are you available next week
JT: yes
NM: we will have a call next week
meeting adjourned
So today was a bit of a slog because I had to get some stuff back for a review I was writing for Staples. You can check out my latest review over at Staples Tech Hub.
After that I met up with some friends to discuss how I can help them with their website, online marketing and distribution. I’m only considering these guys now because I really like what they’re doing and the unique approach they bring to the table compared to other competing solutions on the market. It will be more of a part-time endeavour to feel each other out, but I have a good feeling about them.
By the time I wrapped it all up and tried to cobble something together, it was nearly 3:30 in the afternoon, and I drew a blank. I felt as if I had run out of gas for the day. I also had a hangout scheduled with a good friend of mine, Brandon Chu over at Shopify, and I was a bit overwhelmed knowing the mountain of work and learning I need to surmount to get ready for the Lighthouse Labs bootcamp I was admitted to.
I had fears and doubts about taking the more practical route of going back to marketing or product management. But in my heart of hearts, I know I want to do this: to dive into something and immerse myself fully. The other option would be not to pursue it at all, and then I’d be filled with regret, since an opportunity like this might not pop up again.
It was great to reaffirm and remember why I wanted to learn programming in the first place: to build useful things and create value in the world. And maybe add another feather to my cap.
I’ve come to accept that I probably won’t ever fit the mould of a specialist. As a TED Talk I recently watched put it, I’m more of a “multipotentialite”, a “polymath”, a renaissance man. I still like the term “T-Shaped Individual”, but that’s just personal taste 🙂
Soo.. what about the app?
Okay, so back to the app. I didn’t finish an app per se; it was more of a deeper exploration of UIPickerView.
I had the idea that I could use UIPickerView to create an app called “Seasons”. It would have four images, one for each season, and selecting a season in the picker would change the background to a picture of that season. That’s it. I wasn’t sure whether to use an image array or simply add the images to Xcassets and call them in a switch or if/else statement.
It turned out to be trickier than anticipated because I’m still learning how to use Apple’s documentation to figure out protocols properly.
In any case, I could only get as far as implementing UIPickerView with the colours white, red, green and blue.
import UIKit

class ViewController: UIViewController, UIPickerViewDataSource, UIPickerViewDelegate {

    @IBOutlet weak var pickerView: UIPickerView!

    var pickerDataSource = ["White", "Red", "Green", "Blue"]

    override func viewDidLoad() {
        super.viewDidLoad()
        self.pickerView.dataSource = self
        self.pickerView.delegate = self
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func numberOfComponentsInPickerView(pickerView: UIPickerView) -> Int {
        return 1
    }

    func pickerView(pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int {
        return pickerDataSource.count
    }

    func pickerView(pickerView: UIPickerView, titleForRow row: Int, forComponent component: Int) -> String? {
        return pickerDataSource[row]
    }

    func pickerView(pickerView: UIPickerView, didSelectRow row: Int, inComponent component: Int) {
        if row == 0 {
            self.view.backgroundColor = UIColor.whiteColor()
        } else if row == 1 {
            self.view.backgroundColor = UIColor.redColor()
        } else if row == 2 {
            self.view.backgroundColor = UIColor.greenColor()
        } else {
            self.view.backgroundColor = UIColor.blueColor()
        }
    }
}
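Swapping the colours for the four season pictures would only change the selection handler. A minimal sketch, assuming images named "winter", "spring", "summer" and "autumn" (hypothetical asset names) have been added to Xcassets, using the same Swift 2-era APIs as above:

```swift
// Hypothetical asset names; UIImage(named:) returns nil if one is missing
let seasonImages = ["winter", "spring", "summer", "autumn"]

func pickerView(pickerView: UIPickerView, didSelectRow row: Int, inComponent component: Int) {
    if let image = UIImage(named: seasonImages[row]) {
        // Paint the selected season's picture as the view background
        self.view.backgroundColor = UIColor(patternImage: image)
    }
}
```

The pickerDataSource array would likewise become ["Winter", "Spring", "Summer", "Autumn"] so the row titles match the images.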
I feel kinda tired going into Day 14. I knew it wasn’t going to be easy, but since it’s a long weekend, I think it’ll do me some good to head into the remaining 80% of this challenge with a fresh brain, a solid attitude and in good spirits.
I tried

    //My abs() method
    public static double abs(double d) {
        double s = d;
        if (d < 0.0)
            return -s;
        if (d >= 0.0)
            return s;
    }
But I keep getting a compile-time error:
Error: This method must return a result of type double
You can try this code:
public static double abs(double d) {
    double s = d;
    if (d < 0.0)
        return -s;
    else
        return s;
}
Another alternative for an absolute-value function is:
public static double abs(double a) {
    return (a <= 0.0) ? 0.0 - a : a;
}
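Both versions compile because every path now returns a value. One subtle difference worth knowing: as a quick sanity check (a standalone sketch, not from this thread; the class and method names are made up), the ternary form normalizes negative zero the way Math.abs does, while the if/else form passes -0.0 through unchanged:

```java
// Standalone demo of the two hand-rolled abs() variants (names are hypothetical)
public class AbsDemo {

    // if/else version: returns -0.0 unchanged, since -0.0 < 0.0 is false in Java
    static double absIfElse(double d) {
        if (d < 0.0)
            return -d;
        else
            return d;
    }

    // ternary version: 0.0 - (-0.0) yields +0.0, matching Math.abs
    static double absTernary(double a) {
        return (a <= 0.0) ? 0.0 - a : a;
    }

    public static void main(String[] args) {
        System.out.println(absIfElse(-3.5));   // 3.5
        System.out.println(absTernary(-3.5));  // 3.5
        System.out.println(absIfElse(-0.0));   // -0.0
        System.out.println(absTernary(-0.0));  // 0.0
    }
}
```

For most uses the difference is invisible, since -0.0 == 0.0 compares true; it only matters if you inspect the sign bit or print the value.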
Thanks a lot. It works.
2009 UCF Bowl Media Guide
All the stats, notes and team information the media needs to know for the St. Petersburg Bowl.
THE ST. PETERSBURG BOWL 20 0 9 2 0 09 U C F F O O TB A L L TB Official Gameweek Information UCF BROADCAST INFO Television ESPN Mark Jones (PxP), Bob Davie (color) Rob Stone (sidelines) UCF ISP Sports Radio Network Marc Daniels - Play-by-Play Gary Parris - Color Analyst Jerry O'Neill - Sideline Report Scott Adams - Reporter WYGM 740-AM (flagship) (Orlando) WMMV 1350-AM (Cocoa) WDCF 1350-AM (Dade City) WROD 1340-AM (Daytona Beach) WMMB 1240-AM (Melbourne) WLVJ 1040-AM/WKAT 1360-AM (Miami/Fort Lauderdale/South Palm Beach) WPSL 1590-AM (Treasure Coast/North Palm Beach) WZHR 1400-AM (Zephyrhills) Internet UCFAthletics.com RUTGERS 8-4 Overall 3-4 BIG EAST Head Coach - Greg Schiano GS Career Record - 54-55 (Ninth Year) GS Record at RU - 54-55 (Ninth Year) GS Record vs. UCF - 0-0 UCF 8-4 Overall 6-2 Conference USA Head Coach - George O'Leary GOL Career Record - 86-73 (13th Year) GOL Record at UCF - 34-40 (Sixth Year) GOL Record vs. RU - 0-0 Game #13 � Dec. 19, 2009 � 8:00 p.m. � ESPN Tropicana Field (28,000) � St. Petersburg, Fla. p BY THE NUMBERS 3 9 82.5 UCF is making its third bowl appearance and seeks its first win. A ninth win would match UCF's third-winningest season ever. UCF ranks fourth in the nation in rushing defense yielding 82.5 yards per game, trailing only Texas, Alabama and TCU. Miles from Orlando to St. Petersburg, the shortest bowl trip in America in `09. 106.49 - TABLE OF CONTENTS � 1 - Schedule, TOC and Broadcast Info. � 2 - UCF Quick Facts � 3 - Depth Chart � 4-5 - 2009 Roster, Media Info. � 6 - Bowl/Opponent Quick Facts � 7-8 - Team Notes � 9-10 - Bowl Notes � 10-11 - Offense Notes � 11-12 - Defense Notes � 12 - Special Teams Notes � 13 - Academics/NFL Notes � 14-15 Records Update � 16-17 The Last Time � 18-29 - 2009 Game Recaps � 18 - 9/5 vs. Samford � 19 - 9/12 at Southern Miss � 20 - 9/19 vs. Buffalo � 21 - 9/26 at East Carolina � 22 - 10/3 vs. Memphis � 23 - 10/17 vs. No. 9 Miami � 24 - 10/24 at Rice � 25 - 11/1 vs. Marshall � 26 - 11/7 at No. 
2 Texas � 27 - 11/14 vs. No. 13/12 Houston � 28 - 11/21 vs. Tulane � 29 - 11/28 at UAB � 30 - Team Statistics � 31-32 - Individual Statistics � 33 - Defensive Statistics � 34-35 - Game-By-Game Statistics � 36 - Superlatives � 37 - Bright House Networks Stadium � 38-51 - Player Biographies � 52-53 - Head Coach George O'Leary � 54-55 - Assistant Coaches � 56-57 - Other Postseason Games � 58-59 - 2005 Sheraton Hawaii Bowl � 60-61 - 2007 AutoZone Liberty Bowl � 62-64 - University of Central Florida D Day D t Date O Opponent t R lt/Ti Result/Time TV Series Sat. Sept. 5 Samford W, 28-24 BHSN UCF, 8-3 Sat. Sept. 12 at Southern Miss* L, 19-26 USM, 4-1 Sat. Sept. 19 Buffalo W, 23-17 BHSN UCF, 6-1 Sat. Sept. 26 at East Carolina* L, 14-19 BHSN ECU, 8-1 Sat. Oct. 3 Memphis* W, 32-14 BHSN UCF, 5-1 Sat. Oct. 17 #9 Miami L, 7-27 CBS CS UM, 2-0 Sat. Oct. 24 at Rice* W, 49-7 UCF, 2-1 Sun. Nov. 1 Marshall* W, 21-20 ESPN UCF, 5-3 Sat. Nov. 7 at #2 Texas L, 3-35 FSN UT, 2-0 Sat. Nov. 14 #13/12 Houston* W, 37-32 CBS CS UCF, 2-1 Sat. Nov. 21 Tulane* W, 49-0 BHSN UCF, 3-1 Sat. Nov. 28 at UAB* W, 34-27 UCF, 6-1 Sat. Dec. 19 vs. Rutgers# 8:00 p.m. ESPN First Meeting * Denotes Conference USA Games...# St. Petersburg Bowl; St. Petersburg, Fla. 2009 UCF SCHEDULE/RESULTS Last Meeting Notes Brett Hodges comes off the bench, completes 10-of-17 plus GW TD. Hodges goes 15-for-26 with a career-high 158 yards and two TDs. Knights score 16 unanswered second-half points to rally for the win. UCF has best passing day in two years but are undone by five TOs. Harvey amasses 219 rushing yards, and Hodges throws for two TDs. Knights' defense gets six sacks but offense held to 229 yds total offense. 42-point victory marks UCF's most lopsided road win ever. Down 20-7 with 8:00 to play, UCF rallies for a thrilling win on ESPN. Colt McCoy (470 yards) and Jordan Shipley (273 yards) fuel UT win. UCF gets first FBS ranked win behind 39:30 time of possession. 
UCF has a 504-50 total offense edge in biggest C-USA shutout ever. Harvey's 130 yards rushing leads UCF to sixth straight C-USA win. Knights will make third bowl game appearance, look for first win. 1 THE ST. PETERSBURG BOWL GENERAL INFORMATION School University of Central Florida City/Zip Orlando, Florida 32816 Founded 1963 Enrollment 53,537 Nickname Knights School Colors Black and Gold Stadium (Capacity): Bright House Networks Stadium (45,323) Surface 419 Bermuda Grass Affiliation NCAA Division I Conference Conference USA President Dr. John C. Hitt (Austin College, `62) Director of Athletics Keith R. Tribble (Florida, `77) Athletic Department Phone 407-823-3213 Ticket Office Phone 407-823-1000 COACHING STAFF Head Coach George O'Leary Alma Mater/Year New Hampshire, `69 Record at UCF 34-40 (.459) (Sixth Season) Career Record (yrs) 86-73 (.541) (13th Season) Football Office Phone 407-823-5397 Best Time to Call Contact SID Assistant Coaches (Alma Mater/Year) Defensive Backs Sean Beckton (UCF, `93) Linebackers/Recruiting Coor. Geoff Collins (Western Carolina, `94) Running Backs George Godsey (Georgia Tech, `00) Defensive Coordinator Dave Huxtable (E. Illinois, `79) Wide Receivers/Asst. Head Coach David Kelly (Furman, `79) Offensive Line Brent Key (Georgia Tech, `01) Defensive Line Jim Panagos (Maryland, `96) Tight Ends/Special Teams Tim Salem (Arizona St.,`85) Offensive Coordinator/QBs Charlie Taaffe (Siena, `73) HISTORY First Year of Football First Year of Division I-A Football Overall All-Time Record FBS Era Record (Since 1996) Overall All-Time BHNS Record Overall All-Time Citrus Bowl Record 1979 1996 177-170-1 (.510) 80-84 (.488) 14-6 (.700) 112-59-1 (.654) TEAM INFORMATION 2009 Overall Record 8-4 (.667) C-USA Record 6-2 (.750), 2nd East Div. 
Multiple Basic Offense Basic Defense Multiple, 4-3 Letterwinners Returning/Lost 44/12 Offensive Letterwinners Returning/Lost 25/5 Defensive Letterwinners Returning/Lost 16/6 Special Teams Letterwinners Returning/Lost 3/1 Starters Returning/Lost 20/9 Offensive Starters Returning/Lost 10/1 Defensive Starters Returning/Lost 6/5 Specialty Starters Returning/Lost 4/3 ATHLETICS COMMUNICATIONS Director - Football SID Office Phone Cell Phone E-mail Associate Director Office Phone Cell Phone E-mail Assistant Director - Football Asst. Office Phone Cell Phone E-mail Assistant Director Office Phone Cell Phone E-mail Assistant Director Office Phone Cell Phone E-mail Athletics Communications Fax BHNS Press Box Phone Websites Leigh Torbin 407-823-0994 407-325-5703 ltorbin@athletics.ucf.edu Doug Richards 407-823-2142 407-405-5823 drichards@athletics.ucf.edu Brian Ormiston 407-823-2409 407-920-1233 bormiston@athletics.ucf.edu Sarah Tarasewicz 407-823-6489 407-462-3214 sarah@athletics.ucf.edu Andrew Gavin 407-823-2464 407-405-5821 agavin@athletics.ucf.edu 407-823-5293 407-882-0386 UCFAthletics.com UCFBowl.com LOGOS PRIMARY FOOTBALL OTHERS COLLEGEPRESSBOX.COM CollegePressBox.com is the official media website for Conference USA football. Access and download weekly game notes, quotes, statistics, media guides and more for the conference and each of its 12 member schools throughout the season. Login information will be distributed to accredited media or you can apply for a password by sending an e-mail to password@collegepressbox.com. 2 UCF BOWL WEEK MEDIA CALENDAR SUNDAY MONDAY TUESDAY Arrival at Hilton St. Pete Bayfront Time TBA Mid-late afternoon Brief Comments Upon Arrival WEDNESDAY Practice 10:00-12:30 (First 25 minutes open to photos) Team available after practice THURSDAY Practice 10:00-12:30 (First 25 minutes open to photos) Coach O'Leary available after practice FRIDAY No Interviews * Coach O'Leary radio show from 5-6 p.m. 
at Ferg's Sports Bar & Grill on 740 The Game SATURDAY Game Day 8 p.m. Kickoff Post-Game Availability THE ST. PETERSBURG BOWL DEPTH CHART POS # WR 5 3 14 WR 81 6 9 LT 77 66 LG 65 74 C 73 69 RG 68 74 RT 76 75 TE 88 85 HB 43 32 24 QB 11 4 TB 34 27 35 OFFENSE PLAYER Rocky Ross A.J. Guyton Quincy McDuffie Kamar Aiken Brian Watters Jamar Newsome Nick Pieschel Abr� Leggins Cliff McCray Chad Hounshell Ian Bustillo Zac Norris Theo Goins Chad Hounshell Jah Reid Mike Buxton Adam Nissley Willie Gaetjens Ricky Kay Billy Giovanetti Brendan Kelly Brett Hodges Rob Calabrese Brynn Harvey Jonathan Davis Ronnie Weaver ELIG RSr. RSo. Fr. Jr. RJr. RJr. RSo. RJr. RSr. RFr. RSr. RSo. RFr. RFr. RJr. RJr. RSo. RSo. Jr. RFr. RFr. RSr. So. So. Fr. RFr. HT 6-2 5-11 5-10 6-2 6-2 6-2 6-7 6-4 6-2 6-4 6-2 6-3 6-4 6-4 6-7 6-8 6-6 6-5 6-3 5-11 6-3 6-1 6-2 6-1 5-9 6-0 WT 209 195 172 218 191 198 302 317 307 297 301 288 316 297 314 309 264 246 239 225 228 191 213 205 195 202 CAREER HIGHS 9 REC - 135 YDS - 2 TD - 55 LG 9 REC - 119 YDS - 1 TD - 76 LG 4 REC - 77 YDS - 1 TD - 27 LG 8 REC - 131 YDS - 2 TD - 72 LG 7 REC - 121 YDS - 2 TD - 62 LG 6 REC - 60 YDS - 1 TD - 54 LG 3 REC - 46 YDS - 0 TD - 34 LG 4 REC - 52 YDS - 1 TD - 19 LG 3 REC - 23 YDS - 1 TD - 1 RUSH - 2 YDS 2 REC - 23 YDS - 1 TD - 17 LG 23 COM - 45 ATT - 342 YDS - 2 TD - 76 LG 13 COM - 35 ATT - 167 YDS - 2 TD - 62 LG 42 RUSH - 219 YDS - 3 TD - 50 LG 22 RUSH - 76 YDS - 1 TD - 45 LG 25 RUSH - 123 YDS - 1 TD - 48 LG PLAYER NOTES An ESPN The Magazine Academic All-American Led UCF with six receptions vs. Tulane Had first four career catches vs. Houston Hauled in five catches with two TD vs. Tulane Posted four catches for 20 yards vs. Houston A pair of catches for 27 yards vs. Marshall Made 17-straight starts until missing the TLN game Has started at both guard and tackle this year Seven starts at RG and four at LG this season Made first start at LG at Texas The starting center in every game this year Attended Port St. 
Joe High School in Florida C-USA All-Freshman Offensive Team Made first start at LG at Texas All-C-USA Offensive First Team Saw time in 12 games during the 2008 season Pulled in one catch for 22 yards at UAB Saw time in 10 games a year ago Had one reception for 12 yards vs. Tulane Made first career touchdown catch at UAB Two catches for 23 yards and a TD against TLN All-C-USA Honorable Mention honors Was 10-for-19 with 76 yards in start at Texas All-C-USA Honorable Mention honors Rushed for 151 and 2 TDs vs. Tulane and UAB Had 58 yards and a TD on five rushes at Rice 48 53 LT 95 94 RT 98 46 RE 49 99 OLB 38 56 MLB 59 50 OLB 57 55 CB 20 19 CB 23 21 SS 2 40 FS 18 29 DEFENSE LE David Williams Darius Nall Travis Timmons Wes Tunuufi Sauvao Torrell Troup Rashidi Haughton Bruce Miller Jarvis Geathers Derrick Hallman Jordan Richards Cory Hogue Josh Linam Lawrence Young Alex Thompson Josh Robinson A.J. Bouye Justin Boddie Darin Baldwin Michael Greco Reggie Weams Kemal Ishmael Lyle Dankenbring RJr. RSo. Sr. RJr. Sr. RJr. RJr. Sr. Jr. RSr. RSr. So. Jr. RSr. Fr. Fr. Jr. Jr. RSr. Jr. Fr. RFr. 6-2 6-3 6-4 6-3 6-3 6-3 6-2 6-2 6-0 6-2 6-1 6-3 6-0 6-2 5-10 6-0 6-2 5-11 6-3 6-0 5-11 5-10 238 249 297 292 314 285 253 238 212 225 228 234 217 229 189 180 184 197 217 191 197 190 5 TACK - 4 SOLO - 4 ASST - 1 SACK Had three tackles and a sack against Miami 6 TACK - 5 SOLO - 4 ASST - 2 SACK - 1 FF Earned two sacks and a FF against Tulane 4 TACK - 3 SOLO - 3 ASST - 1 SACK - 1 FF - 1 INT Had a 10-yard fumble recovery vs. Tulane 2 TACK - 2 SOLO - 1 ASST - 1 SACK - 1 FF Posted two tackles, one for a loss, vs. 
Tulane 7 TACK - 4 SOLO - 6 ASST - 1 SACK - 1 FF All-C-USA Defensive Second Team 2 TACK - 1 SOLO - 1 ASST Posted one tackle against Tulane 10 TACK - 8 SOLO - 6 ASST - 3 SACK - 1 FF - 1 INT C-USA Defensive Player of the Year 7 TACK - 5 SOLO - 5 ASST - 3 SACK - 2 FF All-C-USA Defensive First Team 13 TACK - 8 SOLO - 10 ASST - 1 SACK - 1 FF - 1 INT All-C-USA Honorable Mention honors 9 TACK - 8 SOLO - 5 ASST - 1 SACK - 1 FF Nine tackles and one forced fumble at ECU 13 TACK - 10 SOLO - 8 ASST - 2 SACK - 2 FF - 1 INT All-C-USA Defensive First Team 3 TACK - 2 SOLO - 3 ASST Had three stops in win vs. Houston 11 TACK - 9 SOLO - 7 ASST - 1 SACK - 2 FF - 1 INT All-C-USA Honorable Mention honors 9 TACK - 3 SOLO - 7 ASST - 1 FF Amassed eight tackles and a break-up at Rice 11 TACK - 10 SOLO - 2 ASST - 1 INT All-C-USA Defensive Second Team 3 TACK - 2 SOLO - 1 ASST Had two tackles and a break-up vs. Tulane 10 TACK - 7 SOLO - 4 ASST - 1 SACK - 1 FF - 1 INT Tied career high with 10 tackles at UAB 8 TACK - 7 SOLO - 4 ASST - 1 SACK - 1 INT Had seven stops at UAB and two PBUs 8 TACK - 5 SOLO - 3 ASST - 1 FF Did not play against Tulane 7 TACK - 3 SOLO - 5 ASST - 1 INT Picked off a pass and had four tackles vs. TLN 11 TACK - 8 SOLO - 4 ASST Posted seven tackles against Tulane 1 TACK - 1 SOLO Notched a tackle against Tulane 3 SPECIALISTS PK P H LS PR KR 16 28 41 28 41 5 60 61 3 27 14 21 Nick Cattoi Jamie Boyle Blake Clingan Jamie Boyle Blake Clingan Rocky Ross Charley Hughlett James Getsee A.J. Guyton Jonathan Davis Quincy McDuffie Darin Baldwin So. Fr. Jr. Fr. Jr. RSr. So. RSo. RSo. Fr. Fr. Jr. 6-5 5-10 6-3 5-10 6-3 6-2 6-4 6-2 5-11 5-9 5-10 5-11 210 182 229 182 229 209 232 220 195 195 172 197 4 FG - 4 ATT - 50 LG - 8 KICKOFFS - 544 YDS 0 FG - 2 ATT - 4 KICKOFFS - 263 YDS 11 PUNTS - 463 YDS - 70 LG 3 PR - 42 YDS - 38 LG 2 PR - 41 YDS - 23 LG 6 KR - 126 YDS - 95 LG - 1 TD 3 KR - 67 YDS - 42 LG Tied school FG record, going 4-for-4 vs. MEM Perfect 3-for-3 on extra points vs. 
Tulane Booted a career-long 70-yarder at Texas Serving as the backup punter Also serves as the team's punter Named to the C-USA All-Academic Team All-C-USA Honorable Mention honors Was a tight end at Hialeah-Miami Lakes HS Two punt returns for 31 yards at UAB Took back two punts for 41 yards vs. Tulane Five returns for 122 yards against Houston Returned one kick back for 29 yards vs. TLN THE ST. PETERSBURG BOWL 2009 UCF NUMERICAL ROSTER No. Name 2 Michael Greco** 3 A.J. Guyton* 4 Rob Calabrese* 5 Rocky Ross*** 6 Brian Watters** 7 Brandon Davis* 9 Jamar Newsome* 10 Nico Flores 11 Brett Hodges 12 Marquee Williams* 12 Robertson Auguste 13 Dontravius Floyd 13 Brian Taaffe 14 Quincy McDuffie 14 L.D. Crow 15 Chad Alexander* 16 Nick Cattoi* 17 Joe Weatherford* 18 Kemal Ishmael 18 Andy Slowik 19 A.J. Bouye 19 David Bohner 20 Josh Robinson 21 Darin Baldwin** 22 Emery Allen*** 23 Justin Boddie** 24 Brendan Kelly 25 Jarrett Swaby 26 Jordan Becker 27 Jonathan Davis 28 Jamie Boyle 29 Lyle Dankenbring 30 Latavius Murray* 31 Austin Hudson 32 Billy Giovanetti 33 Nick Giovanetti 34 Brynn Harvey* 35 Ronnie Weaver* 37 Henry Wright 38 Derrick Hallman** 39 Omar Hansborough 40 Reggie Weams** 41 Blake Clingan** 41 Nick Black 42 Loren Robinson 43 Ricky Kay** 43 Michael Dominguez 44 Brandon Bryant 45 Javen Harris 46 Rashidi Haughton 47 T.J. Harnden* 48 David Williams** 49 Bruce Miller** 50 Josh Linam* 51 D.J. Williams 52 Jack Carter 53 Darius Nall* 54 Chance Henderson*** 55 Alex Thompson*** 56 Jordan Richards** 57 Lawrence Young** 58 Troy Davis 59 Cory Hogue*** 60 Charley Hughlett* 61 James Getsee 62 Rey Cunha 65 Cliff McCray* 66 Abr� Leggins 67 Jake Goray 68 Theo Goins 69 Zac Norris 73 Ian Bustillo** 73 Kevin Garvy 74 Chad Hounshell 75 Mike Buxton* 76 Jah Reid** 77 Nick Pieschel* 78 Scott Irwin 80 Jim Teknipp 81 Kamar Aiken** 82 Khymest Williams** 83 John Lubischer* 84 Corey Rabazinski*** 85 Willie Gaetjens* 86 D.J. Brown 87 J.T. 
McArthur 88 Adam Nissley* 90 Ash Weekley 91 Victor Gray 92 Jordan Rae 93 Frankie Davis 94 Wes Tunuufi Sauvao* 95 Travis Timmons*** 96 Chris Martin 97 Robert Pritchard 98 Torrell Troup*** 99 Jarvis Geathers* * Denotes letters earned Pos. S WR QB WR WR RB WR QB QB WR DB WR QB WR QB LB PK QB DB QB DB PK/P CB DB CB DB RB S DB RB PK/P DB RB WR RB DB RB RB DB LB DB DB P LB LB HB LB DL LB DE DT DE DE LB LB LB DE LB LB LB LB LB LB SN SN OL OL OL OL OL OL OL DE OL OL OL OL DE TE WR WR TE TE FB TE WR TE DE DL DE DL DT DT DL DE DT DE Ht. 6-3 5-11 6-2 6-2 6-2 5-9 6-2 6-2 6-1 6-0 5-11 6-2 6-4 5-10 6-3 6-2 6-5 6-4 5-11 6-0 6-0 6-0 5-10 5-11 5-9 6-2 6-3 6-1 6-4 5-9 5-10 5-10 6-3 6-1 5-11 5-9 6-1 6-0 6-1 6-0 6-2 6-0 6-3 5-11 6-3 6-3 5-11 6-4 5-11 6-3 6-3 6-2 6-2 6-3 6-1 6-2 6-3 6-1 6-2 6-2 6-0 6-2 6-1 6-4 6-2 6-3 6-2 6-4 6-5 6-4 6-3 6-2 6-3 6-4 6-8 6-7 6-7 6-2 6-6 6-2 5-10 6-4 6-3 6-5 6-4 5-10 6-6 6-4 6-4 6-2 6-2 6-3 6-4 6-5 6-4 6-3 6-2 Wt. 217 195 213 209 190 194 198 199 191 200 185 216 215 172 228 218 210 200 197 190 180 171 189 197 182 190 228 189 220 195 182 190 216 192 225 190 205 206 215 212 181 191 229 231 201 239 224 269 232 285 244 238 253 234 205 232 249 236 229 225 217 230 228 232 220 289 307 317 282 316 288 301 220 297 309 314 302 230 213 213 179 250 244 246 236 185 264 230 241 260 280 292 297 279 217 314 238 Year RSr. RSo. So. RSr. RJr. So. RJr. Fr. RSr. Fr. RFr. Fr. FRr. Fr. RSo. RSo. So. RSo. Fr. RSo. Fr. RSo. Fr. Jr. Sr. Jr. RFr. Fr. RSo. Fr. Fr. RFr. So. RJr. RFr. RSo. So. RSo. Fr. Jr. RSo. Jr. Jr. RFr. RFr. Jr. RJr. Fr. Fr. RJr. RSr. RJr. RJr. So. RFr. RSo. RSo. Sr. RSr. RSr. Jr. Fr. RSr. So. RSo. Fr. RSr. RJr. RSo. RFr. RSo. RSr. RFr. RFr. RJr. RJr. RSo. RSo. Fr. Jr. Jr. RJr. Sr. RSo. Fr. RSo. RSo. RFr. Fr. RFr. Fr. RJr. Sr. Fr. RFr. Sr. Sr. Hometown/High School (Previous School) Ft. Lauderdale, Fla./Cardinal Gibbons (Pearl River C.C.) 
Homestead, Fla./Homestead Islip Terrace, N.Y./East Islip Jacksonville, Fla./Bolles School Rome, Ga./Rome Suwanee, Ga./Peachtree Ridge St. Petersburg, Fla./Boca Ciega Miami, Fla./North Miami Beach Winter Springs, Fla./Winter Springs (Wake Forest) Ocala, Fla./Vanguard Miami, Fla./Archbishop Curley-Notre Dame Homestead, Fla./South Dade Clarksville, Md./St. Paul's (Fordham) Orlando, Fla./Edgewater Palm Harbor, Fla./Countryside (Stanford) Lake Wales, Fla./Lake Wales Tampa, Fla./Gaither Land O'Lakes, Fla./Land O'Lakes Miami, Fla./North Miami Beach Bradenton, Fla./Lakewood Ranch Tucker, Ga./Tucker Matthews, N.C./Sun Valley (Navy) Sunrise, Fla./Plantation Homestead, Fla./South Dade Milton, Fla./Milton Atlanta, Ga./North Atlanta Shoreham, N.Y./Shoreham-Wading River Belle Glade, Fla./Glades Day Clearwater, Fla./Central Catholic Lawrenceville, Ga./Tucker Central Valley, N.Y. / Monroe-Woodbury Stuart, Fla./Martin County Nedrow, N.Y./Onandaga Central Gainesville, Fla./Bucholz (Air Force) Winter Park, Fla./Bishop Moore Winter Park, Fla./Bishop Moore (Georgia Southern) Largo, Fla./Largo Wabasso, Fla./Vero Beach Orlando, Fla./Edgewater Fort Pierce, Fla./Fort Pierce Central Homestead, Fla./Homestead Baton Rouge, La./Redemptorist Coral Springs, Fla./Coral Springs Bemus Point, N.Y./Maple Grove Longwood, Fla./Lynam Deltona, Fla./DeLand Miami, Fla./North Miami Beach Atlanta, Ga./Booker T. Washington Eufaula, Ala./Eufaula Miami, Fla./Hialeah Tallahassee, Fla./Lincoln Lexington, S.C./White Knoll Canton, Ga./Woodstock Tavares, Fla./Tavares Ft. 
Lauderdale, Fla./Cardinal Gibbons Miami, Fla./Coral Reef Douglasville, Ga./Chapel Hill Conyers, Ga./Heritage Gainesville, Fla./Buchholz Cary, N.C./Cary/Hargrave Military Academy Pensacola, Fla./Woodham Lawrenceville, Ga./Berkmar Naples, Fla./Naples Tampa, Fla./Hillsborough Miami, Fla./Hialeah-Miami Lakes Miami, Fla./Jackson Miami, Fla./Southridge Orlando, Fla./Evans/Copiah-Lincoln JC Fort Walton Beach, Fla./Choctawhatchee Houston, Texas/Hightower Port St. Joe, Fla./Port St. Joe Miami, Fla./Killian Palm Beach, Fla./Cardinal Newman Mentor, Ohio/Lake Catholic Bonita Springs, Fla./Estero Haines City, Fla./Haines City Fort Lauderdale, Fla./St. Thomas Aquinas Lake Wales, Fla./Lake Wales Concord, Ohio/Riverside Miami, Fla./Chaminade-Madonna Lutcher, La./Lutcher Boca Raton, Fla./Boca Raton Community (Duke) Winter Park, Fla./Winter Park Titusville, Fla./Titusville Ocala, Fla./Dunnellon Laurens, S.C./Laurens Cumming, Ga./South Forsyth Jacksonville, Fla./Bolles School Orlando, Fla./Dr. Phillips Weston, Fla./Cypress Bay Clermont, Fla./East Ridge Leesville, La./Leesville Gainesville, Fla./Buchholz Fort Walton Beach, Fla./Choctawhatchee Suwanee, Ga./North Gwinett Conyers, Ga./Salem Andrews, S.C./Andrews (Feather River C.C.) MEDIA INFORMATION UCF ATHLETICS COMMUNICATIONS UCF Athletics Director of Communications Leigh Torbin will accompany the Knights throughout their time in St. Petersburg. Assistant Director of Athletics Communications Brian Ormiston will also be in attendance for most of the week. Contact information is below: Email: ltorbin@athletics.ucf.edu Cell: (407) 325-5703 Websites: UCFAthletics.com, UCFBowl.com Twitter: @UCF_FootballSID Facebook: UCF-Knights-Insider Bowl Website: StPetersburgBowl.com ST. PETERSBURG BOWL MEDIA RELATIONS John Gerdes can assist you with media concerns relative to the bowl week and game itself. He can be reached at either (813) 909-3152 or stpetebowl@yahoo.com. TRAVEL INFORMATION The Knights will bus to St. 
Petersburg on Tuesday afternoon and be headquartered at the Hilton St. Petersburg Bayfront Hotel at 333 First Street South. UCF will return to Orlando immediately following the game. MEDIA HOTEL The official media hotel is the Hotel Indigo at 234 Third Avenue North. There will be a media work room at the hotel. To reserve a room, go to: CREDENTIAL REQUESTS Credentials must be made to the St. Petersburg Bowl and NOT to UCF. They can be applied for online at: PRACTICE SITE UCF will hold its practices at Gibbs High School at 850 34th Street South. MEDIA OPPORTUNITIES (SUBJECT TO CHANGE) * Coach George O'Leary and team captains will be available for brief comments upon arrival at the Hilton on Dec. 15. * On Dec. 16, the beginning of UCF's practice will be open for photography with coaches and student-athletes available after the session. * On Dec. 17, the beginning of UCF's practice will be open for photography with head coach George O'Leary available after the session. * On Dec. 18, the Knights will have no formal media availabilities; however, coach O'Leary will host a special edition of his radio call-in show on Friday at 5 p.m. at Ferg's Sports Bar & Grill. * Coach O'Leary and select student-athletes will be available for social comments only at bowlrelated events that are open to the media such as the Party on the Pier, Beach Bash and All Children's Hospital Visit. * On Dec. 19, UCF coaches and requested student-athletes will be available following the St. Petersburg Bowl. CREDITS The 2009 UCF St. Petersburg Bowl Guide is a publication of UCF Athletics Communications. Written by Leigh Torbin, Brian Ormiston, Joe Hornstein and Christian Edwards with inside layout and design by Ormiston and Torbin. Covers designed by Sarah Tarasewicz. Special thanks to Doug Richards, Stephanie Hayes and Andrea Ciotti. Photography by Sideline Sports, Collegiate Images, UCFSports.com and Visit St. Petersburg/ Clearwater. Printing by DME of Daytona Beach. 4 THE ST. 
PETERSBURG BOWL 2009 UCF ALPHABETICAL ROSTER No. Name 81 Kamar Aiken** 15 Chad Alexander* 22 Emery Allen*** 12 Robertson Auguste 21 Darin Baldwin** 26 Jordan Becker 41 Nick Black 23 Justin Boddie** 19 David Bohner 19 A.J. Bouye 28 Jamie Boyle 86 D.J. Brown 44 Brandon Bryant 73 Ian Bustillo** 75 Mike Buxton* 4 Rob Calabrese* 52 Jack Carter 16 Nick Cattoi* 41 Blake Clingan** 14 L.D. Crow 62 Rey Cunha 29 Lyle Dankenbring 7 Brandon Davis* 93 Frankie Davis 27 Jonathan Davis 58 Troy Davis 43 Michael Dominguez 10 Nico Flores 13 Dontravius Floyd 85 Willie Gaetjens* 73 Kevin Garvy 99 Jarvis Geathers* 61 James Getsee 32 Billy Giovanetti 33 Nick Giovanetti 68 Theo Goins 67 Jake Goray 91 Victor Gray 2 Michael Greco** 3 A.J. Guyton* 38 Derrick Hallman** 39 Omar Hansborough 47 T.J. Harnden* 45 Javen Harris 34 Brynn Harvey* 46 Rashidi Haughton 54 Chance Henderson*** 11 Brett Hodges 59 Cory Hogue*** 74 Chad Hounshell 31 Austin Hudson 60 Charley Hughlett* 78 Scott Irwin 18 Kemal Ishmael 43 Ricky Kay** 24 Brendan Kelly 66 Abr� Leggins 50 Josh Linam* 83 John Lubischer* 96 Chris Martin 87 J.T. McArthur 65 Cliff McCray* 14 Quincy McDuffie 49 Bruce Miller** 30 Latavius Murray* 53 Darius Nall* 9 Jamar Newsome* 88 Adam Nissley* 69 Zac Norris 77 Nick Pieschel* 97 Robert Pritchard 84 Corey Rabazinski*** 92 Jordan Rae 76 Jah Reid** 56 Jordan Richards** 20 Josh Robinson 42 Loren Robinson 5 Rocky Ross*** 18 Andy Slowik 25 Jarrett Swaby 13 Brian Taaffe 80 Jim Teknipp 55 Alex Thompson*** 95 Travis Timmons*** 98 Torrell Troup*** 94 Wes Tunuufi Sauvao* 6 Brian Watters** 40 Reggie Weams** 17 Joe Weatherford* 35 Ronnie Weaver* 90 Ash Weekley 51 D.J. Williams 48 David Williams** 82 Khymest Williams** 12 Marquee Williams* 37 Henry Wright 57 Lawrence Young** * Denotes letters earned Pos. 
ROSTER
[Roster table: Pos., Ht., Wt., Year and Hometown/High School (Previous School) columns for each player; the individual rows were garbled in extraction and cannot be reliably reconstructed.]

COACHING STAFF
Head Coach: George O'Leary (New Hampshire, 1969)
Defensive Backs: Sean Beckton (UCF, 1993)*
Linebackers/Recruiting Coordinator: Geoff Collins (Western Carolina, 1994)*
Running Backs: George Godsey (Georgia Tech, 2001)
Defensive Coordinator: Dave Huxtable (Eastern Illinois, 1979)
Wide Receivers/Assistant Head Coach: David Kelly (Furman, 1979)
Offensive Line: Brent Key (Georgia Tech, 2001)
Defensive Line: Jim Panagos (Maryland, 1993)
Tight Ends/Special Teams Coordinator: Tim Salem (Arizona State, 1985)*
Offensive Coordinator/Quarterbacks: Charlie Taaffe (Siena, 1973)*
Assistant AD/Director of Football Operations: Marty O'Leary (Georgia Tech, 2002)
Director of Strength & Conditioning: Ed Ellis (Alabama, 1987)
Head Athletic Trainer: Mary Vander Heiden (UW-Eau Claire, 1998)
Team Physicians: Michael Jablonski, MD; Kenneth Krumins, MD; Douglas Meuser, MD; Daniel Monette, MD
Director of Video Services: John Kvatek (Miami (Ohio), 1995)
Equipment Coordinator: Robert Jones (Carson Newman, 1984)
Director of Player Personnel: Albert Boone (Florida State, 2003)
Graduate Assistants: Michael Buscemi (UCF, 2008); Andrew Thacker (Furman, 2008); Mark Cammack (UCF, 2007)
* In Press Box on Gamedays

PRONUNCIATIONS
Justin Boddie .................. body
David Bohner ................... BAW-ner
A.J. Bouye ..................... boy-YAY
Ian Bustillo ................... bus-TEE-yo
Rob Calabrese .................. cal-ah-BREES
Nick Cattoi .................... cah-TOY
Rey Cunha ...................... coon-yah
Willie Gaetjens ................ gate-gins
James Getsee ................... gets-ee
Brynn Harvey ................... brinn
Rashidi Haughton ............... rah-SHEE-dee HAW-tin
Abré Leggins ................... AWE-bray
Josh Linam ..................... LINE-um
John Lubischer ................. LOO-bish-err
Nick Pieschel .................. pih-shell (like Michelle)
Corey Rabazinski ............... rah-bah-ZIN-skee
Andy Slowik .................... slow-wick
Brian/Charlie Taaffe ........... taff (rhymes with staff)
Torrell Troup .................. tore-RELL
Wes Tunuufi Sauvao ............. tuh-NOO-fee sow-voh
Khymest Williams ............... kim-mist

THE ST. PETERSBURG BOWL

TROPICANA FIELD FACTS
Primary Tenant: Tampa Bay Rays (1998-current)
Former Tenants: Tampa Bay Lightning (1993-96), Tampa Bay Storm (1991-96)
Former Names: Florida Suncoast Dome (1990-93), ThunderDome (1993-96)
Opened: 1990
Cost: $130 million
Owner: City of St. Petersburg
Playing Surface: FieldTurf
Capacity: 28,000 for St. Petersburg Bowl
Notable Events: 2008 World Series (vs. Philadelphia); 2008 ALCS (vs. Boston); 1999 Final Four (UConn def. Duke 77-74 in final); 1996 Stanley Cup Playoffs first round (vs. Philadelphia)

RUTGERS FACTS
Location: New Brunswick, N.J.
Founded: 1766
Enrollment: 52,471
Conference: BIG EAST
Color: Scarlet
Nickname: Scarlet Knights
Home Field (Capacity): Rutgers Stadium (52,454)
Surface: FieldTurf
President: Dr. Richard L. McCormick
Athletics Director: Tim Pernetti
Head Coach: Greg Schiano (Bucknell, 1988)
Record at Rutgers: 54-55 (Ninth Year)
Career Record: Same
Record vs. UCF: 0-0
Record vs. C-USA: 0-0
Record in Bowl Games: 3-1
All-Time Record: 604-594-42
2009 Record: 8-4, 3-4 BIG EAST
Bowl Game Record: 3-2
Last Appearance: 2008 PapaJohns.com Bowl
Result: W, 29-23 vs. NC State
In St. Petersburg Bowl: 0-0
SID: Jason Baum
SID Phone: (732) 445-4200
SID Cell: (201) 966-6338
SID Email: jbaum@scarletknights.com
Website: ScarletKnights.com
Leading Rusher: Joe Martinek (192-923-9)
Leading Passer: Tom Savage (135-258-6-12 TD-1,917 YDS)
Leading Receiver: Tim Brown (51-1,051-8)
Leading Tackler: Devin McCourty (48-30-78, 7.5 TFL)

ST. PETERSBURG BOWL FACTS
Game Operator: ESPN Regional
Executive Director: Brett Dulaney
Asst. Executive Director: Nikki Godfrey
Manager, Sales & Marketing: Carlos Padilla
Media Relations: John Gerdes
Gerdes Phone: (813) 909-3152
Gerdes Email: StPeteBowl@yahoo.com
Website: StPetersburgBowl.com

GAME HISTORY
DATE       SCORE                 MVPs
12/20/08   USF 41, Memphis 14    USF: QB Matt Grothe; Mem: WR Duke Calhoun

CB Devin McCourty

TEAM NOTES

RESILIENT KNIGHTS SHOW THEIR HEART
No mountain has seemed too steep for the Knights in 2009, as the team is 8-4 despite leading at the half only three times in those 12 games. Even both of UCF's C-USA losses nearly saw the Knights rally to win, as both games dramatically came down to onside-kick situations on the road. The Knights have outscored opponents 196-115 (+81) after intermission this year, including a 105-47 (+58) edge in the third quarter of games.

DON'T CALL IT A COMEBACK
The Knights have overcome double-digit deficits three times this year. UCF trailed defending MAC Champion Buffalo 17-7 at the half on Sept.
19 but scored 16 unanswered second-half points to win the game 23-17. It marked the first time the Knights had rallied back from a double-digit deficit to win since Nov. 19, 2005. In that game, UCF trailed Rice by 12, 21-9, at the half in Houston before rallying to win 31-28. On Nov. 1, UCF also trailed Marshall 17-7 at halftime (and trailed by 13 points in the fourth quarter) before rallying to win. UCF was down 14 points to No. 13 Houston (17-3) on Nov. 14 before rallying for what tied for the second-biggest come-from-behind win in school history. UCF is 3-0 this year when leading at the half, 1-0 when tied at the half, and a remarkable 4-4 when trailing at halftime.

UCF DEFICITS OVERCOME
GAME       DEFICIT           RESULT
Buffalo    7-17, late 3rd    W, 23-17
Memphis    3-7, halftime     W, 32-14
Marshall   7-20, 8:00 4th    W, 21-20
Houston    3-17, late 2nd    W, 37-32

0-2 TO 6-2
This season UCF became just the second team ever to start a Conference USA slate at 0-2 and then win each of its next six league games to finish 6-2. This rare feat had previously been accomplished only by the 2006 Rice Owls. The Knights dropped seven-point games on the road at Southern Miss and East Carolina in September before the team's offense in particular had a chance to gel. UCF bounced back a week after the ECU loss to down Memphis, 32-14, at Bright House Networks Stadium and did not look back in league play, including a victory over West Division Champion Houston, then ranked No. 13/12 in the nation, and a win over bowl-bound Marshall.

ALL WE NEED IS 20
With the strength of UCF's defense over the past few years, its offense has not needed much to win games. In fact, 20 points has been a sort of magical dividing line. Over the past 33 games, UCF is 18-1 when scoring 20 points or more and 1-13 when scoring fewer than 20. This year the Knights are 8-0 when scoring at least 20 and 0-4 when held under that number.

WELL DISCIPLINED
UCF is second in the nation, trailing only Navy, with 3.92 penalties per game.
UCF also ranks fifth nationally for allowing its foes just 37.00 penalty yards per game.

UCF GREATER THAN THE SUM OF ITS PARTS
UCF finished second in a competitive C-USA East Division which sent four teams to bowl games. The Knights won six league games this year, but UCF did so with such a team effort that it produced record-low weekly individual accolades. UCF had just two C-USA Player of the Week awards this year, both on defense by Bruce Miller. That total ties the 2006 Southern Miss Golden Eagles for the fewest C-USA Player of the Week honorees ever by a team with six C-USA wins. Thanks in large part to the dominance of Houston quarterback Case Keenum, a Heisman Trophy candidate for much of the season, UCF and East Carolina this year became just the second and third six-win C-USA teams ever to win that many league games without an Offensive Player of the Week to their credit, joining the 1999 Southern Miss squad. UCF is just the sixth six-win team out of 28 all-time not to have a Special Teams Player of the Week, all of which is a testament to the overall team effort that allowed UCF to go 8-4 this year and 6-2 in C-USA play.

THE OPENING DRIVE...
- UCF leads C-USA in total, rushing and scoring defense, plus sacks and TFLs.
- The Knights tied for the fifth-largest improvement in the country from 2008 to the 2009 regular season.
- UCF won five of its last six games to close out the 2009 regular season and each of its last six conference tilts.
- The Knights are going for their ninth win of the season yet have led at the half just three times this year.
- UCF is looking for its first-ever win over a BIG EAST foe and is facing Rutgers for the first time.
- The Knights averaged 381.6 yards of total offense over the final eight games of the season.
- UCF's Bruce Miller was named the Conference USA Defensive Player of the Year and is fifth in the nation in sacks.
- St.
Petersburg is one of just three bowls to match two teams who both have an APR of at least 960. Helping that is UCF's ESPN The Magazine Academic All-American Rocky Ross.

UCF RESULTS (8-4, 6-2)
DATE       OPPONENT                   SCORE
Sept. 5    SAMFORD                    W, 28-21
Sept. 12   at Southern Miss*          L, 19-26
Sept. 19   BUFFALO                    W, 23-17
Sept. 26   at East Carolina*          L, 14-19
Oct. 3     MEMPHIS*                   W, 32-14
Oct. 17    No. 9 MIAMI                L, 7-27
Oct. 24    at Rice*                   W, 49-7
Nov. 1     MARSHALL*                  W, 21-20
Nov. 7     at No. 2 Texas             L, 3-35
Nov. 14    No. 13/12 HOUSTON*         W, 37-32
Nov. 21    TULANE*                    W, 49-0
Nov. 28    at UAB*                    W, 34-27
Dec. 19    vs. Rutgers#               8 p.m.
* Denotes Conference USA games
# St. Petersburg Bowl

KNIGHTS HOPE FOR NINE IN 2009...
With a bowl game win, 2009 would tie for the third-winningest season in school history with nine victories.
10 wins: 2007, 1990
9 wins: 1998, 1993, 1987
8 wins: 2009, 2005

AMONGST AMERICA'S MOST IMPROVED
UCF is four games better than it was a year ago, recording an 8-4 record thus far after going 4-8 in 2008. This ties for the fifth-biggest improvement in the nation.

BIGGEST IMPROVEMENTS FROM 2008-09
SCHOOL          2008    2009   MARGIN
SMU             1-11    7-5    +6
Idaho           2-10    7-5    +5
Ohio            4-8     9-4    +5
Washington      0-12    5-7    +5
UCF             4-8     8-4    +4
Iowa State      2-10    6-6    +4
Middle Tenn.    5-7     9-3    +4
Temple          5-7     9-3    +4

RUTGERS RESULTS (8-4, 3-4)
DATE       OPPONENT                    SCORE
Sept. 7    CINCINNATI*                 L, 15-47
Sept. 12   HOWARD                      W, 45-7
Sept. 19   FIU                         W, 23-15
Sept. 26   at Maryland                 W, 34-13
Oct. 10    TEXAS SOUTHERN              W, 42-0
Oct. 16    PITTSBURGH*                 L, 17-24
Oct. 23    at Army                     W, 27-10
Oct. 31    at Connecticut*             W, 28-24
Nov. 12    No. 23/24 USF*              W, 31-0
Nov. 21    at Syracuse*                L, 13-31
Nov. 27    at Louisville*              W, 34-14
Dec. 5     No. 24/23 WEST VIRGINIA*    L, 21-24
Dec. 19    vs. UCF#                    8 p.m.
* Denotes BIG EAST games
# St. Petersburg Bowl

CONFERENCE USA STANDINGS
East Division
TEAM            C-USA W-L   OVERALL W-L
East Carolina   7-1         9-4
UCF             6-2         8-4
Southern Miss   5-3         7-5
Marshall        4-4         6-6
UAB             4-4         5-7
Memphis         1-7         2-10

West Division
TEAM            C-USA W-L   OVERALL W-L
Houston         6-2         10-3
SMU             6-2         7-5
UTEP            3-5         4-8
Tulsa           3-5         5-7
Rice            2-6         2-10
Tulane          1-7         3-9

TEAM NOTES, CONT.

KNIGHTS STRIKE GOLD IN 49-0 WIN
UCF's dominating 49-0 win over Tulane on Senior Day, Nov. 21, was the largest shutout win ever in a Conference USA league game. The previous record was a 33-0 win by Southern Miss over Houston on Nov. 15, 1997. It also represents the third-most lopsided win this year in the nation over a team from the Football Bowl Subdivision, and the second-largest shutout win this year in an FBS conference game. UCF outgained Tulane by a margin of 504 yards to 50 in the game.

GAME                            CONF.   DATE
Nebraska 55, La.-Lafayette 0    NC      9/26/09
BYU 52, Wyoming 0               MWC     11/7/09
UCF 49, Tulane 0                C-USA   11/21/09
Boise St. 48, Miami (Ohio) 0    NC      9/12/09
Ohio St. 45, New Mex. St. 0     NC      10/31/09

The Knights are 14-6 at BHNS all-time and 10-3 in C-USA play, playing to an average of 40,612 fans per game over that three-year span.

BESTING THE WEST
UCF is a perfect 6-0 against teams from C-USA's West Division at Bright House Networks Stadium. The Knights defeated Tulsa twice at home in 2007, including the Conference USA Championship Game. The Knights also beat UTEP at their new home in 2007 and SMU in 2008. UCF has defeated Houston and Tulane this year at Bright House Networks Stadium. The sixth C-USA West team, Rice, will make its Bright House Networks Stadium debut next fall.

UCF SURE IS "BIG" TIME
Did you know that UCF is America's third-largest university? Official numbers from the U.S. Department of Education are below.

LARGEST CAMPUS ENROLLMENTS (FALL, 2009)
SCHOOL ................................. STUDENTS
Arizona State ............................ 55,552
Ohio State ................................
55,014
UCF ...................................... 53,537
Minnesota ................................ 51,659
Texas .................................... 51,032

LARGEST UNDERGRADUATE ENROLLMENTS (FALL, 2008)
SCHOOL ................................. STUDENTS
UCF ...................................... 42,933
Texas .................................... 39,000
Arizona State ............................ 38,627
Ohio State ............................... 38,479
Texas A&M ................................ 38,430

One connection between the four largest schools in America is UCF assistant coach Tim Salem, who has spent time at all of them. Salem enrolled at Minnesota, transferred to Arizona State and has coached at both Ohio State and UCF.

C-USA BOWL GAME SCHEDULE

ST. PETERSBURG BOWL
UCF (8-4) vs. Rutgers (8-4)
Saturday, Dec. 19, 8 p.m., ESPN
Tropicana Field; St. Petersburg, Fla.

R+L CARRIERS NEW ORLEANS BOWL
Southern Miss (7-5) vs. Middle Tennessee (9-3)
Sunday, Dec. 20, 8:30 p.m., ESPN
Louisiana Superdome; New Orleans, La.

SHERATON HAWAII BOWL
SMU (7-5) vs. Nevada (8-4)
Thursday, Dec. 24, 8 p.m., ESPN
Aloha Stadium; Honolulu, Hawaii

LITTLE CAESAR'S PIZZA BOWL
Marshall (6-6) vs. Ohio (9-4)
Saturday, Dec. 26, 1 p.m., ESPN
Ford Field; Detroit, Mich.

BELL HELICOPTER ARMED FORCES BOWL
Houston (10-3) vs. Air Force (7-5)
Thursday, Dec. 31, 12 p.m., ESPN
Amon G. Carter Stadium; Fort Worth, Texas

AUTOZONE LIBERTY BOWL
East Carolina (9-4) vs. Arkansas (7-5)
Saturday, Jan. 2, 5:30 p.m., ESPN
Liberty Bowl Memorial Stadium; Memphis, Tenn.

NOVEMBER REIGN
There's an old adage that the best teams get better as the year goes along, and UCF has been no exception under George O'Leary. The Knights are 15-6 overall in November since joining Conference USA, including a 4-1 mark in 2009 with the only loss coming on the road at No. 2 Texas. November is UCF's winningest month in the C-USA era, eclipsing the school's 10 October wins.
NO LETDOWNS
UCF went 3-0 this year in games following a game against a ranked opponent. UCF downed Rice on the road a week after facing No. 9 Miami, downed No. 13 Houston a week after facing No. 2 Texas and then defeated Tulane after the game with the Cougars.

KNIGHTS SHINE BRIGHT AT BRIGHT HOUSE
UCF completed its 2009 home schedule going 6-1, including a perfect 4-0 mark vs. C-USA opponents at Bright House Networks Stadium. The Knights went undefeated at home in league play for the second time in three years (also 2007).

IN THEIR OWN WORDS...

Senior Captain Cory Hogue, LB, on overcoming the team's slow start: "I think a loss can only teach you. You can only grow and gain strength from a loss. I think a lot of guys have matured from those losses and learned from them. I think the second half of the year was great for us. We finished 8-4, and now we are going to try to make it 9-4 against Rutgers."

Senior Captain Rocky Ross, WR, on playing in St. Petersburg: "Hopefully we can get a lot of UCF fans there. It is pretty much in the middle of Florida. It should be easy travel and good weather so UCF fans have no excuse not to go to this game."

The last time UCF faced a BIG EAST team, Sept. 5, 2008, Rocky Ross' touchdown in the final minute sent UCF to overtime against No. 17 USF, but the Bulls would prevail, 31-24, before a packed house at Bright House Networks Stadium and an ESPN2 national television audience.

BOWL GAMES AND SERIES NOTES

THE THIRD FRAME
UCF is playing in a bowl game for the third time. In 2005, on the heels of a C-USA East Division title, the 8-4 Knights headed west to the Sheraton Hawaii Bowl, where Nevada beat UCF in a 49-48 thriller. In 2007, after winning the Conference USA Championship, a 10-3 UCF team traveled to Memphis for the AutoZone Liberty Bowl but dropped a 10-3 decision to Mississippi State.
IT'S JUST DOWN I-4
UCF will be playing a bowl game in its home state for the first time ever after making its bowl debut almost 5,000 miles away in Hawaii in 2005 and competing in Memphis in 2007. UCF will have the shortest distance to travel of any 2009 bowl team. Using Mapquest.com and measuring from the city center, it is 106.49 miles from Orlando to St. Petersburg, narrowly edging out Southern Miss' trip from Hattiesburg to the R+L Carriers New Orleans Bowl (110.57 miles). UCF is one of six teams nationally which will travel less than 200 miles for a bowl game.

2009 BOWL TRIPS UNDER 200 MILES
SCHOOL           SITE                   MILES
UCF              St. Petersburg, Fla.   106.49
Southern Miss    New Orleans, La.       110.57
Temple           Washington, D.C.       141.23
North Carolina   Charlotte, N.C.        141.61
Florida State    Jacksonville, Fla.     163.81
Troy             Mobile, Ala.           172.50

NEW JERSEY IS NEW
UCF is facing Rutgers for the first time. It is also UCF's first-ever game against a school from the state of New Jersey. All-time, UCF is 63-64-1 in 128 previous first meetings. The Knights last faced a new opponent on Oct. 11, 2008, when the team lost at Miami.

CONNECTIONS
Several Knights will be returning to their home area by Tampa Bay for this game, including Jordan Becker (Clearwater), Nick Cattoi (Tampa), Brynn Harvey (Largo), Charley Hughlett (Tampa), Andy Slowik (Bradenton) and Joe Weatherford (Land O'Lakes). UCF has no players from New Jersey but does have a pair of Long Islanders in Islip Terrace's Rob Calabrese and Shoreham's Brendan Kelly. The Knights also feature Jamie Boyle of Central Valley in New York State's Orange County, near the New Jersey state line. Rutgers' roster features numerous players from Florida, including two from Central Florida in Winter Haven's D.C. Jefferson and Cocoa's David Rowe. UCF's Cliff McCray and Rutgers' Damaso Munoz were classmates at Miami's Southridge HS. The UCF coaching staff also has some New York City area natives.
George O'Leary grew up in both New York City's Lower East Side and Central Islip on Long Island. He coached at Central Islip HS and Liverpool HS before beginning his collegiate coaching career at Syracuse. UCF defensive line coach Jim Panagos is a Brooklyn native who attended East Islip HS. UCF offensive coordinator Charlie Taaffe is an Albany native who graduated from Siena and later coached at both Albany and Army.

BIG EAST HAS NOT BEEN BIG EASY
UCF has never defeated a BIG EAST Conference foe. UCF's last game against a BIG EAST team was one of its most exciting, though, as the Knights, before a capacity crowd at Bright House Networks Stadium, scored in the final minute to force overtime but eventually fell, 31-24, to No. 17 USF. With the addition of Rutgers to the list, UCF will have faced six of the eight current members of the BIG EAST Conference in its brief football history. The Knights have yet to face either Cincinnati or Connecticut. In its history, UCF has faced nine of the 12 all-time members of the BIG EAST football conference, also playing Boston College, Miami and Virginia Tech in the past but never Temple.

UCF VS. RUTGERS STATISTICAL COMPARISON
Rushing Yards - UCF: Brynn Harvey 1,077; Jonathan Davis 309; Ronnie Weaver 86. RU: Joe Martinek 923; Mohamed Sanu 305; Jourdan Brooks 278.
Passing Yards - UCF: Brett Hodges 2,263; Rob Calabrese 215. RU: Tom Savage 1,917; Domenic Natale 213.
Receptions - UCF: A.J. Guyton 42; Rocky Ross 37; Kamar Aiken 32. RU: Tim Brown 51; Mohamed Sanu 47; Shamar Graves 13.
Points Scored - UCF: Brynn Harvey 86; Nick Cattoi 67; Kamar Aiken 42. RU: San San Te 84; Joe Martinek 54; Tim Brown 48.
Total Tackles - UCF: Cory Hogue 99; Derrick Hallman 79; Lawrence Young 73; Josh Robinson 65. RU: Devin McCourty 78; Damaso Munoz 75; Ryan D'Imperio 70; Zaire Kitchen 61.
Interceptions - UCF: Josh Robinson 6; five with 1. RU: Khaseem Greene 2; David Rowe 2; nine with 1.

DOME SWEET DOME
UCF will be playing its fifth game indoors when it plays at Tropicana Field. The Knights are 1-1 all-time against Tulane in the Louisiana Superdome, downing the Green Wave 36-29 on Sept. 22, 2001, but losing by a 10-9 score on Nov.
18, 2006. UCF is 0-2 against Syracuse at the Carrier Dome, dropping games there in both 2001 and 2003. The Knights will face the Green Wave in the Crescent City next fall.

WE'RE IN THE BIG LEAGUES
UCF is playing in an active Major League Baseball park for the second year in a row, as Tropicana Field is also home to the Tampa Bay Rays. On Oct. 11, 2008, UCF played Miami at then-Dolphin Stadium, home of the Florida Marlins. Other than schools which play their home games in a baseball facility, the Knights are one of four teams nationally to play in an active MLB park in both 2008 and 2009, joining Iowa (Metrodome and Land Shark Stadium), Northern Illinois (Metrodome and Rogers Centre) and USF (Tropicana Field and Rogers Centre).

TEAM STATISTICAL COMPARISON
CATEGORY                  UCF        RU
Total Offense             348.0      321.8
Rushing Offense           138.5      138.4
Passing Offense           209.5      183.4
Passing Efficiency        132.81     120.7
Scoring Offense           26.3       27.5
Total Defense             348.1      312.2
Rushing Defense           82.5       108.9
Passing Defense           265.6      203.2
Passing Efficiency Def.   133.98     117.7
Scoring Defense           20.7       17.4
First Downs For           235        196
First Downs Against       230        184
Third Down Conversions    39%        33%
Third Downs - Opponents   41%        31%
Turnovers-INT/Fumbles     18-11/7    12-9/3
TOs Forced-INT/Fumbles    26-11/15   32-13/19
Turnover Margin           +8         +20
Time of Possession        31:06      31:08
Penalties-Yards           47-444     71-514

BOWL AND SERIES NOTES, CONT.

PLAYOFFS? PLAYOFFS? WE'VE HAD THOSE BEFORE TOO
UCF's postseason history is deeper than just its two previous bowl game appearances. In 1987, the Knights went 8-3 in the regular season to earn a spot in the NCAA Division II playoffs. UCF beat Indiana (Pa.) 12-10 in its first game before falling to eventual national champion Troy State. UCF joined what was then known as Division I-AA in 1990 and was an NCAA Tournament team in its first year at that level.
UCF downed Youngstown State on the road, 20-17, and William & Mary, 52-38, in Orlando before losing, 44-7, on the road at eventual national champion Georgia Southern. In 1993, UCF once again qualified for the I-AA playoffs but lost its opening game, 56-30, at eventual national champion Youngstown State. UCF joins Troy as the only schools to win a Division II playoff game, win a Football Championship Subdivision playoff game and appear in a bowl game since 1987.

BOWL EXPERIENCES BEYOND UCF
In addition to the Knights who remain from the team's 2005 and 2007 rosters, two have traveled to bowl games before transferring to UCF. Although he did not play in any of the games, Brett Hodges was with Wake Forest at the 2007 FedEx Orange Bowl (vs. Louisville), 2007 Meineke Car Care Bowl (vs. UConn) and 2008 EagleBank Bowl (vs. Navy). Michael Greco was at NC State for the Wolfpack's 14-0 win over USF in the 2005 Meineke Car Care Bowl but redshirted that season and did not play in the game.

- UCF has eight scoring drives of at least 80 yards in its last nine games.
- Against Memphis, UCF secured the victory with a 92-yard fourth-quarter touchdown drive, its longest march since a 99-yard drive against Marshall in 2007 and tied for the second-longest of the six-year George O'Leary era.
- It all adds up to first downs. UCF tallied 28 vs. Tulane and Memphis, its most in a game since ECU on Oct. 6, 2007.

THAT'S OUR BALL!
UCF amassed 39:30 in time of possession to help keep Houston's dangerous offense off the field in the Knights' upset win over the No. 13/12 Cougars on Nov. 14. It is the most possession time that UCF has had in a game in the FBS era (since 1996). UCF followed it up with a 37:23 showing against Tulane.

HE'S THE GUY-TON
Wide receiver A.J. Guyton missed the entire 2008 season with an injury but showed that he was back to his old self at East Carolina, hauling in a career-high nine passes for 119 yards.
The nine receptions tied for the most by a Knight since current Jacksonville Jaguar Mike Sims-Walker caught 13 passes against Rice on Oct. 21, 2006. Guyton had 103 yards at Rice and also threw a touchdown pass in that game. He had an even 100 yards against Marshall, making him the first Knight since Rocky Ross in 2006 to post consecutive 100-yard receiving games. Guyton leads UCF with 42 catches this year and 559 receiving yards.

HARVEY'S DANGEROUS FOR OPPONENTS
Sophomore Brynn Harvey is 26th in the nation in rushing, averaging 97.91 yards per game on the heels of a 130-yard, three-touchdown performance in the win vs. Tulane, his fifth 100-yard rushing day of the season and third in a row. Harvey ran for 219 yards in UCF's win over Memphis. He proved to be a workhorse in that game, carrying the ball 42 times, the third-most in school history and the second-most nationally this year. Harvey has 14 rushing touchdowns this year, good for second in UCF history; UCF as a team had just eight all of last year. He is the first Knight since Kevin Smith in 2007 to have consecutive three-touchdown games (vs. Houston and Tulane), and also the first since Smith ended his career with eight in 2007 to record three consecutive 100-yard rushing games (vs. Houston, Tulane and UAB).

SUPER SOPHOMORE
Brynn Harvey is starting to earn comparisons to former Knight Kevin Smith, a consensus All-American in 2007 who is now the starting tailback for the Detroit Lions. Their sophomore-year statistics are not far apart:

               ATT   YARDS   TD   AVG.   YPG
Harvey, 2009   248   1,077   14   4.3    97.9
Smith, 2006    206   934     7    4.5    103.8

ELEVEN IN A WINNER
UCF is currently riding a streak of 11 consecutive quarters with a score. The Knights were shut out through 15 minutes by then-No. 13/12 Houston on Nov. 14 but scored in each of the final three quarters of that upset win and then scored in each quarter of wins over Tulane and UAB. UCF last scored in 11 consecutive quarters from Oct. 13-Nov. 3, 2007.
BRETT PLAYING LIKE A VET
After transferring from Wake Forest for his senior year to play for his favorite childhood team, Winter Springs' Brett Hodges assumed the role of starting quarterback with his steady play in the first two games. He did not disappoint in his first start against Buffalo, completing 15-of-20 passes and piling up 212 yards of total offense (71 rushing, 141 passing) in the win. His 342 passing yards against Marshall were the most by a Knight since 2003 and the team's first 300-yard passing effort overall since 2007. Behind Hodges, UCF is fifth in C-USA in passing efficiency a year after ranking 12th in the league.

OFFENSE NOTES

OFFENSE GOT IT IN GEAR
UCF has a new offensive coordinator (Charlie Taaffe), a new starting quarterback (Brett Hodges) and has regularly started two linemen (Abré Leggins and Cliff McCray) who joined the team over the summer. The unit understandably needed some time to click, but it did, starting in week five against Memphis. The differences from the first four games of the season to the last eight are dramatic:

CATEGORY             GAMES 1-4       GAMES 5-12
Scoring Offense      84 (21.0)       232 (29.0)
Rushing Offense      380 (95.0)      1,282 (160.2)
Passing Offense      743 (185.8)     1,771 (221.4)
Total Offense        1,123 (280.8)   3,053 (381.6)
Time of Possession   28:32           32:23

THE BIG PLAY IS BACK
UCF promised to get more dynamic on offense this year and delivered. The Knights have 32 passing plays of at least 20 yards. In all of 2008, UCF had a total of just 12 completions of 20 yards or more. UCF had six such completions against Marshall alone, half of its 2008 season total. Eight different Knights have a catch of at least 20 yards this year, while only four did all of last year. UCF is also spreading the ball around more. Ten different Knights had a catch at UAB, the most in one game since also having 10 against Miami (Ohio) on Nov. 28, 2003.

KNIGHTS ABLE TO PUT LONG DRIVES TOGETHER
A clear sign of UCF's improved offense in 2009 is its ability to sustain long drives.
UCF has had nine scoring marches of at least 11 plays this year after having just five a year ago. UCF had scoring drives of over 8:30 against both Southern Miss and Buffalo, a feat it had not accomplished since 2007.
- Against Buffalo, UCF had two separate 14-play scoring drives. It was the first time that UCF had two scoring drives of at least 14 plays in the same game since the 1993 NCAA Division I-AA Playoffs at Youngstown State.

NOTES ON HODGES' PERFORMANCE VS. HOUSTON
Brett Hodges completed 21-of-25 passes for 241 yards with one TD and one INT in UCF's upset win over No. 13/12 Houston on Nov. 14. At one point he completed 12 consecutive passes, tying for the second-longest streak in UCF history. His 84-percent completion percentage is the sixth-best in UCF history and the second-best since Daunte Culpepper's 1998 graduation. Combined with his 342 passing yards against Marshall, it is the most yardage by a UCF QB in consecutive starts since 2004. He has 2,263 yards passing on the season, the most by a Knight since Steven Moffett's 2,925 in 2005.

JAH RULES
Only one UCF player was named to the All-Conference USA offensive unit, and that was junior right tackle Jah Reid. The 6-7, 314-pound behemoth from Haines City has started every game this year and helped pave the way for a UCF offense that has improved from 2,754 yards and 16.6 points per game in 2008 to 4,176 yards and 26.3 points per game in 2009.

ROCKY V
Named to the ESPN The Magazine CoSIDA Academic All-America Team and the C-USA Football All-Academic Team, wide receiver Rocky Ross is in his second senior year after earning a medical redshirt in 2008 due to a broken collarbone suffered at UTEP. As a junior, Ross led the Knights with 50 receptions and 658 yards for an average of 47.0 yards per game. In 2009, he has picked up where he left off in 2007, ranking second on UCF thus far with 37 catches for 412 yards. He also caught the game-winning score with 0:23 to play vs. Marshall.
The Jacksonville native ranks seventh in UCF history in career receptions and 10th in career receiving yards.

CONTINUING TO BRING THE PRESSURE
UCF ranks fifth in the nation with 3.08 sacks per game. UCF also stands 11th in the nation in TFLs with 7.58 per game, continuing its lofty 2008 pace, when UCF was third nationally. Both stats lead C-USA. A total of 19 different Knights have at least half of a TFL and 11 have a sack. UCF has four players (Jarvis Geathers, Derrick Hallman, Cory Hogue and Bruce Miller) who have hit double digits in TFLs this year. UCF's six sacks vs. Miami tied for the most yielded by UM since the 2005 FSU game.

NCAA LEADERS

RUSHING DEFENSE
1. Texas 62.15
2. Alabama 77.92
3. TCU 80.50
4. UCF 82.50
5. Ohio State 83.42

SACKS (PER GAME)
1. Pittsburgh 3.75
2. Texas Tech 3.25
3. Nebraska 3.23
4. Middle Tenn. 3.17
5. UCF 3.08
5. Oklahoma 3.08

IT'S MILLER'S TIME
Junior Bruce Miller is the C-USA Defensive Player of the Year. It is the first time that a Knight has ever received that honor and marks the fourth time in five years of C-USA play that UCF has claimed at least one major conference award. A defensive end who also has the brawn to line up in a three-technique when UCF goes to its nickel package, Miller leads the league in both sacks (12) and TFLs (16.5) this year. He is fifth nationally in sacks and tied for 18th in TFLs. Just a junior, his 26 career sacks already place him amongst C-USA's all-time leaders.

BRUCE IS LOOSE
Bruce Miller was named the C-USA Defensive Player of the Week twice this year. The first came following the Marshall game (Nov. 1), in which he had three TFLs, 2.5 sacks and 10 total tackles, forced a fumble with 2:12 to play that set up UCF's game-winning touchdown, had two pass pressures and broke up a pass. He got the award again after UCF beat UAB on Nov. 28; Miller had 2.5 tackles for loss and two sacks of the slippery Joe Webb on the day. He also broke up a pass.

2009 NCAA SACK LEADERS (PER GM.)
1.
1. Von Miller, Texas A&M - 1.42
2. Brandon Sharpe, Texas Tech - 1.36
3. Josh McNary, Army - 1.14
4. Ryan Kerrigan, Purdue - 1.08
5. BRUCE MILLER, UCF - 1.00
6. Derrick Morgan, Ga. Tech - 0.96
   Lindsey Witten, UConn - 0.96
   Jerry Hughes, TCU - 0.96
9. JARVIS GEATHERS, UCF - 0.92
   Ndamukong Suh, Nebraska - 0.92
   Four others - 0.92

NCAA ACTIVE CAREER SACK LEADERS
1. Dexter Davis, Ariz. St. - 31
2. Brandon Graham, Michigan - 29.5
3. Daniel Te'o-Nesheim, Wash. - 29
   Eric Norwood, South Carolina - 29
5. Jerry Hughes, TCU - 28.5
   Jan Jorgensen, BYU - 28.5
   George Selvie, USF - 28.5
8. C.J. Wilson, East Carolina - 27
9. Greg Hardy, Ole Miss - 26.5
10. BRUCE MILLER, UCF* - 26
* Miller is first amongst juniors

DEFENSE NOTES

NO RUNNING ALLOWED - UCF tops C-USA in rushing defense and ranks fourth nationally, behind just Texas, Alabama and TCU, yielding just 82.50 yards per game. UCF also led C-USA in rushing defense in 2008. In league history, only TCU (2002-03) has led C-USA in rushing defense in consecutive seasons. Against Marshall, UCF held Darius Marshall, then the second-leading rusher in the nation, to a season-low 80 yards on 28 carries (2.9 avg.). At No. 2 Texas, Colt McCoy had to throw the ball to kill the clock in the fourth quarter because the Longhorn rushing attack was going nowhere. Texas was held to just 67 yards in the game, its lowest rushing total for a home non-conference game since Arkansas held UT to 62 in 2003. "We just decided to quit running it," Mack Brown said of Colt McCoy's 42 attempts in the game. UCF followed that up by holding No. 13/12 Houston to 46 yards on the ground, its lowest showing in two years. Then came the Senior Day performance of a lifetime as UCF held Tulane to minus-30 yards rushing. UCF has held its opponent under 100 yards of rushing in eight of 12 games this year.

SIMPLY THE BEST - In addition to having C-USA's premier rushing defense at 82.50 yards per game, UCF leads the league in total defense (348.08 ypg) and scoring defense (20.67 ppg).
GREEN WAVE KEPT TO LOW TIDE - In its 49-0 shutout win over Tulane, UCF held the Green Wave to a C-USA-record-low 50 yards of total offense, including minus-30 yards rushing. The 50 yards allowed marked the fifth-best defensive performance in the nation this decade against an FBS foe.

TOP TOTAL DEFENSE THIS DECADE VS. FBS (2000-09)
24 - Ole Miss vs. Mississippi State, 11/28/08
35 - Virginia Tech vs. Duke, 9/10/05
45 - Wisconsin vs. Temple, 9/10/05
46 - Oklahoma vs. Colorado, 12/4/04
50 - UCF vs. Tulane, 11/21/09

C-USA Defensive Player of the Year Bruce Miller pressured a pair of Heisman Trophy candidates this year in Colt McCoy and Case Keenum.

SUPER TROUP-ER - He may not have the biggest numbers of anyone on the defensive line, but defensive tackle Torrell Troup has attracted a steady stream of salivating NFL scouts to Orlando this fall. A double-team-eating nose guard who is nimble enough to create havoc by himself, Troup unselfishly allows those around him to rack up tremendous numbers. He has fair stats himself, making 32 stops on the year with five TFLs, two sacks, four pass break-ups and three hurries. He was named to the All-C-USA Second Team, the second year in a row in which he has received that recognition.

HURRICANE HOGUE - Cory Hogue earned First-Team All-Conference USA accolades in 2009 as he leads the team to date with 99 tackles and is third on the squad with 11.5 TFLs. Showing his versatility, he is also third on the squad with five pass break-ups. He is tied for fifth in C-USA in TFLs and 11th in total tackles. Hogue was an especially disruptive force against Miami on Oct. 17, recording 3.5 TFLs and two of UCF's six sacks on the night.

A CENTURY FOR CORY? - Cory Hogue's first tackle of the St. Petersburg Bowl will be the milestone 100th of his senior season. It will also make him the first Knight linebacker to have a 100-tackle season since Antoine Poe stopped 111 opponents in 2003.
Hogue is also all but assured of becoming the first Knight linebacker to lead the team in tackles since 2002, when Stanford Rhule had 127.

JARVIS GATHERS SACKS - Jarvis Geathers is second only to Bruce Miller in C-USA with his 11 sacks and 13.5 TFLs. The 11 sacks tie for 10th nationally and his TFL total places him in a tie for 43rd position in America. Although he is not a starter and plays primarily as a pass rusher in nickel situations, Geathers' truly remarkable sack-to-snap ratio earned him a spot on the All-C-USA First Team. He played a huge role in UCF's come-from-behind win over Buffalo, recording three second-half sacks and forcing fumbles on two of those plays, both of which were recovered by the Knights. One of the fumbles set up the game-tying field goal and the other effectively ended the game with 0:26 left.

HERE'S TO YOU MR. ROBINSON - True freshman Josh Robinson made his first career start on Sept. 12 at Southern Miss and came away leading the Knights with a total of 10 tackles, all of them solo. He has not looked back since. His C-USA-leading six interceptions this year top all true freshmen nationally and eclipsed Joe Burnett's school freshman record of five. The six interceptions match the national high for all freshmen, including redshirts, and he stands tied for eighth nationally. Robinson is the only true freshman to rank in the top 100 in the nation in passes defended with 14. He has made six interceptions in UCF's last eight games. Despite leading the league in interceptions, he was only named to the All-Conference USA Second Team. He has also made 65 tackles this season.

SPECIAL TEAMS NOTES

COVERING LIKE A BLANKET - UCF has proven quite adept at covering its punts and kickoffs thus far in 2009, consistently ranking in the top 10 in the nation for both units. Presently, UCF ranks ninth in the nation in kickoff return yardage against and seventh in punt return yardage against. UCF has yielded 967 yards on 53 kickoff returns against (18.25 avg.) this year.
With punter Blake Clingan's improved hang time, UCF foes have returned just 12 punts for 37 yards (3.08 avg.).

FIFTEENTH IN THE NATION IN KICKOFF RETURNS - UCF is excelling in both phases of kickoffs as the Knights also rank 15th in the nation and second in C-USA with an average of 24.58 yards per kick return. Meanwhile, the Knights are a solid 28th in punt returns as well, averaging 12.00 yards per try.

THREE RETURNS OVER 70 YARDS - By the season's second week, UCF had three different players return a kickoff for at least 72 yards. Quincy McDuffie had a 95-yard touchdown against Samford, Jamar Newsome had an 89-yard runback at Southern Miss and Darin Baldwin had a 72-yard return against Samford. It marks the first time in UCF history that the Knights have had three separate players run a kickoff back at least 70 yards in the same season. Newsome's return has the dubious honor of being the longest non-scoring play in UCF history.

McDUFFIE'S: I'M LOVIN' IT - True freshman Quincy McDuffie had an auspicious debut for the Knights against Samford. The Orlando native ran a kickoff back 95 yards for a touchdown. It was the sixth-longest return in school history. Curiously, of UCF's 12 all-time kickoff return touchdowns, six have been by freshmen. McDuffie's was the first in a season-opener. He is averaging 23.42 yards per return in his true freshman season.

A.J. IS A-OK - Wide receiver A.J. Guyton took over the punt return duties in midseason and has averaged a steady 10.8 yards on his 13 returns. Fellow wide receiver Rocky Ross started the season as UCF's punt returner, his first career action in that role, and he averaged 12.4 yards on his eight returns. True freshman Jonathan Davis has made the most of his time as UCF's punt returner, averaging a team-high 15.2 yards on his four returns.

HALLMAN'S SUCCESSFUL SWITCH - Derrick Hallman started for UCF in 2008 at linebacker but moved to safety to start the season as the Knights' secondary would see four new starters.
He returned to linebacker in midseason and went on to receive honorable mention All-C-USA accolades after ranking second on the team with 79 tackles, including 5.5 TFLs.

NALL ALL-IN AGAIN - While several key members of the 2007 defense missed all or most of 2008, UCF was happiest of all to welcome back the inspirational Darius Nall, who missed last year while undergoing radiation treatments for cancer. Nall was a member of the 2007 C-USA All-Freshman Team and saw ample time this fall at DE in the team's nickel package. Nall made 4.5 TFLs this season, including four sacks. He also had seven forced hurries and a pass break-up, and both forced and recovered a fumble.

An elite national track runner in high school, Quincy McDuffie has been an asset for UCF as a true freshman this year as a kick returner and WR.

ACADEMIC NOTES

ROSS MAKES ACADEMIC ALL-AMERICA - Rocky Ross was named to the ESPN The Magazine CoSIDA Academic All-America Second Team on Nov. 24. He is the second Academic All-American in school history, joining Keith Shologan (2007). Ross and Jordan Richards had been named to the All-District Team. Ross, who hopes to be a high school athletics director if a professional football career does not materialize, carries a 3.88 cumulative GPA. He earned a bachelor's degree in criminal justice and is pursuing a master's in sports and fitness. Ross was also named to the Conference USA Football All-Academic Team, a squad comprised of the league's best 11 student-athletes regardless of position as voted on by the C-USA football SIDs. Ross' 3.88 GPA was the highest on the team.

THE BRAIN BOWL? - According to The Institute for Diversity and Ethics in Sport, a nationally renowned arm of UCF's DeVos Sport Business Management Program, the St. Petersburg Bowl will feature a matchup of two of the stronger academic teams in the country. Rutgers' Academic Progress Rate (APR) is a 980 while UCF's is a 960.
That 970 average is the highest of any of the 34 bowl games nationwide this year, edging the 968 average score in both the Capital One Bowl (Penn State - 976 vs. LSU - 960) and the Brut Sun Bowl (Stanford - 984 vs. Oklahoma - 952). The St. Petersburg Bowl is one of just three games nationwide where both teams scored at least a 960 on their APR. It is joined in that regard by the aforementioned Capital One Bowl and the Tostitos Fiesta Bowl between Boise State (966) and TCU (962).

SUMMER EXCELLENCE - Led by 13 Knights with a perfect 4.0, UCF earned a summer session GPA of 3.118, raising the overall team cumulative GPA to a 2.990. Both of those marks are the second-highest of the O'Leary era in terms of semester GPA and cumulative GPA, respectively.

A DIFFERENT KIND OF HELMET DECAL - A total of 69 UCF student-athletes have earned the right to wear "Scholar-Baller" helmet decals (pictured at right) this fall, emblematic of having earned a 3.0 semester GPA or better during 2008-09.

CAPS AND GOWNS A FAMILIAR SIGHT - Seven Knights (Michael Greco, T.J. Harnden, Brett Hodges, Cory Hogue, Jordan Richards, Rocky Ross and Alex Thompson) had already received their bachelor's degrees entering the season and spent the fall working on either a second bachelor's degree or a master's degree. That sum had UCF tied for 10th most in the nation. Not surprisingly with smart players, nine of the teams with at least seven graduates on their rosters this fall will play in a bowl game this year, while Notre Dame earned bowl eligibility but declined an invitation.

MOST FBS ACTIVE PLAYERS WITH A BACHELOR'S DEGREE
Alabama (12), Boston College (10), Virginia Tech (10), Miami (9), Penn State (9), Auburn (8), East Carolina (8), Notre Dame (8), Texas Tech (8), UCF (7), Texas A&M (7), UNLV (7)

KNIGHTS STACK UP WELL AGAINST NCAA MEASURING STICK - The 2009 Academic Progress Rate (APR) announcement by the NCAA gave the football team a mark of 960. That score tied LSU for 23rd in the nation.
It also ranked ninth nationally amongst non-military public universities. UCF's "First Year in College" graduation rate for 2009 also exceeds the national average.

ALL-C-USA SELECTIONS
DEFENSIVE PLAYER OF THE YEAR: DL Bruce Miller
FIRST TEAM: DL Jarvis Geathers, LB Cory Hogue, DL Bruce Miller, OL Jah Reid
SECOND TEAM: DB Josh Robinson, DL Torrell Troup
HONORABLE MENTION: LB Derrick Hallman, RB Brynn Harvey, QB Brett Hodges, LS Charley Hughlett, LB Lawrence Young

UCF head coach George O'Leary is fond of the fact that every single one of the many Knights he has helped send on to the NFL who stayed for all four years has also earned his bachelor's degree (pictured is Kamar Aiken).

ALL-FRESHMAN TEAM: OL Theo Goins, DB Josh Robinson
ALL-ACADEMIC TEAM: WR Rocky Ross

UCF IN THE NFL
UCF had a total of 15 players on NFL opening day rosters this year and a total of 17 players on NFL rosters for opening day including practice squads. For the second consecutive year, UCF has the most active NFL alumni of any Conference USA school.

UCF ALUMNI ON NFL OPENING DAY ROSTERS
Atari Bigby, S - Green Bay
Patrick Brown, OT - New England*
Joe Burnett, CB - Pittsburgh
Daunte Culpepper, QB - Detroit
Leger Douzable, DT - N.Y. Giants
Travis Fisher, CB - Seattle
Michael Gaines, TE - Chicago
Cornell Green, OT - Oakland
Rashad Jeanty, LB - Cincinnati
Darcy Johnson, TE - N.Y. Giants
Brandon Marshall, WR - Denver
Matt Prater, PK - Denver
Sha'reff Rashad, S - N.Y. Giants*
Asante Samuel, CB - Philadelphia
Mike Sims-Walker, WR - Jacksonville
Josh Sitton, OG - Green Bay
Kevin Smith, RB - Detroit
* - Practice Squad

RECORD BOOK UPDATE

NICK CATTOI
SINGLE GAME FIELD GOALS
1. NICK CATTOI, VS. MEM., 10/3/09 - 4
   Seven other times - 4

BRYNN HARVEY
SEASON RUSHING TOUCHDOWNS
1. Kevin Smith, 2007 - 29
2. BRYNN HARVEY, 2009 - 14
   Marquette Smith, 1995 - 14

SEASON RUSHING YARDS
1. Kevin Smith, 2007 - 2,567
2. Marquette Smith, 1995 - 1,511
3. Willie English, 1991 - 1,338
4. Kevin Smith, 2005 - 1,178
5. Gerod Davis, 1992 - 1,154
6. BRYNN HARVEY, 2009 - 1,077
7. Marquette Smith, 1994 - 1,058
8. Alex Haynes, 2002 - 1,038

SEASON RUSHING ATTEMPTS
1. Kevin Smith, 2007 - 450
2. Marquette Smith, 1995 - 274
3. Kevin Smith, 2005 - 249
4. BRYNN HARVEY, 2009 - 248

RUSHING YARDS IN A GAME
1. Kevin Smith, at UAB, 11/10/07 - 320
2. Kevin Smith, vs. Tulsa, 12/1/07 - 284
3. Willie English, vs. Arkansas State, 10/5/91 - 242
4. Marquette Smith, vs. UL-Monroe, 10/28/95 - 225
5. Kevin Smith, vs. UL-Lafayette, 9/29/07 - 223
6. BRYNN HARVEY, VS. MEMPHIS, 10/3/09 - 219
   Kevin Smith, vs. UTEP, 11/24/07 - 219

RUSHING ATTEMPTS IN A GAME
1. Kevin Smith, vs. UTEP, 11/24/07 - 46
2. Kevin Smith, at USM, 10/28/07 - 43
3. BRYNN HARVEY, VS. MEMPHIS, 10/3/09 - 42

CONFERENCE USA RUSHING ATT. IN A GAME
1. Kevin Smith, UCF vs. UTEP, 11/24/07 - 46
2. Matt Forte', TULANE vs. MEM., 10/27/07 - 44
3. Kevin Smith, UCF at USM, 10/28/07 - 43
4. BRYNN HARVEY, UCF VS. MEM., 10/3/09 - 42
   Derrick Nix, USM at ECU, 10/9/99 - 42

BRETT HODGES
SEASON COMPLETION PERCENTAGE (100 ATT.)
1. Daunte Culpepper, 1998 - 73.6
2. Ryan Schneider, 2003 - 69.0
3. Kyle Israel, 2006 - 65.1
4. Steven Moffett, 2004 - 64.2
5. Daunte Culpepper, 1997 - 62.5
6. Ryan Schneider, 2000 - 61.9
7. Ryan Schneider, 2002 - 61.6
8. BRETT HODGES, 2009 - 61.1

CONSECUTIVE PASS COMPLETIONS
1. Daunte Culpepper, vs. Samford, 10/11/97 - 15
2. BRETT HODGES, vs. HOUSTON, 11/14/09 - 12
   Daunte Culpepper, vs. Eastern Ky., 8/31/95 - 12
   Daunte Culpepper, at UAB, 11/9/96 - 12
   Ryan Schneider, at Northern Illinois, 10/7/00 - 12

BLAKE CLINGAN
CAREER PUNTS
1. BLAKE CLINGAN, 2007-09 - 204
2. Charlie Pierce, 1993-96 - 173

CAREER PUNTING YARDS
1. BLAKE CLINGAN, 2007-09 - 8,072
2. Charlie Pierce, 1993-96 - 7,111

CAREER PUNTING AVERAGE
1. Matt Prater, 2002-05 - 47.6
2. Glenn McCombs, 1983-84 - 41.2
3. Javier Beorlegui, 1998-01 - 41.12
4. Charlie Pierce, 1993-96 - 41.10
5. Aaron Horne, 2004-06 - 40.1
6. BLAKE CLINGAN, 2007-09 - 39.6

CONFERENCE USA CAREER PUNTS
1. Ryan Dougherty, ECU, 2003-06 - 237
2. Parker Mullins, UAB, 2003-06 - 235
3. Mark Haulman, USM, 1999-02 - 223
4. Chris Beckman, TULANE, 2003-06 - 222
5. Adam Wulfeck, CIN, 1998-01 - 214
6. BLAKE CLINGAN, UCF, 2007-09 - 204

C-USA CAREER PUNTING YARDS
1. Ryan Dougherty, ECU, 2003-06 - 10,112
2. Parker Mullins, UAB, 2003-06 - 9,434
3. Chris Beckman, TULANE, 2003-06 - 9,421
4. Mark Haulman, USM, 1999-02 - 9,308
5. Adam Wulfeck, CIN, 1998-00 - 9,008
6. Luke Johnson, USM, 2003-05 - 8,503
7. BLAKE CLINGAN, UCF, 2007-09 - 8,072

CORY HOGUE
CAREER TACKLES
1. Rick Hamilton, 1989-92 - 443
2. Bill Giovanetti, 1979-82 - 429
3. Nakia Reddick, 1993-96 - 354
4. Darrell Rudd, 1981-84 - 347
5. Wyatt Bogan, 1984-88 - 341
6. Jason Venson, 2005-08 - 322
7. Tito Rodriguez, 1998-01 - 321
8. Atari Bigby, 2001-04 - 296
9. Bill Stewart, 1987-90 - 294
10. CORY HOGUE, 2005-09 - 275

QUINCY McDUFFIE
SEASON KICKOFF RETURNS
1. Ted Wilson, 1984 - 40
2. QUINCY McDUFFIE, 2009 - 33
   Mark Whittemore, 1991 - 33

SEASON KICKOFF RETURN YARDAGE
1. Ted Wilson, 1984 - 952
2. QUINCY McDUFFIE, 2009 - 773
3. Joe Burnett, 2008 - 745

With linemen like fifth-year senior Cliff McCray (L) and All-Conference USA pick Jah Reid (R) opening holes, sophomore Brynn Harvey (34) has compiled just the eighth 1,000-yard rushing season in UCF history this fall, scoring 14 touchdowns along the way to tie for second-best in school history.

JOSH ROBINSON
SEASON INTERCEPTIONS
1. Keith Evans, 1986 - 8
2. JOSH ROBINSON, 2009 - 6
   Joe Burnett, 2007 - 6
   Johnell Neal, 2007 - 6

INTERCEPTIONS BY A FRESHMAN
1. JOSH ROBINSON, 2009 - 6
2. Joe Burnett, 2005 - 5

CONFERENCE USA SEASON INTERCEPTIONS
1. Anthony Floyd, LOU, 2000 - 10
2. Rodregis Brooks, UAB, 1999 - 9
3. Lynaris Elpheage, TULANE, 2002 - 8
   Jason Goss, TCU, 2002 - 8
5. Kevin Sanders, UAB, 2008 - 7
   Quentin Demps, UTEP, 2006 - 7
   J.R. Reed, USF, 2003 - 7
   Mike James, HOU, 1999 - 7
9. JOSH ROBINSON, UCF, 2009 - 6
   Several others - 6

Josh Robinson has broken Joe Burnett's freshman school record by making six interceptions, all in the last eight games. Robinson leads all true freshmen in the nation in INTs and passes defended.

ROCKY ROSS
CAREER RECEPTIONS
1. David Rhodes, 1991-94 - 213
2. Mark Nonsant, 1995-98 - 198
3. Sean Beckton, 1987-90 - 196
4. Mike Walker, 2003-06 - 184
5. Siaha Burley, 1997-98 - 165
6. Charles Lee, 1997-99 - 162
7. ROCKY ROSS, 2005-09 - 153

CAREER RECEIVING YARDS
1. David Rhodes, 1991-94 - 3,618
2. Mark Nonsant, 1995-98 - 2,809
3. Mike Walker, 2003-06 - 2,561
4. Sean Beckton, 1987-90 - 2,493
5. Jimmy Fryzel, 1999-02 - 2,469
6. Ted Wilson, 1983-86 - 2,443
7. Charles Lee, 1997-99 - 2,249
8. Siaha Burley, 1997-98 - 2,248
9. Bernard Ford, 1985-87 - 2,138
10. ROCKY ROSS, 2005-09 - 1,935

BRUCE MILLER
SINGLE SEASON SACKS
1. Darrell Rudd, 1984 - 19.5
2. Greg Jefferson, 1993 - 15
3. Bobby Spitulski, 1990 - 13
4. BRUCE MILLER, 2009 - 12
   Matt O'Shaughnessy, 1980 - 12
6. JARVIS GEATHERS, 2009 - 11
   Emil Ekiyor, 1993 - 11

SEASON TACKLES FOR LOSS
1. Jermaine Benoit, 1997 - 24.5
2. Elton Patterson, 2001 - 21
3. Greg Jefferson, 1993 - 20
   Justen Moore, 1998 - 20
5. Elton Patterson, 2000 - 19
6. Darrell Rudd, 1984 - 18
7. Bruce Miller, 2008 - 17
   Taveres Tate, 1996 - 17
9. BRUCE MILLER, 2009 - 16.5

CAREER SACKS
1. Darrell Rudd, 1981-84 - 31.5
2. Greg Jefferson, 1991-94 - 31
3. Elton Patterson, 1999-02 - 30.5
4. Jermaine Benoit, 1993-97 - 28.5
5. BRUCE MILLER, 2007-09 - 26

CAREER TACKLES FOR LOSS
1. Elton Patterson, 1999-02 - 59.5
2. Darrell Rudd, 1981-84 - 56.5
3. Taveres Tate, 1993-96 - 48
4. Justen Moore, 1996-99 - 47.5
5. Bobby Spitulski, 1988-91 - 46
6. BRUCE MILLER, 2007-09 - 42.5

CONFERENCE USA SEASON SACKS
1. Bo Schobel, TCU, 2003 - 17
2. Dewayne White, LOU, 2001 - 15
   Roderick Coleman, ECU, 1997 - 15
4. Phillip Hunt, HOU, 2008 - 14
   Bryan Thomas, UAB, 2001 - 14
6. Albert McClellan, MAR, 2006 - 13
   Antwan Peek, CIN, 2001 - 13
   Andre Arnold, MEM, 2000 - 13
   Roderick Coleman, ECU, 1998 - 13
10. BRUCE MILLER, UCF, 2009 - 12

CONFERENCE USA CAREER SACKS
1. Dewayne White, LOU, 2000-02 - 37.5
2. Adalius Thomas, USM, 1996-99 - 34.5
3. Phillip Hunt, HOU, 2005-08 - 34
4. Michael Josiah, LOU, 1999-01 - 31
5. Bryan Thomas, UAB, 1999-01 - 30
6. Michael Boley, USM, 2001-04 - 28
   Roderick Coleman, ECU, 1997-98 - 28
8. Antwan Peek, CIN, 1999-02 - 27
9. BRUCE MILLER, UCF, 2007-09 - 26
   Cedric Scott, USM, 1997-00 - 26

THE LAST TIME A TEAM...

BLOCKED A PUNT - By UCF: at Memphis (1st Qtr, Bruce Miller), 11/22/08; By Opponent: at Miami (1st Qtr, R. Gordon), 10/11/08
BLOCKED A FIELD GOAL - By UCF: vs. South Florida (3rd Qtr, Joe Burnett), 9/6/08; By Opponent: at East Carolina (2nd Qtr, Linval Joseph), 9/26/09
BLOCKED AN EXTRA POINT - By UCF: at UAB (4th Qtr, Torrell Troup), 11/28/09; By Opponent: vs. Houston (3rd Qtr, L.J. Castile), 11/14/09
100 YARD RUSHER AND 300 YARD PASSER - By UCF: at East Carolina [Kevin Smith (147) and Kyle Israel (313)], 10/6/07; By Opponent: at UAB [Joe Webb (137) and Joe Webb (322)], 11/28/09
100 YARD RECEIVER AND 300 YARD PASSER - By UCF: vs. Marshall [A.J. Guyton (100) and Brett Hodges (342)], 11/1/09; By Opponent: vs. Houston [Tyron Carrier (149) and Case Keenum (377)], 11/14/09
100 YARD RUSHER, 100 YARD RECEIVER AND 300 YARD PASSER - By UCF: at East Carolina [Kevin Smith (147 rushing), Kamar Aiken (122 receiving) and Kyle Israel (313 passing)], 10/6/07; By Opponent: at Louisiana-Monroe [Marquis Williams (121 rushing), Marty Booker (159 receiving) and Jeremiah (310 passing)], 11/1/97
TWO 100 YARD RUSHERS - By UCF: vs. Marshall [Kevin Smith (166) and Jason Peters (108)], 10/4/06; By Opponent: Nevada (Hawaii Bowl) [B.J. Mitchell (178) and Robert Hubbard (126)], 12/24/05
TWO 100 YARD RECEIVERS - By UCF: vs.
Memphis [Rocky Ross (135) and Mike Walker (131)], 11/11/06; By Opponent: Louisiana Tech [Sean Cangelosi (227) and John Simon (122)], 10/23/99
TWO QBS THROW OVER 100 YARDS - By UCF: vs. Akron [Ryan Schneider (273) and Brian Miller (132)], 11/3/01; By Opponent: Memphis [Will Hudgens (143) and Tyler Bass (115)], 10/3/09
30-PLUS FIRST DOWNS - By UCF: 30, at East Carolina, 10/6/07; By Opponent: 30, Nevada (Hawaii Bowl), 12/24/05
FEWER THAN 10 FIRST DOWNS - By UCF: 9, at Marshall, 10/30/04; By Opponent: 7, Tulane, 11/21/09
500 YARDS PASSING - By UCF: Never; By Opponent: 597, Louisiana Tech, 10/23/99
LESS THAN 100 YARDS PASSING - By UCF: 76, at Texas, 11/7/09; By Opponent: 80, Tulane, 11/21/09
90-PLUS OFFENSIVE PLAYS - By UCF: 92, vs. Kent State, 11/16/02; By Opponent: 92, Tulsa (C-USA Championship), 12/1/07
500 YARDS OF TOTAL OFFENSE - By UCF: 504, Tulane, 11/21/09; By Opponent: 527, UAB, 11/28/09
600 YARDS OF TOTAL OFFENSE - By UCF: 601, vs. Memphis, 9/22/07; By Opponent: 637, at Florida, 9/9/06
10-PLUS PUNTS - By UCF: 11 (Blake Clingan), at Miami, 10/11/08; By Opponent: 11 (Matt Bosher), Miami, 10/11/08
SUCCESSFUL ONSIDE KICK - By UCF: at East Carolina, 9/26/09; By Opponent: vs. Tulsa, 10/20/07
RECORDED A SAFETY - By UCF: at Tulsa, 10/26/08; By Opponent: at Miami, 10/11/08
MADE TWO-POINT CONVERSION - By UCF: vs. Samford (Hodges to Harvey), 9/5/09; By Opponent: at Miami (Jacory Harris rush), 10/11/08
SCORED 50 PLUS POINTS - By UCF: 56, vs. Memphis (56-20), 9/22/07; By Opponent: 64, South Florida (64-12), 10/13/07
RECORDED A SHUTOUT - By UCF: vs. Tulane, 11/21/09 (49-0); By Opponent: vs. UAB, 11/29/08 (15-0)
RECORDED A SHUTOUT AT UCF - By UCF: vs. Tulane, 11/21/09 (49-0); By Opponent: vs. UAB, 11/29/08 (15-0)
WON BY 30 OR MORE POINTS - By UCF: 49 points (49-0), Tulane, 11/21/09; By Opponent: 32 points (35-3), at Texas, 11/7/09
CAME FROM AT LEAST 10 POINTS BEHIND TO WIN - By UCF: 14 points (3-17), Houston, 11/14/09; By Opponent: 14 points (21-7, 28-14), East Carolina, 10/6/07
SCORED ON FIRST PLAY FROM SCRIMMAGE - By UCF: at Rice (Hodges pass to Guyton, 76 yards), 10/24/09
WON ON THE FINAL SNAP OF REGULATION (NOT OT) - By UCF: Louisiana-Lafayette (24-21), John Brown 28-yd FG, 10/1/05; By Opponent: Northern Illinois (30-28), Chris Nendick 39-yd FG, 10/9/04
OVERTIME WIN - By UCF: Never; By Opponent: East Carolina, 11/2/08
LESS THAN 50 YARDS RUSHING - By UCF: 15 (27 att.), at Southern Miss, 9/12/09; By Opponent: minus-30 (24 att.), Tulane, 11/21/09
300 YARDS RUSHING - By UCF: 308, vs. Tulsa (C-USA Championship), 12/1/07; By Opponent: 369, Nevada (Hawaii Bowl), 12/24/05
ATTEMPTED 40 PASSES - By UCF: 48, vs. Marshall, 11/1/09; By Opponent: 57, Houston, 11/14/09
ATTEMPTED 50-PLUS PASSES - By UCF: 52, vs. Florida Atlantic, 9/13/03; By Opponent: 57, Houston, 11/14/09
400 YARDS PASSING - By UCF: 497, vs. Florida Atlantic, 9/13/03; By Opponent: 470, Texas, 11/7/09

THE LAST TIME AN INDIVIDUAL...

KICKOFF RETURN FOR A TOUCHDOWN - By UCF: Quincy McDuffie (95 yards), vs. Samford, 9/5/09; By Opponent: Devin Mays (100 yards), Houston, 11/14/09
OPENING KICKOFF RETURN FOR A TOUCHDOWN - By UCF: Curtis Francis (93 yards), at East Carolina, 10/6/07; By Opponent: Lowell Robinson (97 yards), Pittsburgh, 10/13/06
STANDARD PUNT RETURN FOR A TOUCHDOWN - By UCF: Joe Burnett (83 yards), vs. Tulsa, 12/1/07; By Opponent: Andre Davis (55 yards), Virginia Tech, 9/29/01
BLOCKED PUNT RETURN FOR A TOUCHDOWN - By UCF: Augustus Ashley blocked and returned 19 yards vs.
La.-Lafayette, 9/29/07 By Opponent: Joe Hunter (0 yards) on a Jerry White block, at West Va., 11/1/03 INTERCEPTION RETURN FOR A TOUCHDOWN By UCF: Josh Robinson (24 yards), at Rice, 10/24/09 By Opponent: Ryan Anderson (41 yards), Samford, 9/5/09 FUMBLE RETURN FOR A TOUCHDOWN By UCF: Derrick Hallman (26 yards), at Memphis, 11/22/08 By Opponent: Martez Smith (22 yards), Southern Miss, 9/12/09 30-PLUS CARRIES By UCF: Brynn Harvey (35-139-3 TD), Houston, 11/14/09 By Opponent: Anthony Sherrell (43-155-TD), at EMU, 11/8/03 100 YARDS RUSHING By UCF: Brynn Harvey (24-130-1 TD), at UAB, 11/28/09 By Opponent: Joe Webb (18-137-1 TD), UAB, 11/28/09 150 YARDS RUSHING By UCF: Brynn Harvey (42-219-1 TD), Memphis, 10/3/09 By Opponent: Jamaal Charles (22-153-1 TD), Texas, 9/15/07 200 YARDS RUSHING By UCF: Brynn Harvey (42-219-1 TD), Memphis, 10/3/09 By Opponent: Walter Reyes (31-241-4 TD), at Syracuse, 9/20/03 THREE RUSHING TOUCHDOWNS By UCF: Brynn Harvey (16-129-3 TD), Tulane, 11/21/09 By Opponent: Jackie Battle 14-70-3 TD), Houston, 10/28/06 FOUR RUSHING TOUCHDOWNS By UCF: Kevin Smith (39-284-4 TD), Tulsa, 12/1/07 By Opponent: Walter Reyes (31-241-4 TD), at Syracuse, 9/20/03 FIVE RUSHING TOUCHDOWNS By UCF: Never By Opponent: Lee Suggs (30-143-5 TD), Virginia Tech, 11/11/00 RUSHING TOUCHDOWN AND A RECEIVING TOUCHDOWN By UCF: Kevin Smith, 13, 4 and 44 yd runs and 15-yd pass, vs. Tulsa, 10/20/07 By Opponent: Dwayne Harris, 25-yd run and 3-yd pass, East Carolina, 9/26/09 50-PLUS PASSING ATTEMPTS By UCF: Ryan Schneider (52-37-2, 497, 3 TD), vs. FAU, 9/13/03 By Opponent: Case Keenum (56-33-1-377, 3 TD), Houston, 11/14/09 40 PASS ATTEMPTS By UCF: Brett Hodges (45-23-0, 342, 2 TD), Marshall, 11/1/09 By Opponent: Colt McCoy (42-33-1, 470, 2 TD), at Texas, 11/7/09 40 PASS COMPLETIONS By UCF: Never By Opponent: Tim Rattay (62-46-1, 561, 5 TD), La. Tech, 10/23/99 30 PASS COMPLETIONS By UCF: Steven Moffett (41-30-2, 322, 2 TD), vs. 
Akron, 10/16/04
By Opponent: Case Keenum (56-33-1-377, 3 TD), Houston, 11/14/09

300 YARDS PASSING
By UCF: Brett Hodges (45-23-0, 342, 2 TD), Marshall, 11/1/09
By Opponent: Joe Webb (35-20-1, 322, 3 TD), UAB, 11/28/09

400 YARDS PASSING
By UCF: Ryan Schneider (52-37-2, 497, 3 TD), vs. FAU, 9/13/03
By Opponent: Colt McCoy (42-33-1, 470, 2 TD), at Texas, 11/7/09

500 YARDS PASSING
By UCF: Never
By Opponent: Tim Rattay (62-46-1, 561, 5 TD), La. Tech, 10/23/99

THREE TOUCHDOWN PASSES
By UCF: Kyle Israel, vs. UTEP, 11/24/07
By Opponent: Joe Webb, UAB, 11/28/09

FOUR TOUCHDOWN PASSES
By UCF: Ryan Schneider (28-17-1, 325, 4 TD), vs. Ohio, 11/30/02
By Opponent: Chris Leak (29-19-1, 352, 4 TD), Florida, 9/9/06

FIVE TOUCHDOWN PASSES
By UCF: Ryan Schneider (42-25-2, 360, 5 TD), at Buffalo, 11/9/02
By Opponent: Dustin Almond (37-23-1-277, 5 TD), Southern Miss, 10/15/05

SIX TOUCHDOWN PASSES
By UCF: Darin Hinshaw (29-21-0, 350, 6 TD), vs. Liberty, 11/6/93
By Opponent: Jose Davis (51-32-3, 551, 6 TD), Kent St., 10/4/97

TEN-PLUS RECEPTIONS
By UCF: 13, Mike Sims-Walker, vs. Rice, 10/21/06
By Opponent: 11, Jordan Shipley, Texas, 11/7/09

100 YARDS RECEIVING
By UCF: 100, A.J. Guyton, vs. Marshall, 11/1/09
By Opponent: 149, Tyron Carrier, Houston, 11/14/09

150 YARDS RECEIVING
By UCF: 169, Mike Sims-Walker, vs. Rice, 10/21/06
By Opponent: 162, Duke Calhoun, Memphis, 10/3/09

200 YARDS RECEIVING
By UCF: 210, Brandon Marshall, vs. Nevada (Hawaii Bowl), 12/24/05
By Opponent: 273, Jordan Shipley, Texas, 11/7/09

TWO RECEIVING TOUCHDOWNS
By UCF: 2, Kamar Aiken, Tulane, 11/21/09
By Opponent: 2, Tyron Carrier, Houston, 11/14/09

THREE-PLUS RECEIVING TOUCHDOWNS
By UCF: 3, Brandon Marshall, vs. Nevada (Hawaii Bowl), 12/24/05
By Opponent: 3, Shaun McDonald, Arizona State, 9/7/02

75-YARD TOUCHDOWN RECEPTION
By UCF: A.J. Guyton (76 yards), at Rice, 10/24/09
By Opponent: Jordan Shipley (88 yards), at Texas, 11/7/09

70-YARD NON-SCORING RECEPTION
By UCF: Jeff Froehlich (72 yards), vs.
Southeastern La., 9/18/82
By Opponent: Jeff Klein (78 yards), at Auburn, 11/6/99

70-YARD PUNT
By UCF: 70, Blake Clingan, at Texas, 11/7/09
By Opponent: 77, Ross Thevenot, Tulane, 11/21/09

MADE A 50-PLUS YARD FIELD GOAL
By UCF: Nick Cattoi, 50 yds, at Southern Miss, 9/12/09
By Opponent: Jose Martinez, 64 yds, UTEP, 9/27/08

THREE FIELD GOALS
By UCF: Nick Cattoi, vs. Buffalo, 9/19/09
By Opponent: Swayze Waters, UAB, 11/25/06

FOUR-PLUS FIELD GOALS
By UCF: 4, Nick Cattoi, vs. Memphis, 10/3/09
By Opponent: 5, Swayze Waters, UAB, 11/29/08

TWO SACKS
By UCF: 2, Bruce Miller, at UAB, 11/28/09
By Opponent: 2, Marcus McGraw, Houston, 11/14/09

THREE-PLUS SACKS
By UCF: 3, Jarvis Geathers, vs. Buffalo, 9/19/09
By Opponent: 3, James Lockett, Tulsa, 10/26/08

TWO INTERCEPTIONS
By UCF: Sha'reff Rashad, at Marshall, 11/15/08
By Opponent: Derek Pegues, vs. Mississippi State (Liberty Bowl), 12/29/07

THE ST. PETERSBURG BOWL GAME 1
SAMFORD 0 7 17 0 - 24

UCF INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
Harvey        31  115    4 111  2 20
Watters        1   15    0  15  0 15
Weaver         3    6    0   6  0  4
Hodges         2    4    0   4  0  3
McDuffie       1    1    0   1  0  1
Calabrese      2    0    8  -8  0  0

Passing      Comp Att INT Yds TD LG
Hodges         10  17   1 129  1 26
Calabrese       3   7   0  28  0 15

Receiving    No Yds TD LG
Ross          5  85  0 26
Aiken         2  42  0 21
Newsome       2  19  1 10
Harvey        2   2  0  6
Watters       1   5  0  5
Rabazinski    1   4  0  4

Punting      No Yds Avg  LG IN20 TB
Clingan       5 196 39.2 44    2  0

Returns      PR   LG KR    LG INT-Ret LG
Ross         6-91 39 0-0    0 0-0      0
Guyton       1-8   8 0-0    0 0-0      0
Thompson     0-0   0 1-0    0 0-0      0
Baldwin      0-0   0 1-72  72 0-0      0
McDuffie     0-0   0 2-118 95 0-0      0

Field Goals  QTR Time  Dist Result
Boyle          1 0:59    36 Missed
Boyle          4 14:56   33 Blocked

Defense      U-A-T S TFL FF FR INT
Young        4-4-8 0 3-3  0  0   0
Hogue        7-0-7 0 1-3  0  0   0
Boddie       7-0-7 0 0    0  0   0
Baldwin      7-0-7 0 1-4  0  0   0

September 5, 2009 - Attendance: 38,719
Bright House Networks Stadium (Orlando, Fla.)

SCORING SUMMARY
Second Quarter
SAM Riley Hawkins 29 yd pass from Dustin Taliaferro (C.
Yaw kick) - 10:23 8-80, 3:51
UCF Brynn Harvey 1 yd run (Jamie Boyle kick) - 0:57 8-43, 2:26
Third Quarter
UCF Quincy McDuffie 95 yd kickoff return (Jamie Boyle kick failed) - 14:48
SAM Riley Hawkins 67 yd pass from Richie Fordham (C. Yaw kick) - 14:08 2-73, 0:34
UCF Brynn Harvey 1 yd run (Brynn Harvey pass from Brett Hodges) - 7:40 11-71, 5:28
SAM Cameron Yaw 41 yd FG - 1:57 12-61, 5:37
Fourth Quarter
UCF Jamar Newsome 9 yd pass from Brett Hodges (Nick Cattoi kick) - 10:53 4-32, 1:30

ORLANDO, Fla. - Sophomore Brynn Harvey had two scores and senior Brett Hodges threw the game-winning touchdown in his first game as a Knight as UCF (1-0) held off visiting Samford (0-1) to post a 28-24 victory in its season opener at Bright House Networks Stadium.

Harvey rushed 31 times for 111 yards, his second career 100-yard game, and added a two-point conversion reception. In his first game at UCF, Hodges finished 10-for-17 for 129 yards and the key fourth-quarter touchdown. With 10:53 left in the game, Hodges hit junior Jamar Newsome on a slant for a nine-yard touchdown, capping a short four-play, 32-yard drive and giving UCF a lead the defense would make stand.

A pair of big plays by senior Rocky Ross set up the touchdown. He returned a punt 39 yards to the Samford 32-yard line and then caught a 20-yard pass from Hodges on the game's next play. Ross had a big game in his return from injury, finishing with a team-high 176 all-purpose yards.

With the game tied at 7-7 coming out of halftime, true freshman Quincy McDuffie returned the second half's opening kickoff 95 yards to give UCF a 13-7 lead. Two big plays by the visitors, a 67-yard touchdown reception and an interception return for a touchdown, gave Samford a 21-13 edge with 13:12 left in the third quarter. However, Harvey's second touchdown of the contest was a one-yard plunge that ended an 11-play, 71-yard drive with 7:40 left in the third quarter.
Hodges found him with a pass on the two-point conversion to tie the game at 21-21. UCF held the Samford rushing attack to just 78 yards on 30 carries, an average of 2.6 yards per rush.

In the first half, the Bulldogs dented the scoreboard first with 10:23 remaining in the second quarter when Riley Hawkins hauled in a 29-yard pass in the back of the end zone from Dustin Taliaferro. UCF knotted the game at 7-7 with 57 seconds left before halftime on a Harvey one-yard touchdown run. The score was set up by a 21-yard reception by junior Kamar Aiken just outside the goal line. Hodges went 3-for-4 for 37 yards on the drive.

TEAM STATISTICS                  SAM      UCF
Punt returns: No.-Yds-TD         0-0-0    7-99-0
Kickoff returns: No.-Yds-TD      4-65-0   4-190-1
Interceptions: No.-Yds-TD        1-41-1   0-0-0
Fumble Returns: No.-Yds-TD       0-0-0    0-0-0
Possession Time                  30:22    29:38
Third-Down Conversions           4 of 15  7 of 16
Fourth-Down Conversions          0 of 0   0 of 0
Red-Zone Scores-Chances          0-0      3-6
Sacks By: Number-Yards           1-7      1-15
PAT Kicks                        3-3      2-3
Field Goals                      1-1      0-2

SAMFORD INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
Evans         23   81    5  76  0 12
Barnett        1    3    0   3  0  3
Taliaferro     4   19   17   2  0 10

Passing      Comp Att INT Yds TD LG
Taliaferro     20  34   0 141  1 29
Fordham         1   1   0  67  1 67

Receiving    No Yds TD LG
Lowery        5  41  0 11
Covington     5  33  0 10
Hawkins       3 107  2 67
Evans         3  22  0 15
Johnson       3   6  0  5
Alexander     1   2  0  2
Fordham       1  -3  0  0

Punting      No Yds Avg  LG IN20 TB
Hooper        9 302 33.6 43    2  0

Returns      PR  LG KR   LG INT-Ret LG
Hawkins      0-0  0 2-32 18 0-0      0
Johnson      0-0  0 1-14 14 0-0      0
Fordham      0-0  0 1-19 19 0-0      0
Anderson     0-0  0 0-0   0 1-41    41

Field Goals  QTR Time Dist Result
Yaw            3 1:57   41 Good

Defense      U-A-T   S TFL FF FR INT
Smith        10-3-13 0 3-5  0  0   0
Davis        3-4-7   0 0    0  0   0
Brown        6-0-6   0 1-4  1  0   0

THE ST.
PETERSBURG BOWL GAME 2

UCF INDIVIDUAL STATISTICS

September 12, 2009 - Attendance: 27,456
Carlisle-Faulkner Field (Hattiesburg, Miss.)

Rushing      Att Gain Lost Net TD LG
Harvey        14   41    4  37  0 15
Calabrese      1    4    0   4  0  4
Weaver         4    7    4   3  0  4
McDuffie       1    2    0   2  0  2
Giovanetti     1    2    0   2  0  2
Hodges         4    1   22 -21  0  1

Passing      Comp Att INT Yds TD LG
Hodges         15  26   0 158  2 29
Calabrese       2   4   0  21  0 16

Receiving    No Yds TD LG
Ross          5  56  1 29
Aiken         4  53  1 20
Watters       2  24  0 20
Newsome       2  17  0 11
Harvey        2  11  0  9
Guyton        1  13  0 13
Giovanetti    1   5  0  5

Punting      No Yds Avg  LG IN20 TB
Clingan       7 259 37.0 46    1  0

Returns      PR  LG KR   LG INT-Ret LG
Ross         1-6  6 0-0   0 0-0      0
Newsome      0-0  0 1-89 89 0-0      0
Baldwin      0-0  0 2-29 18 0-0      0
McDuffie     0-0  0 2-27 16 0-0      0

Field Goals  QTR Time  Dist Result
Cattoi         2 11:20   28 Good
Cattoi         4 12:18   50 Good

Defense      U-A-T   S    TFL   FF FR INT
Hogue        4-8-12  0    1.5-2  0  0   0
Robinson     10-1-11 0    0      0  0   0
Young        6-5-11  1-10 2-14   0  0   0

SCORING SUMMARY
First Quarter
USM Martez Smith 22 yd fumble recovery (Justin Estes kick) - 8:46
USM Leroy Banks 18 yd pass from Austin Davis (Justin Estes kick) - 5:11 5-44, 1:39
Second Quarter
UCF Nick Cattoi 28 yd FG - 11:20 18-68, 8:46
USM Justin Estes 21 yd FG - 7:30 9-75, 3:45
UCF Kamar Aiken 4 yd pass from Brett Hodges (Nick Cattoi kick) - 3:25 8-79, 3:58
USM Justin Estes 36 yd FG - 0:00 6-15, 3:14
Fourth Quarter
UCF Nick Cattoi 50 yd FG - 12:18 6-15, 3:14
USM Damion Fletcher 5 yd run (Kane Wommack rush failed) - 1:43 9-49, 4:11
UCF Rocky Ross 5 yd pass from Brett Hodges (Nick Cattoi kick failed) - 1:22 1-5, 0:05

HATTIESBURG, Miss. - Senior quarterback Brett Hodges went 15-for-26 and threw for two touchdowns but was unable to help UCF erase an early 14-0 deficit as the Knights fell in their Conference USA opener at Southern Miss, 26-19. Hodges finished with a career-high 158 yards through the air and no interceptions for UCF (1-1, 0-1).
A total of seven different players hauled in receptions for UCF against Southern Miss (2-0, 1-0), with senior Rocky Ross leading the way with five catches for 56 yards and a touchdown. Junior Kamar Aiken, meanwhile, pulled in four receptions, including a dazzling TD catch in the second quarter.

Ball control was the story in the early minutes of the first quarter as UCF fumbled twice in its first three possessions. The second turnover gave Southern Miss the initial lead when sophomore quarterback Rob Calabrese coughed the ball up on his own 22-yard line and Martez Smith returned it for a touchdown. Following a Knights punt, the Golden Eagles needed just 1:39 to drive 44 yards and reach the end zone again, capped by an 18-yard strike from Austin Davis to Leroy Banks.

Hodges then took over under center for UCF and helped the offense put together an 18-play drive, using small gains to wear down the defense. Only one of those plays covered more than 10 yards - a 20-yard reception by Aiken that put the Black and Gold at USM's 9-yard line. The drive stalled there, and sophomore Nick Cattoi booted a 28-yard field goal with 11:20 remaining in the second quarter to make it 14-3.

Southern Miss added a field goal of its own on its next possession, but the Knights struck again. Kick-started by a 19-yard toss to Aiken, Hodges later found junior Brian Watters on a 20-yard gain to get into Golden Eagle territory. One play later, Hodges connected with Ross for 29 yards. Eventually, on third down with the ball resting on the USM 4, the UCF quarterback pump-faked and placed the ball perfectly in the back of the end zone for a leaping Aiken, who just got his foot in to cut the deficit to 17-10.

While UCF hoped to keep it a one-possession game going into halftime, Southern Miss drove from its own 17 to the UCF 19 in the final minute, setting up Justin Estes' 36-yard field goal.
Now trailing 20-10 with the end of the third quarter approaching, UCF caught a break when USM's Freddie Parham mishandled a UCF punt and junior Reggie Weams fell on it at the USM 48. Six plays later, Cattoi converted the longest field goal of his career, a 50-yarder that cut the score to 20-13 with 12:18 left in the fourth quarter. It was the eighth-longest field goal in UCF history.

Midway through the fourth, the Golden Eagles put the game away with a nine-play drive that culminated in a five-yard touchdown run by Damion Fletcher. Even though junior Jamar Newsome returned the ensuing kickoff 89 yards and Hodges hooked up with Ross on a five-yard touchdown pass to the back of the end zone, the Knights could not recover the onside kick with just over a minute showing on the clock.

TEAM STATISTICS                  UCF      USM
Punt returns: No.-Yds-TD         1-6-0    3-(-2)-0
Kickoff returns: No.-Yds-TD      5-145-0  3-38-0
Interceptions: No.-Yds-TD        0-0-0    0-0-0
Fumble Returns: No.-Yds-TD       0-0-0    1-22-1
Possession Time                  28:14    31:46
Third-Down Conversions           4 of 14  7 of 16
Fourth-Down Conversions          1 of 1   0 of 1
Red-Zone Scores-Chances          3-3      4-5
Sacks By: Number-Yards           3-36     3-22
PAT Kicks                        1-2      2-2
Field Goals                      2-2      2-3

SOUTHERN MISS INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
Fletcher      21  104    1 103  1 15
Harrison       4   38    0  38  0 31
Parham         2    5    0   5  0  3
Davis         12   38   42  -4  0 13

Passing      Comp Att INT Yds TD LG
Davis          23  31   0 253  1 28

Receiving    No Yds TD LG
Brown         7  75  0 19
Fletcher      6  51  0 15
Banks         4  42  1 18
Baptiste      2  42  0 28
Pierce        2  15  0 10
Parham        1  20  0 20
Morris        1   8  0  8

Punting      No Yds Avg  LG IN20 TB
Boehme        5 164 32.8 36    4  0

Returns      PR     LG KR   LG INT-Ret LG
Parham       2-(-4)  1 1-11 11 0-0      0
Watson       1-2     2 0-0   0 0-0      0
Thornton     0-0     0 1-7   7 0-0      0
Harrison     0-0     0 1-20 20 0-0      0

Field Goals  QTR Time  Dist Result
Estes          1 13:03   26 Missed
Estes          2 7:30    21 Good
Estes          2 0:00    36 Good

Defense      U-A-T S    TFL  FF FR INT
Thornton     5-0-5 0    0     0  0   0
Williams     4-0-4 1-10 2-11  0  0   0
Law          4-0-4 1-3  2-6   1  0   0

THE ST.
PETERSBURG BOWL GAME 3
BUFFALO 0 17 0 0 - 17

UCF INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
Harvey        25  100    2  98  2 13
Hodges        13   82   11  71  0 17
Newsome        1    4    0   4  0  4
McDuffie       1    0    0   0  0  0

Passing      Comp Att INT Yds TD LG
Hodges         15  20   0 141  0 39

Receiving    No Yds TD LG
Newsome       6  37  0 12
Aiken         3  53  0 39
Ross          2  23  0 16
Guyton        1  11  0 11
Rabazinski    1   8  0  8
Nissley       1   5  0  5
Harvey        1   4  0  4

Punting      No Yds Avg  LG IN20 TB
Clingan       2  62 31.0 43    0  0

Returns      PR  LG KR   LG INT-Ret LG
Hallman      0-0  0 0-0   0 1-23    23
McDuffie     0-0  0 3-55 23 0-0      0

Field Goals  QTR Time Dist Result
Cattoi         3 1:19   44 Good
Cattoi         4 6:02   22 Good
Cattoi         4 1:33   42 Good

Defense      U-A-T S TFL FF FR INT
Hallman      7-1-8 0 1-3  1  0 1-23
Hogue        6-2-8 0 0    0  0   0
Greco        5-3-8 0 0    0  0   0
Young        5-1-6 0 0    0  0   0
Robinson     5-1-6 0 0    0  0   0

September 19, 2009 - Attendance: 35,121
Bright House Networks Stadium (Orlando, Fla.)

SCORING SUMMARY
First Quarter
UCF Brynn Harvey 2 yd run (Nick Cattoi kick) - 4:44 14-68, 6:13
Second Quarter
UB Jesse Rack 6 yd pass from Zach Maynard (A.J. Principe kick) - 10:11 12-71, 5:12
UB Jesse Rack 34 yd pass from Naam Roosevelt (A.J. Principe kick) - 6:51 4-48, 1:49
UB A.J. Principe 27 yd FG - 0:00 13-42, 5:29
Third Quarter
UCF Brynn Harvey 4 yd run (Nick Cattoi kick) - 4:48 9-73, 4:44
UCF Nick Cattoi 44 yd FG - 1:19 4-6, 2:06
Fourth Quarter
UCF Nick Cattoi 22 yd FG - 6:02 14-59, 8:39
UCF Nick Cattoi 42 yd FG - 1:18 5-19, 1:18

ORLANDO, Fla. - If you buy into the theory that one half can turn around a season, UCF delivered the kind of spectacular football over the final 30 minutes that could very well alter the course ahead. Trailing by 10 after a somewhat listless first half on both sides of the ball, UCF responded in a big way, scoring all 16 of the game's second-half points to beat Buffalo 23-17 at Bright House Networks Stadium. Led by steady quarterback Brett Hodges, UCF's offense converted on third down time and again and mounted four scoring drives in the second half.
And UCF's defense forced three fumbles and got a pivotal interception from safety Derrick Hallman - the Knights' first pick of the season - all after halftime. UCF (2-1) rallied from a double-digit deficit for the first time since 2005, a 12-point come-from-behind victory against Rice. The Knights left the game feeling somewhat reborn following a second-half performance that turned a possible loss into a thrilling victory.

Tailback Brynn Harvey ran for 98 yards and two scores, his second multi-touchdown game of the season. Five receivers caught passes, led by blossoming junior Jamar Newsome, who had six catches. Kamar Aiken had 53 yards receiving, including a clutch over-the-shoulder catch for 39 yards on a third-down play in the third quarter.

Hodges saved the Knights with a flawless second half. He repeatedly kept plays alive by shuffling in the pocket, and he burned an unsuspecting Buffalo defense with 13 carries for 71 yards. His perfect 10-of-10 passing in the second half allowed him to complete 15 of 20 passes for 141 yards, but it was his running out of the "wildcat" formation that most were impressed with.

Picking up where Hodges left off, UCF's defense did the rest. Senior defensive end Jarvis Geathers recorded three sacks and forced two fumbles. Buffalo's final two drives were stopped by Geathers' sack/forced-fumble plays - loose balls that were recovered by Darius Nall and Torrell Troup. Hallman, who had been critical of his play through the first two games, made a game-saving interception on a fourth-and-eight play with 3 minutes remaining. His interception was UCF's first of the season and fulfilled a promise he had made to his teammates earlier in the week.

The 1958 University at Buffalo team that refused a bid to the Tangerine Bowl when it was asked by Orlando leaders to not include its two African-American players was honored at halftime.
Orange County Mayor Rich Crotty spearheaded the movement to honor the team 51 years later with a trip to Orlando, securing sponsorships so that 34 of the 41 living players, coaches and staff could attend the game. Gerry Gergley, a 2003 UCF Hall of Fame inductee and a halfback on the '58 Buffalo team, was among the players honored.

TEAM STATISTICS                  UB       UCF
Punt returns: No.-Yds-TD         0-0-0    0-0-0
Kickoff returns: No.-Yds-TD      5-88-0   3-55-0
Interceptions: No.-Yds-TD        0-0-0    1-23-0
Fumble Returns: No.-Yds-TD       1-6-0    0-0-0
Possession Time                  28:14    31:46
Third-Down Conversions           6 of 16  9 of 15
Fourth-Down Conversions          1 of 3   0 of 0
Red-Zone Scores-Chances          2-2      3-4
Sacks By: Number-Yards           2-12     3-10
PAT Kicks                        2-2      2-2
Field Goals                      1-1      3-3

BUFFALO INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
Thermilus     12   36    0  36  0  7
Nduka          9   27    1  26  0 10
Henry          5   17    3  14  0  9
Maynard        6   16    6  10  0  8

Passing      Comp Att INT Yds TD LG
Maynard        22  35   1 184  1 25
Roosevelt       1   1   0  34  1 34

Receiving    No Yds TD LG
Roosevelt     9  77  0 25
Rack          5  76  2 34
Hamlin        5  34  0 13
Rivers        3  19  0  7
Thermilus     1  12  0 12

Punting      No Yds Avg  LG IN20 TB
Fardon        2  77 38.5 43    0  1

Returns      PR  LG KR   LG INT-Ret LG
Young        0-0  0 2-38 20 0-0      0
Cook         0-0  0 2-31 20 0-0      0
Henry        0-0  0 1-19 19 0-0      0

Field Goals  QTR Time Dist Result
Principe       2 0:00   27 Good

Defense      U-A-T S   TFL FF FR  INT
Newton       6-3-9 0   0    0 0     0
Winters      8-0-8 1-1 1-1  1 1-6   0
Akobundu     5-0-5 0   0    0 0     0
Cook         3-2-5 0   0    0 0     0

THE ST. PETERSBURG BOWL GAME 4

UCF INDIVIDUAL STATISTICS

September 26, 2009 - Attendance: 43,210
Dowdy-Ficklen Stadium (Greenville, N.C.)
Rushing      Att Gain Lost Net TD LG
Harvey        16   74    3  71  1 19
Kelly          1   18    0  18  0 18
Aiken          1    1    0   1  0  1
Weaver         1    1    0   1  0  1
Hodges         5    0   21 -21  0  0

Passing      Comp Att INT Yds TD LG
Hodges         21  34   4 266  1 40

Receiving    No Yds TD LG
Guyton        9 119  0 29
Ross          5  56  0 35
Harvey        3  23  0 11
Aiken         2  50  1 40
Rabazinski    2  18  0 11

Punting      No Yds Avg  LG IN20 TB
Clingan       2  74 37.0 42    0  0

Returns      PR  LG KR   LG INT-Ret LG
Newsome      0-0  0 1-18 18 0-0      0
McDuffie     0-0  0 4-84 35 0-0      0
Young        0-0  0 0-0   0 1-9      9

Field Goals  QTR Time  Dist Result
Cattoi         2 7:28    26 Blocked
Cattoi         4 13:35   45 Missed

Defense      U-A-T   S TFL FF FR INT
Hallman      3-10-13 0 0    0  0   0
Robinson     9-0-9   0 1-2  0  0   0
Richards     5-4-9   0 2-5  1  0   0

SCORING SUMMARY
First Quarter
UCF Brynn Harvey 19 yd run (Nick Cattoi kick) - 7:19 6-89, 2:47
ECU Ben Hartman 20 yd FG - 4:42 6-77, 2:37
Second Quarter
ECU Dwayne Harris 25 yd run (Ben Hartman kick) - 14:15 9-59, 4:19
Third Quarter
ECU Ben Hartman 19 yd FG - 0:51 8-28, 3:50
Fourth Quarter
ECU Dwayne Harris 3 yd pass from Pat Pinkney (Pat Pinkney pass failed) - 7:55 15-72, 5:40
UCF Kamar Aiken 10 yd pass from Brett Hodges (Jamie Boyle kick) - 1:06 6-80, 1:17

GREENVILLE, N.C. - Trailing 19-7 with just over 60 seconds remaining, UCF cut it to 19-14 and recovered the ensuing onside kick, but East Carolina intercepted a Brett Hodges pass at midfield to seal the Pirates' Conference USA victory at Dowdy-Ficklen Stadium. The loss dropped the Knights to 2-2 on the season and 0-2 in league play. A.J. Guyton finished the day vs.
ECU (2-2, 1-0) by setting career highs with nine receptions for 119 yards.

TEAM STATISTICS                  UCF      ECU
Punt returns: No.-Yds-TD         0-0-0    0-0-0
Kickoff returns: No.-Yds-TD      5-102-0  3-30-0
Interceptions: No.-Yds-TD        1-9-0    4-7-0
Fumble Returns: No.-Yds-TD       0-0-0    0-0-0
Possession Time                  24:29    35:31
Third-Down Conversions           5 of 11  10 of 17
Fourth-Down Conversions          0 of 0   0 of 0
Red-Zone Scores-Chances          2-4      3-5
Sacks By: Number-Yards           2-3      4-21
PAT Kicks                        2-2      1-1
Field Goals                      0-2      2-3

EAST CAROLINA INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
Jackson       20  101   13  88  0 24
Harris         3   28    1  27  1 25
Ruffin         8   17    6  11  0  6
Williams       2    4    4   0  0  4
Pinkney        3    0   11 -11  0  0

Passing      Comp Att INT Yds TD LG
Pinkney        27  40   1 293  1 47
Harris          0   1   0   0  0  0

Receiving    No Yds TD LG
Harris       10 121  1 47
Taylor        6  58  0 16
Willis        2  34  0 25
Jackson       2  31  0 21
Womack        2  22  0 11
Bryant        2   3  0  2
Kass          1  13  0 13
Ruffin        1   6  0  6
Freeney       1   5  0  5

Punting      No Yds Avg  LG IN20 TB
Dodge         3 102 34.0 40    1  1

Returns      PR  LG KR   LG INT-Ret LG
Simmons      0-0  0 0-0   0 1-0      0
Johnson      0-0  0 0-0   0 1-3      3
Ross         0-0  0 0-0   0 1-0      0
Eskridge     0-0  0 0-0   0 1-4      4
Harris       0-0  0 2-30 16 0-0      0

Field Goals  QTR Time Dist Result
Hartman        1 4:42   20 Good
Hartman        3 0:51   19 Good
Hartman        4 2:23   33 Missed

Defense      U-A-T  S     TFL   FF FR INT
Mattocks     4-7-11 1-5   1.5-6  0  0   0
Eskridge     3-5-8  0     .5-0   1  0 1-4
Joseph       1-5-6  1.5-5 1.5-5  0  0   0

THE ST. PETERSBURG BOWL GAME 5
MEMPHIS 0 7 7 0 - 14

UCF INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
Harvey        42  228    9 219  1 35
Hodges         3   19    0  19  0 10
McDuffie       1   14    0  14  0 14
Kelly          4    9    0   9  0  4

Passing      Comp Att INT Yds TD LG
Hodges         16  28   1 214  2 34

Receiving    No Yds TD LG
Newsome       5  58  1 22
Kay           4  52  1 17
Guyton        4  50  0 19
Nissley       1  34  0 34
Rabazinski    1  13  0 13
Giovanetti    1   7  0  8

Punting      No Yds Avg  LG IN20 TB
Clingan       4 143 35.8 45    2  1

Returns      PR  LG KR   LG INT-Ret LG
Davis, J.    1-2  2 0-0   0 0-0      0
Baldwin      0-0  0 2-41 28 0-0      0
Hogue        0-0  0 0-0   0 1-0      0
Robinson     0-0  0 0-0   0 1-33    33
McDuffie     0-0  0 1-27 27 0-0      0

Field Goals  QTR Time  Dist Result
Cattoi         2 13:21   24 Good
Cattoi         3 10:55   46 Good
Cattoi         3 3:58    42 Good
Cattoi         4 3:15    26 Good

Defense      U-A-T S      TFL    FF FR INT
Hogue        5-2-7 0      1.5-6   0  0 1-0
Ishmael      6-0-6 0      0       0  0   0
Hallman      2-4-6 0      .5-2    0  0   0
Miller       3-2-5 2.5-19 3.5-20  0  0   0

October 3, 2009 - Attendance: 40,408
Bright House Networks Stadium (Orlando, Fla.)

SCORING SUMMARY
Second Quarter
UCF Nick Cattoi 24 yd FG - 13:21 9-53, 4:30
UM Duke Calhoun 61 yd pass from Tyler Bass (Matt Reagan kick) - 10:24 9-85, 2:53
Third Quarter
UCF Nick Cattoi 46 yd FG - 10:55 7-13, 2:46
UCF Nick Cattoi 42 yd FG - 3:58 8-60, 3:41
UM DajLeon Farr 12 yd pass from Will Hudgens (Matt Reagan kick) - 2:08 5-65, 1:44
UCF Ricky Kay 7 yd pass from Brett Hodges (Brynn Harvey rush failed) - 0:03 6-58, 1:58
Fourth Quarter
UCF Jamar Newsome 22 yd pass from Brett Hodges (Nick Cattoi kick) - 7:28 9-92, 5:01
UCF Nick Cattoi 26 yd FG - 3:15 6-49, 3:46
UCF Brynn Harvey 25 yd run (Nick Cattoi kick) - 0:41 4-30, 1:05

ORLANDO, Fla. - Just minutes after UCF completed its third come-from-behind victory of the season at Bright House Networks Stadium, jubilant Knights players huddled around coach George O'Leary and chanted "One more step!" in unison. Following a frustrating loss at East Carolina, where the Knights played well but sabotaged the effort with mistakes, UCF took the necessary steps for a runaway 32-14 defeat of Memphis before 40,408 fans. UCF (3-2 overall and 1-2 in Conference USA play) rallied this time by finishing drives offensively, pressuring the quarterback, forcing turnovers and eliminating the mistakes.
The theme in practice all week was to take "one more step," a phrase that had both literal and physical meanings. UCF took some big steps offensively, getting two second-half touchdown passes from senior quarterback Brett Hodges and a career rushing day from sophomore tailback Brynn Harvey. The play of Hodges and Harvey, combined with some stellar line play, allowed UCF's offense to roll up a whopping 475 yards, 28 first downs and 39 minutes of possession time. The dominant second-half performance sent UCF into its bye week with some feel-good momentum, and the third consecutive home victory gave the Knights confidence before hosting in-state foe Miami on Oct. 17.

Memphis (1-4, 0-2) led 14-9 late in the third quarter when senior quarterback Will Hudgens replaced starter Tyler Bass and hit DajLeon Farr for a touchdown pass. But Hodges responded with quite possibly the most important drive of the season for the Knights. He capped the 92-yard march - the longest of the season - by hitting Ricky Kay for a 7-yard score on the final play of the third quarter to put UCF up 15-14. Hodges, who passed for 214 yards and wasn't sacked, then found Jamar Newsome for a 22-yard score with 7:28 to play to put UCF up 22-14. Hodges said he is pleased that UCF seems to be a stronger team in the second half of games, but he is still waiting for the day when the team puts four quarters together.

Harvey did the rest of the work, carrying the ball 42 times for 219 yards - both career highs. It was the fourth-highest total of carries in a game in school history and the sixth-best rushing total. His 25-yard touchdown sprint with 41 seconds remaining sealed the victory for the Knights. Factor in Nick Cattoi's four field goals - which tied another school record - and a UCF defense that garnered three sacks and two interceptions, and it was just the kind of complete victory that the Knights needed.
By finally putting it all together, they indeed took "one more step."

TEAM STATISTICS                  MEM      UCF
Punt returns: No.-Yds-TD         0-0-0    1-2-0
Kickoff returns: No.-Yds-TD      7-168-0  3-68-0
Interceptions: No.-Yds-TD        1-62-0   2-33-0
Fumble Returns: No.-Yds-TD       0-0-0    0-0-0
Possession Time                  21:03    38:57
Third-Down Conversions           4 of 13  4 of 13
Fourth-Down Conversions          0 of 1   1 of 1
Red-Zone Scores-Chances          1-2      4-4
Sacks By: Number-Yards           0-0      3-25
PAT Kicks                        2-2      2-2
Field Goals                      0-2      4-4

MEMPHIS INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
Smith          8   38    3  35  0 12
Steele         8   22    0  22  0  5
Bass           7   36   17  19  0 16
Calhoun        1    2    0   2  0  2
Baker          1    0    3  -3  0  0
Hudgens        1    0    8  -8  0  0

Passing      Comp Att INT Yds TD LG
Hudgens         6  15   1 143  1 51
Bass           10  15   1 115  1 61
Hall            0   1   0   0  0  0

Receiving    No Yds TD LG
Calhoun       4 162  1 61
Steele        3  21  0 20
Joachim       3   8  0  6
Baker         2  13  0  8
Onarheim      1  23  0 23
Smith         1  14  0 14
Farr          1  12  1 12
Singleton     1   5  0  5

Punting      No Yds Avg  LG IN20 TB
Reagan        5 246 49.2 56    1  2

Returns      PR  LG KR   LG INT-Ret LG
Brown        0-0  0 0-0   0 1-62    62
Johnson      0-0  0 4-96 34 0-0      0
Hightower    0-0  0 2-55 37 0-0      0
Griffin      0-0  0 1-17 17 0-0      0

Field Goals  QTR Time  Dist Result
Reagan         2 0:00    30 Missed
Reagan         4 13:49   42 Missed

Defense      U-A-T   S TFL FF FR INT
Jackson      11-2-13 0 0    1  0   0
Rockette     5-2-7   0 1-1  0  0   0

THE ST. PETERSBURG BOWL GAME 6

UCF INDIVIDUAL STATISTICS

October 17, 2009 - Attendance: 48,453
Bright House Networks Stadium (Orlando, Fla.)
Rushing      Att Gain Lost Net TD LG
Harvey        12   30    5  25  0  6
McDuffie       3   21    0  21  0 12
Guyton         1   14    0  14  0 14
Weaver         2    5    0   5  0  3
Newsome        2    5    0   5  0  4
Calabrese      1    1    0   1  0  1
Hodges         3    2   20 -18  0  2

Passing      Comp Att INT Yds TD LG
Hodges         12  27   1 163  1 41
Calabrese       1   4   0  13  0 13

Receiving    No Yds TD LG
Newsome       3  56  0 41
Kay           2  28  0 19
Giovanetti    2  23  0 15
Ross          2  12  1  8
Aiken         1  35  0 35
Kelly         1  13  0 13
Watters       1  11  0 11
Harvey        1  -2  0  0

Punting      No Yds Avg  LG IN20 TB
Clingan       7 248 35.4 49    3  0

Returns      PR  LG KR    LG INT-Ret LG
Ross         1-2  2 0-0    0 0-0      0
McDuffie     0-0  0 6-126 35 0-0      0

Field Goals  QTR Time Dist Result
Cattoi         1 5:30   32 Missed

Defense      U-A-T  S    TFL    FF FR INT
Boddie       6-4-10 0    0       0  0   0
Young        7-1-8  0    1-7     0  0   0
Hogue        7-1-8  2-13 3.5-15  0  0   0
Hallman      2-6-8  0    1-1     0  0   0

SCORING SUMMARY
First Quarter
UM Leonard Hankerson 23 yd pass from Jacory Harris (Matt Bosher kick) - 1:46 8-80, 3:44
Second Quarter
UM Matt Bosher 31 yd FG - 9:47 9-35, 3:55
Third Quarter
UM Javarris James 5 yd run (Matt Bosher kick) - 13:18 3-51, 0:43
UCF Rocky Ross 8 yd pass from Brett Hodges (Nick Cattoi kick) - 9:15 9-80, 3:59
UM Matt Bosher 46 yd FG - 1:56 8-52, 4:15
Fourth Quarter
UM Damien Berry 3 yd run (Matt Bosher kick) - 10:46 9-61, 4:40

ORLANDO, Fla. - By the end of a blustery night, a record crowd was mostly gone and Bright House Networks Stadium was half empty rather than rocking in celebration. On this night there would be no defining, landmark victory for UCF - just the frustration of having missed out on too many opportunities. When UCF failed to convert from the 2-yard line late in the third quarter while trailing by just 10 points, its chances of defeating No. 9 Miami seemed to wither away. Ultimately, a 27-7 loss to the rival Hurricanes left the Knights shaking their heads in frustration and left a crowd of 48,453 numb from something other than the windy conditions.
Hoping to take down the reborn, rebuilt Hurricanes and establish itself among the Sunshine State's famed Big Three, UCF failed to produce an error-free performance. The fatal flaw once again - as it has been much of the season - was an inability to get points after moving into scoring position. UCF (3-3) got no points after driving to the 15-yard line on the first possession of the game. And the three-play possession from the 2 - when Brynn Harvey was dropped for a four-yard loss on first down and Hodges was intercepted after being blindsided - proved crippling for the Knights. Get 10, 14 or even six points out of those two possessions and it certainly could have been a different game for UCF.

Miami, clearly back among the powers after navigating one of the toughest schedules in the country, improved to 5-1. The Hurricanes got 293 passing yards from Jacory Harris and 363 yards of total offense. UCF's defense kept the Knights in the game by relentlessly pressuring Harris all game. UCF recorded six sacks, its most in a game since last season against East Carolina. Cory Hogue had two sacks, while Darius Nall, Bruce Miller, David Williams and Jarvis Geathers each dropped Harris once.

UCF electrified the largest crowd ever at Bright House Networks Stadium in the third quarter when it pulled within 17-7. Hodges hit on passes of 15, 41 and nine yards before finding Rocky Ross on an 8-yard slant for a touchdown. Just minutes later, UCF seemed poised to put a major scare into the 'Canes. A high punt snap gave UCF the ball at the 2-yard line after an illegal kicking penalty. But UCF went backwards on an option play - a call that O'Leary said he himself wondered about. The series that again produced no points was a turning point in the game. Had UCF been able to solve its red-zone woes Saturday night, an upset might have been possible. Instead, UCF was once again frustratingly left with what-if scenarios.
TEAM STATISTICS                  UM       UCF
Touchbacks                       0        0
Punt returns: No.-Yards-TD       2-19-0   1-2-0
Average Per Return               9.5      2.0
Kickoff returns: No.-Yds-TD      2-26-0   6-126-0
Average Per Return               13.0     21.0
Interceptions: No.-Yds-TD        1-9-0    0-0-0
Fumble Returns: No.-Yds-TD       0-0-0    0-0-0
Possession Time                  37:13    22:47
Third-Down Conversions           6 of 13  6 of 15
Fourth-Down Conversions          0 of 1   0 of 0
Red-Zone Scores-Chances          3-3      1-3
Sacks By: Number-Yards           2-20     6-41
PAT Kicks                        3-3      1-1
Field Goals                      2-2      0-1

MIAMI INDIVIDUAL STATISTICS

Rushing      Att Gain Lost Net TD LG
James, J.     17   68    3  65  1  8
Berry         14   61    1  60  1 14
Chambers       3   15    0  15  0  7
James, M.      1    1    0   1  0  1
Benjamin       2    5    7  -2  0  5
Harris         7    7   41 -34  0  7

Passing      Comp Att INT Yds TD LG
Harris         20  26   0 293  1 32
Cooper          0   1   0   0  0  0

Receiving    No Yds TD LG
Byrd          5  85  0 32
Hankerson     4  64  1 23
Benjamin      4  46  0 13
Graham        2  42  0 21
Collier       1  28  0 28
Streeter      1  12  0 12
James, M.     1   9  0  9
Epps          1   5  0  5
Berry         1   2  0  2

Punting      No Yds Avg  LG IN20 TB
Bosher        4 133 33.2 41    3  0

Returns      PR   LG KR   LG INT-Ret LG
Collier      2-19 13 0-0   0 0-0      0
James, M.    0-0   0 2-26 16 0-0      0
McCarthy     0-0   0 0-0   0 1-9      9

Field Goals  QTR Time Dist Result
Bosher         2 9:47   31 Good
Bosher         3 1:56   46 Good

Defense      U-A-T  S    TFL  FF FR INT
Sharpton     5-6-11 0    1-1   0  0   0
Harris, B.   5-2-7  0    0     0  0   0
Spence       4-3-7  2-20 2-20  0  0   0

THE ST. PETERSBURG BOWL GAME 7

UCF INDIVIDUAL STATISTICS

SCORING SUMMARY
First Quarter
UCF A.J. Guyton 76 yd pass from Brett Hodges (Nick Cattoi kick) - 14:40 1-76, 0:20
UCF Brett Hodges 1 yd run (Nick Cattoi kick) - 3:29 14-87, 6:33
UCF Ronnie Weaver 27 yd run (Nick Cattoi kick) - 2:02 1-27, 0:07
Second Quarter
UCF Kamar Aiken 36 yd pass from A.J.
Guyton (Nick Cattoi kick) - 1:55 6-89, 3:11
Third Quarter
UCF Josh Robinson 24 yd interception return (Nick Cattoi kick) - 7:32
UCF Jamar Newsome 52 yd pass from Rob Calabrese (Jamie Boyle kick) - 0:23 3-63, 1:24
Fourth Quarter
RICE Charles Ross 1 yd run (Clark Fangmeier kick) - 11:03 12-73, 4:12
UCF Ricky Kay 13 yd pass from Rob Calabrese (Jamie Boyle kick) - 3:01 4-23, 1:58
RICE 0 0 0 7 - 7

HOUSTON, Texas - From the first snap of the game, UCF was in control and never let up in a 49-7 win at Rice. The Knights (4-3, 2-2 Conference USA) won their 50th road game in program history and scored their most points on the road since collecting 64 at Louisiana Tech on Sept. 5, 1998. Following the opening kickoff, Brett Hodges found A.J. Guyton on a 76-yard touchdown reception to give the Black and Gold a quick 7-0 lead against the Owls (0-8, 0-4). It was the longest TD catch since Doug Gabriel's 80-yard scoring strike at Arizona State on Sept. 7, 2002. Hodges finished his outing 8-for-13 for 145 yards. Rob Calabrese also saw time under center, going 3-for-4 for 71 yards and two touchdowns. Guyton led the receivers with three receptions for 103 yards, and he also threw for a TD when he found Kamar Aiken on a 36-yarder in the second quarter. In what became the most lopsided road win in UCF's 31-year football history, the Knights built on their 7-0 advantage, this time putting together a 14-play drive capped by a one-yard run from Hodges to make it 14-0. Only 1:27 later, UCF completed its first-quarter scoring on a 27-yard scamper from Ronnie Weaver. Guyton's touchdown pass to Aiken late in the second quarter gave the Knights their 28-0 halftime lead, and they kept up the offensive pressure in the third. Following Josh Robinson's 24-yard interception return, Calabrese entered and hooked up with Jamar Newsome on a 52-yard touchdown pass for a 42-0 cushion.
Rice may have ruined the shutout bid with 11:03 left in the fourth on a one-yard run from Charles Ross, but Calabrese came right back and connected with Ricky Kay on a 13-yard TD pass for the final 49-7 margin. On the defensive side of the ball, UCF notched five sacks and forced five fumbles, recovering three of them. In all, UCF outgained Rice 465 to 282 in total offense.

October 24, 2009 - Attendance: 10,196 - Rice Stadium (Houston, Texas)

TEAM STATISTICS (UCF / RICE): First downs 21 / 16; Rushing 39-213, 2 TD / 38-104, 1 TD; Passing 12-18-0 for 252 yards, 4 TD / 25-37-1 for 178 yards, 0 TD; Total offense 465 yards on 57 plays / 282 on 75; Fumbles-lost 0-0 / 7-3; Penalties 5-51 / 9-75; Punts 5-175 (35.0 avg) / 9-365 (40.6); Punt returns 3-60 / 0-0; Kickoff returns 1-21 / 5-67; Interception returns 1-24, 1 TD / 0-0; Possession 27:09 / 32:51; Third downs 3 of 9 / 6 of 18; Red zone 2-3 / 1-1; Sacks by 5-49 / 2-14; PAT kicks 7-7 / 1-1; Field goals 0-1 / 0-0.

UCF INDIVIDUAL STATISTICS
Rushing: Harvey 12-71; Weaver 5-58, 1 TD; Davis, J. 2-44; Kelly 5-16; Davis, B. 3-10; Hodges 9-8, 1 TD; McDuffie 2-8.
Passing: Hodges 8-13-0 for 145 yards, 1 TD (long 76); Calabrese 3-4-0 for 71, 2 TD; Guyton 1-1-0 for 36, 1 TD.
Receiving: Guyton 3-103; Newsome 2-62, 1 TD; Ross 2-19; Aiken 1-36, 1 TD; Kay 1-13, 1 TD; Nissley 1-13; Watters 1-6; Harvey 1-0.
Punting: Clingan 5-175 (35.0 avg, long 45, 4 inside 20).
Returns: Guyton 2 punt returns for 42 yards; Davis, J. 1 punt return for 18; McDuffie 1 kickoff return for 21; Robinson 1 interception return for 24, TD.
Field goals: Cattoi 37 missed (2nd, 9:14).
Defense (solo-ast-total): Ishmael 8-3-11; Boddie 7-3-10 with 1.5 tackles for loss; Thompson 2-6-8.

RICE INDIVIDUAL STATISTICS
Rushing: Ross 7-63, 1 TD; Goodson 7-42; Smith 10-38; Knox 3-11; Shepherd 5-(-20); Fanuzzi 6-(-30).
Passing: Fanuzzi 17-26-1 for 116 yards; Shepherd 8-11-0 for 62.
Receiving: Dixon 6-54; Beasley 4-26; Wardlow 3-36; Goodson 3-19; Dupree 2-12; Smith 2-8; Ross 2-1; Maginot 1-10; Randolph 1-7; McDonald 1-5.
Punting: Brundage 5-232 (46.4 avg); Martens 4-133 (33.2).
Returns: Goodson 1 kickoff return for 27 yards; Gaddis 4 kickoff returns for 40.
Defense (solo-ast-total): Bradshaw 3-9-12; Jones 3-3-6 with a tackle for loss; Briggs 2-4-6.

GAME 8

MARSHALL 0 17 3 0 - 20
November 1, 2009 - Attendance: 35,676 - Bright House Networks Stadium (Orlando, Fla.)
UCF INDIVIDUAL STATISTICS
Rushing: Harvey 21-47, 1 TD; Newsome 1-17; Davis, J. 2-16; Hodges 5-(-9); Aiken 1-(-12).
Passing: Hodges 23-45-0 for 342 yards, 2 TD (long 41); Team 0-3-0.
Receiving: Ross 6-76, 1 TD; Guyton 5-100; Aiken 5-62, 1 TD; Nissley 3-46; Newsome 2-27; Kay 1-16; Watters 1-15.
Punting: Clingan 6-222 (37.0 avg, long 54, 3 inside 20).
Returns: Guyton 3 punt returns for 31 yards; McDuffie 2 kickoff returns for 43; Baldwin 1 kickoff return for 29; Davis, J. 1 kickoff return for 8; Robinson 1 interception return for 0.
Field goals: Cattoi 48 blocked and returned (2nd, 2:15).
Defense (solo-ast-total): Miller 4-6-10 with 2.5 sacks and a forced fumble; Young 7-1-8; Ishmael 4-3-7; Hogue 2-4-6.

SCORING SUMMARY
Second Quarter
MAR Cody Slate 4 yd pass from Brian Anderson (C. Ratanamorn kick) - 12:03 7-91, 2:45
UCF Kamar Aiken 4 yd pass from Brett Hodges (Nick Cattoi kick) - 7:57 8-72, 4:01
MAR C. Ratanamorn 30 yd FG - 4:58 6-63, 2:53
MAR Darius Marshall 3 yd run (C. Ratanamorn kick) - 1:02 5-28, 1:13
Third Quarter
MAR C. Ratanamorn 21 yd FG - 3:25 10-76, 5:04
Fourth Quarter
UCF Brynn Harvey 2 yd run (Nick Cattoi kick) - 7:51 7-43, 2:25
UCF Rocky Ross 1 yd pass from Brett Hodges (Nick Cattoi kick) - 0:23 8-30, 1:49

ORLANDO, Fla. - With UCF's players and coaches joyously celebrating and Bright House Networks Stadium throbbing with raw emotion, the Knights finally had their signature moment and the kind of defining victory that could ultimately turn this season into a special one. When UCF wiped out a 13-point deficit with two fourth-quarter touchdowns - the final one with 23 seconds to play - the Knights improbably defeated Marshall 21-20 to capture the kind of victory they had desperately sought all season. It might sound cliché, but UCF won only because it kept fighting on both sides of the ball until the final horn. The Knights got a game-saving strip of the football from standout defensive end Bruce Miller with 2:12 to play, a moment that set up UCF's winning score.
Gritty senior quarterback Brett Hodges then found Rocky Ross for a one-yard score to tie the game with 23 seconds to play. Nick Cattoi's point-after boot provided the game-winning point and set off an eruption of emotion from the 35,676 inside the stadium. The night looked as if it would be another forgettable one, much like the close loss to the University of Miami two weeks earlier, when Hodges' fourth-down pass fell incomplete with 2:40 to play and the Knights down 20-14. But two plays later, Miller made the play of his collegiate career. Marshall quarterback Brian Anderson had the ball locked away with two hands, but Miller ripped it from his grasp. Freshman defensive back Josh Robinson, who had his third interception of the season earlier in the game, recovered the loose ball to give UCF one more shot. And UCF got the ball in the end zone largely because of a 19-yard strike from Hodges to Kamar Aiken that moved the Knights to the one-yard line with 26 seconds to play. From there, Hodges faked the ball to Brynn Harvey and found Ross open in the end zone for the tying score. UCF was behind in the field-position battle most of the first half, and as it turned out, that cost the Knights dearly. UCF mustered just two first downs on its first four possessions, which allowed Marshall to either start or drive into UCF territory five times in the first half. The Herd led 17-7 at the break. Trailing 7-0 early in the second quarter after a shaky start offensively, UCF grabbed some momentum and got back in the game. Hodges had completions of 15 and 14 yards, and a Jonathan Davis 15-yard sprint around right end put UCF in scoring position. UCF then rewarded O'Leary's decision to go for it on fourth-and-one when Hodges hit Kamar Aiken for a four-yard score. Aiken wisely cut the corner route short, coming back to the ball for his fourth TD catch of the season and the 10th of his career.
As has been the case all season, UCF suffered a staggering moment in the red zone. Nick Cattoi's 48-yard field goal try was tipped at the line, and Marshall's Ashton Hall fielded the kick at the five-yard line and returned it 68 yards, taking the wind out of UCF's sails. Five plays later, Darius Marshall scooted around left end for a three-yard touchdown that put the Thundering Herd up 17-7 at the half.

TEAM STATISTICS (MAR / UCF): First downs 18 / 23; Rushing 36-56, 1 TD / 30-59, 1 TD; Passing 13-28-1 for 237 yards, 1 TD / 23-48-0 for 342 yards, 2 TD; Total offense 293 yards on 64 plays / 401 on 78; Fumbles-lost 1-1 / 0-0; Penalties 5-39 / 8-76; Punts 5-209 (41.8 avg) / 6-222 (37.0); Punt returns 0-0 / 3-31; Kickoff returns 4-57 / 4-80; Interception returns 0-0 / 1-0; Miscellaneous yards 68 / 0; Possession 31:06 / 28:54; Third downs 4 of 12 / 5 of 18; Fourth downs 0 of 0 / 5 of 6; Red zone 4-4 / 3-4; Sacks by 3-10 / 4-42; PAT kicks 2-2 / 3-3; Field goals 2-2 / 0-1.

MARSHALL INDIVIDUAL STATISTICS
Rushing: Marshall 28-80, 1 TD; Booker 1-3; Anderson 7-(-27).
Passing: Anderson 13-28-1 for 237 yards, 1 TD (long 47).
Receiving: Slate 5-80, 1 TD; Wilson 5-42; Dobson 2-75; Smith 1-40.
Punting: Whitehead 5-209 (41.8 avg, long 57, 2 inside 20).
Returns: Marshall 3 kickoff returns for 45 yards; Gale 1 kickoff return for 12.
Field goals: Ratanamorn 30 good (2nd, 4:58); 21 good (3rd, 3:25).
Defense (solo-ast-total): Harvey 7-4-11 with a sack; Brown 4-4-8; Bembry 5-2-7; Burns 5-0-5.
GAME 9

November 7, 2009 - Attendance: 101,003 - Darrell K Royal-Texas Memorial Stadium (Austin, Texas)

UCF INDIVIDUAL STATISTICS
Rushing: Davis 22-71; Guyton 1-11; Calabrese 15-(-7).
Passing: Calabrese 10-19-0 for 76 yards (long 14).
Receiving: Guyton 5-35; Kay 2-22; Aiken 2-12; Ross 1-7.
Punting: Clingan 8-340 (42.5 avg, long 70, 4 inside 20).
Returns: Guyton 1 punt return for 5 yards; McDuffie 4 kickoff returns for 79; Robinson 1 interception return for 3.
Field goals: Cattoi 39 good (2nd, 13:00).
Defense (solo-ast-total): Hogue 10-2-12; Robinson 5-1-6; Geathers 5-1-6 with 2 sacks; Ishmael 4-1-5.

SCORING SUMMARY
Second Quarter
UCF Nick Cattoi 39 yd FG - 13:00 8-36, 3:04
UT Cody Johnson 20 yd run (Hunter Lawrence kick) - 11:49 4-80, 1:07
UT Cody Johnson 13 yd run (Hunter Lawrence kick) - 5:29 11-72, 4:39
Third Quarter
UT James Kirkendoll 14 yd pass from Colt McCoy (H. Lawrence kick) - 3:11 15-87, 6:20
Fourth Quarter
UT Jordan Shipley 88 yd pass from Colt McCoy (Hunter Lawrence kick) - 13:01 4-99, 1:46
UT Foswhitt Whittaker 6 yd run (Hunter Lawrence kick) - 9:13 6-51, 2:35

AUSTIN, Texas - For.
TEAM STATISTICS (UCF / UT): First downs 15 / 24; Rushing 38-75, 0 TD / 25-67, 3 TD; Passing 10-19-0 for 76 yards / 33-42-1 for 470 yards, 2 TD; Total offense 151 yards on 57 plays / 537 on 67; Fumbles-lost 0-0 / 1-0; Penalties 1-5 / 6-53; Punts 8-340 (42.5 avg) / 2-56 (28.0); Punt returns 1-5 / 2-14; Kickoff returns 4-79 / 1-17; Interception returns 1-3 / 0-0; Possession 31:11 / 28:49; Third downs 2 of 12 / 10 of 14; Fourth downs 0 of 1 / 1 of 1; Red zone 0-1 / 4-4; Sacks by 2-9 / 6-31; PAT kicks 0-0 / 5-5; Field goals 1-1 / 0-1.

TEXAS INDIVIDUAL STATISTICS
Rushing: Johnson 10-44, 2 TD; McCoy 8-13; Newton 1-6; Whittaker 5-3, 1 TD; Monroe 1-1.
Passing: McCoy 33-42-1 for 470 yards, 2 TD (long 88).
Receiving: Shipley 11-273, 1 TD; Williams 5-67; Kirkendoll 5-40, 1 TD; Chiles 3-40; Goodwin 3-10; Whittaker 2-11; Johnson 2-10; Smith 1-12; Buckner 1-7.
Punting: Tucker 2-56 (28.0 avg, long 40).
Returns: Shipley 2 punt returns for 14 yards; Monroe 1 kickoff return for 17.
Field goals: Lawrence 44 missed (1st, 10:02).
Defense (solo-ast-total): Kindle 6-3-9 with a sack; Houston 5-2-7 with a sack; Muckelroy 4-1-5; Acho 4-1-5 with a sack.

GAME 10

HOUSTON 10 7 0 15 - 32
November 14, 2009 - Attendance: 36,776 - Bright House Networks Stadium (Orlando, Fla.)

UCF INDIVIDUAL STATISTICS
Rushing: Harvey 35-139, 3 TD (long 41); Davis, J. 8-27, 1 TD; Guyton 1-5; McDuffie 1-4; Hodges 4-(-21).
Passing: Hodges 21-25-1 for 241 yards, 1 TD (long 31).
Receiving: McDuffie 4-77, 1 TD; Aiken 4-60; Guyton 4-30; Watters 4-20; Kay 2-15; Harvey 1-23; Kelly 1-10; Nissley 1-6.
Punting: Clingan 5-188 (37.6 avg, long 51).
Returns: Guyton 2 punt returns for minus-3 yards; McDuffie 5 kickoff returns for 122; Boddie 1 interception return for 22.
Field goals: Cattoi 35 good (2nd, 7:03).
Defense (solo-ast-total): Hogue 5-5-10; Ishmael 5-3-8; Boddie 5-1-6 with an interception; Hallman 3-3-6 with a fumble recovery.
SCORING SUMMARY
First Quarter
UH Matt Hogan 33 yd FG - 9:18 8-45, 2:30
UH Tyron Carrier 51 yd pass from Case Keenum (Matt Hogan kick) - 6:55 2-51, 0:41
Second Quarter
UCF Nick Cattoi 35 yd FG - 7:03 15-62, 8:27
UH Devin Mays 100 yd kickoff return - 6:51
UCF Jonathan Davis 4 yd run (Nick Cattoi kick) - 1:18 10-62, 5:24
Third Quarter
UCF Brynn Harvey 1 yd run (Nick Cattoi kick blocked) - 8:51 8-51, 4:17
UCF Brynn Harvey 41 yd run (Nick Cattoi kick) - 5:06 5-85, 2:18
Fourth Quarter
UH Matt Hogan 21 yd FG - 14:53 9-36, 1:54
UCF Quincy McDuffie 24 yd pass from Brett Hodges (Nick Cattoi kick) - 9:03 11-79, 5:44
UCF Brynn Harvey 7 yd run (Nick Cattoi kick) - 8:27 1-7, 0:10
UH Tyron Carrier 31 yd pass from Case Keenum (Matt Hogan kick failed) - 3:39 12-94, 2:23
UH Chaz Rodriguez 15 yd pass from Case Keenum (Case Keenum pass failed) - 0:10 12-80, 2:23

ORLANDO, Fla. - Drenched in a mixture of sweat from a long, hot afternoon and ice water from a joyous locker-room celebration, UCF defensive end Bruce Miller recounted a story from the night before, when the Knights were shown inspirational footage of legendary boxer Muhammad Ali. The video montage showed Ali taking one punch after another, only to answer the bell each time and come back to knock out an opponent. Then, after UCF's players were worked into a frenzy, they were told how the Ali link would be a fitting metaphor for the showdown against No. 12 Houston at Bright House Networks Stadium. "Like Ali, we took some punches, and we took them early, and we still kept fighting back," Miller said after UCF's thrilling 37-32 defeat of Houston. "That was the moral of the story today, fighting back." UCF battled back from 10-0 and 17-3 deficits and trailed at the half for the ninth time in 10 games, but it still had a good enough mixture of opportunistic offense and gritty defense to pull off a bit of history.
The Knights (6-4 overall, 4-2 in Conference USA play) beat a ranked Football Bowl Subdivision team for the first time, ending a 23-game winless stretch against ranked opponents. It was also the first victory against a ranked team in 10 tries in George O'Leary's six seasons at UCF. A UCF defense that forced three turnovers and an offense that controlled the ball almost twice as long as Houston's potent offense (39.5 minutes to 20.5) helped the Knights bring an end to Case Keenum's bid for the Heisman. The junior quarterback entered the game on pace to break the NCAA record for passing yards in a season, but managed only 377 yards. He was picked off once, sacked twice and forced to hurry many of his 56 attempts by UCF's swarming pressure. Keenum torched UCF for a 51-yard touchdown and 179 passing yards in the first quarter, but the Knights refused to wilt. Down 17-10 at the half, UCF responded with its most important stretch of football this season. UCF was no stranger to being behind, having trailed in every home game this season and in every game except the Rice rout. UCF strung together scoring drives of 51 and 85 yards to spring to a 23-17 lead. Brynn Harvey's 41-yard sprint, one of his three TD runs on the day, was a thing of beauty as he hit the line at full speed and never broke stride crossing the goal line. And when UCF finished with a flourish, getting a 24-yard touchdown pass from Hodges to freshman Quincy McDuffie and a seven-yard plunge from Harvey, the Knights led 37-20 and the road was paved for one of the biggest wins in school history. Harvey, who rested a sore ankle last week against Texas, looked fresh on Saturday as he ran for 139 yards. UCF had to survive two late touchdown passes from Keenum and two onside kicks, but when the game was complete UCF finally had its big win, and the crowd of 36,776 pulsated with emotion.
TEAM STATISTICS (HOU / UCF): First downs 21 / 21; Rushing 15-46, 0 TD / 50-152, 4 TD; Passing 33-57-1 for 377 yards, 3 TD / 21-25-1 for 241 yards, 1 TD; Total offense 423 yards on 72 plays / 393 on 75; Fumbles-lost 2-2 / 2-1; Penalties 5-21 / 1-11; Punts 4-178 (44.5 avg) / 5-188 (37.6); Punt returns 2-0 / 2-(-3); Kickoff returns 7-172, 1 TD / 5-122; Interception returns 1-4 / 1-22; Possession 20:30 / 39:30; Third downs 7 of 15 / 7 of 15; Fourth downs 1 of 1 / 1 of 1; Red zone 3-4 / 4-4; Sacks by 2-18 / 2-17; PAT kicks 2-3 / 4-5; Field goals 2-2 / 1-1.

HOUSTON INDIVIDUAL STATISTICS
Rushing: Sims 6-33; Keenum 5-8; Beall 3-3; Johnson, J. 1-2.
Passing: Keenum 33-56-1 for 377 yards, 3 TD (long 51); Team 0-1-0.
Receiving: Carrier 9-149, 2 TD; Cleveland 8-88; Sims 8-28; Edwards 4-60; Rodriguez 2-23, 1 TD; Castile 1-18; Johnson, K. 1-11.
Punting: Turner 4-178 (44.5 avg, long 52).
Returns: Dugat 2 punt returns for 0 yards; Mays 4 kickoff returns for 132, 1 TD (long 100); Carrier 3 kickoff returns for 40; Blackmon 1 interception return for 4.
Field goals: Hogan 33 good (1st, 9:18); 21 good (4th, 14:53).
Defense (solo-ast-total): McGraw 11-3-14 with 2 sacks; Brinkley 11-3-14 with a forced fumble and a recovery; Steward 9-3-12; Cavness 4-7-11.

GAME 11

TULANE 0 0 0 0 - 0

UCF INDIVIDUAL STATISTICS
Rushing: Harvey 16-129, 3 TD (long 50); Davis, J. 16-76, 1 TD; Davis, B. 8-34; Weaver 4-13; Calabrese 2-11; McDuffie 1-5; Hodges 3-(-4).
Passing: Hodges 19-28-1 for 234 yards, 2 TD (long 29); Calabrese 1-1-0 for 6 yards, 1 TD.
Receiving: Guyton 6-56; Aiken 5-84, 2 TD; Ross 3-37; Kelly 2-23, 1 TD; McDuffie 2-13; Nissley 1-15; Kay 1-12.
Punting: Clingan 3-106 (35.3 avg, long 43).
Returns: Guyton 2 punt returns for 26 yards; Davis, J. 2 punt returns for 41; Baldwin 1 kickoff return for 29; Weams 1 interception return for 0; Robinson 1 interception return for 16.
Field goals: none.
Defense (solo-ast-total): Ishmael 5-2-7 with a forced fumble; Young 2-4-6; Hogue 2-3-5; Robinson 4-0-4 with an interception.

November 21, 2009 - Attendance: 31,390 - Bright House Networks Stadium (Orlando, Fla.)

SCORING SUMMARY
First Quarter
UCF Brynn Harvey 50 yd run (Nick Cattoi kick) - 13:08 4-70, 1:52
Second Quarter
UCF Brynn Harvey 2 yd run (Nick Cattoi kick) - 1:31 2-3, 0:39
Third Quarter
UCF Brynn Harvey 2 yd run (Nick Cattoi kick) - 12:28 2-13, 0:32
UCF Kamar Aiken 29 yd pass from Brett Hodges (Nick Cattoi kick) - 10:37 1-29, 0:13
UCF Kamar Aiken 16 yd pass from Brett Hodges (Jamie Boyle kick) - 9:35 3-22, 0:55
UCF Jonathan Davis 9 yd run (Jamie Boyle kick) - 0:47 14-97, 6:51
Fourth Quarter
UCF Brendan Kelly 6 yd pass from Rob Calabrese (Jamie Boyle kick) - 11:33 6-25, 3:09

ORLANDO, Fla. - This is how shockingly far UCF has progressed in the space of one calendar year: a Knights team that limped to the finish line last November and was embarrassingly shut out in its finale has now transformed into a team that does the shutting out and runs up records in the process. UCF's light-years of growth were clearly evident again as the Knights repeatedly gashed Tulane with big plays and the defense delivered another dominating performance in an easy, breezy 49-0 rout at Bright House Networks Stadium.
It proved to be the largest shutout victory ever in Conference USA play, beating the previous mark of 33-0 by Southern Miss against Houston in 1997. It should be pointed out that UCF itself was shut out 15-0 by UAB last November, showing just how much the Knights have progressed. The lopsided numbers on both sides of the ball were tremendously gaudy for a 7-4 UCF team that is playing as well as any Knights squad since the 2007 edition won the Conference USA title. If the game against 3-8 Tulane had been a fight, officials would have stepped in and stopped it. To wit:
---- UCF held Tulane to minus-30 yards rushing, the second-best performance by any defense in the nation this season. UCF's defensive line was in the backfield almost more than Tulane quarterback Ryan Griffin, registering three sacks and eight tackles for loss.
---- UCF held the Green Wave to just 50 total yards on 51 plays, the fewest a C-USA team has allowed in a game since the league's formation.
And a UCF offense that embarrassingly ranked 119th in the country last season showed off its monumental strides by putting up some spectacular numbers of its own. Such as:
---- UCF's 504 yards were the most since the 2007 championship team had 506 at UAB.
---- The 49-point margin of victory is tied for the eighth-best in school history and is the largest win since 2001. Including the 37-32 defeat of Houston last Saturday, the Knights have scored 86 points in the past two games.
---- Brynn Harvey ran for 129 yards and three touchdowns, giving him 268 yards and six touchdowns over his past two games. He became the first Knights back since Kevin Smith to run for at least 100 yards in back-to-back games, and his 50-yard score on the game's fourth play matched the longest run of his career.
In a bizarre twist, UCF had drives of 14 and 13 plays in the first half on which it didn't score, but later scored quick-strike touchdowns on drives of two, one and three plays.
A resilient second-half team most of the season, UCF led at halftime for just the second time all year. The Knights had rallied in the second half against Samford, Buffalo, Memphis, Marshall and Houston for wins, but no such comeback was needed on this day.

TEAM STATISTICS (TLN / UCF): First downs 7 / 28; Rushing 24-(-30), 0 TD / 50-264, 4 TD; Passing 13-27-2 for 80 yards / 20-29-1 for 240 yards, 3 TD; Total offense 50 yards on 51 plays / 504 on 79; Fumbles-lost 5-3 / 1-0; Penalties 2-15 / 3-28; Punts 7-345 (49.3 avg) / 3-106 (35.3); Punt returns 1-7 / 4-67; Kickoff returns 8-181 / 1-29; Interception returns 1-40 / 2-16; Fumble returns 0-0 / 1-10; Possession 22:37 / 37:23; Third downs 3 of 13 / 9 of 14; Fourth downs 1 of 2 / 0 of 1; Red zone 0-0 / 5-7; Sacks by 1-7 / 3-19; PAT kicks 0-0 / 7-7; Field goals 0-0 / 0-0.

TULANE INDIVIDUAL STATISTICS
Rushing: Williams, J. 2-4; Barnett 1-2; Williams, A. 1-(-2); Anderson 14-(-2); Thevenot 1-(-10); Griffin 4-(-20).
Passing: Griffin 10-20-2 for 67 yards; Kemp 2-5-0 for 12; Moore 1-2-0 for 1.
Receiving: Williams, J. 7-47; Sparks 2-14; Robottom 2-13; Grant 1-5; Williams, A. 1-1.
Punting: Thevenot 7-345 (49.3 avg, long 77, 3 inside 20).
Returns: Robottom 1 punt return for 7 yards; Banks 4 kickoff returns for 119; Sullen 2 kickoff returns for 34; Barnett 1 kickoff return for 12; Williams, J. 1 kickoff return for 16; Davis 1 interception return for 40.
Field goals: none.
Defense (solo-ast-total): Jacks 6-2-8; Echols 6-0-6; Garrett 5-1-6; Burks 4-2-6.

GAME 12

UCF INDIVIDUAL STATISTICS
November 28, 2009 - Attendance: 13,381 - Legion Field (Birmingham, Ala.)
Rushing: Harvey 24-130, 1 TD (long 25); Davis, J. 7-75, 1 TD (long 45); Hodges 1-(-2).
Passing: Hodges 24-38-2 for 230 yards, 2 TD (long 32).
Receiving: Ross 6-41; Guyton 4-42; Aiken 3-58, 1 TD; Giovanetti 3-11, 1 TD; Watters 2-18; Harvey 2-15; Nissley 1-22; McDuffie 1-14; Newsome 1-8; Kay 1-1.
Punting: Clingan 3-95 (31.7 avg, long 33).
Returns: Guyton 2 punt returns for 31 yards; McDuffie 3 kickoff returns for 71; Baldwin 1 kickoff return for 18; Robinson 1 interception return for 8.
Field goals: Cattoi 27 good (1st, 2:25); 34 missed (3rd, 2:28); 36 good (4th, 11:55).
Defense (solo-ast-total): Hogue 3-9-12; Boddie 7-3-10; Baldwin 3-4-7; Robinson 5-1-6 with an interception.

SCORING SUMMARY
First Quarter
UCF Kamar Aiken 12 yd pass from Brett Hodges (Nick Cattoi kick) - 10:30 10-66, 4:30
UAB Patrick Hearn 23 yd pass from Joe Webb (Josh Zahn kick) - 8:29 5-80, 2:01
UCF Nick Cattoi 27 yd FG - 2:25 11-56, 5:55
Second Quarter
UCF Billy Giovanetti 5 yd pass from Brett Hodges (Nick Cattoi kick) - 12:36 8-61, 3:24
UAB Joe Webb 53 yd run (Josh Zahn kick) - 10:06 5-83, 2:22
UCF Jonathan Davis 7 yd run (Nick Cattoi kick) - 5:12 2-12, 0:51
Third Quarter
UCF Brynn Harvey 25 yd run (Nick Cattoi kick) - 9:25 8-46, 3:26
Fourth Quarter
UAB Jeffrey Anderson 34 yd pass from Joe Webb (Josh Zahn kick) - 14:03 9-80, 3:25
UCF Nick Cattoi 36 yd FG - 11:55 6-58, 2:00
UAB Frantrell Forrest 5 yd pass from Joe Webb (Josh Zahn kick blocked) - 2:49 4-48, 1:07

BIRMINGHAM, Ala. - For anyone looking for quantitative proof of just how markedly far the UCF football team has progressed over a year's time, start with Nov. 29, 2008, and zip ahead to the Knights' contest at historic Legion Field. It was almost a year ago exactly that the Knights limped off the field at Bright House Networks Stadium, embarrassed by a 15-0 defeat to UAB.
Now, move ahead to this season, when UCF's offense moved the ball so easily and gashed UAB so thoroughly that punter Blake Clingan might as well have had the day off. Whether through the air or on the ground, UCF did as it pleased with the ball - clearly a 180-degree turnaround from a year ago. And when the Knights hung on down the stretch with one defensive stop after another in a 34-27 defeat of UAB, many of UCF's players marveled at the incredible jump from 4-8 to 8-4 over the past 364 days. UCF rolled up 435 yards of offense, scored on six of its first nine possessions and wasn't forced to punt until just 8:33 remained in the game. The Knights had to weather a furious fourth-quarter rally from UAB quarterback Joe Webb (459 total yards, four TDs), but they were not about to let a shaky finish overshadow an otherwise stellar afternoon. UCF sophomore tailback Brynn Harvey ran for 130 yards and a 25-yard touchdown, allowing him to top 1,000 yards for the season. Hodges, the senior transfer from Wake Forest who rescued the UCF offense with his steady play, passed for 230 yards and two TDs. Clearly, the Knights have figured out the source of their slow starts: they trailed at the half in nine of the first 10 games, but jumped up 14-0 last week against Tulane and surged ahead 24-14 by the break on Saturday. But Webb single-handedly kept the Blazers within striking distance early on. He had a 23-yard TD strike to Patrick Hearn in the first quarter and ripped off a spectacular 53-yard scramble for another score in the second. Webb, who has accounted for more than 70 percent of UAB's offense this season, had 188 of the Blazers' 226 yards in the first half. UCF didn't punt in the first half and scored on four of its first six possessions, gashing UAB with both Hodges' accurate passing and Harvey's hard running between the tackles. Hodges hit on 12 of his first 16 passes and had 129 yards by the time the game was just 17 minutes old.
And Harvey, who seems to be picking up steam as the season has progressed, topped the 1,000-yard mark for the season early in the second period with an 18-yard scamper off left end. Hodges had touchdown passes of 12 yards (to Kamar Aiken) and five yards (to fullback Billy Giovanetti) and threw for 188 first-half yards against the nation's 120th-ranked pass defense. UCF missed two other chances to grow its lead when Hodges uncharacteristically threw interceptions at the 11- and 3-yard lines. He tried to force a pass on the first error and was hit from behind on the second. It was Hodges' first multi-interception game since Sept. 26, when he had a dismal four-interception day at East Carolina.

TEAM STATISTICS (UCF / UAB): First downs 26 / 22; Rushing 35-205, 2 TD / 32-205, 1 TD; Passing 24-38-2 for 230 yards, 2 TD / 20-35-1 for 322 yards, 3 TD; Total offense 435 yards on 73 plays / 527 on 67; Fumbles-lost 0-0 / 1-0; Penalties 1-15 / 5-48; Punts 3-95 (31.7 avg) / 3-128 (42.7); Punt returns 2-31 / 2-(-1); Kickoff returns 4-89 / 4-58; Interception returns 1-8 / 2-1; Possession 33:17 / 26:43; Third downs 2 of 10 / 5 of 12; Fourth downs 1 of 1 / 0 of 3; Red zone 5-6 / 1-2; Sacks by 3-19 / 1-5; PAT kicks 4-4 / 3-4; Field goals 2-3 / 0-1.

UAB INDIVIDUAL STATISTICS
Rushing: Webb 18-137, 1 TD (long 53); Brooks 6-22; Ferrell 2-20; Adams 1-11; Isabelle 1-6; Forrest 1-5; Slaughter 2-5.
Passing: Webb 20-35-1 for 322 yards, 3 TD (long 42).
Receiving: Forrest 6-82, 1 TD; Wright 5-66; Anderson 2-40, 1 TD; Ferrell 2-22; Adams 1-42; Slaughter 1-32; Hearn 1-23, 1 TD; Carter 1-9; Mencer 1-6.
Punting: Ragland 3-128 (42.7 avg, long 48).
Returns: Ferrell 3 kickoff returns for 43 yards; Coleman 2 interception returns for 1.
Field goals: Zahn 38 missed (2nd, 1:33).
Defense (solo-ast-total): Springs 6-4-10; Atwater 3-7-10; Ware 5-3-8; Harris 1-5-6.
2009 UCF TEAM STATISTICS (UCF / OPP)
Points: 316 (26.3 per game) / 248 (20.7)
First downs: 235 (100 rushing, 123 passing, 12 penalty) / 230 (71, 143, 16)
Rushing: 450 attempts for 1662 net yards (3.7 avg, 138.5 per game), 19 TD / 384 for 990 (2.6, 82.5), 10 TD
Passing: 205-344-11 for 2514 yards (7.3 per attempt, 209.5 per game), 19 TD / 267-427-11 for 3187 (7.5, 265.6), 18 TD
Total offense: 4176 yards on 794 plays (5.3 avg, 348.0 per game) / 4177 on 811 (5.2, 348.1)
Kickoff returns: 45-1106 (24.6 avg) / 53-967 (18.2)
Punt returns: 25-300 (12.0) / 12-37 (3.1)
Interception returns: 11-138 (12.5) / 11-164 (14.9)
Fumbles-lost: 15-7 / 29-15
Penalties: 47-444 (37.0 per game) / 62-474 (39.5)
Punts: 57-2108 (37.0 avg, 35.3 net) / 58-2305 (39.7, 33.2)
Possession: 31:06 / 28:54
Third-down conversions: 63 of 162 (39%) / 72 of 174 (41%)
Fourth-down conversions: 9 of 12 (75%) / 5 of 14 (36%)
Sacks by: 37-285 / 27-167
Field goals: 13-21 / 12-18
Red-zone scores: 35 of 49 (71%), touchdowns 27 of 49 (55%) / 26 of 32 (81%), touchdowns 16 of 32 (50%)
PAT kicks: 35-38 (92%) / 26-28 (93%)
Attendance: 266,543 (7 games, 38,078 avg) / 195,246 (5 games, 39,049 avg)

CHARTING THE STARTS (Player, Pos., Career Starts, Current Streak)
Kamar Aiken WR 31 (7); Emery Allen DB 3; Darin Baldwin DB 6 (1); Justin Boddie DB 9 (8); Jamie Boyle PK 1; Ian Bustillo OL 19 (12); Rob Calabrese QB 11; Nick Cattoi PK 14 (11); Blake Clingan P 35 (33); Jonathan Davis RB 1; Jarvis Geathers DE 7; Billy Giovanetti HB 4; Theo Goins OL 4 (4); Michael Greco* DB 11; A.J. Guyton WR 6; Derrick Hallman LB 31 (21); Brynn Harvey RB 17 (3); Chance Henderson LB 26; Brett Hodges QB 9 (3); Cory Hogue LB 40 (12); Chad Hounshell OL 1; Kemal Ishmael DB 8 (8); Ricky Kay FB 8 (1); Abre Leggins OL 6 (2); John Lubischer TE 4; Cliff McCray OL 18 (3); Quincy McDuffie WR 2; Bruce Miller DE 33 (11); Darius Nall DE 5; Adam Nissley TE** 24 (24); Nick Pieschel OL 17; Corey Rabazinski TE 21; Jah Reid OL 26 (18); Jordan Richards LB 16; Josh Robinson CB 11 (11); Steven Robinson OL 4; Rocky Ross WR 34 (2); Alex Thompson LB 4; Travis Timmons DL 20 (2); Torrell Troup DL 38 (28); Wes Tunuufi Sauvao DL 1; Brian Watters WR 14; Reggie Weams DB 4; Ronnie Weaver RB 6; David Williams DE 17 (2); Khymest Williams WR 3; Lawrence Young LB 25 (5).
* Greco started four games at QB in 2008. ** Nissley started six games at RT in 2008.

SCORE BY QUARTERS (1st-2nd-3rd-4th): UCF 52-68-105-91; Opponents 41-92-47-68.

2009 UCF INDIVIDUAL STATISTICS
NEWSOME, Jamar WATTERS, Brian HODGES, Brett GIOVANETTI, Billy CALABRESE, Rob AIKEN, Kamar TEAM Total.......... Opponents...... PASSING HODGES, Brett CALABRESE, Rob TEAM GUYTON, A.J. Total.......... Opponents...... RECEIVING GUYTON, A.J. ROSS, Rocky AIKEN, Kamar NEWSOME, Jamar KAY, Ricky HARVEY, Brynn WATTERS, Brian NISSLEY, Adam MCDUFFIE, Qunicy GIOVANETTI, Billy RABAZINSKI, Corey KELLY, Brendan Total.......... Opponents...... PUNT RETURNS GUYTON, A.J. ROSS, Rocky DAVIS, Jonathan Total.......... Opponents...... INTERCEPTIONS ROBINSON, Josh WEAMS, Reggie HOGUE, Cory HALLMAN, Derrick YOUNG, Lawrence BODDIE, Justin KICK RETURNS MCDUFFIE, Quincy BALDWIN, Darin NEWSOME, Jamar THOMPSON, Alex DAVIS, Jonathan Total.......... Opponents...... FUMBLE RETURNS TIMMONS, Travis Total.......... Opponents...... GP 11 11 11 12 2 9 12 9 9 11 9 6 11 8 12 12 G 11 6 8 12 12 12 G 12 10 11 9 11 11 9 12 12 9 5 9 12 12 No. 13 8 4 25 12 No. 6 1 1 1 1 1 No. 33 8 2 1 1 45 53 No. 1 1 2 Att 248 57 19 11 11 10 3 4 1 54 1 21 2 8 450 384 Effic 133.42 122.97 0.00 732.40 132.81 133.98 No. 
42 37 32 23 14 13 12 9 7 7 5 4 205 267 Yds 140 99 61 300 37 Yds 84 0 0 23 9 22 Yds 773 218 107 0 8 1106 967 Yds 10 10 28 Gain 1134 327 90 55 47 43 30 26 15 145 2 40 1 0 1955 1495 Loss 57 18 4 0 3 0 0 0 0 135 0 39 12 25 293 505 Net 1077 309 86 55 44 43 30 26 15 10 2 1 -11 -25 1662 990 Pct 61.1 51.3 0.0 100.0 59.6 62.5 TD 1 3 7 3 2 0 0 0 1 1 0 1 19 18 Long 38 39 23 39 13 Long 33 0 0 23 9 22 Long 95 72 89 0 8 95 100 Long 10 10 22 Avg 4.3 5.4 4.5 5.0 4.0 4.3 10.0 6.5 15.0 0.2 2.0 0.0 -5.5 -3.1 3.7 2.6 Yds 2263 215 0 36 2514 3187 Long 76 35 40 52 19 23 20 34 27 15 13 17 76 88 TD 14 3 1 0 0 0 0 0 0 1 0 0 0 0 19 10 TD 15 3 0 1 19 18 Avg/G 46.6 41.2 49.5 31.6 14.5 6.9 11.0 11.8 8.7 5.1 8.6 5.1 209.5 265.6 Long 50 45 27 14 12 18 14 17 15 17 2 6 1 0 50 54 Lng 76 52 0 36 76 88 Avg/G 97.9 28.1 7.8 4.6 22.0 4.8 2.5 2.9 1.7 0.9 0.2 0.2 -1.0 -3.1 138.5 82.5 Avg/G 205.7 35.8 0.0 3.0 209.5 265.6 Cmp-Att-Int 184-301-11 20-39-0 0-3-0 1-1-0 205-344-11 267-427-11 Yds 559 412 545 284 159 76 99 141 104 46 43 46 2514 3187 Avg 10.8 12.4 15.2 12.0 3.1 Avg 14.0 0.0 0.0 23.0 9.0 22.0 Avg 23.4 27.2 53.5 0.0 8.0 24.6 18.2 Avg 10.0 10.0 14.0 Avg 13.3 11.1 17.0 12.3 11.4 5.8 8.2 15.7 14.9 6.6 8.6 11.5 12.3 11.9 TD 0 0 0 0 0 TD 1 0 0 0 0 0 TD 1 0 0 0 0 1 1 TD 0 0 1 31 THE ST. PETERSBURG BOWL 2009 UCF INDIVIDUAL STATISTICS SCORING HARVEY, Brynn CATTOI, Nick AIKEN, Kamar ROSS, Rocky NEWSOME, Jamar DAVIS, Jonathan KAY, Ricky MCDUFFIE, Quincy BOYLE, Jamie HODGES, Brett ROBINSON, Josh GIOVANETTI, Billy KELLY, Brendan WEAVER, Ronnie GUYTON, A.J. Total.......... Opponents...... TOTAL OFFENSE HODGES, Brett HARVEY, Brynn DAVIS, Jonathan CALABRESE, Rob WEAVER, Ronnie GUYTON, A.J. MCDUFFIE, Quincy DAVIS, Brandon KELLY, Brendan NEWSOME, Jamar WATTERS, Brian GIOVANETTI, Billy AIKEN, Kamar TEAM Total.......... Opponents...... FIELD GOALS BOYLE, Jamie CATTOI, Nick PUNTING CLINGAN, Blake KICKOFFS CATTOI, Nick BOYLE, Jamie Total.......... Opponents...... ALL PURPOSE HARVEY, Brynn MCDUFFIE, Quincy GUYTON, A.J. 
AIKEN, Kamar ROSS, Rocky NEWSOME, Jamar DAVIS, Jonathan BALDWIN, Darin KAY, Ricky NISSLEY, Adam WATTERS, Brian KELLY, Brendan WEAVER, Ronnie ROBINSON, Josh GIOVANETTI, Billy DAVIS, Brandon RABAZINSKI, Corey HALLMAN, Derrick BODDIE, Justin HODGES, Brett YOUNG, Lawrence CALABRESE, Rob TEAM Total.......... Opponents...... TD 14 0 7 3 3 3 2 2 0 1 1 1 1 1 1 40 31 G 11 11 11 6 11 12 12 2 9 9 9 9 11 8 12 12 FGs 0-0 13-19 0-0 0-0 0-0 0-0 0-0 0-0 0-2 0-0 0-0 0-0 0-0 0-0 0-0 13-21 12-18 Plays 355 248 57 60 19 4 11 11 10 4 1 1 2 11 794 811 |------------- PATs -------------------| Kick Rush Rcv Pass 0-0 0-1 1 0-0 28-30 0-0 0 0-0 0-0 0-0 0 0-0 0-0 0-0 0 0-0 0-0 0-0 0 0-0 0-0 0-0 0 0-0 0-0 0-0 0 0-0 0-0 0-0 0 0-0 7-8 0-0 0 0-0 0-0 0-0 0 1-1 0-0 0-0 0 0-0 0-0 0-0 0 0-0 0-0 0-0 0 0-0 0-0 0-0 0 0-0 0-0 0-0 0 0-0 35-38 0-1 1 1-1 26-28 0-1 0 0-2 Rush 10 1077 309 1 86 30 55 44 43 26 15 2 -11 -25 1662 990 Pct 0.0 68.4 Avg 37.0 Avg 65.2 65.8 65.2 61.2 Rec 76 104 559 545 412 284 0 0 159 141 99 46 0 0 46 0 43 0 0 0 0 0 0 2514 3187 Pass 2263 0 0 215 0 36 0 0 0 0 0 0 0 0 2514 3187 01-19 0-0 0-0 Long 70 TB 10 0 10 3 PR 0 0 140 0 99 0 61 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 300 37 Total 2273 1077 309 216 86 66 55 44 43 26 15 2 -11 -25 4176 4177 20-29 0-0 5-6 TB 3 OB 1 0 1 1 KOR 0 773 0 0 0 107 8 218 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1106 967 Avg/G 206.6 97.9 28.1 36.0 7.8 5.5 4.6 22.0 4.8 2.9 1.7 0.2 -1.0 -3.1 348.0 348.1 30-39 0-2 3-6 FC 20 Retn 967 1106 IR 0 0 0 0 0 0 0 0 0 0 0 0 0 84 0 0 0 23 22 0 9 0 0 138 164 40-49 0-0 4-6 I20 23 Net 46.7 38.4 Tot 1153 932 729 534 511 417 378 218 159 141 114 89 86 84 48 44 43 23 22 10 9 1 -25 5720 5345 50-99 0-0 1-1 Blkd 0 YdLn 23 31 Avg/G 104.8 77.7 60.8 48.5 51.1 46.3 34.4 21.8 14.5 11.8 12.7 9.9 7.8 7.0 5.3 22.0 8.6 1.9 2.2 0.9 0.8 0.2 -3.1 476.7 445.4 Lg 0 50 Blk 1 1 DXP 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Saf 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Points 86 67 42 18 18 18 12 12 7 6 6 6 6 6 6 316 248 FGM-FGA 0-2 13-19 No. 57 No. 
2009 UCF DEFENSIVE STATISTICS
[Table: per-player tackles (solo-assist-total), tackles for loss, sacks, pass defense (interceptions, break-ups), QB hurries, fumbles (recoveries, forced fumbles), blocked kicks and safeties]

DEFENSIVE CATEGORY LEADERS
Sacks: Miller 12.0 (-114); Geathers 11.0 (-72); Nall 4.0 (-27); Hogue 2.0 (-13); Troup 2.0 (-7)
TFL: Miller 16.5 (-122); Geathers 13.5 (-79); Hogue 11.5 (-35); Young 10.0 (-35); Williams 5.5 (-15); Hallman 5.5 (-10); Troup 5.0 (-14)
Interceptions: Robinson 6 (84 yds); Hallman 1 (23); Boddie 1 (22); Young 1 (9); Hogue 1 (0); Weams 1 (0)
Break-ups: Robinson 8; Baldwin 7; Hogue 5; Greco 4; Troup 4; Boddie, Miller and Young 3 each
Forced fumbles: Geathers 3; Boddie 2; Hallman 2; Hogue 2; eight players with 1

2009 UCF GAME-BY-GAME INDIVIDUAL STATISTICS
[Tables: game-by-game rushing, passing and receiving lines for each UCF player]

2009 UCF GAME-BY-GAME TACKLE BREAKDOWN
[Tables: game-by-game solo-assist-total tackle lines for defensive linemen, linebackers and defensive backs]

2009 UCF GAME-BY-GAME TEAM STATISTICS
[Table: game-by-game team totals - rushing, passing, receiving, returns, tackles, sacks, fumbles, interceptions, punting, field goals, kickoffs and scoring]

GAME-BY-GAME STARTERS - OFFENSE
[Table: starters by game at WR, WR, LT, LG, C, RG, RT, TE, QB, TB, HB and PK]

GAME-BY-GAME STARTERS - DEFENSE
[Table: starters by game at LE, LT, RT, RE, OLB, MLB, OLB, FS, SS, CB, CB and P]
UCF SUPERLATIVES

INDIVIDUAL GAME HIGHS
[Table: UCF individual single-game highs by category - rushing, passing, receiving, kicking, punting, returns and defense - with player, opponent and date]

2009 OPPONENT SUPERLATIVES

OPPONENT INDIVIDUAL GAME HIGHS
[Table: opponent individual single-game highs by category, with player, opponent and date]

OPPONENT TEAM GAME HIGHS
[Table: opponent team single-game highs by category, with game]

TEAM GAME HIGHS
[Table: UCF team single-game highs by category, with game]
BRIGHT HOUSE NETWORKS STADIUM
ONE OF THE NATION'S NEWEST COLLEGE FOOTBALL STADIUMS

nationally on ESPN2. In its inaugural campaign at the stadium, UCF capped the season with six-straight wins at home, including a 44-25 victory over Tulsa in the Conference USA Championship Game. UCF eventually put together a seven-game home winning streak which tied for the eighth longest in the nation. The Knights outscored their opponents 281-120 over that stretch. In 2009, they set a stadium record when 48,453 fans entered the gates to see the Black and Gold take on Miami Oct. 17, and the Knights finished the year 6-1 at home.

LARGEST CROWDS IN UCF HISTORY
The first 10 home games at Bright House Networks Stadium resulted in seven of the top 10 all-time home attendance figures in school history. Four of the 10 games were sellouts, which were the first in school history.

Largest Home Crowds in UCF History
1. Tulsa (2005) 51,978
2. Virginia Tech (2000) 50,220
3. Miami (2009) 48,453*
4. South Florida (2008) 46,805*
5. South Florida (2006) 46,708
6. Marshall (2007) 46,103*
7. Texas (2007) 45,622*
8. Tulsa (2007) 45,510*
9. Tulsa (2007, C-USA Champ.) 44,128
10. SMU (2008) 43,417
* Sellout

Largest Away Game Crowds in UCF History
1. Penn State (2002) 103,029
2. Penn State (2004) 101,715
3. Texas (2009) 101,003
4. Florida (2006) 90,210
5. Georgia (1999) 86,117
6. Florida (1999) 85,346
7. Alabama (2000) 83,818
8. South Carolina (2005) 82,753
9. Wisconsin (2004) 82,116
10. Auburn (1997) 82,109

RECORD AT BHNS (14-6 ALL-TIME)
9/15/07 #6 Texas - L, 32-35
9/22/07 Memphis - W, 56-20
9/29/07 Louisiana-Lafayette - W, 37-19
10/20/07 Tulsa - W, 44-23
11/3/07 Marshall - W, 47-13
11/24/07 UTEP - W, 46-20
12/1/07 Tulsa - W, 44-25
8/30/08 S.C. State - W, 17-0
9/6/08 #17 USF - L, 24-31 OT
10/4/08 SMU - W, 31-17
11/2/08 East Carolina - L, 10-13 OT
11/8/08 Southern Miss - L, 6-17
11/29/08 UAB - L, 0-15
9/5/09 Samford - W, 28-24
9/19/09 Buffalo - W, 23-17
10/3/09 Memphis - W, 32-14
10/17/09 #9 Miami - L, 7-27
11/1/09 Marshall - W, 21-20
11/14/09 #13 Houston - W, 37-32
11/21/09 Tulane - W, 49-0

2007-09 C-USA AVERAGE ATTENDANCE LEADERS
1. East Carolina 41,749
2. UCF 40,612
3. UTEP 34,292
4. Southern Miss 29,173
5. Memphis 26,874
6. Marshall 25,674
7. Tulane 24,746
8. Tulsa 23,833
9. Houston 22,634
10. SMU 19,433
11. UAB 17,918
12. Rice 11,177

KAMAR AIKEN #81
JUNIOR - WIDE RECEIVER - 6-2/213 - MIAMI, FLA./CHAMINADE-MADONNA
Season highs: Receptions: 5 vs. Marshall (11/1) and Tulane (11/21); Yards: 84 vs. Tulane (11/21); Touchdowns: 2 vs. Tulane (11/21)
- Posted at least 30 receptions for the second time in his career, totaling 32 for 545 yards
- Hauled in seven touchdown passes, tying for the most by a Knight since Mike Sims-Walker had seven in 2006
- That total includes three TDs in the last two outings
- Has caught at least one pass in all 11 games he's played
- Two or more receptions in nine of those games
- The two touchdown catches against Tulane marked the first multi-TD game of his career
[Table: game-by-game receiving breakdown]
2007 AutoZone Liberty Bowl: Posted one catch for 10 yards, hauling in a Kyle Israel pass on 3rd-and-6 on UCF's first drive of the third quarter against Mississippi State.
Career receiving:
Year    G  Rec   Yds  Avg  TD  Lg
2007   14   33   584 17.7   5  72
2008    9   20   244 12.2   1  39
2009   11   32   545 17.0   7  40
Totals 34   85 1,373 16.2  13  72

DARIN BALDWIN #21
JUNIOR - CORNERBACK - 5-11/197 - HOMESTEAD, FLA./SOUTH DADE
Season highs: Tackles: 8 at East Carolina (9/26); Break-ups: 3 vs. Samford (9/5); Kickoff Returns: 2 at USM (9/12) and vs. Memphis (10/3); Kickoff Return Yards: 72 vs. Samford (9/5)
- Made five starts in 10 games this year
- Has totaled eight kickoff returns for 218 yards
Baldwin's Returns: Starting off with a 72-yard return vs. Samford in the season opener, Baldwin has eight kickoff returns for 218 yards this year.
[Table: game-by-game defensive and kickoff return breakdown]
Career defense:
Year    G Solo Ast Total Sack TFL PBU INT
2007   11   14   3    17  1.0 2.0   3   1
2008   12    8   4    12  0.0 0.0   4   0
2009   10   20  12    32  0.0 2.0   7   0
Totals 33   42  19    61  1.0 4.0  14   1

JUSTIN BODDIE #23
JUNIOR - CORNERBACK - 6-2/190 - ATLANTA, GA./NORTH ATLANTA
Season highs: Tackles: 10 vs. Miami (10/17), at Rice (10/24) and at UAB (11/28); Break-ups: 2 at UAB (11/28); Interceptions: 1 vs. Houston (11/14)
- Along with 55 tackles, he has posted four TFLs, one sack, two forced fumbles, one interception and three pass break-ups
- Started all 10 games played
All-Around Defender: Boddie has not only racked up 55 tackles in 2009; he also has four TFLs, one sack, two forced fumbles and one interception return for 22 yards.
[Table: game-by-game defensive breakdown]
Career defense:
Year    G Solo Ast Total Sack TFL PBU INT
2007    9   15   5    20  0.0 0.0   0   2
2008    9    6   4    10  0.0 0.0   0   0
2009   10   41  14    55  1.0 4.0   3   1
Totals 28   62  23    85  1.0 4.0   3   3

A.J. BOUYE #19
FRESHMAN - DEFENSIVE BACK - 6-0/180 - TUCKER, GA./TUCKER
Season highs: Tackles: 3 at Rice (10/24); Break-ups: 1 vs. Tulane (11/21)
- Key player on special teams
- In nine games off the bench, has nine tackles, including three at Rice and two each at Texas and vs. Tulane
Helping Out the Secondary: After not recording a tackle in UCF's first four games, Bouye has collected nine takedowns in the last six games he has played.
[Table: game-by-game defensive breakdown]
Career defense:
Year    G Solo Ast Total Sack TFL PBU INT
2009    9    6   3     9  0.0 0.0   1   0

JAMIE BOYLE #28
FRESHMAN - PLACEKICKER/PUNTER - 5-10/182 - CENTRAL VALLEY, N.Y./MONROE-WOODBURY
Season highs: PATs Made: 3 vs. Tulane (11/21); Field Goals Attempted: 2 vs. Samford (9/5); Kickoffs: 4 vs. Tulane (11/21); Kickoff Yards: 263 vs. Tulane (11/21)
- Perfect on PATs against Tulane (3-3), Rice (2-2) and East Carolina (1-1)
Getting Into the Act: Helping UCF record C-USA's most lopsided shutout ever in a league game, Boyle was 3-3 on PATs and had four kickoffs in the 49-0 win vs. Tulane.
[Table: game-by-game kicking breakdown]
Career kicking:
Year   G  FG  Pct Lg  PAT
2009   4 0-2  0.0  0  7-8

IAN BUSTILLO #73
RS SENIOR - OFFENSIVE LINE - 6-2/301 - MIAMI, FLA./KILLIAN
- Started all 12 games as UCF's center
- Has made starts in 19 games over the last two seasons, all at center

ROB CALABRESE #4
SOPHOMORE - QUARTERBACK - 6-2/213 - ISLIP TERRACE, N.Y./EAST ISLIP
Season highs: Completions: 10 at Texas (11/7); Attempts: 19 at Texas (11/7); Yards: 76 at Texas (11/7); Touchdowns: 2 at Rice (10/24)
- Made three starts in six games, throwing three touchdowns and no interceptions while posting a 122.97 passer efficiency rating
Taking on One of the Best: Calabrese got the start at No. 2 Texas and completed over 50 percent of his passes against an elite defense, going 10-for-19.
[Table: game-by-game passing breakdown]
Career passing:
Year    G Comp Att Int  Pct Yds TD Lg
2008    9   65 165   5 39.4 664  7 62
2009    6   20  39   0 51.3 215  3 52
Totals 15   85 204   5 41.7 879 10 62

NICK CATTOI #16
SOPHOMORE - PLACEKICKER - 6-5/210 - TAMPA, FLA./GAITHER
Season highs: Field Goals Made: 4 vs. Memphis (10/3); Field Goals Attempted: 4 vs. Memphis (10/3); PATs Made: 5 at Rice (10/24); Kickoffs: 8 vs. Memphis (10/3) and at Rice (10/24); Kickoff Yards: 544 vs. Memphis (10/3)
- By going 4-for-4 vs. Memphis, he tied a school record with four field goals made in a game
Stepping it Up: In his last four games, Cattoi is 4-for-5 on field-goal attempts, including a 39-yarder at No. 2 Texas. His season long was 50 at Southern Miss.
[Table: game-by-game kicking breakdown]
Career kicking:
Year    G    FG  Pct Lg   PAT
2008    5   4-6 66.7 38  9-10
2009   12 13-19 68.4 50 28-30
Totals 17 17-25 68.0 50 37-40

BLAKE CLINGAN #41
JUNIOR - PUNTER - 6-3/229 - CORAL SPRINGS, FLA./CORAL SPRINGS
Season highs: Punts: 8 at Texas (11/7); Yards: 340 at Texas (11/7); Average: 42.5 at Texas (11/7)
- Has placed 23 of 57 punts inside the 20
- Opponents' punt return game has been held to just 37 yards all year on 12 returns for a 3.1 average, which ranks as the seventh-best punt return defense in the nation
2007 AutoZone Liberty Bowl: As a freshman, Clingan totaled six punts vs. Mississippi State, racking up 257 yards for a 42.8 average, including a long of 55 yards.
[Table: game-by-game punting breakdown]
Career punting:
Year    G  No   Yds  Avg I20 Lg
2007   13  59 2,404 40.7  14 55
2008   12  88 3,560 40.5  27 59
2009   12  57 2,108 37.0  23 70
Totals 37 204 8,072 39.6  64 70

JONATHAN DAVIS
FRESHMAN - RUNNING BACK - 5-9/195 - LAWRENCEVILLE, GA./TUCKER
Season highs: Rushing Attempts: 22 at Texas (11/7); Yards: 76 vs. Tulane (11/21); Touchdowns: 1 vs. Houston (11/14), Tulane (11/21) and at UAB (11/28)
- A true freshman who started the Texas game
- Did not have a rushing attempt until the Rice game Oct. 24
- Has netted at least 70 yards in three of his last four outings
- Also has one kick return for eight yards vs.
Marshall, and four total punt returns in 2009 for 61 yards � Contributes on special teams as well Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB YEAR 2009 TOTALS NO 0 0 0 0 0 2 2 22 8 16 7 G 11 11 YDS DNP 0 0 0 0 0 44 16 71 27 76 75 #27 TD 0 0 0 0 0 0 0 0 1 1 1 YDS 309 309 LG 0 0 0 0 0 35 15 12 15 14 45 Hitting the Endzone � Although he didn't carry the ball in UCF's first six games, Davis has rushed for a touchdown in each of the last three outings. Game-By-Game Breakdown Jonathan Davis ATT 57 57 AVG 5.4 5.4 TD 3 3 LG 45 45 JARVIS GEATHERS � 2009 All-C-USA Defensive First Team � Season highs--Tackles: 6 at Texas (11/7) TFLs: 3 vs. Buffalo (9/19) Sacks: 3 vs. Buffalo (9/19) Forced Fumbles: 2 vs. Buffalo (9/19) � Two starts at right end vs. Samford and Houston � Primarily comes off the bench, and has notched 13.5 TFLs, 11.0 sacks, seven quarterback hurries, one breakup, three forced fumbles and one fumble recovery � Ranks tied for 10th in the nation with 11.0 sacks and 41st with 13.5 tackles for loss SENIOR � DEFENSIVE END � 6-2/238 � ANDREWS, S.C./ANDREWS (FEATHER RIVER CC) Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB YEAR 2008 2009 TOTALS #99 FF 0 0 2 0 0 0 1 0 0 0 0 0 SACK 5.5 11.0 16.5 The Force Up Front � He's only started two games this year, but Geathers has amassed 11.0 sacks and 13.5 TFLs in 2009. He has at least half a sack in eight games. 
Game-By-Game Breakdown TACK TFL-YDS SACKS 1-0-1 1.0-15 1.0-15 0-1-1 0.5-1 0.0-0 3-0-3 3.0-10 3.0-10 0-2-2 0.5-1 0.0-0 1-2-3 0.5-6 0.5-6 1-1-2 1.5-11 1.0-10 3-1-4 2.5-12 1.5-8 1-1-2 0.0-0 0.0-0 5-1-6 2.0-9 2.0-9 3-0-3 1.0-7 1.0-7 0-0-0 0.0-0 0.0-0 1-0-1 1.0-7 1.0-7 G 12 12 24 SOLO 21 19 40 AST 14 9 23 TOTAL 35 28 63 Jarvis Geathers TFL 9.5 13.5 23.0 PBU 2 1 3 INT 0 0 0 BILLY GIOVANETTI RS FRESHMAN � H-BACK � 5-11/225 � WINTER PARK, FLA./BISHOP MOORE � Season highs--Rushing Attempts: 1 at Southern Miss (9/12) Rushing Yards: 2 at Southern Miss (9/12) Receptions: 3 at UAB (11/28) Receiving Yards: 23 vs. Miami (10/17) Receiving Touchdowns: 1 at UAB (11/28) � In nine games, has racked up 46 receiving yards on seven catches, including his first career touchdown at UAB Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO 0 1 0 0 1 2 0 YDS 0 5 0 0 7 23 0 DNP DNP DNP 0 11 #32 TD 0 0 0 0 0 0 0 LG 0 5 0 0 8 15 0 A Family Affair � Brother Nick has seen time in seven contests on special teams, and father Bill is a UCF Athletics Hall of Fame member (football 1979-82). 41 Game-By-Game Breakdown Billy Giovanetti YEAR 2009 TOTALS G 9 9 REC 7 7 YDS 46 46 AVG 6.6 6.6 TD 1 1 LG 15 15 0 3 0 1 0 5 THE ST. PETERSBURG BOWL THEO GOINS RS FRESHMAN � OFFENSIVE LINE � 6-4/316 � HOUSTON, TEXAS/HIGHTOWER � 2009 C-USA All-Freshman Offensive Team � After not playing in the first seven games of the year, saw time in the Marshall game before making starts at right guard in the last four contests of the regular season #68 Homecoming � As a redshirt freshman this year, Goins made his starting debut in his home state at No. 2 Texas. MICHAEL GRECO RS SENIOR � SAFETY � 6-3/217 � FORT LAUDERDALE, FLA./CARDINAL GIBBONS (PEARL RIVER CC) � Season highs--Tackles: 8 vs. Buffalo (9/19) Break-ups: 2 at Texas (11/7) Forced Fumbles: 1 vs. 
Houston (11/14) � In his first year on the defensive side of the ball after making four starts and playing in 17 games at quarterback from 2007-08 � Made seven starts in 2009 while playing in 10 total games � Tied for the team lead at Buffalo when he posted eight tackles #2 FF 0 0 0 0 0 0 0 0 2 1 Quickly Evolving � In his first year on defense, Greco made just one start in the first four games of 2009 before making six-straight starts until suffering an injury. Game-By-Game Breakdown Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB YEAR 2007 2008 TOTALS YEAR 2009 TOTALS TACK TFL-YDS BRKUP 0-0-0 0.0-0 0 1-0-1 0.0-0 0 5-3-8 0.0-0 0 0-1-1 0.0-0 0 2-2-4 0.0-0 1 5-1-6 0.0-0 0 3-2-5 0.0-0 0 3-2-5 0.0-0 0 2-3-5 0.0-0 0 5-0-5 0.0-0 1 DNP DNP G 9 8 17 COMP 24 52 76 SOLO 26 26 ATT 45 107 152 AST 14 14 INT 1 4 5 TOTAL 40 40 Michael Greco PCT 53.3 48.6 50.0 SACK 0.0 0.0 YDS 303 571 874 TFL 0.0 0.0 TD 0 5 5 PBU 4 4 LG 42 56 56 INT 0 0 G 10 10 A.J. GUYTON 42 RS SOPHOMORE � WIDE RECEIVER � 5-11/195 � HOMESTEAD, FLA./HOMESTEAD Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO 0 1 1 9 4 0 3 5 5 4 6 4 YDS 0 13 11 119 50 0 103 100 35 30 56 42 #3 TD 0 0 0 0 0 0 1 0 0 0 0 0 LG 0 13 11 29 19 0 76 41 14 11 19 17 The Go-To Guyton � A slow start set up a pleasant season for Guyton, as he leads UCF with 42 receptions and 559 yards, including three 100-yard performances. � Season highs--Receptions: 9 at East Carolina (9/26) Yards: 119 at East Carolina (9/26) Touchdowns: 1 at Rice (10/24) Punt Returns: 3 vs. Marshall (11/1) Punt Return Yards: 42 at Rice (10/24) � Also has three rushes for 30 yards, and threw a 36-yard touchdown pass to Kamar Aiken at Rice � Totaled 13 punt returns for 140 yards Game-By-Game Breakdown A.J. Guyton YEAR 2007 2009 TOTALS G 14 12 26 REC 23 42 65 YDS 253 559 812 AVG 11.0 13.3 12.5 TD 2 1 3 LG 39 76 76 THE ST. 
PETERSBURG BOWL DERRICK HALLMAN JUNIOR � LINEBACKER � 6-0/212 � FORT PIERCE, FLA./FORT PIERCE CENTRAL � Season highs--Tackles: 13 at East Carolina (9/26) TFLs: 2 vs. Marshall (11/1) Forced Fumbles: 1 vs. Buffalo (9/19) and vs. Houston (11/14) Interceptions: 1 vs. Buffalo (9/19) Break-ups: 1 vs. Memphis (10/3) and at Rice (10/24) � Has set a career-high with 79 total tackles to rank second on the team, making starts in all 12 games this year (five at safety, seven at linebacker) Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB TACK 3-1-4 7-2-9 7-1-8 3-10-13 2-4-6 2-6-8 3-4-7 3-1-4 4-0-4 3-3-6 1-3-4 3-3-6 #38 TFL-YDS BRKUP INT-YDS 1.0-2 0 0 0.0-0 0 0 1.0-3 0 1-23 0.0-0 0 0 0.5-2 1 0 1.0-1 0 0 0.0-0 1 0 2.0-2 0 0 0.0-0 0 0 0.0-0 0 0 0.0-0 0 0 0.0-0 0 0 2007 AutoZone Liberty Bowl � As a freshman facing Mississippi State, Hallman tied for fourth on the team with five tackles. He also had one TFL and a pass break-up. Game-By-Game Breakdown Derrick Hallman YEAR 2007 2008 2009 TOTALS G 12 12 12 36 SOLO 37 42 41 120 AST 13 16 38 67 TOTAL 50 58 79 187 SACK 3.0 1.0 0.0 4.0 TFL 9.5 8.0 5.5 23.0 PBU 4 6 2 12 INT 1 1 1 3 BRYNN HARVEY SOPHOMORE � RUNNING BACK � 6-1/205 � LARGO, FLA./LARGO � Season highs--Rushing Attempts: 42 vs. Memphis (10/3) Rushing Yards: 219 vs. Memphis (10/3) Touchdowns: 3 vs. Houston (11/14) and vs. Tulane (11/21) Receptions: 3 at East Carolina (9/26) Receiving Yards: 23 at ECU (9/26) and vs. 
Houston (11/14) � Collected five 100-yard rushing games, including at least 129 yards in the last three games, and rushed for at least 1,000 yards in a season, the eighth time a Knight reached the 1,000-yard mark Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO 31 14 25 16 42 12 12 21 35 16 24 YDS 111 37 98 71 219 25 71 47 DNP 139 129 130 #34 Game-By-Game Breakdown TD 2 0 2 1 1 0 0 1 3 3 1 LG 20 15 13 19 35 6 23 6 41 50 25 The 1,000-Yard Milestone � Just a sophomore, Harvey reached the 1,000-yard mark with 130 yards at UAB, the eighth time a UCF back has reached that plateau. Brynn Harvey YEAR 2008 2009 TOTALS G 10 11 21 ATT 125 248 373 YDS 519 1,077 1,596 AVG 4.2 4.3 4.3 TD 1 14 15 LG 50 50 50 BRETT HODGES RS SENIOR � QUARTERBACK � 6-1/191 � WINTER SPRINGS, FLA./WINTER SPRINGS (WAKE FOREST) REST) � Season highs--Completions: 24 at UAB (11/28) Attempts: 45 vs. Marshall (11/1) Yards: 342 vs. Marshall (11/1) Touchdowns: 2, Five Times Rushing Attempts: 13 vs. Buffalo (9/19) Rushing Yards: 71 vs. Buffalo (9/19) Rushing Touchdowns: 1 at Rice (10/24) � Came off the bench in the first two games of the year, and has started nine games since, going 7-2 overall and 6-1 in C-USA games as the starting signal caller � Recorded a touchdown pass in eight-straight games played � Holds a 133.42 passing efficiency rating � Completed over 61 percent of his passes in his last four outings � Was only sacked twice in the final two games of the regular season #11 LG 26 29 39 40 34 41 76 41 31 29 32 Senior Takes the Reigns � In his first year at UCF, Hodges has thrown a TD pass in each of his last eight games. He had a QB rating of 170.18 in the upset over UH. 
Game-By-Game Breakdown Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB C-A-I 10-17-1 15-26-0 15-20-0 21-34-4 16-28-1 12-27-1 8-13-0 23-45-0 21-25-1 19-28-1 24-38-1 YDS 129 158 141 266 214 163 145 342 DNP 241 234 230 TD 1 2 0 1 2 1 1 2 1 2 2 4 43 Brett Hodges (* with Wake Forest) YEAR 2006* 2007* 2008* 2009 TOTALS G 3 7 2 11 23 COMP 2 43 1 184 230 ATT 2 66 2 301 371 INT 0 3 0 11 14 PCT 100.0 65.2 50.0 61.1 62.0 YDS 24 359 19 2,263 2,665 TD 0 1 0 15 16 LG 12 61 19 76 76 THE ST. PETERSBURG BOWL CORY HOGUE RS SENIOR � LINEBACKER � 6-1/228 � NAPLES, FLA./NAPLES � 2009 All-C-USA Defensive First Team � Season highs--Tackles: 12 at Southern Miss (9/12), at Texas (11/7) and at UAB (11/28) TFLs: 3.5 vs. Miami (10/17) Sacks: 2 vs. Miami (10/17) Forced Fumbles: 2 at Rice (10/24) � Trying to become the first UCF LB since 2003 to post at least 100 tackles Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB TACK 7-0-7 4-8-12 6-2-8 1-5-6 5-2-7 7-1-8 4-2-6 2-4-6 10-2-12 5-5-10 2-3-5 3-9-12 #59 Game-By-Game Breakdown TFL-YDS 1.0-3 1.5-2 0.0-0 0.0-0 1.5-6 3.5-15 1.0-2 0.5-2 2.0-3 0.0-0 0.0-0 0.5-2 SACKS 0.0-0 0.0-0 0.0-0 0.0-0 0.0-0 2.0-13 0.0-0 0.0-0 0.0-0 0.0-0 0.0-0 0.0-0 FF 0 0 0 0 0 0 2 0 0 0 0 0 Hogue in Bowl Games � In the 2005 Sheraton Hawaii Bowl, Hogue had two tackles. He also started the 2007 AutoZone Liberty Bowl where he notched three tackles. Cory Hogue YEAR 2005 2006 2007 2008 2009 TOTALS G 11 10 12 3 12 48 SOLO 23 25 55 14 56 173 AST 11 25 17 6 43 102 TOTAL 34 50 72 20 99 275 SACK 0.0 2.5 1.0 1.0 2.0 6.5 TFL 3.0 4.0 6.5 3.5 11.5 28.5 PBU 0 0 3 1 5 9 INT 1 0 0 0 1 2 KEMAL ISHMAEL FRESHMAN � DEFENSIVE BACK � 5-11/197 � MIAMI, FLA./NORTH MIAMI BEACH � Season highs--Tackles: 11 at Rice (10/24) Break-ups: 1 vs. 
Miami (10/17) TFLs: 1 at East Carolina (9/26) � Has made eight starts in 12 games played � As a true freshman, posted at least one tackle in every game � Racked up at least five tackles in his last eight outings � Sits in fifth on the UCF defense with 64 tackles Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #18 One of Two � Kemal Ishmael has played in all 12 games, joining Josh Robinson as the only two true freshmen on defense who saw time in every game. Game-By-Game Breakdown TACK TFL-YDS BRKUP INT-YDS 2-0-2 0.0-0 0 0 1-1-2 0.0-0 0 0 1-0-1 0.0-0 0 0 2-2-4 1.0-3 0 0 6-0-6 0.0-0 0 0 3-4-7 0.0-0 1 0 8-3-11 0.0-0 0 0 4-3-7 0.0-0 0 0 4-1-5 0.0-0 0 0 5-3-8 0.0-0 0 0 4-2-6 0.0-0 0 0 4-1-5 0.0-0 0 0 Kemal Ishmael YEAR 2009 TOTALS G 12 12 SOLO 44 44 AST 20 20 TOTAL 64 64 SACK 0.0 0.0 TFL 1.0 1.0 PBU 1 1 INT 0 0 RICKY KAY 44 JUNIOR � H-BACK � 6-3/239 � DELTONA, FLA./DELAND � Season highs--Receptions: 4 vs. Memphis (10/3) Yards: 52 vs. Memphis (10/3) Touchdowns: 1 vs. Memphis (10/3) and at Rice (10/24) � Collected at least one catch in the last eight games of the year � His 159 receiving yards this year topped his career totals entering the 2009 season Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO 0 0 0 4 2 1 1 2 2 1 1 #43 Game-By-Game Breakdown YDS DNP 0 0 0 52 28 13 16 22 15 12 1 TD 0 0 0 1 0 1 0 0 0 0 0 LG 0 0 0 17 19 13 16 11 9 12 1 LG 19 9 19 19 2007 AutoZone Liberty Bowl � Kay earned the start at fullback against Mississippi State and helped tailback Kevin Smith amass 119 yards on the ground. Ricky Kay YEAR 2007 2008 2009 TOTALS G 14 11 11 36 REC 6 7 14 27 YDS 60 37 159 256 AVG 10.0 5.3 11.4 9.5 TD 0 1 2 3 THE ST. PETERSBURG BOWL BRENDAN KELLY RS FRESHMAN � RUNNING BACK � 6-3/228 � SHOREHAM, N.Y./SHOREHAM-WADING RIVER � Season highs--Rushing Attempts: 5 at Rice (10/24) Rushing Yards: 18 at East Carolina (9/26) Receptions: 2 vs. 
Tulane (11/21) Receiving Yards: 23 vs. Tulane (11/21) Receiving Touchdowns: 1 vs. Tulane (11/21) YEAR 2009 TOTALS YEAR 2009 TOTALS Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO YDS DNP DNP DNP 18 9 0 16 0 0 0 0 0 #24 TD LG 0 0 0 0 0 0 0 0 0 18 4 0 7 0 0 0 0 0 Adding to the Receivers � Kelly may have 43 rushing yards this year, but he also has posted four catches for 46 yards, including a touchdown against Tulane. Game-By-Game Breakdown Brendan Kelly G 9 9 G 9 9 ATT 10 10 REC 4 4 YDS 43 43 YDS 46 46 AVG 4.3 4.3 AVG 11.5 11.5 TD 0 0 TD 1 1 LG 18 18 LG 17 17 1 4 0 5 0 0 0 0 0 ABRE LEGGINS RS JUNIOR � OFFENSIVE LINE � 6-4/317 � ORLANDO, FLA./EVANS (COPIAH-LINCOLN JC) � Earned six starts in 10 games on the year, including in the last two contests at left tackle � Also started at left guard in four of the first five games of 2009 #66 JOSH LINAM SOPHOMORE � LINEBACKER � 6-3/234 � TAVARES, FLA./TAVARES � Season highs--Tackles: 3 at Rice (10/24) and vs. Houston (11/14) TFLs: 1 at Rice (10/24) � Has appeared in all 12 games this year, doubling his tackle total from 2008 Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #50 Game-By-Game Breakdown TACK TFL-YDS SACKS 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 2-0-2 0.0-0 0.0-0 0-1-1 0.0-0 0.0-0 1-2-3 1.0-4 0.0-0 2-0-2 0.0-0 0.0-0 1-0-1 0.0-0 0.0-0 2-1-3 0.0-0 0.0-0 1-0-1 0.0-0 0.0-0 1-0-1 0.0-0 0.0-0 FF 0 0 0 0 0 0 0 0 0 0 0 0 All About Special Teams � In year No. 2 with the Knights, Linam has proven himself on special teams, collecting 14 total tackles in 2009. 
Josh Linam YEAR 2008 2009 TOTALS G 11 12 23 SOLO 2 10 12 AST 5 4 9 TOTAL 7 14 21 SACK 0.0 0.0 0.0 TFL 0.0 1.0 1.0 PBU 0 0 0 INT 0 0 0 CLIFF MCCRAY RS SENIOR � OFFENSIVE LINE � 6-2/307 � MIAMI, FLA./SOUTHRIDGE � After missing the 2008 season, has started 11 of 12 games played, seeing time at both left guard and right guard #65 4 45 THE ST. PETERSBURG BOWL QUINCY MCDUFFIE FRESHMAN � WIDE RECEIVER � 5-10/172 � ORLANDO, FLA./EDGEWATER � Season highs--Receptions: 4 vs. Houston (11/14) Receiving Yards: 77 vs. Houston (11/14) Receiving Touchdowns: 1 vs. Houston (11/14) Kickoff Returns: 6 vs. Miami (10/17) Kickoff Return Yards: 126 vs. Miami (10/17) Kickoff Return Touchdowns: 1 vs. Samford (9/5) � As a true freshman, has returned at least one kick in 11 of 12 games played � Posted at least 100 return yards in three outings � Played the first nine games until hauling in his first-career reception vs. Houston � Also has 11 rushes on the year for 55 yards Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO-YDS 0-0 0-0 0-0 0-0 0-0 0-0 0-0 0-0 0-0 4-77 2-13 1-14 LG 0 0 0 0 0 0 0 0 0 27 7 14 #14 KR-YDS 2-118 2-27 3-55 4-84 1-27 6-126 1-21 2-43 4-79 5-122 0-0 3-71 LG 95 16 23 35 27 35 21 25 27 32 0 33 A Huge Impression � McDuffie began his true freshman season with a 95-yard kickoff return for a touchdown in the seasonopener against Samford. Game-By-Game Breakdown Quincy McDuffie YEAR 2009 TOTALS YEAR 2009 TOTALS G 12 12 G 12 12 KR 33 33 REC 7 7 YDS 773 773 YDS 104 104 AVG 23.4 23.4 AVG 14.9 14.9 TD 1 1 TD 1 1 LG 95 95 LG 27 27 BRUCE MILLER RS JUNIOR � DEFENSIVE END � 6-2/253 � CANTON, GA./WOODSTOCK � 2009 C-USA Defensive Player of the Year � 2009 All-C-USA Defensive First Team � Season highs--Tackles: 10 vs. Marshall (11/1) TFLs: 3.5 vs. Memphis (10/3) Sacks: 2.5 vs. Memphis (10/3) and vs. Marshall (11/1) Forced Fumbles: 1 vs. Marshall (11/1) � The C-USA Defensive Player of the Week on Nov. 2 (Marshall game) and Nov. 
30 (UAB game) � Made 11 starts in 12 games played Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB TACK 1-2-3 4-5-9 2-0-2 2-3-5 3-2-5 3-0-3 0-3-3 4-6-10 0-1-1 3-0-3 1-3-4 3-2-5 #49 TFL-YDS 0.0-0 2.0-26 0.0-0 1.0-3 3.5-20 1.0-4 1.0-10 3.0-34 0.0-0 1.0-10 1.5-2 2.5-13 SACKS 0.0-0 2.0-26 0.0-0 0.0-0 2.5-19 1.0-4 1.0-10 2.5-33 0.0-0 1.0-10 0.0-0 2.0-12 FF 0 0 0 0 0 0 0 1 0 0 0 0 2007 AutoZone Liberty Bowl � Making the start at right end against Mississippi State, Miller finished the Liberty Bowl with three total tackles vs. the Bulldogs. Game-By-Game Breakdown Bruce Miller YEAR 2007 2008 2009 TOTALS G 14 12 12 38 SOLO 26 27 26 79 AST 12 25 27 64 TOTAL 38 52 53 143 SACK 7.0 7.0 12.0 26.0 TFL 9.0 17.0 16.5 42.5 PBU 1 3 3 7 INT 0 1 0 1 DARIUS NALL 46 RS SOPHOMORE � DEFENSIVE END � 6-3/249 � DOUGLASVILLE, GA./CHAPEL HILL � Season highs--Tackles: 3 vs. Miami (11/17) TFLs: 2 vs. Tulane (11/21) Sacks: 2 vs. Tulane (11/21) Forced Fumbles: 1 vs. Tulane (11/21) � Has notched at least one tackle in his last seven contests � Got the start at left end vs. Houston � Notched first-career two-sack game vs. Tulane Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #53 FF 0 0 0 0 0 0 0 0 0 0 1 0 The Comeback � Nall missed last year undergoing radiation treatments after having a cancerous tumor that was attached to one of his lungs removed. Game-By-Game Breakdown TACK TFL-YDS SACKS 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 0-1-1 0.5-1 0.0-0 0-0-0 0.0-0 0.0-0 3-0-3 1.0-8 1.0-8 0-2-2 0.5-6 0.5-6 0-1-1 0.5-4 0.5-4 1-0-1 0.0-0 0.0-0 1-0-1 0.0-0 0.0-0 2-0-2 2.0-9 2.0-9 0-1-1 0.0-0 0.0-0 Darius Nall YEAR 2007 2009 TOTALS G 13 12 25 SOLO 15 7 22 AST 7 5 12 TOTAL 22 12 34 SACK 1.0 4.0 5.0 TFL 4.0 4.5 8.5 PBU 1 1 2 INT 0 0 0 THE ST. PETERSBURG BOWL JAMAR NEWSOME RS JUNIOR � WIDE RECEIVER � 6-2/198 � ST. PETERSBURG, FLA./BOCA CIEGA � Season highs--Receptions: 6 vs. 
Buffalo (9/19) Receiving Yards: 62 at Rice (10/24) Touchdowns: 1 vs. Samford (9/5), vs. Memphis (10/3) and at Rice (10/24) � Along with his 23 receptions, also has four rushes for 26 yards and two kickoff returns for 107 yards, highlighted by an 89-yard return at Southern Miss Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO 2 2 6 0 5 3 2 2 YDS 19 17 37 0 58 56 62 27 DNP DNP DNP 8 #9 TD 1 0 0 0 1 0 1 0 LG 10 11 12 0 22 41 52 19 Going Home � St. Petersburg native Jamar Newsome will be heading home for the postseason. He has posted at least one catch in eight of his nine games. Game-By-Game Breakdown Jamar Newsome YEAR 2008 2009 TOTALS G 12 9 21 REC 4 23 27 YDS 81 284 365 AVG 20.2 12.3 13.5 TD 1 3 4 LG 54 52 54 1 0 8 ADAM NISSLEY RS SOPHOMORE � TIGHT END � 6-6/264 � CUMMING, GA./SOUTH FORSYTH � Season highs--Receptions: 3 vs. Marshall (11/1) Receiving Yards: 46 vs. Marshall (11/1) � Started as UCF's tight end in all 12 games this year Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO 0 0 1 0 1 0 1 3 0 1 1 1 YDS 0 0 5 0 34 0 13 46 0 6 15 22 #88 TD 0 0 0 0 0 0 0 0 0 0 0 0 LG 0 0 5 0 34 0 13 27 0 6 15 22 Making the Full Transition � Nissley has played in all 24 games in two seasons at UCF. He worked as a RT and TE in 2008 before switching full time to TE in 2009. Game-By-Game Breakdown Adam Nissley YEAR 2008 2009 TOTALS G 12 12 24 REC 1 9 10 YDS 12 141 153 AVG 12.0 15.7 15.3 TD 0 0 0 LG 12 34 34 NICK PIESCHEL RS SOPHOMORE � OFFENSIVE LINE � 6-7/302 � FORT LAUDERDALE, FLA./ST. 
THOMAS AQUINAS INAS � Earned the start in 10 of 11 games played, all at left tackle � In first two seasons, has received 17 starts on the offensive line #77 47 JAH REID RS JUNIOR � OFFENSIVE LINE � 6-7/314 � HAINES CITY, FLA./HAINES CITY � 2009 All-C-USA Offensive First Team � Along with center Ian Bustillo, one of two offensive lineman to start all 12 games, serving as the lone right tackle on the line #76 THE ST. PETERSBURG BOWL JORDAN RICHARDS � Season highs--Tackles: 9 at East Carolina (9/26) TFLs: 2 at East Carolina (9/26) Forced Fumbles: 1 at East Carolina (9/26) � A key special teams player � Started three games at linebacker � ESPN The Magazine Academic All-District selection RS SENIOR � LINEBACKER � 6-2/225 � CARY, N.C./CARY (HARGRAVE MILITARY ACADEMY) Y) Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #56 FF 0 0 0 1 The Bowl Games � Richards didn't participate in the 2005 and 2007 UCF bowl games, so the St. Petersburg Bowl will be his first postseason game as a Knight. 0 0 0 0 0 0 Game-By-Game Breakdown TACK TFL-YDS SACKS 0-0-0 0.0-0 0.0-0 3-0-3 0.0-0 0.0-0 1-2-3 0.0-0 0.0-0 5-4-9 2.0-5 0.0-0 DNP DNP 0-2-2 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 1-1-2 0.0-0 0.0-0 1-1-2 0.5-1 0.0-0 0-1-1 0.0-0 0.0-0 Jordan Richards YEAR 2005 2006 2008 2009 TOTALS G 9 8 4 10 31 SOLO 31 20 10 11 72 AST 10 11 1 11 33 TOTAL 41 31 11 22 105 SACK 1.0 1.0 0.0 0.0 2.0 TFL 5.0 3.0 0.0 2.5 10.5 PBU 1 1 0 0 2 INT 0 0 0 0 0 JOSH ROBINSON FRESHMAN � CORNERBACK � 5-10/189 � SUNRISE, FLA./PLANTATION � 2009 All-C-USA Defensive Second Team � 2009 C-USA All-Freshman Defensive Team � Season highs--Tackles: 11 at Southern Miss (9/12) TFLs: 1 at East Carolina (9/26) Interceptions: 1, Six Times Break-ups: 2 vs. Memphis (10/3) and vs. 
Houston (11/14) � Tied for second in UCF single-season history with six interceptions this year (trails only Keith Evans' eight in 1986), and holds the all-time freshman record � Picked off a pass in six of the last eight contests of the regular season � Intercepted a pass in three-straight games from Oct. 24-Nov. 7, including a 24-yard touchdown at Rice � Started the final 10 games of the year � Ranks fourth at UCF with 65 tackles, 56 of which are solo takedowns Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB YEAR 2009 TOTALS TACK 0-0-0 10-1-11 5-1-6 9-0-9 3-1-4 6-2-8 2-2-4 4-0-4 5-1-6 3-0-3 4-0-4 5-1-6 G 12 12 #20 TFL-YDS BRKUP INT-YDS 0.0-0 0 0-0 0.0-0 1 0-0 0.0-0 0 0-0 1.0-2 1 0-0 0.0-0 2 1-33 0.0-0 0 0-0 0.0-0 0 1-24 0.0-0 1 1-0 0.0-0 1 1-3 0.0-0 2 0-0 0.0-0 0 1-16 0.0-0 0 1-8 AST 9 9 TOTAL 65 65 Leading the Way � Robinson has set a UCF freshman record with six INTs in 2009, which is tied for the most INTs by a freshman in the nation this year. Game-By-Game Breakdown Josh Robinson SOLO 56 56 SACK 0.0 0.0 TFL 1.0 1.0 PBU 8 8 INT 6 6 ROCKY ROSS 48 RS SENIOR � WIDE RECEIVER � 6-2/209 � JACKSONVILLE, FLA./BOLLES SCHOOL � Season highs--Receptions: 6 vs. Marshall (11/1) and at UAB (11/28) Yards: 85 vs. Samford (9/5) Touchdowns: 1 at Southern Miss (9/12), vs. Miami (10/17) and vs. Marshall (11/1) Punt Returns: 6 vs. Samford (9/5) � Has notched a reception in all 10 games played � Returned eight punts for 99 yards Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO 5 5 2 5 2 2 6 1 3 6 YDS 85 56 23 56 DNP 12 19 76 7 DNP 37 41 #5 TD 0 1 0 0 1 0 1 0 0 0 LG 26 29 16 35 8 15 21 7 16 13 Ross in Bowl Games � Ross started alongside Brandon Marshall in the Hawaii Bowl, notching one catch. He also started the Liberty Bowl and posted a pair of catches. 
Game-By-Game Breakdown Rocky Ross YEAR 2005 2006 2007 2008 2009 TOTALS G 13 12 14 4 10 53 REC 17 36 50 13 37 153 YDS 154 531 658 180 412 1,935 AVG 9.1 14.8 13.2 13.8 11.1 12.6 TD 0 2 2 1 3 8 LG 18 55 53 31 35 55 THE ST. PETERSBURG BOWL ALEX THOMPSON RS SENIOR � LINEBACKER � 6-2/229 � GAINESVILLE, FLA./BUCHHOLZ � Season highs--Tackles: 8 at Rice (10/24) TFLs: 1 vs. Tulane (11/21) Forced Fumbles: 1 at East Carolina (9/26) Break-ups: 1 vs. Samford (9/5) and at Rice (10/24) � An important member of UCF's special teams Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #55 TACK TFL-YDS BRKUP 2-0-2 0.0-0 1 1-0-1 0.0-0 0 2-0-2 0.0-0 0 2-1-3 0.0-0 0 1-0-1 0.0-0 0 2-1-3 0.0-0 0 2-6-8 0.5-2 1 0-0-0 0.0-0 0 0-0-0 0.0-0 0 0-0-0 0.0-0 0 1-1-2 1.0-2 0 0-1-1 0.0-0 0 FF 0 0 0 1 0 0 0 0 0 0 0 0 A Formidable Bunch � Thompson has helped the special teams this year hold opponents to just 18.2 yards per kickoff return and 3.1 yards on punt returns. Game-By-Game Breakdown Alex Thompson YEAR 2006 2007 2008 2009 TOTALS G 11 6 12 12 41 SOLO 4 4 9 13 30 AST 5 7 3 10 25 TOTAL 9 11 12 23 55 SACK 0.0 0.0 0.0 0.0 0.0 TFL 0.0 0.0 1.0 1.5 2.5 PBU 0 0 0 2 2 INT 0 0 0 0 0 TRAVIS TIMMONS SENIOR � DEFENSIVE TACKLE � 6-4/297 � GAINESVILLE, FLA/BUCHHOLZ � Season highs--Tackles: 3 at East Carolina (9/26) TFLs: 2 at East Carolina (9/26) Sacks: 1 at East Carolina (9/26) Forced Fumbles: 1 at East Carolina (9/26) � Made starts in 10 of 11 games played, all at left tackle Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #95 TACK TFL-YDS SACKS 1-1-2 0.0-0 0.0-0 0-2-2 0.0-0 0.0-0 0-1-1 0.0-0 0.0-0 2-1-3 2.0-8 1.0-1 0-0-0 0.0-0 0.0-0 1-0-1 0.0-0 0.0-0 0-1-1 0.0-0 0.0-0 2-0-2 0.0-0 0.0-0 DNP 0-0-0 0.0-0 0.0-0 1-0-1 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 FF 0 0 0 1 0 0 0 0 0 0 0 2007 AutoZone Liberty Bowl � Timmons came off the bench vs. 
Mississippi State, and although he didn't post a tackle, he helped the "D" hold MSU to 199 total yards. Game-By-Game Breakdown Travis Timmons YEAR 2006 2007 2008 2009 TOTALS G 8 14 11 11 44 SOLO 5 10 9 7 31 AST 1 7 5 6 19 TOTAL 6 17 14 13 50 SACK 1.0 1.5 0.0 1.0 3.5 TFL 2.0 3.5 2.0 2.0 9.5 PBU 0 1 0 0 1 INT 1 0 0 0 1 TORRELL TROUP � 2009 All-C-USA Defensive Second Team SENIOR � DEFENSIVE TACKLE � 6-3/314 � CONYERS, GA./SALEM � Season highs--Tackles: 5 at Southern Miss (9/12) TFLs: 1.5 vs. Samford (9/5) Sacks: 1 at East Carolina (9/26) and vs. Marshall (11/1) � Only member of the Knights' defensive line to start all 12 games at the same position (right tackle) � Has attracted a steady stream of NFL scouts to campus Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB YEAR 2006 2007 2008 2009 TOTALS #98 Game-By-Game Breakdown TACK TFL-YDS SACKS 2-1-3 1.5-3 0.0-0 0-5-5 0.0-0 0.0-0 2-1-3 0.0-0 0.0-0 1-3-4 1.0-2 1.0-2 0-1-1 0.0-0 0.0-0 4-0-4 0.0-0 0.0-0 2-1-3 0.5-1 0.0-0 2-1-3 1.0-5 1.0-5 1-1-2 0.0-0 0.0-0 2-0-2 1.0-3 0.0-0 0-0-0 0.0-0 0.0-0 0-2-2 0.0-0 0.0-0 G 9 13 12 12 46 SOLO 1 13 30 16 60 AST 1 6 22 16 45 BRKUP 0 0 0 0 0 1 0 0 1 1 1 0 SACK 0.0 2.0 2.0 2.0 6.0 2007 AutoZone Liberty Bowl � Troup made the start at left tackle and had one tackle for a loss of two yards against Mississippi State. 4 49 Torrell Troup TOTAL 2 19 52 32 105 TFL 0.0 6.5 12.5 5.0 24.0 PBU 0 0 2 4 6 INT 0 0 0 0 0 THE ST. PETERSBURG BOWL WES TUNUUFI SAUVAO RS JUNIOR � DEFENSIVE LINE � 6-3/292 � LEESVILLE, LA./LEESVILLE � Season highs--Tackles: 2 at Rice (10/24) and vs. Tulane (11/21) TFLs: 1 at Rice (10/24) and vs. 
Tulane (11/21) Sacks: 1 at Rice (10/24) Forced Fumbles: 1 at Rice (10/24) � Played in nine games, getting the start at left tackle against Texas Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #94 TACK TFL-YDS SACKS 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 DNP 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 DNP 1-1-2 1.0-15 1.0-15 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 DNP 2-0-2 1.0-4 0.0-0 0-0-0 0.0-0 0.0-0 FF 0 0 0 0 1 0 0 0 0 Both Sides of the Story � Tunuufi Sauvao has seen time on both sides of the ball as a Knight, playing DT this fall but also has seen action at OG in his career. Game-By-Game Breakdown Wes Tunuufi Sauvao YEAR 2007 2008 2009 TOTALS G 3 9 9 21 SOLO 1 1 3 5 AST 0 1 1 2 TOTAL 1 2 4 7 SACK 0.0 0.0 1.0 1.0 TFL 0.0 0.5 2.0 2.5 PBU 0 0 0 0 INT 0 0 0 0 BRIAN WATTERS RS JUNIOR � WIDE RECEIVER � 6-2/190 � ROME, GA./ROME � Season highs--Receptions: 4 vs. Houston (11/14) Yards: 24 at Southern Miss (9/12) � Has posted at least two catches in three games this year � Also had one rush for 15 yards against Samford Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB YEAR 2007 2008 2009 TOTALS NO 1 2 YDS 5 24 DNP DNP DNP 11 6 15 0 20 0 18 REC 14 42 12 68 #6 Game-By-Game Breakdown TD 0 0 LG 5 20 1 1 1 0 4 0 2 G 14 12 9 35 0 0 0 0 0 0 0 YDS 161 594 99 854 11 6 15 0 15 0 10 2007 AutoZone Liberty Bowl � As a freshman, Watters hauled in two passes from Kyle Israel for 13 yards against Mississippi State in the AutoZone Liberty Bowl. Brian Watters AVG 11.5 14.1 8.2 12.6 TD 3 3 0 6 LG 30 62 20 62 REGGIE WEAMS 50 JUNIOR � DEFENSIVE BACK � 6-0/191 � BATON ROUGE, LA./REDEMPTORIST � Season highs--Tackles: 7 at East Carolina (9/26) Fumble Recoveries: 1 at Southern Miss (9/12) Interceptions: 1 vs. 
Tulane (11/21) � Made four starts, all at strong safety, in 11 games played Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #40 TACK TFL-YDS BRKUP INT-YDS 3-0-3 0.0-0 0 0-0 3-3-6 0.0-0 0 0-0 0-1-1 0.0-0 0 0-0 2-5-7 0.0-0 0 0-0 0-0-0 0.0-0 0 0-0 0-0-0 0.0-0 0 0-0 1-2-3 0.0-0 0 0-0 0-0-0 0.0-0 0 0-0 0-0-0 0.0-0 0 0-0 0-0-0 0.0-0 0 0-0 1-3-4 0.0-0 0 1-0 DNP 2007 AutoZone Liberty Bowl � Weams saw action as a freshman in the AutoZone Liberty Bowl, and has totaled 45 tackles and one interception in his career. Game-By-Game Breakdown Reggie Weams YEAR 2007 2008 2009 TOTALS G 14 12 11 37 SOLO 7 7 10 24 AST 3 4 14 21 TOTAL 10 11 24 45 SACK 0.0 0.0 0.0 0.0 TFL 0.0 0.0 0.0 0.0 PBU 0 0 0 0 INT 0 0 1 1 THE ST. PETERSBURG BOWL RONNIE WEAVER RS SOPHOMORE � RUNNING BACK � 6-0/206 � WABASSO, FLA./VERO BEACH � Season highs--Rushing Attempts: 5 at Rice (10/24) Yards: 58 at Rice (10/24) Touchdowns: 1 at Rice (10/24) � While posting 86 yards on 19 rushing attempts, he also has collected eight tackles on special teams with one forced fumble on a kickoff return against Tulane � Recovered two fumbles and the onside kick that sealed the win over Houston Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB NO 3 4 0 1 0 2 5 0 0 4 0 YDS 6 3 0 1 0 5 58 0 DNP 0 13 0 #35 TD 0 0 0 0 0 0 1 0 0 0 0 LG 4 4 0 1 0 3 27 0 0 5 0 All-Around � Not only has Weaver added to the ground attack this year, but he also has posted eight tackles on special teams and forced a fumble vs. Tulane. Game-By-Game Breakdown Ronnie Weaver YEAR 2008 2009 TOTALS G 12 11 23 ATT 102 19 121 YDS 348 86 434 AVG 3.4 4.5 3.6 TD 2 1 3 LG 48 27 48 DAVID WILLIAMS RS JUNIOR � DEFENSIVE END � 6-2/238 � LEXINGTON, S.C./WHITE KNOLL � Season highs--Tackles: 5 vs. Samford (9/5) and vs. Buffalo (9/19) TFLs: 2 vs. Buffalo (9/19) Sacks: 1 vs. 
Miami (10/17) � Started 11 games at left end during the 2009 season Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB #48 TACK TFL-YDS SACKS 4-1-5 0.5-1 0.0-0 0-0-0 0.0-0 0.0-0 4-1-5 2.0-4 0.0-0 1-1-2 0.0-0 0.0-0 2-1-3 1.0-2 0.0-0 2-1-3 1.0-6 1.0-6 0-2-2 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 1-0-1 1.0-2 0.0-0 0-0-0 0.0-0 0.0-0 0-0-0 0.0-0 0.0-0 0-1-1 0.0-0 0.0-0 FF 0 0 0 0 0 0 0 0 0 0 0 0 2007 AutoZone Liberty Bowl � Williams played in all 14 games as a freshman, including the Liberty Bowl. As a junior in 2009, he has totaled 22 tackles. Game-By-Game Breakdown David Williams YEAR 2007 2008 2009 TOTALS G 14 12 12 38 SOLO 0 13 14 27 AST 1 7 8 16 TOTAL 1 20 22 43 SACK 0.0 0.0 1.0 1.0 TFL 0.0 1.5 5.5 7.0 PBU 0 0 0 0 INT 0 0 0 0 LAWRENCE YOUNG JUNIOR � LINEBACKER � 6-0/217 � PENSACOLA, FLA./WOODHAM � Season highs--Tackles: 11 at Southern Miss (9/12) TFLs: 3 vs. Samford (9/5) Sacks: 1 at Southern Miss (9/12) Interceptions: 1 at East Carolina (9/26) Break-ups: 1 vs. Memphis (10/3), vs. Marshall (11/1) and at Texas (11/7) � Started all 11 games he has played at outside linebacker, ranking third on the team with 73 tackles Opponent Samford at USM Buffalo at ECU Memphis Miami at Rice Marshall at Texas Houston Tulane at UAB YEAR 2007 2008 2009 TOTALS #57 Game-By-Game Breakdown TACK TFL-YDS SACKS BRKUP 4-4-8 3.0-3 0.0-0 0 6-5-11 2.0-14 1.0-10 0 5-1-6 0.0-0 0.0-0 0 2-7-9 1.0-3 0.0-0 0 4-0-4 0.0-0 0.0-0 1 7-1-8 1.0-7 0.0-0 0 DNP 7-1-8 0.0-0 0.0-0 1 3-1-4 1.0-4 0.0-0 1 2-2-4 1.0-2 0.0-0 0 2-4-6 0.5-1 0.0-0 0 1-4-5 0.5-1 0.0-0 0 G 12 12 11 35 SOLO 22 46 43 111 AST 9 26 30 65 TOTAL 31 72 73 176 SACK 1.5 1.5 1.0 4.0 2007 AutoZone Liberty Bowl � Young saw time in the 2007 Liberty Bowl, and has progressed each year, highlighted by a career-high 73 tackles in 2009. 5 51 Lawrence Young TFL 2.5 10.5 10.0 23.0 PBU 3 5 3 11 INT 1 1 1 3 THE ST. 
PETERSBURG BOWL HEAD COACH SIXTH YEAR AT UCF GEORGE O'LEARY Smith became the only rusher in the state of Florida and the 12th all-time in the NCAA FBS to eclipse the 2,000-yard mark in a season. The 2008 season saw Joe Burnett also earn first-team All-America honors as he shattered the school and C-USA punt return records and graduated ranked 19th in NCAA history in career punt return yards. Excellence off the Field O'Leary has reshaped the UCF program in every facet, including improved results in the classroom. The Knights turned in a successful effort in the classroom during the 2008 campaign, registering the highest in-season grade-point-average in program history at 2.782. A total of 55 student-athletes recorded a GPA of at least 3.0 during the fall 2008 semester. Defensive lineman Keith Shologan became the first player in school history to receive ESPN The Magazine Academic All-America First Team honors in 2007. Both Shologan and defensive back Sha'reff Rashad were selected to the C-USA Football All-Academic Team. In 2008, Rashad was named to the league all-academic team. In 2005, O'Leary's second season at UCF, he engineered one of the top turnarounds in the history of college football. Just one year removed from an 0-11 campaign, he guided the Knights to a historic season, complete with the program's first bowl appearance. UCF & O'LEARY BY THE NUMBERS Games (w/o 1992 exhibition game) 1 9/22/79 St. Leo 50 9/8/84 at Louisiana-Monroe 100 10/8/88 New Haven 150 11/21/92 at Samford 200 9/27/97 at Auburn 250 11/24/01 Louisiana-Lafayette 300 9/9/06 at Florida 349 12/19/09 St. Petersburg Bowl Wins 1 50 100 150 177 W L L L L W L 21-0 21-49 23-31 13-20 14-41 31-0 0-42 In his six years at UCF, George O'Leary has helped the Knights achieve dozens of historic firsts, including games in front of sellout crowds on campus, individual accolades for student-athletes and a conference championship. In 2007, O'Leary guided the Knights to arguably the finest campaign in program history.
UCF won 10 contests, claimed the Conference USA Championship and participated in the AutoZone Liberty Bowl in front of a nationally-televised audience on ESPN. There was little suspense as to who would garner C-USA Coach of the Year honors. O'Leary was recognized with the award for the second time in three campaigns. For the Knights and their fans, 2007 was truly a year to remember. For the college football world, the season was more evidence of the program O'Leary has built. In 2009, UCF won its final three C-USA contests to close out the regular season, including the Knights' first win in school history over a nationally-ranked FBS program, a 37-32 Homecoming upset of No. 13/12 Houston. Helping UCF Arrive on the National Scene O'Leary has already left his stamp on the Knights' program. When he arrived in Orlando in 2004, UCF was not a factor on the regional or national scenes. O'Leary rebuilt the program from the ground up. He lobbied for new facilities, toured the state to gain publicity, restocked UCF with top recruits and made sure his student-athletes excelled in their studies. The growth of the program was no more evident than in 2007. UCF posted a school-record seven-game winning streak during the season. The conference championship was the first in program history. The 10 wins during the year established a program record at the Football Bowl Subdivision level. Nationally, only 19 teams won at least 10 contests during the year. O'Leary helped tailback Kevin Smith post one of the most dominant single-season rushing performances. The junior rushed for 2,567 yards, good for second all-time in NCAA history, and 29 touchdowns. 9/22/79 10/28/89 11/9/96 11/19/05 11/28/09 St.
Leo Liberty at UAB at Rice at UAB 21-0 33-30 35-13 31-28 34-27 Home Games (w/o 1992 exhibition game) 1 9/29/79 Fort Benning W 7-6 50 11/15/86 Samford W 66-7 100 10/16/93 Western Illinois W 35-17 150 11/30/02 Ohio W 42-32 193 11/21/09 Tulane W 49-0 Home Wins 1 9/29/79 50 12/1/90 100 10/26/02 126 11/21/09 Away Games 1 9/22/79 50 10/26/91 100 11/4/00 150 11/15/08 156 11/28/09 Fort Benning William & Mary Akron Tulane 7-6 52-38 28-17 49-0 at St. Leo at Georgia Southern at Louisiana Tech at Marshall at UAB W 21-0 L 6-20 W 20-16 W 30-14 W 34-27 Milestone Away Wins 1 9/22/79 at St. Leo 50 10/24/09 at Rice 51 11/28/09 at UAB Football Bowl Subdivision Era Games 1 8/29/96 William & Mary 50 10/7/00 at Northern Illinois 100 11/13/04 at Ball State 150 11/15/08 at Marshall 165 12/19/09 St. Petersburg Bowl Milestone FBS Era Wins 1 8/29/96 William & Mary 50 10/21/05 Tulane 75 10/3/09 Memphis 80 11/28/09 at UAB George O'Leary Head Coaching Games 1 11/12/94 at Clemson 50 9/4/99 at Navy 85* 12/1/01 at Florida State 86** 8/31/04 at Wisconsin 100 10/1/05 at Louisiana-Lafayette 150 9/19/09 Buffalo 160 12/19/09 St. Petersburg Bowl *Final game at Georgia Tech **First game at UCF George O'Leary Head Coaching Wins 1 9/2/95 Furman 52* 11/17/01 at Wake Forest 53** 9/24/05 Marshall 75 8/30/08 South Carolina State 86 11/28/09 at UAB *Final win at Georgia Tech **First win at UCF 21-0 49-7 34-27 W L L W 39-33 20-40 17-21 30-14 39-33 34-24 32-14 34-27 L W L L W W 10-20 49-14 17-28 6-34 24-21 23-17 51-7 38-33 23-13 17-0 34-27 THE ST. PETERSBURG BOWL. Six Knights who played under O'Leary have been selected in the NFL Draft. Burnett was a fifth-round pick of Pittsburgh in 2009. Smith was one of three UCF players taken in the 2008 draft. He was the first pick of the third round by Detroit. Offensive lineman Josh Sitton went to Green Bay in the fourth round and Kansas City selected tight end Mike Merritt with its seventh-round pick. 
After his incredible senior season in 2006, Sims-Walker was selected by Jacksonville in the third round. In the previous year's draft, wide receiver Brandon Marshall was Denver's fourth-round selection. Other former Knights who played under O'Leary include defensive lineman Paul Carrington and tailback Alex Haynes. While at Georgia Tech, he had three assistants who later garnered head coaching jobs. Ralph Friedgen (Maryland), Randy Edsall (Connecticut) and Ted Roof (Duke) all served under O'Leary. Both Friedgen and Edsall have led their schools to BCS conference titles. NFL Experience Prior to coming to UCF, O'Leary served on Minnesota's NFL coaching staff for two seasons. In 2003, he was the Vikings' defensive coordinator. O'LEARY IN BOWL GAMES Overall Record: 2 O'LEARY VS. OPPONENTS Opponent Samford - 9/5 Record 1-0 9/5/09 - Home - W, 28-24 at Southern Miss - 9/12 1-4 10/15/05 - Away - L, 52-31 9/26/06 - Home - L, 19-14 10/28/07 - Away - W, 34-17 11/8/08 - Home - L, 17-6 9/12/09 - Away - L, 26-19 Buffalo - 9/19 1-1 10/2/04 - Away - L, 48-20 9/19/09 - Home - W, 23-17 at East Carolina - 9/26 1-4 10/29/05 - Away - W, 30-20 11/4/06 - Home - L, 23-10 10/6/07 - Away - L, 52-38 11/2/08 - Home - L, 13-10 (OT) 9/26/09 - Away - L, 19-14 Memphis - 10/3 5-0 10/8/05 - Home - W, 38-17 11/11/06 - Away - W, 26-24 9/22/07 - Home - W, 56-20 11/22/08 - Away - W, 28-21 10/3/09 - Home - W, 32-14 Miami - 10/17 0-3 w/Georgia Tech - 1/1/00 - Neutral - L, 28-13 10/11/08 - Away - L, 20-14 10/17/09 - Home - L, 27-7 at Rice - 10/24 2-1 11/19/05 - Away - W, 31-28 10/21/06 - Home - L, 40-29 10/24/09 - Away - W, 49-7 Marshall - 11/1 5-1 10/30/04 - Away - L, 20-3 9/24/05 - Home - W, 23-13 10/4/06 - Away - W, 23-22 11/3/07 - Home - W, 47-13 11/15/08 - Away - W, 30-14 11/1/09 - Home - W, 21-20 at Texas - 11/7 0-2 9/15/07 - Home - L, 35-32 11/7/09 - Away - L, 35-3 Houston - 11/14 2-1 11/5/05 - Home - W, 31-29 10/28/06 - Away - L, 51-31 11/14/09 - Home - W,
37-32 Tulane - 11/21 2-1 10/21/05 - Home - W, 34-24 11/18/06 - Away - L, 10-9 11/21/09 - Home - W, 49-0 at UAB - 11/28 4-1 11/12/05 - Away - W, 27-21 11/25/06 - Home - W, 31-22 11/10/07 - Away - W, 45-31 11/29/08 - Home - L, 15-0 11/28/09 - Away - W, 34-27 vs. Rutgers - 12/19 0-0 12/19/09 - Neutral - St. Petersburg, Fla. HONORS 2005 Finalist for the Eddie Robinson Coach of the Year Award 1994 1995 1996 1997 1998 1999 2000 2001 2004 2005 2006 2007 2008 2009 School Georgia Tech Georgia Tech Georgia Tech Georgia Tech Georgia Tech Georgia Tech Georgia Tech Georgia Tech UCF UCF UCF UCF UCF UCF Record 0-3 6-5 5-6 7-5 10-2 8-4 9-3 7-5 0-11 8-5 4-8 10-4 4-8 8-4 Total 86-73 At UCF 34-40 Notes Interim head coach Carquest Bowl Gator Bowl ACC Co-Champs Gator Bowl Peach Bowl Seattle Bowl Sheraton Hawaii Bowl AutoZone Liberty Bowl C-USA Champs St. Petersburg Bowl SEAN BECKTON DEFENSIVE BACKS � NINTH YEAR OVERALL AT UCF � UCF, 1993 Coaching Experience � UCF, 2009-Present ...................................... Defensive Backs � Orlando Predators, 2008 ............................ Wide Receivers � UCF, 1996-03 .............................................. Wide Receivers � Mainland (Fla.) High School, 1993-96......... Assistant Coach � UCF, 1992-93 ..............................................
Offensive Graduate Assistant GEOFF COLLINS LINEBACKERS/RECRUITING COORDINATOR � SECOND YEAR AT UCF � WESTERN CAROLINA, 1994 Coaching Experience � UCF, 2008-Present ......................................Linebackers/Recruiting Coordinator � Alabama, 2007............................................Director of Player Personnel � Georgia Tech, 2006 .....................................Director of Player Personnel � Western Carolina, 2002-05 .........................Defensive Coordinator, Defensive Backs (2002-04) � Georgia Tech, 1999-01 ................................Tight Ends (2001), Graduate Assistant (1999-00) � Albright, 1997-98 ........................................Defensive Backs (1998), Defensive Coordinator/Linebackers (1997) � Fordham, 1996 ...........................................Outside Linebackers � Franklin (N.C.) High School, 1995 ...............Defensive Backs � Western Carolina, 1993-94 .........................Student Assistant GEORGE GODSEY RUNNING BACKS � SIXTH YEAR AT UCF � GEORGIA TECH, 2001 Coaching Experience � UCF, 2004-Present .......................................Running Backs (2009), Quarterbacks (2005-08), Graduate Assistant (2004) DAVE HUXTABLE DEFENSIVE COORDINATOR � SIXTH YEAR AT UCF � EASTERN ILLINOIS, 1979 Coaching Experience � UCF, 2004-Present ..................................... Defensive Coordinator (2008-), Linebackers/Special Teams (2004-07) � North Carolina, 2001-03 ............................ Defensive Coordinator/Linebackers (2002-03), Linebackers/Special Teams (2001) � Oklahoma State, 2000 ............................... Linebackers/Special Teams � East Carolina, 1998-99 ............................... Defensive Line (1999), Linebackers (1998) � Georgia Tech, 1992-97 ............................... Defensive Coordinator/Linebackers (1996-97), Linebackers/Special Teams (1992-94) � East Carolina, 1990-91 ............................... Linebackers/Special Teams � Western Kentucky, 1985-89 .......................
Defensive Line/Linebackers � Independence CC, 1984............................. Defensive Coordinator � Iowa State, 1982-83 ................................... Graduate Assistant 54 DAVID KELLY WIDE RECEIVERS/ASSISTANT HEAD COACH � FOURTH YEAR AT UCF � FURMAN, 1979 Coaching Experience � UCF, 2006-Present ..................................... Wide Receivers (2007-), Assistant Head Coach (2007-), ............................................................................Director of High School Relations (2006) � Duke, 2004-05 ........................................... Associate Head Coach/Wide Receivers � Stanford, 2002-03 ...................................... Associate Head Coach/Offensive Coordinator � Georgia Tech, 2000-01 ............................... Wide Receivers � LSU, 1996-00.............................................. Wide Receivers � Georgia, 1994-96 ....................................... Running Backs � Dunwoody (Ga.) High School, 1981-93...... Head Coach (1984-93), Assistant Coach (1981-83) � Furman, 1979-80 ....................................... Graduate Assistant THE ST. PETERSBURG BOWL BRENT KEY OFFENSIVE LINE � FIFTH YEAR AT UCF � GEORGIA TECH, 2001 Coaching Experience � UCF, 2005-Present ......................................Offensive Line (2009-), Tight Ends/Special Teams (2008), .......................................................................Recruiting Coordinator (2007), Graduate Assistant (2005-06) � Western Carolina, 2004 ..............................Tight Ends/Fullbacks � Georgia Tech, 2001-02 ................................Graduate Assistant JIM PANAGOS DEFENSIVE LINE � THIRD YEAR AT UCF � MARYLAND, 1993 Coaching Experience � UCF, 2007-Present ..................................... Defensive Line � Minnesota Vikings, 2002-05 ...................... Defensive Line Assistant/Special Teams Assistant (2004-05), ...................................................................... 
Defensive Quality Control Assistant (2003), ...................................................................... Offensive Quality Control Assistant (2002) � C.R. James (Fla.) Alternative School, 1994-97 .. Assistant Coach � Maryland, 1993 ......................................... Defensive Line Assistant TIM SALEM TIGHT ENDS/SPECIAL TEAMS � SIXTH YEAR AT UCF � ARIZONA STATE, 1985 Coaching Experience � UCF, 2004-Present ......................................Tight Ends/Special Teams (2009-), Offensive Coordinator (2004-08), .......................................................................Running Backs (2007-08), Wide Receivers (2006), Tight Ends (2005), .......................................................................Quarterbacks (2004) � Eastern Michigan, 2003 ..............................Offensive Coordinator/Quarterbacks � Ohio State, 1997-00 ....................................Quarterbacks � Purdue, 1991-96 .........................................Offensive Coordinator (1994-96), Quarterbacks (1991-93) � Colorado State, 1989-90 .............................Running Backs � Phoenix College, 1987-88 ...........................Offensive Coordinator/Quarterbacks � Arizona State, 1985-86 ...............................Graduate Assistant CHARLIE TAAFFE OFFENSIVE COORDINATOR � FIRST YEAR AT UCF � SIENA, 1973 Coaching Experience � UCF, 2009-Present ..................................... Offensive Coordinator � Hamilton Tiger-Cats, 2007-08 .................... Head Coach � Pittsburgh, 2006 ........................................ Offensive Assistant � Maryland, 2001-05 .................................... Offensive Coordinator/Quarterbacks � Montreal Alouettes, 1997-00 .................... Head Coach (1999-00), Offensive Coordinator (1997-98) � The Citadel, 1987-96 ................................. Head Coach � Army, 1981-86 ........................................... 
Offensive Coordinator, Quarterbacks, Running Backs � Virginia, 1976-80 ....................................... Running Backs, Linebackers, Special Teams � NC State, 1975 ........................................... Graduate Assistant � Georgia Tech, 1974 .................................... Graduate Assistant � Albany, 1973 .............................................. Running Backs 55 SUPPORT STAFF Assistant AD/Football Operations .......Marty O'Leary Special Asst. to the Head Coach ...Manny Messeguer Director of Player Personnel................. Albert Boone Offensive Graduate Assistant ......... Michael Buscemi Defensive Graduate Assistant ..........Andrew Thacker Operations Graduate Assistant .........Mark Cammack Head Athletic Trainer/Football ..Mary Vander Heiden Assistant Athletic Trainer ..............................Jud Fann Assistant Athletic Trainer .........................Ed Woodley Director of Strength & Conditioning ...............Ed Ellis Asst. Director of S&C ............................. Scott Sinclair Asst. Director of S&C ................................... B.J. Faulk Coordinator of Equipment Operations.. Robert Jones Assistant Equipment Manager .........Thaddeus Rivers Director of Video Services .......................John Kvatek Assistant Director/Video Services ......... Chris Hooley Associate Director of Academics ...........Kristy Belden Assistant Director of Academics ........... Lindsey Black Assistant Director of Academics ............... Lisa Moser COACHES ON GAMEDAY IN THE PRESS BOX Sean Beckton (DB), Geoff Collins (LB), Tim Salem (TE & Special Teams), Charlie Taaffe (Offensive Coordinator) ON THE FIELD George O'Leary (Head Coach), George Godsey (RB), Dave Huxtable (Defensive Coordinator), David Kelly (WR), Brent Key (OL), Jim Panagos (DL) THE ST. PETERSBURG BOWL UCF POSTSEASON HISTORY 1987 DIVISION II PLAYOFFS - FIRST ROUND 1990 FCS PLAYOFFS - FIRST ROUND INDIANA (PA.) 
UCF NOVEMBER 28, 1987 10 UCF 12 YOUNGSTOWN STATE ORLANDO, FLORIDA NOVEMBER 24, 1990 STAMBAUGH STADIUM TEAM STATISTICS UCF First Downs 18 Rushing (net) 294 Passing (net) 106 Att.-Cmp.-Int. 11-8-0 Total Offense 400 20 17 YOUNGSTOWN, OHIO 1987 DIVISION II PLAYOFFS - SECOND ROUND 1990 FCS PLAYOFFS - QUARTERFINALS TROY STATE UCF DECEMBER 5, 1987 FLORIDA CITRUS BOWL UCF 30 64 407 60-39-4 471 31 WILLIAM AND MARY 10 UCF ORLANDO, FLORIDA DECEMBER 1, 1990 FLORIDA CITRUS BOWL UCF 23 278 300 23-16-1 578 38 52 ORLANDO, FLORIDA UCF POSTSEASON HISTORY 1990 FCS PLAYOFFS - SEMIFINALS 2005 CONFERENCE USA CHAMPIONSHIP GAME UCF GEORGIA SOUTHERN DECEMBER 8, 1990 ALLEN E. PAULSON STADIUM 7 TULSA 44 UCF DECEMBER 3, 2005 FLORIDA CITRUS BOWL UCF 17 149 190 27-13-2 339 44 27 ORLANDO, FLORIDA STATESBORO, GEORGIA 1993 FCS PLAYOFFS - FIRST ROUND 2007 CONFERENCE USA CHAMPIONSHIP GAME UCF YOUNGSTOWN STATE NOVEMBER 27, 1993 30 TULSA 56 UCF TEAM STATISTICS TULSA First Downs 24 Rushing (net) 32 Passing (net) 438 Att.-Cmp.-Int. 29-57-3 Total Offense 470 UCF 18 308 128 6-13-0 436 25 44 YOUNGSTOWN, OHIO DECEMBER 1, 2007 ORLANDO, FLORIDA BRIGHT HOUSE NETWORKS STADIUM UCF BOWL HISTORY 2005 SHERATON HAWAII BOWL NEVADA UCF DECEMBER 24, 2005 ALOHA STADIUM Nevada regained the lead on Robert Hubbard's 5-yard run early in the fourth quarter, then scored again on Rowe's 7-yard scoring pass to Travis Branzell for a 42-32 lead with 3:18 remaining. Kevin Smith, Conference USA's Freshman of the Year, scored on a 78-yard run, the second-longest run from scrimmage in UCF history, to give the Knights a 14-0 lead in the first quarter. The run was the longest in school history by a freshman and in UCF's 10-year Football Bowl Subdivision history. The longest run from scrimmage in UCF history was Elgin Davis' 79-yard scoring run vs. Samford (Nov. 15, 1986). The 78-yard touchdown run by Smith was the second-longest run by a true freshman in the NCAA during the year.
It also marked the longest run in Sheraton Hawaii Bowl history. Final: Nevada 49, UCF 48 (OT). UCF played in the 2005 Sheraton Hawaii Bowl at Aloha Stadium. Brandon Marshall was named UCF's MVP of the Sheraton Hawaii Bowl after posting 11 receptions for 210 yards and three touchdowns vs. Nevada. SHERATON HAWAII BOWL POSTGAME NOTES Team Notes � UCF dropped a 49-48 overtime decision to Nevada in the 2005 Sheraton Hawaii Bowl and finished the season with an 8-5 record, just one year after going 0-11. The turnaround of seven games from 2004 left UCF tied with seven other schools for the fourth-best single-season turnaround in Football Bowl Subdivision history. � The bowl appearance was the first in school history for the Knights. � The loss dropped UCF's all-time record in overtime games to 0-4. � UCF scored for the seventh time this season on its opening drive. Junior QB Steven Moffett tossed a 51-yard touchdown pass to senior WR Brandon Marshall to give UCF a 7-0 lead. The touchdown reception was the ninth of the season for Marshall. � UCF scored 17 points in the first quarter, the most by the Knights in the opening period of play in 2005. Player Notes � Brandon Marshall moved into a tie for sixth place in UCF single-season history with three receiving touchdowns vs. Nevada, bringing his season total to 11. Marshall opened the scoring with a 51-yard reception on UCF's first possession and closed out regulation with a late touchdown to send the game to overtime. � True freshman TB Kevin Smith, Conference USA's Freshman of the Year, scored on a 78-yard run, the second longest run from scrimmage in UCF history, to give the Knights a 14-0 lead in the first quarter. The run was the longest in school history by a freshman and in UCF's 10-year Division I-A history. The longest run from scrimmage in UCF history was Elgin Davis' 79-yard scoring run vs. Samford (Nov. 15, 1986). � The 78-yard touchdown run by Smith was the second-longest run by a true freshman in the NCAA during the season. It also marked the longest run in Sheraton Hawaii Bowl history. � Smith became just the sixth player in school history to record a 1,000-yard season on the ground. Smith ended the season with 1,178 yards, third in the UCF record books. � Marshall became the 10th player in UCF history to reach the 1,000-yard plateau for receiving in a season. Marshall went over the century mark with his 51-yard touchdown reception in the first quarter. He ended the season with 1,195 yards receiving and seven 100-yard games. � Senior PK Matt Prater kicked three field goals to move into sole possession of second place in UCF history with 49 career field goals. � Thirteen members of the UCF football team had their collegiate careers come to an end at the Sheraton Hawaii Bowl. Seniors Jeff Branham, John Brown, Mike Graham, Darcy Johnson, Frisner Nelson, Mike Malatesta, Antonio Eldemire, Glenroy Watkins, Anthony Willis, Matt Prater and tri-captains James Cook, Paul Carrington and Brandon Marshall suited up for the final time for the Knights. BOX SCORE Nevada UCF 1st 13:20 UCF 9:24 UCF 8:33 NEV 4:08 UCF 2nd 11:32 NEV 7:37 NEV 5:01 NEV 0:56 UCF 3rd 8:19 UCF 1:51 UCF 4th 13:02 NEV 3:18 NEV 1:32 UCF 0:55 UCF OT 15:00 NEV 15:00 UCF 7 17 21 3 0 12 14 10 7 6 49 48 Marshall 51 pass from Moffett (Prater kick) Smith 78 run (Prater kick) Hubbard 4 run (Jaekle kick) Prater 47 field goal Mitchell 1 run (Jaekle kick) Mitchell 1 run (Jaekle kick) Hubbard 24 run (Jaekle kick) Prater 38 field goal Marshall 29 pass from Moffett (Moffett pass failed) Smith 3 run (Moffett pass failed) Hubbard 5 run (Jaekle kick) Branzell 7 pass from Rowe (Jaekle kick) Prater 46 field goal Marshall 16 pass from Moffett (Prater kick) Rowe 4 run (Jaekle kick) Smith 19 run (Prater kick failed) NEV 30 51-369 254 32-22-1 83-623 0-0 1-2 1-16 1-36 3-44.0 1-1 10-92 UCF 30 44-254 301 36-19-1 80-555 1-0 0-0 5-74 1-12 3-38.3 0-0 6-30 First Downs Rushes-Yds Passing Yds Passes Comp.-Att.-Int. Total Offense Plays-Yds Fumble Returns-Yards Punt Returns-Yards Kickoff Returns-Yards INT Returns-Yards Punts (Number-Avg) Fumbles-Lost Penalties-Yards RUSHING: Nevada-B.J. Mitchell 23-178; Robert Hubbard 15-126; Jeff Rowe 13-65. UCF-SMITH, Kevin 29-202; PETERS, Jason 9-44; MOFFETT, Steven 6-8. PASSING: Nevada-Jeff Rowe 22-32-1-254. UCF-MOFFETT, Steven 19-36-1-301. RECEIVING: Nevada-Caleb Spencer 11-114; N. Flowers 6-96; A. Pudewell 2-23; Kyle Sammons 1-8; Travis Branzell 1-7; Robert Hubbard 1-6. UCF-MARSHALL, Brandon 11-210; JOHNSON, Darcy 3-49; PETERS, Jason 2-14; ROSS, Rocky 1-14; JACKSON, Kenny 1-12; SMITH, Kevin 1-2. INTERCEPTIONS: Nevada-DeAngelo Wilson 1-36. UCF-VENSON, Jason 1-12. FUMBLES: Nevada-Robert Hubbard 1-1. UCF-None. Stadium: Aloha Attendance: 26,254 UCF BOWL HISTORY 2007 AUTOZONE LIBERTY BOWL MISSISSIPPI STATE UCF DECEMBER 29, 2007 LIBERTY BOWL MEMORIAL STADIUM MEMPHIS, TENNESSEE MEMPHIS, Tenn. - UCF's memorable season came to a close in a 10-3 loss to Mississippi State in the AutoZone Liberty Bowl.
Mississippi State's win in the Liberty Bowl was the kind of game coach Sylvester Croom's mentor would have loved. Playing at the site of Paul "Bear" Bryant's final game 25 years ago, the Bulldogs used power running and a dominant defense to beat UCF 10-3 on Saturday and earn a milestone win for the once dormant program. As the Bulldogs did in big wins over Auburn, Kentucky and Alabama this season, they concentrated on the running game - both on offense and defense. Smith found the going difficult in the second half and finished with an average of 3.4 yards per carry after rushing for 188.3 yards per game during the regular season. Dixon finished with 86 yards and became the seventh Bulldogs runner to go over 1,000 yards (1,066). But like the rest of the Bulldogs, he was ineffective much of the game. The teams were tied 3-3 at halftime, mostly due to conservative play-calling and missed field goals from kicker Michael Torres, who gave the Knights a 3-0 lead in the second quarter with a 45-yard field goal but missed from 32 and 37 in the second half. "I think those missed field goals were big momentum breakers," UCF coach George O'Leary said. Mississippi State set a Liberty Bowl record with 11 punts, but the Bulldogs came up with just enough big plays, capping a 10-play, 59-yard drive that consumed 3:53 with the game's only touchdown. Pegues, fittingly, made the final key defensive play, knocking down a fourth-down pass on UCF's final drive. The Bulldogs held the Knights to 219 yards and forced four turnovers. The junior safety gave Mississippi State two excellent opportunities with interceptions in the first 30 minutes, returning the ball to UCF's 6 and 38. The safety's first pick set up a 22-yard field goal by Adam Carlson in the second quarter. Final: Mississippi State 10, UCF 3. A total of 63,816 fans stormed through the gates for the 2007 AutoZone Liberty Bowl.
Johnell Neal gets in the way of this Wesley Carroll pass intended for Tony Burks, hauling it in for a first-quarter interception. AUTOZONE LIBERTY BOWL POSTGAME NOTES � UCF finished 2007 with a 10-4 record, marking the most wins in a season in program history. � UCF witnessed its school-record seven-game winning streak come to an end. � Smith finished the season in second on the NCAA Football Bowl Subdivision single-season rushing list with 2,567 yards. He reached that figure on 450 carries for an average of 5.70 yards per carry. Meanwhile, Smith posted a 183.4 yards-per-game average and 29 rushing touchdowns this year. � Smith now has 4,679 career rushing yards in three seasons for UCF. He entered the 2007 campaign with 2,112 yards in his first two years in Orlando. � Junior Johnell Neal picked off his sixth pass of the season, tying Joe Burnett for the team and Conference USA leads. The interception was the Knights' 24th of the season, the second most in the FBS this season (Cincinnati, after its bowl game, had 26). � Neal and Burnett are the first Knight teammates to pick off at least six passes in the same season. The six interceptions are career highs for both cornerbacks and are the second most in a single season by any Knight. � Junior safety Sha'reff Rashad registered his first career sack in the second quarter. With seven first-half tackles, Rashad surpassed 100 on the season. Rashad is the first Knight since 2003 to register 100 tackles in a season (Peter Sands, 113). � Rashad's three tackles for a loss were also a career high in a game. Rashad finished the game with nine tackles and 103 for the season. � Junior safety Jason Venson had a game-high 11 tackles, which tied a season high he set in a win at NC State in the 2007 opener. � The UCF defense held MSU to just 39 passing yards. The Black and Gold's previous season low was 144 yards vs.
Southern Miss. � True freshman Blake Clingan's first-quarter punt of 55 yards was a season long. He completed his afternoon with six punts for a 42.83 average. � The three points scored by the Knights were the fewest in a first half and a game this season. � The Knights held Mississippi State to 10 points, the lowest by a UCF opponent in 2007. BOX SCORE UCF Mississippi St. 0 0 3 3 0 0 0 7 3 10 2nd 11:49 UCF Torres 45 field goal 6:02 MSU Carlson 22 field goal 4th 1:54 MSU Dixon 1 run (Carlson kick) UCF 13 47-131 88 24-10-3 71-219 0-0 6-54 1-6 1--3 6-42.8 2-1 3-25 31:08 4 of 17 0 of 1 0-2 1-12 MSU 10 41-160 39 20-8-1 61-199 0-0 2-5 2-29 3-45 11-34.9 0-0 5-45 28:52 2 of 13 0 of 0 2-3 3-17 RUSHING: UCF-SMITH, Kevin 35-119; ISRAEL, Kyle 11-13; FRANCIS, Curtis 1-(-1). PASSING: UCF-ISRAEL, Kyle 10-24-3-88. Mississippi State-Wesley Carroll 8-18-1-39; Michael Henig 0-2-0-0. RECEIVING: UCF-SMITH, Kevin 3-12; ROSS, Rocky 2-27; RABAZINSKI, C. 2-26; WATTERS, Brian 2-13; AIKEN, Kamar 1-10. Mississippi State-Christian Ducre 3-10; Jamayel Smith 2-7; Anthony Dixon 1-10; Brandon Hart 1-8; Tony Burks 1-4. INTERCEPTIONS: UCF-NEAL, Johnell 1-(-3). Mississippi State-Derek Pegues 2-45; Keith Fitzhugh 1-0. FUMBLES: UCF-BURNETT, Joe 1-0; SMITH, Kevin 1-1. Mississippi State-None. Stadium: Liberty Bowl Attendance: 63,816 UCF offers degrees through its 11 colleges: Burnett College of Biomedical Sciences College of Arts and Humanities College of Business Administration College of Education College of Engineering and Computer Science College of Health and Public Affairs UCF offers almost 200 bachelor's and master's degrees and 29 doctoral programs. UCF began offering a doctor of medicine degree program in 2009. The M.D. Program enrolled an initial class of 40 students and will eventually produce about 120 medical graduates each year.
FORGING AHEAD

With a total enrollment of 53,537, UCF has the third-largest student population in the country and has become a prominent player in undergraduate education nationwide, offering innovative corporate partnerships, world-renowned faculty, and cutting-edge technology and undergraduate research opportunities. Graduates find work with high-tech companies, local municipalities and in the entertainment industry. Students typically enjoy success in landing employment thanks to their due diligence, their preparation at UCF and the university's fine reputation among employers.

CHARGING KNIGHT

Don Reynolds' statue, "The Charging Knight," at the Insurance Office of America Plaza outside Bright House Networks Stadium, symbolizes UCF's excellence in academics, partnerships and athletics.

THIS IS UCF FOOTBALL

Conference championships. Bowl appearances. Top-notch facilities. Growing fan support. Experienced coaches. National Football League Draft picks. Big-time competition. National media coverage. The nation's third-largest university, providing opportunities in hundreds of academic fields. Academic excellence. The best in strength and conditioning and sports medicine. Fantastic on-campus housing. Access to one of the world's most vibrant cities.

This is UCF football.
I have installed Tomcat 7 on an Ubuntu EC2 instance. It's up and running, but I cannot access it using the public IP. I have also set up the security groups as specified in the previous posts, but still no luck.
Any help on this would be really appreciated.
Make sure Ubuntu's Uncomplicated Firewall (ufw) is controlling the traffic instead of raw iptables rules. First allow SSH and Tomcat's default port, so that enabling the firewall doesn't lock you out of the instance:

sudo ufw allow 22
sudo ufw allow 8080

Then enable it:

sudo ufw enable

It should work.
Here is some code I half stole from the C++ reference website and filled in a little for my purposes. Initially I wanted to make sure I understood the equal_range function:
This is my understanding of the code below:

Calling equal_range('b') returns something that looks like: <<'b',20>, <'c',30>>

it becomes <'b',20>

When it gets to the loop, it gets set to <'b',20> again.

Calling erase should remove the 'b' key and its value from the map,
and my output should look like
a->10
b->0
c->30
d->40
e->50
f->60
unfortunately it looks like this:
a->10
b->0
c->30
d->0
e->0
f->0
In fact, 'a', 'c' and 'e' work as expected, removing the equal_range parameter.

The rest behave oddly: 'f' erases the whole map, which sort of makes sense to me, but 'b' and 'd' have me stumped.

Any thoughts on this? Also, I know this is not the best way to accomplish this; I wrote it to help better understand a much larger project.
Thanks!
Code:
#include <iostream>
#include <map>
using namespace std;

int main() {
    map<char, int>* mymap = new map<char, int>();
    map<char, int>::iterator it;
    pair<map<char, int>::iterator, map<char, int>::iterator> ret;
    bool yes = true;

    // insert some values:
    (*mymap)['a'] = 10;
    (*mymap)['b'] = 20;
    (*mymap)['c'] = 30;
    (*mymap)['d'] = 40;
    (*mymap)['e'] = 50;
    (*mymap)['f'] = 60;

    ret = mymap->equal_range('b');
    it = ret.first;
    if (it != mymap->end()) {
        cout << "lower bound points to: ";
        cout << ret.first->first << " => " << ret.first->second << endl;
        cout << "upper bound points to: ";
        cout << ret.second->first << " => " << ret.second->second << endl;

        for (it = ret.first; it != ret.second; ++it) {
            if (yes) {
                mymap->erase(it);
                cout << "erased the iter:" << it->first << endl;
            }
        }

        for (int i = 'a'; i <= 'f'; i++) {
            cout << char(i) << "->" << (*mymap)[i] << endl;
        }
    }
    return 0;
}
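For anyone comparing notes: the pattern below is the usual safe way to erase while iterating. This is a sketch assuming C++11 or later, where map::erase returns the iterator following the erased element (in C++98, erase(it) returns void, and incrementing an iterator you just erased is undefined behaviour, which would be consistent with strange output like the above).

```cpp
#include <cassert>
#include <map>

// Sketch (assumes C++11 or later): erase every element in a key's
// equal_range without ever advancing an invalidated iterator.
// map::erase(it) returns the iterator that follows the erased element.
inline void erase_equal_range(std::map<char, int>& m, char key) {
    auto range = m.equal_range(key);
    for (auto it = range.first; it != range.second; ) {
        it = m.erase(it);   // never ++it after erasing it
    }
}
```

For a plain std::map the range holds at most one element, so this is mostly useful as the general idiom; range.second stays valid because erase only invalidates iterators to the erased elements.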
The NamedNodeMap interface represents a collection of Attr objects. Objects inside a NamedNodeMap are not in any particular order, unlike NodeList, although they may be accessed by an index as in an array.

A NamedNodeMap object is live and will thus be auto-updated if changes are made to its contents internally or elsewhere.

Although called NamedNodeMap, this interface doesn't deal with Node objects but with Attr objects, which were originally a specialized class of Node, and still are in some implementations.
Properties
This interface doesn't inherit any property.
NamedNodeMap.length (Read only)
- Returns the number of objects in the map.
Methods
This interface doesn't inherit any method.
NamedNodeMap.getNamedItem()
- Returns the Attr corresponding to the given name.

NamedNodeMap.setNamedItem()
- Replaces, or adds, the Attr identified in the map by the given name.

NamedNodeMap.removeNamedItem()
- Removes the Attr identified by the given name.

NamedNodeMap.item()
- Returns the Attr at the given index, or null if the index is higher than or equal to the number of nodes.

NamedNodeMap.getNamedItemNS()
- Returns the Attr identified by a namespace and related local name.

NamedNodeMap.setNamedItemNS()
- Replaces, or adds, the Attr identified in the map by the given namespace and related local name.

NamedNodeMap.removeNamedItemNS()
- Removes the Attr identified by the given namespace and related local name.
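A short usage sketch (browser JavaScript; the element and attribute names are made up for illustration). element.attributes is the most common way to obtain a NamedNodeMap:

```js
// element.attributes is a live NamedNodeMap of the element's Attr nodes.
const para = document.createElement("p");
para.setAttribute("data-role", "note");
para.setAttribute("lang", "en");

const attrs = para.attributes;                   // NamedNodeMap
console.log(attrs.length);                       // 2
console.log(attrs.getNamedItem("lang").value);   // "en"
console.log(attrs.item(0).name);                 // first Attr's name

attrs.removeNamedItem("lang");
console.log(attrs.getNamedItem("lang"));         // null
```

Because the map is live, keeping a reference to attrs and then calling para.setAttribute() again is reflected in attrs automatically.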
Specifications. | https://developer.mozilla.org/bn/docs/Web/API/NamedNodeMap | CC-MAIN-2019-47 | refinedweb | 230 | 57.67 |
I’m not a big fan of monads, but I understand them. They’re not rocket science. Let me try to write a monad tutorial that would’ve helped my past self understand what the fuss was about. I like concrete explanations that start with practical examples, without any annoying metaphors, and especially without any Haskell code. So here’s five examples that have something in common:
1) If a function has type A → B, and another function has type B → C, then we can "compose" them into a new function of type A → C.
2) Let’s talk about functions that can return multiple values. We can model these as functions of type
A → List<B>. There is a natural way to “compose” two such functions. If one function has type
A → List<B>, and another function has type
B → List<C>, then we can “compose” them into a new function of type
A → List<C>. The “composition” works by joining together all the intermediate lists of values into one. This is similar to MapReduce, which also collects together lists of results returned by individual workers.
3) Let’s talk about functions that either return a value, or fail somewhere along the way. We can model these as functions of type
A → Option<B>, where
Option<B> is a type that contains either a value of type B, or a special value None. There is a natural way to “compose” two such functions. If one function has type
A → Option<B>, and another function has type
B → Option<C>, then we can “compose” them into a new function of type
A → Option<C>. The “composition” works just like regular function composition, except if the first function returns None, then it gets passed along and the second function doesn’t get called. This way you can have a “happy path” in your code, and check for None only at the end.
4) Let’s talk about functions that call a remote machine asynchronously. We can model these as functions of type
A → Promise<B>, where
Promise<B> is a type that will eventually contain a value of type B, which you can wait for. There is a natural way to “compose” two such functions. If one function has type
A → Promise<B>, and another function has type
B → Promise<C>, then we can “compose” them into a new function of type
A → Promise<C>. The “composition” is an asynchronous operation that waits for the first promise to return a value, then calls the second function on that value. This is known in some languages as “promise pipelining”. It can sometimes make remote calls faster, because you can send both calls to the remote machine in the same request.
5) Let’s talk about functions that do input or output in a pure functional language, like Haskell. We can define
IO<A> as the type of opaque “IO instructions” that describe how to do some IO and return a value of type A. These “instructions” might eventually be executed by the runtime, but can also be freely passed around and manipulated before that. For example, to create instructions for reading a String from standard input, we’d have a function of type
Void → IO<String>, and to create instructions for writing a String to standard output, we’d have
String → IO<Void>. There is a natural way to “compose” two such functions. If one function has type
A → IO<B>, and another function has type
B → IO<C>, then we can “compose” them into a new function of type
A → IO<C>. The “composition” works by just doing the IO in sequence. Eventually the whole program returns one huge complicated IO instruction with explicit sequencing inside, which is then passed to the runtime for execution. That’s how Haskell works.
Another thing to note is that each of the examples above also has a natural "identity" function, such that "composing" it with any other function F gives you F again. For ordinary function composition, it's the ordinary identity function A → A. For lists, it's the function A → List<A> that creates a single-element list. For options, it's the function A → Option<A> that takes a value and returns an option containing that value. For promises, it's the function A → Promise<A> that takes a value and makes an immediately fulfilled promise out of it. And for IO, it's the function A → IO<A> that doesn't actually do any IO.
At this point we could go all mathematical and talk about how “compose” is like number multiplication, and “identity” is like the number 1, and then go off into monoids and categories and functors and other things that are frankly boring to me. So let’s not go there! Whew!
Instead, to stay more on the programming track, let's use a Java-like syntax to define an interface Monad.
The main complication is that the type parameter T must not be a simple type, like String. Instead it must be itself a generic type, like List, Option or Promise. The reason is that we want to have a single implementation of Monad<Option>, not separate implementations like Monad<Option<Integer>>, Monad<Option<String>> and so on. Java and C# don't support generic types whose parameters are themselves generic types (the technical term is "higher-kinded types"), but C++ has some support for them, called "template template parameters". Some functional languages have higher-kinded types, like Haskell, while others don't have them, like ML.
Anyway, here’s what it would look like in Java, if Java supported such things:
interface Monad<T> {
    <A> Function<A, T<A>> identity();

    <A, B, C> Function<A, T<C>> compose(
        Function<A, T<B>> first,
        Function<B, T<C>> second
    );
}

class OptionMonad implements Monad<Option> {
    public <A> Function<A, Option<A>> identity() {
        // Implementation omitted, figure it out
    }

    public <A, B, C> Function<A, Option<C>> compose(
        Function<A, Option<B>> first,
        Function<B, Option<C>> second
    ) {
        // Implementation omitted, do it yourself
    }
}
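Since real Java lacks higher-kinded types, the generic interface above can't actually compile, but the Option case can be written out directly against java.util.Optional. A runnable sketch (mine, not from the original article):

```java
import java.util.Optional;
import java.util.function.Function;

// Sketch: the "Option monad" written concretely for java.util.Optional,
// since real Java cannot abstract over the container type T.
class OptionalMonad {
    // identity: wrap a plain value in an Optional.
    public <A> Function<A, Optional<A>> identity() {
        return Optional::of;
    }

    // compose: run first; if it produced a value, feed that value to second.
    // An empty Optional short-circuits, giving the "happy path" of example 3.
    public <A, B, C> Function<A, Optional<C>> compose(
            Function<A, Optional<B>> first,
            Function<B, Optional<C>> second) {
        return a -> first.apply(a).flatMap(second);
    }
}
```

The whole "Option monad" collapses to Optional.flatMap, which is exactly the point: the design pattern already exists in the standard library under another name.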
Defining Monad as an interface allows us to implement some general functionality that will work on all monads. For example, there's a well-known function "liftM" that converts a function of type A → B into a function of type List<A> → List<B>, or Promise<A> → Promise<B>, or anything else along these lines. For different monads, liftM will do different useful things, e.g. liftM on lists is just the familiar "map" function in disguise. The implementation of liftM with lambda expressions would be very short, though a little abstract:
<T, A, B> Function<T<A>, T<B>> liftM(
    Function<A, B> func, Monad<T> monad
) {
    return (T<A> ta) -> monad.compose(
        (Void v) -> ta,
        (A a) -> monad.identity().apply(func.apply(a))
    ).apply(null);
}
Or if you don’t like lambda expressions, here’s a version without them:
<T, A, B> Function<T<A>, T<B>> liftM(
    Function<A, B> func, Monad<T> monad
) {
    return new Function<T<A>, T<B>>() {
        public T<B> apply(T<A> ta) {
            return monad.compose(
                new Function<Void, T<A>>() {
                    public T<A> apply(Void v) { return ta; }
                },
                new Function<A, T<B>>() {
                    public T<B> apply(A a) {
                        return monad.identity().apply(func.apply(a));
                    }
                }
            ).apply(null);
        }
    };
}
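Again as a hedged sketch outside the article's pseudo-Java: specialised to Optional, liftM collapses to Optional.map, which matches the claim that liftM on lists is just the familiar "map" in disguise.

```java
import java.util.Optional;
import java.util.function.Function;

// Sketch: liftM specialised to Optional. Real Java cannot write the
// container-generic version, but the Option case is one line.
class LiftOptional {
    static <A, B> Function<Optional<A>, Optional<B>> liftM(Function<A, B> func) {
        return oa -> oa.map(func);   // for Option, liftM is just map
    }
}
```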
Monads first became popular as a nice way to deal with IO instructions in a pure language, treating them as first-class values that can be passed around, like in my example 5. Imperative languages don’t need monads for IO, but they often face a similar problem of “option chaining” or “error chaining”, like in my example 3. For example, the option types in Rust or Swift would benefit from having a “happy path” through the code and a centralized place to handle errors, which is exactly the thing that monads would give you.
Some people think that monads are about side effects, like some sort of escape hatch for Haskell. That’s wrong, because you could always do IO without monads, and in fact the first version of Haskell didn’t use them. In any case, only two of my five examples involve side effects. Monads are more like a design pattern for composing functions, which shows up in many places. I think the jury is still out on whether imperative/OO languages need to use monads explicitly, but it might be useful to notice them when they appear in your code implicitly.
Feel free to edit this page to fix any typos or add more insights. | https://tutswiki.com/yet-another-lousy-monad-tutorial/ | CC-MAIN-2020-34 | refinedweb | 1,426 | 58.92 |
Script You Can Use
As of December 2011, this topic has been archived. As a result, it is no longer actively maintained. For more information, see Archived Content. For information, recommendations, and guidance regarding the current version of Internet Explorer, see Internet Explorer Developer Center.
Robert Hess
Microsoft Corporation
September 13, 1999
I've recently received a lot of questions about this, so I thought it would be a good idea to share with the budding "scripters" out there information to help them get past some of the problems and confusion they might be facing.
I talk with a lot of different people who are working on developing applications, Web sites, and specifically "Web applications." As I've discussed in the past, the evolution of Web applications is drawing the fairly static Web-based document into the realm of an interactive application. This means that Web site designers need to think more and more like applications programmers.
Web designers entering the programming arena may experience a slightly awkward stage, as they learn some of the strange nuances of application programming that might be second nature to a traditional programmer. One of the key concepts that virtually all programmers face early on is the separation of "programming language" from "application environment." By this I mean that the programming language you are using is just something that describes the "structure" or "form" that your instructions need to follow in order to be understood by the interpreter or compiler that you are using. The programming language often doesn't provide much in the way of actual functionality, or user interface itself, perhaps just some rudimentary methods for conducting input and output.
To illustrate this a little better, let's take a (very!) simple example program written in the C language. This is the classic "Hello, world" program:
#include <stdio.h>

main()
{
    printf ("Hello, world.\n");
}
You almost can't get simpler than this, but even here, we are relying on functionality that isn't directly supplied by the programming language. The printf function, while one of the most common functions used in C, isn't really part of the programming language. Instead, it is part of one of the standard libraries of functions. This is why it was necessary for our example to use:
#include <stdio.h>
This is where we told the compiler how to evaluate printf properly. Granted, any implementation of C that didn't include support for printf would be considered incomplete. It is still important to realize that the functionality of printf isn't actually part of the programming language, but instead comes from a library of functions that are provided by the vendor of the compiler.
In the same way, a lot of the functionality gained from the script code you put onto your Web pages isn't directly supplied by the script language you are using. Most of the functionality gained is directly dependent on what is being exposed by the Web browser, and not by the scripting language itself. This means that in addition to understanding the form and syntax of the scripting language, you also need to understand how to access the functionality provided by the environment in which your script is being run.
On the MSDN scripting site you can find documentation and references for JScript® and Visual Basic® Scripting Edition (VBScript) that will assist you in understanding the syntax of the language, as well as the built-in functions that are provided. But when you try to add script code to your Web pages, you won't get very far with just that information. You also need to know the other tools that are at your disposal, specifically what objects are included with the browser, how to access them, and what properties, methods, and events they expose for your usage.
Click "show TOC", then scroll to and click the "DHTML References" option in the left-hand table of contents. This will give you access to the documentation for the various levels of functionality exposed by Internet Explorer. The key areas for you here are the first four:
- Objects
- Properties
- Methods
- Events
While objects provide the base from which almost everything happens, in and of themselves they really aren't that exciting. You'll probably find yourself spending the most time in the "Methods" and "Properties" sections, which list all of the methods and properties exposed by the various objects contained within Internet Explorer. Browse through these listings until you find something that sounds interesting, click on it, and find out from where it is coming.
To make your browsing as productive as possible, you'll need to understand early on the difference between a property and a method. A property is a value—a number, name, or some other setting that you can either "set" or "retrieve" (or often both). Some common properties exposed by many objects include height, width, color, and name. A method is something that normally will instigate additional processing or functionality. Some common methods that you might use are click (which will cause the object being referenced to behave as if it had been clicked by the user), blur (which will cause the object being referenced to lose its focus, or deselect itself), and scrollIntoView (which will cause the referenced object to be scrolled into view on the page).
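To make the distinction concrete, here is a small script sketch (the element name is made up; the syntax is era-appropriate Internet Explorer DHTML):

```js
var box = document.all["myBox"];   // look up an element (IE-era DHTML)

// Properties: values you read or assign.
var oldColor = box.style.color;    // get a property
box.style.color = "red";           // set a property

// Methods: calls that perform an action.
box.click();            // behave as if the user clicked the element
box.blur();             // make the element lose focus
box.scrollIntoView();   // scroll the element into view
window.print();         // the print method exposed by the window object
```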
The specific question that prompted this article was from a Web author who had downloaded the VBScript and JScript documentation; try as he might, he couldn't find any information on how to access or use the "print" capability from a Web page even though he had heard that it was available. The problem is that he was looking in the wrong place. The right place was in the above-mentioned list of methods. By looking through the list, you can clearly see that the print method is exposed by the window object, and to call it, you use the form:
window.print()
Often, simply by looking through the list of available methods and properties, you can get ideas on how to achieve some form of special functionality in your application. I highly recommend that all Web designers familiarize themselves with these pages, so that they can better understand the tools at their disposal.
Robert Hess is an evangelist in Microsoft's Developer Relations Group. Fortunately for all of us, his opinions are his own. | http://msdn.microsoft.com/en-us/library/bb263950(v=vs.85).aspx | CC-MAIN-2014-41 | refinedweb | 1,064 | 57.5 |
Hot questions for Using Neural networks in multidimensional array
Question:
I have an image represented as an array (img), and I'd like to make many copies of the image, and in each copy zero out different squares of the image (in the first copy zero out 0:2,0:2 in the next copy zero out 0:2, 3:5 etc). I've used np.broadcast_to to create multiple copies of the image, but I'm having trouble indexing through the multiple copies of the image, and the multiple locations within the images to zero out squares within the image.
I think I'm looking for something like skimage.util.view_as_blocks, but I need to be able to write to the original array, not just read.
The idea behind this is to pass all the copies of the image through a neural network. The copy that performs the worst should be the one with the class (picture) I am trying to identify in its zeroed-out location.
img = np.arange(10*10).reshape(10,10)
img_copies = np.broadcast_to(img, [100, 10, 10])
z = np.zeros(2*2).reshape(2,2)
Thanks
Answer:
I think I have cracked it! Here's an approach using masking along a 6D reshaped array -
def block_masked_arrays(img, BSZ):
    # Store shape params
    m = img.shape[0]//BSZ
    n = m**2

    # Make copies of input array such that we replicate array along first axis.
    # Reshape such that the block sizes are exposed by going higher dimensional.
    img3D = np.tile(img,(n,1,1)).reshape(m,m,m,BSZ,m,BSZ)

    # Create a boolean identity matrix: True on the diagonal marks, for each
    # copy, the one block to zero out. Reshape and broadcast it to match the
    # "blocky" reshaped input array.
    mask = np.eye(n,dtype=bool).reshape(m,m,m,1,m,1)

    # Use the mask to mask out the appropriate blocks. Reshape back to 3D.
    img3D[np.broadcast_to(mask, img3D.shape)] = 0
    img3D.shape = (n,m*BSZ,-1)
    return img3D
Sample run -
In [339]: img
Out[339]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]])

In [340]: block_masked_arrays(img, BSZ=2)
Out[340]:
array([[[ 0,  0,  2,  3],
        [ 0,  0,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]],

       [[ 0,  1,  0,  0],
        [ 4,  5,  0,  0],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]],

       [[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 0,  0, 10, 11],
        [ 0,  0, 14, 15]],

       [[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9,  0,  0],
        [12, 13,  0,  0]]])
Question:
I had to create a Matrix class to use in my neural network project. How can I make a Matrix object work like a multidimensional array when creating it?
So basically I have a Matrix class which looks like this:
class Matrix
{
    private int rows;
    private int size;
    private int columns;
    private double[,] _inMatrix;

    public double this[int row, int col]
    {
        get { return _inMatrix[row, col]; }
        set { _inMatrix[row, col] = value; }
    }

    public Matrix(int row, int col)
    {
        rows = row;
        columns = col;
        size = row * col;
        _inMatrix = new double[rows, columns];
    }

    public Matrix() { }

    // ... and a bunch of operations
}
It works like a charm when I know the rows and columns of the Matrix, but I would love to be able to set the values at the start or later. When I create a Matrix object, I do it this way:
Matrix m1 = new Matrix(row, column);
What I want to do is to be able to set the values at the start like I would with arrays. I know that in C# this is how we initialize a multidimensional array:
double[,] array2D = new double[,] { { 1, 2 }, { 3, 4 }, { 5, 6 }, { 7, 8 } };
// or
int[,] array2D;
array2D = new int[,] { { 1, 2 }, { 3, 4 }, { 5, 6 }, { 7, 8 } };
How could I achieve something similar?
Answer:
Maybe something like this, using an implicit operator so that you can write Matrix m = new double[,] { { 1, 1 }, { 2, 3 } };. Also, you don't need _rows and _columns, as you can easily extract those from the underlying multidimensional array (GetLength(int dimension)).
class Matrix
{
    private double[,] _inMatrix;

    public double this[int row, int col]
    {
        get => _inMatrix[row, col];
        set => _inMatrix[row, col] = value;
    }

    public Matrix(double[,] a) => Initialize(a);

    // ... and a bunch of operations

    private void Initialize(double[,] a) => _inMatrix = a;

    public static implicit operator Matrix(double[,] a) => new Matrix(a);
}
Question:
Alright - assume I have two numpy arrays, shapes are:
(185, 100, 50, 3) (64, 100, 50, 3)
The values contained are 185 or 64 frames of video (for each frame, width is 100 pixels, height is 50, 3 channels, these are just images. The specifics of the images remain constant - the only value that changes is the number of frames per video) I need to get them both into a single array of some shape like
(2, n, 100, 50, 3)
Where both videos are contained (to run through a neural net as a batch)
I've already tried using np.stack - but I get
ValueError: all input arrays must have the same shape
Answer:
This is a quick brainstorm idea that I've got, along with a strategy and Python code. Note: I was going to stick to just a comment, but to illustrate this idea I'd need to type in some code. So here we go! (Grab a coffee / a strong drink is recommended...)
Current State
- we have video 1, vid1, with 4D shape (185, 100, 50, 3)
- we have video 2, vid2, with 4D shape (64, 100, 50, 3)
- ... where the shape represents (frame ID, width, height, RGB channels)
- we want to "stack" the two videos together as one numpy array with 5D shape (2, n, 100, 50, 3). Note: 2 because we are stacking 2 videos. n is a hyperparameter that we can choose. We keep the video size the same (100 width x 50 height x 3 RGB channels)
Opportunities
The first thing I see is that vid1 has roughly 3 times more frames than vid2. What if we use 60 as the common factor? i.e. let's set our hyperparameter n to 60. (Note: some "frame cropping" / "frame throwing away" may be required - this will be covered below.)
Strategy
Phase 1 - Crop both videos (throw away some frames)
Let's crop both vid1 and vid2 to nice round numbers that are multiples of 60 (our n, the hyperparameter). Concretely:

- crop vid1 so that the shape becomes (180, 100, 50, 3) (i.e. we throw away the last 5 frames). We call this new cropped video vid1_cropped.
- crop vid2 so that the shape becomes (60, 100, 50, 3) (i.e. we throw away the last 4 frames). We call this new cropped video vid2_cropped.
Phase 2 - Make both videos 60 frames
vid2_cropped is already at 60 frames, with shape (60, 100, 50, 3). So we leave this alone.

vid1_cropped, however, is at 180 frames. So I suggest we reduce this video to 60 frames, by averaging the RGB channel values in 3-frame batches - for all pixel positions (along width and height). What we will get at the end of this process is a somewhat "diluted" (averaged) video with the same shape as vid2_cropped - (60, 100, 50, 3). Let's call this diluted video vid1_cropped_diluted.
Phase 3 - stack the two same-shape videos together
Now that both vid2_cropped and vid1_cropped_diluted are of the same 4D shape (60, 100, 50, 3), we may stack them together to obtain our final numpy array of 5D shape (2, 60, 100, 50, 3) - let's call this vids_combined.
We are done!
Demo
Turning the strategy into code. I did this in Python 3.6 (with Jupyter Notebook / Jupyter Console).
Some notes:
I have yet to validate the code (and will revise as needed). In the meantime, if you see any bugs please shout - I will be happy to update.

I have a gut feeling line 10 below on "diluting" (the np.average step) might contain an error. i.e. I mean to perform the 3-frame averaging only against the RGB channel values, for all pixel positions. I need to double-check the syntax. (In the meantime please kindly check line 10!)

This post illustrates concepts and some code implementation. Ideally I would have stepped through this in more depth, via much smaller video sizes, so we may obtain better intuition / visualise each step, pixel by pixel. (I might come back to this when I have time.) For now, I believe the numpy array shape analysis is sufficient to convey the idea across.
In [1]: import numpy as np

In [2]: vid1 = np.random.random((185, 100, 50, 3))

In [3]: vid1.shape
Out[3]: (185, 100, 50, 3)

In [4]: vid2 = np.random.random((64, 100, 50, 3))

In [5]: vid2.shape
Out[5]: (64, 100, 50, 3)

In [6]: vid1_cropped = vid1[:180]

In [7]: vid1_cropped.shape
Out[7]: (180, 100, 50, 3)

In [8]: vid2_cropped = vid2[:60]

In [9]: vid2_cropped.shape
Out[9]: (60, 100, 50, 3)

In [10]: vid1_cropped_diluted = np.average(vid1_cropped.reshape(60, 3, 100, 50, 3),
    ...:                                   axis=1)

In [11]: vid1_cropped_diluted.shape
Out[11]: (60, 100, 50, 3)

In [12]: vids_combined = np.stack([vid1_cropped_diluted, vid2_cropped])

In [13]: vids_combined.shape
Out[13]: (2, 60, 100, 50, 3)
Question:
I'm trying to make a jagged array for a neural network and this is giving me an out of bounds error...
int[] sizes = { layer1, layer2, layer3 };
int k = sizes.length - 1;
double[][][] net = new double[k][][];
int i;
for (i = 0; i < k; i++)
    net[i] = new double[sizes[i]][];
for (int j = 0; j < sizes[i]; j++)
    net[i][j] = new double[sizes[i + 1]];
The size of y in net[x][ ][y] should be equal to the size of net[x+1][y][ ].
I did it on paper and I thought that this would work.
Answer:
int[] sizes = { layer1, layer2, layer3 };
int k = sizes.length - 1;
So, k is equal to 2.
int i;
for (i = 0; i < k; i++)
    net[i] = new double[sizes[i]][];
After that loop, i is equal to 2.
for (int j = 0; j < sizes[i]; j++)
    net[i][j] = new double[sizes[i + 1]];
                           ^^^^^^^^^^^^
                           ArrayIndexOutOfBoundsException
Boom, sizes[i + 1] throws ArrayIndexOutOfBoundsException, since sizes has only indices 0, 1 and 2 and you are referring to sizes[3].
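A sketch of the fix (the example layer sizes below are assumed, not from the question): keep the loop over j nested inside the loop over i, so sizes[i + 1] is only ever read while i < k.

```java
// Sketch: build the jagged weight array with the j-loop nested inside
// the i-loop, so i never reaches sizes.length - 1 when sizes[i + 1] is read.
class JaggedNet {
    static double[][][] build(int[] sizes) {
        int k = sizes.length - 1;
        double[][][] net = new double[k][][];
        for (int i = 0; i < k; i++) {
            net[i] = new double[sizes[i]][];          // one row per unit in layer i
            for (int j = 0; j < sizes[i]; j++) {
                net[i][j] = new double[sizes[i + 1]]; // in bounds: i < k here
            }
        }
        return net;
    }
}
```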
Question:
EDIT: Found a solution! Like the commenters suggested, using
memset is an insanely better approach. Replace the entire for loop with
memset(lookup->n, -3, (dimensions*sizeof(signed char)));
where
long int dimensions = box1 * box2 * box3 * box4 * box5 * box6 * box7 * box8 * memvara * memvarb * memvarc * memvard * adirect * tdirect * fs * bs * outputnum;
Intro
Right now, I'm looking at a beast of a for-loop:
for (j = 0; j < box1; j++) {
  for (k = 0; k < box2; k++) {
    for (l = 0; l < box3; l++) {
      for (m = 0; m < box4; m++) {
        for (x = 0; x < box5; x++) {
          for (y = 0; y < box6; y++) {
            for (xa = 0; xa < box7; xa++) {
              for (xb = 0; xb < box8; xb++) {
                for (nb = 0; nb < memvara; nb++) {
                  for (na = 0; na < memvarb; na++) {
                    for (nx = 0; nx < memvarc; nx++) {
                      for (nx1 = 0; nx1 < memvard; nx1++) {
                        for (naa = 0; naa < adirect; naa++) {
                          for (nbb = 0; nbb < tdirect; nbb++) {
                            for (ncc = 0; ncc < fs; ncc++) {
                              for (ndd = 0; ndd < bs; ndd++) {
                                for (o = 0; o < outputnum; o++) {
                                  lookup->n[j][k][l][m][x][y][xa][xb][nb][na][nx][nx1][naa][nbb][ncc][ndd][o] = -3; // set to default value
                                }
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
The Problem
This loop is called every cycle in the main run to reset values to an initial state. Unfortunately, it is necessary for the structure of the program that this many values are kept in a single data structure.
Here's the kicker: for every 60 seconds of program run time, 57 seconds goes to this function alone.
The Question
My question is this: would hash tables be an appropriate substitute for a linear array? This array has an O(n^17) cardinality, yet hash tables have an ideal of O(1).
- If so, what hash library would you recommend? This program is in C and has no native hash support.
- If not, what would you recommend instead?
- Can you provide some pseudo-code on how you think this should be implemented?
Notes
- OpenMP was used in an attempt to parallelize this loop. Numerous implementations only resulted in slightly-to-greatly increased run time.
- Memory usage is not particularly an issue -- this program is intended to be ran on an insanely high-spec'd computer.
- We are student researchers, thrust into a heretofore unknown world of optimization and parallelization -- please bear with us, and thank you for any help
Answer:
Hash vs Array
As comments have specified, an array should not be a problem here. Lookup into an array with a known offset is O(1).
The Bottleneck
It seems to me that the bulk of the work here (and the reason it is slow) is the number of pointer de-references in the inner-loop.
To explain in a bit more detail, consider
myData[x][y][z] in the following code:
for (int x = 0; x < someVal1; x++) { for (int y = 0; y < someVal2; y++) { for (int z = 0; z < someVal3; z++) { myData[x][y][z] = -3; // x and y only change in outer-loops. } } }
To compute the location for the
-3, we do a lookup and add a value - once for
myData[x], then again to get to
myData[x][y], and once more finally for
myData[x][y][z].
Since this lookup is in the inner-most portion of the loop, we have redundant reads.
myData[x] and
myData[x][y] are being recomputed, even when only
z's value is changing. The lookups were performed during a previous iteration, but the results weren't stored.
For your loop, there are many layers of lookups being computed each iteration, even when only the value of
o is changing in that inner-loop.
An Improvement for the Bottleneck
To make one lookup, per loop iteration, per loop level, simply store intermediate lookups. Using
int* as the indirection (though any type would work here), the sample code above (with
myData) would become:
int **a, *b; for (int x = 0; x < someVal1; x++) { a = myData[x]; // Store the lookup. for (int y = 0; y < someVal2; y++) { b = a[y]; // Indirection based on the stored lookup. for (int z = 0; z < someVal3; z++) { b[z] = -3; // This can be extrapolated as needed to deeper levels. } } }
This is just sample code, small adjustments may be necessary to get it to compile (casts and so forth). Note that there is probably no advantage to using this approach with a 3-dimensional array. However, for a 17-dimensional large data set with simple inner-loop operations (such as assignment), this approach should help quite a bit.
Finally, I'm assuming you aren't actually just assigning the value of
-3. You can use
memset to accomplish that goal much more efficiently.
Question:
I'm testing my neural network for XOR comparisons, and i've encountered an error i'd like to fix without altering the number of neurons in the first hidden layer. The code causing the error is:
public double dotProduct(int[][] a, double[][] ds) { int i; double sum = 0; for(i = 0; i < a.length; i++) { int j; for(j = 0; j < a[i].length; j++) { sum += a[i][j] * ds[i][j]; } } return sum; }
is giving me a null pointer exception. The dot product calculation itself is used to generate the dot product from an inputset my neural net has been provided with.
The input set is this:
int inputSets[][] = { {0, 0, 1}, {0, 1, 1}, {1, 0, 1}, {0, 1, 0}, {1, 0, 0}, {1, 1, 1}, {0, 0, 0} };
It's a multidimensional array containing 7 arrays. It is then used in this:
public double think(int[][] input) { output_from_layer1 = sigmoid(dotProd.dotProduct(input, layer1.getWeights())); return output_from_layer1; }
The sigmoid part of the function isn't an issue, as it takes a double and dotProduct is supposed to output a double. The issue as far as I'm aware is that the dotProduct function is taking a larger multidimensional array, and then attempting to cross it with a smaller one (The layer1.getWeights getter that calls the weights array for that layer).
The weights of a layer are defined as such:
layerWeights = new double[numNeurons][inpNum];
and the layer that's being used in the dot product is:
XORlayer layer1 = new XORlayer(4, 3);
So 4 neurons with 3 inputs each. The issue stems from the fact that there aren't enough neurons in this layer for the amount of inputs, as far as i'm aware, which is generating the null pointer exception when there isn't anything further to multiply against the input values.
We have 12 inputs in the neurons, and 21 input values.
My main question is, is there a way to solve this issue so the dot product operation is completed successfully without simply expanding the amount of neurons the layer contains to 7?
Answer:
This discussion might help. As suggested there, since you're using a 2D array, matrix multiplication (instead of dot product) would likely be more appropriate.
Of course, similar to the dot product, the dimensions must be aligned for matrix multiplication.
inputSets is a 7x3 matrix and
layerWeights is a 4x3 matrix. The transpose of
layerWeights is a 3x4 matrix. Now the dimensions are aligned, and the matrix multiplication results in a 7x4 matrix.
Based on the posted code, I would suggest something like this:
output_from_layer1 = sigmoid(matrixMult.multiply(input, transpose(layer1.getWeights()))); | https://thetopsites.net/projects/neural-network/multidimensional-array.shtml | CC-MAIN-2021-31 | refinedweb | 2,936 | 69.01 |
Nmap Development
mailing list archives
Hello,
I have prepared the latest testing package of Nmap and Zenmap for
Mac OS X. (19.5 MB)
This release is compiled against the X11 libraries from Mac OS X 10.4.
Please give this a try, especially if you are using 10.4. I don't have
access to Mac OS X 10.4, so sending a report is currently the only way
I'll eventually be able to make Zenmap run on your package.
This release doesn't automatically start X11 on 10.4. You have to start
X11 manually and then set the DISPLAY variable:
export DISPLAY=:0
open /Applications/Zenmap.app
I'm sorry this is such an involved process, but we have to take this
debugging a step at at time.
Early testing of this release has seen this error in one instance:
Error in sys.excepthook:
Traceback (most recent call last):
File "/Applications/Zenmap.app/Contents/Resources/zenmap.py", line 70, in excepthook
from zenmapGUI.CrashReport import CrashReport
File "zenmapGUI/CrashReport.pyc", line 35, in <module>
File "zenmapCore/BugRegister.pyc", line 23, in <module>
File "urllib2.pyc", line 91, in <module>
File "hashlib.pyc", line 133, in <module>
File "hashlib.pyc", line 60, in __get_builtin_constructor
ImportError: No module named _md5
Original exception was:
Traceback (most recent call last):
File "/Applications/Zenmap.app/Contents/Resources/__boot__.py", line 137, in <module>
_run('zenmap.py')
File "/Applications/Zenmap.app/Contents/Resources/__boot__.py", line 134, in _run
execfile(path, globals(), globals())
File "/Applications/Zenmap.app/Contents/Resources/zenmap.py", line 156, in <module>
app.run()
File "zenmapGUI/App.pyc", line 96, in run
File "zenmapGUI/App.pyc", line 124, in __run_gui
File "zenmapGUI/App.pyc", line 53, in __create_show_main_window
File "zenmapGUI/MainWindow.pyc", line 45, in <module>
File "zenmapGUI/SearchWindow.pyc", line 24, in <module>
File "zenmapGUI/SearchGUI.pyc", line 40, in <module>
File "zenmapCore/SearchResult.pyc", line 31, in <module>
File "zenmapCore/UmitDB.pyc", line 22, in <module>
File "md5.pyc", line 6, in <module>
File "hashlib.pyc", line 133, in <module>
File "hashlib.pyc", line 60, in __get_builtin_constructor
ImportError: No module named _md5
I think this is caused by the _hashlib module failing to link with
libcrypto or libssl. If you have this crash, please run Zenmap in
verbose mode to help debug the problem:
export ZENMAP_DEVELOPMENT=1
export PYTHONVERBOSE=1
/Applications/Zenmap.app/Contents/MacOS/zenmap &> python-verbose.log
And then send me the python-verbose.log.
David Fifield
_______________________________________________
Sent through the nmap-dev mailing list
Archived at
By Date
By Thread | http://seclists.org/nmap-dev/2008/q2/244 | CC-MAIN-2014-35 | refinedweb | 426 | 53.27 |
Initial Impressions of a Website with Modern Build Tools
Imagine Javascript as the waffle, with toppings of chocolate, berries, bananas, and whipped cream standing in for Babel, webpack, React, and Mocha
Introduction
No one can deny that Javascript has significantly changed in the last few years. If you were a Javascript master in 2012 and lived under a rock for the last few years, you’d be shocked to see what the community had evolved into. Since I’ve been at least dabbling in Javascript since 2012, I thought I’d give my view on this topic. To give a fair and proper evaluation of the current state of affairs in Javascript, it’s best to first look at where I’ve (we’ve) been.
First Encounter
I remember the first day I touched Javascript because as part of my freshman internship I had to keep a log of what I did each day. On May 7th 2012, I wrote:
Bug fixing. Finished fundamental Javascript tutorial. Started jQuery Tutorial.
The tutorial mentioned was a Pluralsight course I had signed up for with a temporary free account due to my
edu email address. As an aside: the url contains ‘jscript-fundamentals’ and JScript is not to be confused with JavaScript!
As an intern I was tasked with prototyping a web portal for engineers, and by the end of the summer, I had fully working version. Along the way I recorded some gold nuggets in the work log such as “Learning about KnockoutJS”, “Exploration of MVVM”, “Code refactor into KendoUI”. I can’t remember what I used to bundle the application because over a year later we see “Start of […] javascript upgrade” and “build process for js preprocessing”, where I finally start using RequireJS and Grunt. A dive into commit history showed previously an html file with dozens of
<script> tags (does anyone remember json2?) referencing my javascript files along with comments like:
It’s absolutely critical that one puts foo.js before bar.js
So no matter how bad one may think the Javascript ecosystem looks, just remember where we’re coming from.
So in mid-2013, the code below was state of the art.
```javascript
define([
  "jquery",
  "foobar"
], function($, foobar) {
  "use strict";
  $.ajax({ /* ... */ });
  // enter usage of the revealing module pattern
});
```
I remember when the revealing module pattern was a big deal and all the rage for writing composable Javascript. It’s hard to believe that nowadays I hardly use traditional Javascript module patterns. Javascript has changed that much, which is both a life saver and a scary thought.
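For anyone who never lived through that era, the revealing module pattern looked roughly like the sketch below (illustrative only, not code from the portal itself): private state lives in a closure, and only the object returned at the end is exposed.

```javascript
// A minimal sketch of the revealing module pattern: `count` is private,
// invisible outside the closure; only the returned object is public.
var counter = (function () {
  var count = 0; // private

  function increment() {
    count += 1;
    return count;
  }

  function current() {
    return count;
  }

  // "Reveal" only the functions meant to be public.
  return { increment: increment, current: current };
})();

console.log(counter.increment()); // 1
console.log(counter.current());   // 1
```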
Despite all the shortcomings a new hip Javascript programmer might point out in the application, it is very much still in production today and has been for the past 3½ years with little patching needed. How many people get to say that about a web app?
Forget the bragging. Other frameworks blow the ad-hoc one I created out of the water and I’ve always been on the lookout.
KnockoutJS and D3
While I had only read briefly about KnockoutJS before, Christmas break 2013 allowed me time to dive deeper into the library. Coming from a WPF MVVM background, Knockout’s slogan stood out, “Simplify dynamic JavaScript UIs with the Model-View-View Model (MVVM)”.
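The heart of that slogan is the observable: a value that notifies subscribers when it changes, which is what keeps the view in sync with the view model. Below is a deliberately stripped-down, hypothetical version of the idea (not Knockout's actual implementation; the `observable` function and its API are invented for illustration):

```javascript
// A toy observable: call with no arguments to read, with one argument to
// write; writes notify every subscriber, the way Knockout keeps views fresh.
function observable(initial) {
  var value = initial;
  var subscribers = [];

  function accessor(newValue) {
    if (arguments.length === 0) {
      return value; // read
    }
    value = newValue; // write
    subscribers.forEach(function (fn) { fn(value); });
    return value;
  }

  accessor.subscribe = function (fn) {
    subscribers.push(fn);
  };

  return accessor;
}

var name = observable("world");
var rendered = "";
name.subscribe(function (v) { rendered = "Hello, " + v; });
name("Knockout");
console.log(rendered); // "Hello, Knockout"
```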
The project was to improve the University of Michigan’s Incident Log, which did not even look as good as it is now, so that users could see the incidents on a map along with cool, but purposeless statistics. The result is umich.nbsoftsolutions.com. You’ll notice that it is no longer working because the University upgraded their website so my custom scraper using BeautifulSoup broke.
For the frontend, there were about ten different
script tags from CDNs like cdnjs, which promised that if your site served assets a visitor had previously seen, the visitor could use their cached version. A couple of restrictions applied, which limited the usefulness: the URL had to be the same, so the asset had to be from cdnjs and the library version had to match. On the positive side, it could lighten server resources and load the site faster if the CDN harnessed geolocation to serve assets from the server closest to the visitor.
Each page had its own Javascript file written as an immediately-invoked function expression. Thankfully, since the site had a narrow scope, all the functionality could be packed into a single file without too much copying and pasting between pages.
I wrote a JSON API in Flask as opposed to what I had known at the time, Bottle. For me, Flask > Bottle, but Bottle still has its uses. I don’t understand why Bottle is contained in a single large file. Seems brittle.
What cracks me up is a comment I have in my Flask API about sending a certain JSON payload:
```python
# Not to mention sending statistics is, by far, the largest contributor of
# bandwidth at 16KB uncompressed.
```
Not sure why I was worried about 16KB uncompressed whereas websites today are much more bloated. Wikipedia uses 50KB of Javascript. Bissell has a 1000KB of javascript and styles. This graph shows that 363KB was the average amount of Javascript for the top 10,000 pages in February 2015. The Average Page is a Myth has the median around 400KB in 2015. And hot off the press, The average size of Web pages is now the average size of a Doom install. You know what webpage doesn’t use any Javascript? :)
So worrying about the 16KB of data is being paranoid.
D3, the data visualization library, was the most outside my comfort zone. If it wasn’t for Mike Bostock and his countless examples (like how I repurposed this one) I probably would not have succeeded. I had the hardest time wrapping my head around the
enter function. Despite my unexpected learning hump D3 is one of the few libraries (some may even call it a framework) that is still timeless, as I have a project that uses D3 for its math and path calculations, but less on direct DOM manipulations. I’m excited that D3 is planned to be broken up into modules!
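The `enter` idea becomes much less mysterious once you strip away the DOM. Conceptually, a data join splits your data against the elements already bound to it; "enter" is the data that has no element yet. This is a toy sketch of the concept, not D3's real algorithm:

```javascript
// Given elements already on screen and the new data, partition the join.
function dataJoin(existing, data) {
  return {
    update: data.slice(0, existing.length), // data that reuses an element
    enter: data.slice(existing.length),     // data that needs a new element
    exit: existing.slice(data.length)       // elements with no data left
  };
}

var join = dataJoin(['li', 'li'], [10, 20, 30]);
console.log(join.enter); // [ 30 ]: one new datum needs a new element
```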
Ember
I’ve been a closet fan of Ember since at least v1.6.0 (Summer of 2014). I remember
v1.6.0 so distinctly because I had just started an internship at a hedgefund, and my internship project begged for something like Ember. I pleaded my case if only to satiate my drive to learn and try my hand at web development (who cares if an intern fails!?). My wish was not granted (more like crushed!), so I fell back to reading blog posts and following releases while I waited for a side project where I could use Ember.
For someone on the outside, the appeal of Ember is its community. It seemed so well put together – not driven by some mega corporation with dubious intentions. The people behind it are brilliant, Tom Dale and Yehuda Katz to name a few. It was a community I wanted to be a part of.
Finally in March of 2015, I got an excuse to start using Ember. A few months earlier I had sat next to a fascinating statistician on an train from Chicago to Ann Arbor. With little to no programming experience I was able to explain to him multi-threaded and synchronization issues with airplanes sharing a runway analogy and he grokked it far better than I expected. (Shortly after, I wrote Await Async in C# and Why Synchrony is Here to Stay and C# and Threading: Explicit Synchronization isn’t Always Needed). My new friend, in March 2015, had just arrived in Africa and needed a website where African startups could aggregate their data for potential investors.
With little requirements and my imagination to go on, I started hacking away. I volunteered myself for weekly updates to hold me accountable but also have a steady form of communication. Initially, this worked great and progress was made; however this trend did not last. I started stalling on Ember itself. The community was at an awkward pass with Ember 1.x and 2.x, there was movement to migrate to a “pod” directory structure, Ember CLI was in constant churn and had notoriously bad performance on Windows, and the Ember libraries I wanted to incorporate always seemed to be waiting for the next release to change their API. Perhaps not surprisingly, about a month of development my enthusiasm ran out of steam. My updates became repeatedly shorter and my friend soon became silent.
It’s for the best that I stopped working on the project because I was going to be reinventing a CMS, and nobody wants to be stuck with that task.
Most of the project, at the time that I stopped working on it, consisted of handlebar templates and CSS. The most interesting Javascript written dealt with validations of a form submission, but it does highlight the usage of Promises:
```javascript
if (this.get('isValid')) {
  var promise = this.get('model').save();
  callback(promise);
  promise.then(startup => this.transitionToRoute('startups.show', startup));
} else {
  this.set('errorMessage', true);
  callback(new Ember.RSVP.Promise((fulfill) => { fulfill(); }));
}
```
A Node Diversion
We’re almost to the main point of the article, I swear, but I must quickly mention Node. Up until this point I had written no tests for any Javascript I had penned, which is crazy. Two years of Javascript and not a test written. If you know me, this should be shocking. I tend to emphasize testing. My excuse would be that it’s incredibly hard to test client side Javascript, as it’s fickle and requires some sort of browser to run.
To remedy this testing deficiency, I created a library to parse a file format. I had high hopes that I’d be able to incorporate it in browsers or a node based app. Unfortunately, parsing a 50MB file using Jison, Javascript’s version of Bison, would exhaust V8’s heap space. Handrolling a version that would allow streaming failed because it’s notoriously hard to parse a stream in an asynchronous environment. Just take a look at jsonparse, the streaming JSON parser, look me straight in the eyes, and say that writing a state machine like that for a format more difficult would be a task you’d relish.
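To see why streaming parsers turn into state machines, consider a toy incremental tokenizer: input arrives in arbitrary chunks, so any state (like "inside a quoted string") must survive across chunk boundaries. The example below is purely illustrative and far simpler than anything jsonparse deals with:

```javascript
// Extracts quoted strings from input fed in arbitrary chunks. The closure
// keeps `inString` and `buffer` alive between calls to `write`.
function makeTokenizer() {
  var inString = false;
  var buffer = "";
  var tokens = [];

  return function write(chunk) {
    for (var i = 0; i < chunk.length; i++) {
      var c = chunk[i];
      if (c === '"') {
        if (inString) { tokens.push(buffer); buffer = ""; }
        inString = !inString;
      } else if (inString) {
        buffer += c;
      }
    }
    return tokens;
  };
}

var write = makeTokenizer();
write('say "hel');           // chunk ends mid-string; state is carried over
console.log(write('lo" x')); // [ 'hello' ]
```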
The project did serve as a good training ground for Gulp, CommonJS, and Mocha. With Mocha, I was able to finally write tests like the following:
var parse = require('../lib/jomini').parse; var expect = require('chai').expect; describe('parse', function() { it('should handle the simple parse case', function() { expect(parse('foo=bar')).to.deep.equal({foo: 'bar'}); }); });
These tests would then be executed in a Node environment. I briefly tried to see if the tests would execute in a browser environment but was unable to (this is solved with a compilation step discussed later).
Enter 2016
Several weeks ago, while updating some of my sites to Let’s Encrypt, I found out that the University of Michigan updated their Incident Log (mentioned earlier in the post) to expose a JSON endpoint of the incidents on a given day. The problem is that the endpoint would take several seconds to respond and the interface was lacking. There were no cool, but purposeless statistics! This presented the perfect learning opportunity, as I had recently read State of the Art JavaScript in 2016, and wanted to try out the technologies it mentioned.
One of the libraries was React and my only familiarity with React was when I dabbled in it for an Electron side project and I realized that the most popular UI framework for Electron was React.
Below are my impressions on using some of the technologies listed in the post.
Modern Code
Thanks to Babel one can use Javascript concepts that won’t be natively supported for years to come. Below are some code snippets from the actual site that demonstrate their usefulness. For more information on the future of Javascript, see ES6Features.
List destructuring, and new variable keywords:
```javascript
// Get all the data before and after a date split into two lists
const [bf, af] = partition(data, (x) => moment(x.date).isBefore(date, 'day'));
```
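The `partition` helper used above isn't shown in the post; here is one way such a helper could look (hypothetical, the site's actual implementation may differ):

```javascript
// Split a list into [matching, notMatching] according to a predicate.
function partition(xs, pred) {
  const yes = [], no = [];
  for (const x of xs) {
    (pred(x) ? yes : no).push(x);
  }
  return [yes, no];
}

const [evens, odds] = partition([1, 2, 3, 4], (x) => x % 2 === 0);
console.log(evens, odds); // [ 2, 4 ] [ 1, 3 ]
```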
Object destructuring, arrow functions, and string templating:
```javascript
const Map = ({ address }) => {
  const url = '//maps.googleapis.com/maps/api/staticmap';
  const size = '250x250';
  const src = `${url}?size=${size}&markers=${escape(`|${address}, Ann Arbor, MI`)}`;
  return <img src={src} />;
};
```
```javascript
import React, { Component, PropTypes } from 'react';
import ReactCSSTransitionGroup from 'react-addons-css-transition-group';
import Map from './Map';
```
ES7 async/await with enhanced object literals:
```javascript
export function fetchPostsIfNeeded() {
  return async (dispatch) => {
    // Get the last time the data was updated
    const lastUpdate = await localforage.getItem('last-update');
    // If the data has never been updated or the data hasn't been updated
    // in a day, update the data by sending a request to the backend
    const data = !lastUpdate || !moment().isSame(lastUpdate, 'day')
      ? await fetchData(dispatch)
      : await localforage.getItem('data');
    // Let the frontend know the data
    dispatch({ type: REQUEST_DPS_DONE, data });
  };
}
```
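That thunk leans on localforage and Redux's dispatch, which makes it hard to run in isolation. The same cache-or-refetch idea can be sketched in a self-contained way with those pieces stubbed out (the `store` object and `fetchIfNeeded` name below are invented for illustration):

```javascript
// Use the cached data unless the cache is older than a day.
async function fetchIfNeeded(store) {
  const lastUpdate = await store.get('last-update');
  const oneDayMs = 24 * 60 * 60 * 1000;
  if (!lastUpdate || Date.now() - lastUpdate > oneDayMs) {
    const data = await store.fetchFresh();
    await store.set('data', data);
    return data;
  }
  return store.get('data');
}

// Fake async store standing in for localforage.
const memory = { 'last-update': Date.now(), data: [1, 2, 3] };
const store = {
  get: async (k) => memory[k],
  set: async (k, v) => { memory[k] = v; },
  fetchFresh: async () => [4, 5, 6],
};

fetchIfNeeded(store).then((d) => console.log(d)); // [ 1, 2, 3 ]: cache is fresh
```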
I really enjoy these features and going back to previous style of Javascript would definitely take a shot at my productivity and enthusiasm.
The Cost
With all the cool libraries and modern features, there is a cost. Right now, udps.nbsoftsolutions.com sends 309KB of Javascript to the client (gzipped and minimized). While less than the average site, my side project does not nearly have as many features as other sites. Not to mention it is much easier to visualize the payload growing as features are added. For instance, I plan on adding neat graphs and that will involve d3, an additional 53KB gzipped and minified worth of Javascript. Since no business value is being derived from the site, I don’t mind so much. There are ways to circumvent this issue, like code splitting so that not all the js is contained in a single file.
CSS
Playing with CSS in this project blew my mind. I would previously put all my LESS or CSS in a single file to reference from HTML, but it is now possible, and some may even recommend in this componentized world, to create a CSS file for each component. Traditionally, this would have resulted in many files or a possibility for class names that would conflict across components.
Enter cssnext. Let’s say we create a file called
About.css:
```css
@import '../../css/Main.css';

.textual {
  margin: 0 auto;

  & p, & li {
    font-size: 1.25rem;
    line-height: 1.4;
  }
}
```
For those familiar with other CSS preprocessors like LESS or SASS, this will seem familiar:
- Copy and paste contents of Main.css into said file
- Create a class named textual
- Any paragraph (p) or (li) directly under an element with textual has modified text properties
How do we use
About.css? In
About.js we can import our CSS and access the
textual property.
```javascript
import React, { Component } from 'react';
import styles from './About.css';

export default class About extends Component {
  render() {
    return <div className={styles.textual} />;
  }
}
```
This is CSS Modules, and it is critical to reference the classes in the Javascript code through the property name styles.textual rather than the literal class name 'textual', because CSS Modules hashes the class names so there is no chance of collisions between classes with the same name on different components.
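What that hashing looks like from the Javascript side can be sketched with an invented example (the generated name below is made up; the real one depends on your css-loader configuration):

```javascript
// Conceptually, CSS Modules hands your JS a plain object mapping each local
// class name to a generated, collision-proof one.
const styles = { textual: 'About__textual___1x9Tz' }; // illustrative hash

// <div className={styles.textual}> therefore renders the generated class,
// so two components can both declare a `.textual` class without clashing.
console.log(styles.textual); // 'About__textual___1x9Tz'
```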
An interesting fact – I’ve yet to determine if it’s a net benefit or loss – is that, by default, the CSS is bundled with the Javascript, so there is no longer the main HTML file referencing both a CSS file and a Javascript file. Now it only references a Javascript file with styles contained within. With HTTP2 slowly being rolled out with multiplexing, the benefit of a single request diminishes but the argument can be made that cognitive load is still decreased.
Testing
An adequate testing story is lacking, especially for frontend with a build chain such as ours. I’m extremely familiar with unit testing and frameworks in other languages, yet it took me an entire weekend to get a sane testing plan down. I very nearly called it quits. First, I started easy, using only Mocha, Chai, and expect to test pure javascript (non-react/redux/dom). It was easy enough to throw in the babel compiler into mocha so I could also write tests in ES6/7.
The problem arises when we start testing React components. Don’t worry, I hear everyone is moving to Enzyme, so I prepared it, installed it, and cracked my knuckles. Baby steps, let’s start with the basic component:
```javascript
import React from 'react';
import styles from './Footer.css';

const Footer = () => {
  const footmsg = 'Made with \u2764 by ';
  return (
    <footer className={styles.footer}>
      <p>{footmsg}
        <a href="">
          Nick Babcock
        </a> 2016
      </p>
    </footer>
  );
};

export default Footer;
```
And the corresponding baby test for our baby component
```javascript
import React from 'react';
import { expect } from 'chai';
import { shallow } from 'enzyme';
import Footer from '../js/components/Footer';

describe('<Footer />', function () {
  it('should give credit to the original author', function () {
    const wrapper = shallow(<Footer />);
    expect(wrapper.text()).to.contain('Nick Babcock');
  });
});
```
This test will actually fail to compile. Do you know why? If you guessed CSS Modules, you’d be correct. Mocha and babel don’t know how to deal with css (and why should they)! A little bit of googling leads us to writing a
setup.js require script for mocha:
```javascript
import hook from 'css-modules-require-hook';
hook({ extensions: ['.css'] });
```
Tests now passing!
Unfortunately, this euphoric feeling soon passed. Take a look at the next component; based on the last showcase, it should be easy to guess what the problem is.
```javascript
import React, { Component } from 'react';
import Message from './About.md';

export default class About extends Component {
  markup() {
    return { __html: Message };
  }

  render() {
    return (
      <div dangerouslySetInnerHTML={this.markup()} />
    );
  }
}
```
Ah, we’re importing a markdown module. Same issue as last time, except this time no one has written a ‘markdown-require-hook’. Pondering this thought for a couple hours made me realize that I may have been going down the wrong path because I didn’t want to add a new
require dependency for the tests every time it’s needed in the source.
Maybe front end tests need something more – maybe they need to be webpack-ed as well to run in a browser-like environment. I’m familiar with the headless browser, PhantomJS, and mocha-phantomjs seems popular enough, so combine the two? No. Wrong path. Let me save you time, and point you to the webpack documentation on testing (obvious in hindsight isn’t it?). There it mentions Karma, which is a self described “Spectacular Test Runner for Javascript”, and it did not disappoint. There was enough internet documentation (blogs, github) that I pieced together a
karma.conf.js that worked! It’s clear skies from here!
Well, no, not exactly. There may be a series of steps one needs to take as they dive more into testing components extensively. For instance, when the component tests needed a real DOM, compilation errors started cropping up and the only fix was advice from the following Github thread. The fix was to add more info to my config files, into which I'm basically copying and pasting blindly. Luckily, it worked, but crawling Github/stackoverflow for solutions to copy and paste seems worrying.
On a different note, a pleasant surprise with our setup is that our CSS Modules are compiled the same way so we can use
styles.foobar to reference the compiled name!
Benchmarking
It is important for any application to have benchmarking because it ensures that the hot spots in your code base don’t slow your application down enough for the user to notice. When I deployed the first version of the site, I noticed significant lag between when the user would select a date with no incidents and the suggestion text popping up for a better date.
When faced with slower than expected code, the first step is to profile the production code to find what the bottleneck is. Don’t guess and don’t use development code. Most browsers have profiling tools that will indicate code that needs improvement. Below is a screenshot of Chrome’s profiling tool where optimizations are not needed:
The profiling tool doesn’t present helpful function names, but it contains a link to the source mapped source code, which will help track down the issue
There is no need for optimizations because the code is idling for most of the time, but it wasn't always like this. When I noticed the lag in the site, the profiling showed a lot of time was lost in calling
momentjs functions. Now it’s time for the optimizations.
If there is one de facto benchmarking library for Javascript, it's Benchmark.js. Before you get too far, Benchmark.js does not play well with Webpack, so we'll have to settle for running our benchmarks in Node instead of the browser, which isn't too hard to stomach. The one tricky aspect is that our code is written in ES6/ES7, but Node doesn't support all of the new standard. We could add an intermediate step where we run the code through Babel and then run it through Node, or, the easier way, just use babel-node, which eliminates the intermediate step.
One thing that I learned through benchmarking is that moment.js, for all of its ergonomics, is an extremely slow library, and one is much better off writing their own functions. In fact, the benchmarks show that the new method is nearly two orders of magnitude faster than the equivalent moment.js function.
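Benchmark.js gives statistically sound numbers, but the gist of comparing a hand-rolled helper against a library call can be sketched with a naive timer. Both `timeIt` and the `sameDay` helper below are hypothetical stand-ins, not the post's actual replacement code:

```javascript
// Crudely time a function over many iterations (Benchmark.js does far more
// statistical work than this).
function timeIt(label, fn, iterations) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(label + ': ' + elapsedMs.toFixed(2) + 'ms');
  return elapsedMs;
}

// A hand-rolled "same calendar day" check, the kind of helper that could
// replace a moment.js isSame(date, 'day') call.
function sameDay(a, b) {
  return a.getFullYear() === b.getFullYear() &&
         a.getMonth() === b.getMonth() &&
         a.getDate() === b.getDate();
}

const d1 = new Date(2016, 5, 1), d2 = new Date(2016, 5, 1);
timeIt('sameDay', () => sameDay(d1, d2), 100000);
```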
Conclusion
Using the React view, Redux state management, Material UI, CSS Modules accompanying cssnext features through PostCSS, with modern Javascript compiled with Babel, run through webpack, which also is invoked through the Karma test runner targeting PhantomJS for Mocha tests that assert with chai/expect to ensure our components handled by Enzyme represent state correctly, while other entities are mocked through sinon, is a mouthful, but it is doable. Now that I have been through the pain, I would say that I'd do it over again, especially the more complex the site became. I think if enough people became opinionated about this setup, then it could be more tightly bundled and allow for an easier time for new developers to get started with it.
As projects get larger and more complicated, it's going to be increasingly important to be able to debug code quickly and accurately. And while the TAs are here to help, remember that there's only 9 of us and 365 of you! So with that in mind, let's go through some examples.
0.) Copy the following code into a file named bugs0.py and try running it. Do not start by reading the code and trying to understand it. Instead, the goal is going to be to try and find the bug without needing to spend the time understanding precisely what this code does. After running the code, look at the stack trace indicating where the error occurred during execution. Using only the stack trace (so no modifying and re-running the code) answer the following questions:
```python
def isprime(n):
    """ Returns true if n is a prime. """
    k = 2
    while k < n:
        if n % k == 0:
            return False
        k += 1
    return True

def first_prime_in_list(lst):
    """ Returns the first prime in the list and its index.
    Returns 0,len(lst) if there is no prime. """
    first_prime = None
    for i in range(len(lst)):
        if isprime(lst[i]):
            first_prime == lst[i]
            return first_prime, i
        else:
            None
    first_prime = 0
    return first_prime, len(lst)

def sum_primes_in_list(lst):
    """ Returns the sum of primes in the list. """
    sum = 0
    while len(lst) > 0:
        p, index = first_prime_in_list(lst)
        lst = lst[index+1:]
        sum = sum + p
    return sum

print(sum_primes_in_list([4,6,8,9,11,14,15,20]))
```
1.) Copy this next code segment to a new file named bugs1.py and try running it. This is similar to the previous exercise, but the problem isn't caused by what a function returns. Nonetheless, answer the same questions from the first question as you try to track down the bug. Again, try not to focus too much on figuring out what the code is trying to do; instead see if you can solve the problem by just looking at the stack trace and following where variables came from.
```python
def find_longest_value_in_dict(e):
    largest = -1
    largest_key = -1
    for i in e.keys():
        if len(e[i]) > largest:
            largest = len(e[i])
            largest_key = i
    return e[largest_key]

def get_largest_range(ranges, values):
    # Sort values into a dictionary with keys from ranges
    d = {}
    for v in values:
        for i in range(len(ranges)-1):
            if v >= ranges[i] and v < ranges[i+1]:
                d.get(ranges[i], ()) + (v,)
    return find_longest_value_in_dict(d)

print(get_largest_range([0,10,15,17,20],[2,4,68,2,13,16,17,17,16,15,15,15,4,3,12,15]))
```
2.)
______
>>> a.holder
______
>>> class CheckingAccount(Account):
...     def __init__(self, account_holder):
...         Account.__init__(self, account_holder)
...     def deposit(self, amount):
...         Account.deposit(self, amount)
...         print("Have a nice day!")
...
>>> c = CheckingAccount("Eric")
>>> a.deposit(30)
______
>>> c.deposit(30)
______
3.) Consider the following basic definition of a Person class:
class Person(object):
    def __init__(self, name):
        self.name = name

    def say(self, stuff):
        return stuff

    def ask(self, stuff):
        return self.say("Would you please " + stuff)

    def greet(self):
        return self.say("Hello, my name is " + self.name)
Modify this class to add a repeat method, which repeats the last thing said. Here's an example of its use:
>>>"
4.)!)
5.) Here are the Account and CheckingAccount classes from lecture:
class Account(object):
    """A bank account that allows deposits and withdrawals."""

class CheckingAccount(Account):
    """A bank account that charges for withdrawals."""
    withdraw_fee = 1
    interest = 0.01

    def withdraw(self, amount):
        return Account.withdraw(self, amount + self.withdraw_fee)
Modify the code so that both classes have a new attribute, transactions, that is a list keeping track of any transactions performed. For example:
>>> eric_account = Account("Eric")
>>> eric_account.deposit(1000000)  # depositing my paycheck for the week
1000000
>>> eric_account.transactions
[('deposit', 1000000)]
>>> eric_account.withdraw(100)  # buying dinner
999900
>>> eric_account.transactions
[('deposit', 1000000), ('withdraw', 100)]
Don't repeat code if you can help it; use inheritance!
6.). Here's an example:
>>>.
Write an appropriate Check class, and add the deposit_check method to the CheckingAccount class. Make sure not to copy and paste code! Use inheritance whenever possible. | http://www-inst.eecs.berkeley.edu/~cs61a/sp12/labs/lab7/lab7.html | CC-MAIN-2017-26 | refinedweb | 686 | 68.06 |
OpenShift Ansible's upgrade process has been designed to leverage the HA capabilities of OpenShift and allow for performing a complete cluster upgrade, without any application outages. Doing so is heavily dependent on the nature of your application as well as the capacity of your cluster. However, this post will cover how we perform upgrades, and demonstrate one without causing downtime for a sample application.
How Openshift Ansible Performs Upgrades
The basic steps for an openshift-ansible upgrade are as follows:
- Pre-Flight Checks
- Validates the state of your cluster before making any changes.
- Checks that inventory variables are correct, your control plane is running, relevant rpms/containers are available, and the required version of Docker is either installed or available.
- Runs in parallel on all hosts.
- Can be run by itself by specifying
--tags pre_upgrade with your ansible-playbook command.
- Control Plane Upgrade
- Create a timestamped etcd backup in parallel on all hosts.
- Upgrade OpenShift master containers/rpms on all masters in parallel. (no service restart)
- Restart master services, performed serially one master at a time.
This can be configured to restart the entire host system if desired.
Because this is performed serially, a load balanced control plane should see no downtime during this upgrade, however we're mostly interested in application downtime in this scenario.
- Reconcile cluster roles, role bindings, and SCCs.
- Upgrade the default router and registry.
- Node Upgrade
- This entire process is run serially on one node at a time by default.
Can be configured to run in parallel for a set number or percentage of nodes as of 1.4/3.4 with:
-e openshift_upgrade_nodes_serial="5"
Can be configured to run only on nodes with a specific label as of 1.4/3.4 with:
-e openshift_upgrade_nodes_label="region=na"
For our purposes we will stick to the default and only upgrade one node at a time
- Evacuate the node and mark it unschedulable.
- Upgrade Docker if necessary. (NOTE: with OpenShift all masters must run the node service too, so this step also covers upgrading Docker on the masters)
- Update node configuration and apply any necessary migrations.
- Stop all OpenShift services.
- Restart Docker.
- Start all OpenShift services.
- Mark node schedulable again.
- Wait for node to report Ready state before proceeding to the next node.
A Zero Downtime Upgrade Example
For the above upgrade process to result in no application downtime, we need the node upgrade phase to not take down so many nodes that we do not have capacity for our application to remain running. We also similarly need to ensure the router remains running on at least one of the infra nodes during node upgrade, and when we actually upgrade the router itself.
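The capacity reasoning above can be sketched as a toy simulation (my own illustration, not part of openshift-ansible): with the default batch size of one, at most one node is ever drained at a time, so replicas always have somewhere left to run.

```python
# Toy model of the serial node-upgrade loop: evacuate a batch, upgrade it,
# mark it schedulable again, then move on. Tracks the worst-case number of
# nodes that are unschedulable at any one moment.
def rolling_upgrade(nodes, batch=1):
    peak_drained = 0
    for i in range(0, len(nodes), batch):
        draining = nodes[i:i + batch]  # evacuated + marked unschedulable
        peak_drained = max(peak_drained, len(draining))
        # ...upgrade Docker/node components here, then mark schedulable again...
    return peak_drained

nodes = ["node%d" % i for i in range(6)]
print(rolling_upgrade(nodes, batch=1))  # -> 1: capacity never drops by more than one node
print(rolling_upgrade(nodes, batch=3))  # -> 3: openshift_upgrade_nodes_serial="3" would drain three at once
```

The same arithmetic is why the router needs at least one infra node untouched at every step, as discussed next.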
For this upgrade we'll be using a total of 13 AWS systems:
- 3 master+etcd hosts
- 3 infra nodes, 2 in zone 'east' and 1 in zone 'west' (NOTE: important caveat here, the OpenShift router runs on each of these, and 2 may not always be enough for reasons explained below)
- 3 regular nodes in zone 'east'
- 3 regular nodes in zone 'west'
- 1 load balancer running haproxy for both the API server, and our sample application
The ansible inventory file used to create the cluster can be viewed here.
In testing, if there were only two infra nodes and they were listed consecutively in the inventory, we did see that it was possible to have no running routers for a brief period of time. When the first infra node is being upgraded it has been evacuated leaving us with only one router running. Once the first node upgrade completes we mark it schedulable again and then evacuate the second node. However this could occur before Kubernetes has rescheduled the router back onto the first node, leaving no routers at all.
For this reason we utilized three infra nodes for this demonstration. In theory two should suffice if you were to control the ordering in your inventory during the upgrade.
Setup
For this test I performed a clean installation of OpenShift Enterprise 3.2, however it may be worth noting I did so using a more recent version of openshift-ansible targeted for 1.4/3.4. This allows me to benefit from automatic deployment of the router on all infra nodes, and the latest upgrade work, so results here should be valid for 1.3/3.3 -> 1.4/3.4 upgrades. However, if you use an older openshift-ansible version your experience may differ.
The [lb] host in the inventory above causes openshift-ansible to configure a HAProxy service on that host to load balance requests to the API. I then modified /etc/haproxy/haproxy.cfg on that host to also load balance requests to the openshift_ip addresses of my infra nodes, where the router will be running.
frontend helloworld
    bind *:80
    default_backend helloworld-backend
    mode tcp
    option tcplog

backend helloworld-backend
    balance source
    mode tcp
    server node0 172.18.5.105:80 check
    server node1 172.18.5.102:80 check
Sample Application
My sample application is a simple template using the hello-openshift container:
$ cat ha-upgrade-app.yaml
kind: Template
apiVersion: v1
metadata:
  name: "ha-helloworld-template"
objects:
- kind: "DeploymentConfig"
  apiVersion: "v1"
  metadata:
    name: "frontend"
  spec:
    template:
      metadata:
        labels:
          name: "ha-helloworld"
      spec:
        containers:
        - name: "ha-helloworld"
          image: "openshift/hello-openshift"
          ports:
          - containerPort: 8080
            protocol: "TCP"
    replicas: 4
    strategy:
      type: "Rolling"
    paused: false
    minReadySeconds: 0
- kind: Service
  apiVersion: v1
  metadata:
    name: "helloworld-svc"
    labels:
      name: "ha-helloworld"
  spec:
    type: ClusterIP
    selector:
      name: "ha-helloworld"
    ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
- kind: Route
  apiVersion: v1
  metadata:
    name: helloworld-route
  spec:
    host:
    to:
      kind: Service
      name: helloworld-svc
This container just returns a static 'Hello OpenShift!' response for each request. In the real world the success of this upgrade would depend heavily on the nature of your application, particularly in relation to databases and storage.
The template requests four replicas, which will spread out evenly across our two zones by default, leaving one free node in each zone.
$ oc new-project helloworld
$ oc new-app ha-upgrade-app.yaml
Monitoring
We use a fake route in our template above, so whatever host we are going to monitor the application and control plane from needs to have an /etc/hosts entry pointing to the load balancer public IP.
To monitor the application I logged responses from a request every second with a timestamp:
$ while :; do curl -s -w "status %{http_code} %{size_download}\\n" | ts; sleep 1; done
To monitor the control plane and log what pods were running every second, I ran a similar command on my master to list all pods in all namespaces:
$ while :; do oc get pods --all-namespaces | ts; sleep 1; done
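To turn those per-second logs into the failure counts reported below, a small helper like the following could be used (hypothetical — the post does not show how its six failures were tallied). It assumes each log line ends with `status <code> <size>`, as produced by the curl format string above.

```python
# Count responses per HTTP status code from the monitoring log.
# Assumed line format (from the curl -w loop above): "<timestamp> status <code> <size>"
from collections import Counter

def tally_statuses(lines):
    counts = Counter()
    for line in lines:
        parts = line.split()
        if "status" in parts:
            code = parts[parts.index("status") + 1]
            counts[code] += 1
    return counts

sample_log = [
    "Dec 06 12:00:01 status 200 17",
    "Dec 06 12:00:02 status 000 0",   # load balancer still routing to a drained node
    "Dec 06 12:00:03 status 503 0",   # router up, route not yet reloaded
    "Dec 06 12:00:04 status 200 17",
]
print(tally_statuses(sample_log))  # Counter({'200': 2, '000': 1, '503': 1})
```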
Performing the Upgrade
To upgrade we just modify our inventory to set
openshift_release=v3.3, and flip our yum repositories from 3.2 to 3.3:
$ ansible OSEv3:children -i ./hosts -a "subscription-manager repos --disable rhel-7-server-ose-3.2-rpms --enable rhel-7-server-ose-3.3-rpms"
We are now ready to upgrade:
$ ansible-playbook -i ./hosts playbooks/byo/openshift-cluster/upgrades/v3_3/upgrade.yml
Results
Logs from this upgrade are available here:
The upgrade took 32 minutes. (amount of time would vary depending on size of your cluster, network speed, and system resources)
Six individual requests to our application failed throughout the upgrade at separate points in time, all of which were related to restarts of the router pods on our infra nodes.
In the logs you will see three with status 000, caused by the load balancer not yet realizing the infra node is not healthy. The other three appear with status 503 where the router responds, but does not yet know about our route, a known issue being tracked here. All six were resolved within 1-2s.
A future enhancement is planned to support providing hooks which users could leverage to gracefully take infra nodes out of rotation on their load balancer of choice before we evacuate the node, and restore it after we have marked it schedulable again. Combined with a fix for the brief router restart issue above this should eliminate all failed requests we saw above.
Our actual application however, remained running throughout the upgrade. While a few requests may fail when the load balancer is detecting an infra node being down, other requests will continue to succeed if they land on the other routers. Because we only upgrade one node at a time, there will reliably be 2-3 other replicas of our application up and responding.
Zero downtime upgrades are possible with openshift-ansible, provided your application is capable, and your cluster is highly available with sufficient capacity.
I make frequent use of python's built-in debugger, but one obvious feature seems to be missing - the bash-like tab completion that you can add to the interpreter. Fortunately pdb's interactive prompt is an instance of Cmd, so we can write our own completion function.
Note: this uses rlcompleter, which isn't available on windows
Edit: updated to handle changes in local scope.
Edit: fixed start via 'python -m pdb ...'. Check the comments for details.
Discussion
Save the first part to a .pdbrc file in your home directory. If on startup pdb finds a .pdbrc file in either a user's home directory or in the current directory it runs each line as though it were typed into the prompt. Unfortunately that makes it impossible to write multi-line functions in .pdbrc. So I have borrowed from and used a separate file for the custom function that we want to do the real work.
When pdb starts, this replaces its default completer function with that of rlcompleter before the completer function has been set. You should find that tab in pdb now completes names and provides proper object inspection, even as you move around the stack. pdb's default completer function only completes pdb commands, and most of those have single character abbreviations anyway. I got bored with typing "!dir([object])"
'complete' also tries to keep the completer class's namespace up to date, using curframe.f_locals.
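Since the recipe's file itself isn't reproduced above, here is a rough reconstruction of that idea (the function name and structure are my assumptions, not the original code):

```python
# Replace Pdb's command-only completer with rlcompleter, feeding it the
# namespace of the frame currently being debugged (curframe), so completion
# stays up to date as you move around the stack.
import pdb
import rlcompleter

def complete(self, text, state):
    ns = self.curframe.f_globals.copy()
    ns.update(self.curframe.f_locals)  # reflect changes in local scope
    return rlcompleter.Completer(ns).complete(text, state)

pdb.Pdb.complete = complete
```

With readline active, Tab at the (Pdb) prompt then completes any name visible in the current frame, not just pdb commands.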
If you would like to use tab completion, but not have it load for every session or reflect changes to the local scope then running:
import rlcompleter;import readline;readline.set_completer(rlcompleter.Completer(locals()).complete)
In a pdb prompt will enable tab completion. However, adding that to your .pdbrc file doesn't work as pdb seems to set its default completer function after the .pdbrc commands have been run.
Hope this saves you as much typing as me!
You need to import pdb in .pdbrc otherwise you get a pdb undefined error message.
hmm, even with that, I can't get tab completion to work. It's fine if I do readline.set_completer(rlcompleter.Completer(locals()).complete) manually, but the .pdbrc method fails. This is with python-2.4.3 on Gentoo.
python -m grabs our Pdb class! Thanks for the feedback. Here's what I think is going on:
tab completion works fine without the import provided you're stepping into your code with
However if you step in with "python -m pdb myscript.py" then python's pdb module creates an instance of the Pdb class before we can override its complete function. Bummer.
This is fixed! Thanks for the comments. After a bit of testing I discovered that this wasn't working with 'python -m pdb myscript.py' because we are changing the complete function on the Pdb class object in our current code block. However, running pdb via "python -m" uses execfile to start your script, and that is executed in a new code block. The side effect of this is that from .pdbrc.py the Pdb class object is not the one that is being used to run this pdb session. To get a reference to the original class object one needs to jump out of the current frame back to the frame that spawned this script. sys._getframe().f_back does that and f_globals['Pdb'] then grabs the original Pdb class.
1. I will need to create A stack which has elements inserted or deleted on the top of the list. It is a first in last out ordering of elements. I will maintain a pointer to the top of the stack that is an array. Write functions for inserting an element (call it push), deleting an element (call it pop), checking if the stack is full ( call it check_full) and checking if it is empty (call it check_empty).
2. An integer variable p that holds the subscript value of the array is a pointer to stack. The value of p is the location where the next element should be inserted or removed. The value of p, the array itself and the element to be inserted should be passed to the push function. The array and the address of p should be passed to the pop function, which should return the top most element in the stack. Use these functions of a stack to convert numbers from base 10 to base 2(binary), base 8(Octal) or base 16(hexadecimal). Given a number 100, the following process can be used to convert it to base 2. Divide 100 by 2, push the remainder into the stack, divide the quotient by 2 and push the remainder into the stack and keep doing this(dividing the quotient by 2) till the quotient is 0.
3. Then pop all elements (print them out when a number is popped from the stack and the number you get is the binary representation of 100. To get the hexadecimal representation of 100, divide the number by 16 till the quotient is 0. For a hexadecimal representation, i have to use characters A, B, C, D, E and F for values from 10 through 15 respectively.
The main part of the program should ask for the number and the new base and have a while loop in which you do the mathematics and call the function push. Have another while or for loop to do the pop operations.
#include <iostream> using namespace std; #define MAX 100 // MAXIMUM STACK CONTENT class stack { private: int arr[MAX];// Contains all the Data int top; //Contains location of Topmost Data pushed onto Stack public: stack() //Constructor { top=-1; //Sets the Top Location to -1 indicating an empty stack } void push(int a) // Push ie. Add Value Function { top++; // increment to by 1 if(top<MAX) { arr[top]=a; //If Stack is Vacant store Value in Array } else { cout<<"STACK FULL!!"<<endl; top--; } } int pop() // Delete Item. Returns the deleted item { if(top==-1) { cout<<"STACK IS EMPTY!!!"<<endl; return NULL; } else { int data=arr[top]; //Set Topmost Value in data arr[top]=NULL; //Set Original Location to NULL top--; // Decrement top by 1 return data; // Return deleted item } } }; int main() { stack a; char arr[100]; int index = 99; arr[index] = '\0'; int p; cout << "what number do you want to use for conversion? \n" << endl; cin >> p; int neg; neg = 0; if(p < 0) { neg = 1; p *= -1; } if (p <= 100) { do { int d; d = p % 2; arr[--index] = (char)(d + '0'); p = p >> 1; } while( p > 0); if(neg) { arr[--index] = '-'; } cout << "\ncongrats your binary number is: \n" << endl; cout << &arr[index] << endl; } else { cout << "Invalid operation choice." << endl; } int n, r[10], i; cout << "enter a number to convert to hexadecimal: " << endl; cin >> n; for(int i=0;n!=0;i++) { r[i]=n%16; n=n/16; } i--; for(;i>=0;i--) { if(r[i]==10) cout << "A"; else if(r[i]==11) cout << "B"; else if(r[i]==12) cout << "C"; else if(r[i]==13) cout << "D"; else if(r[i]==14) cout << "E"; else if(r[i]==15) cout << "F"; else cout << "%d" << r[i] << endl; } cout << "\n" << endl; return 0; } | http://www.dreamincode.net/forums/topic/43647-stack-and-pointers-with-binary-conversion/ | CC-MAIN-2018-17 | refinedweb | 635 | 64.14 |
New signing Roman Kelner will be out for several weeks
Bicknell rues Bees' missed chances
By Alan Manicom
December 02, 2008
Adam Bicknell believes Bracknell Bees paid the price for missed chances in Sunday's 4-0 home defeat by Guildford Flames – and insisted that the scoreline flattered the English Premier League leaders.

Despite being without five senior players due to injury and suspension, Bees outshot Guildford 32-25.

They piled on the pressure after going behind to an early goal by ex-Bee Lukas Smital, but were denied by another of their former players, netminder Joe Watkins, who was in inspired form.

Bicknell said: "I think if we'd scored early in the first or second period it might have been a different game. We had loads and loads of chances.

"But everybody gave their all and I couldn't ask for any more effort. It's a hard loss to take.

"Sure the legs went a little bit, but I don't want to make excuses.

"I think a 4-0 scoreline flattered them a little bit. They rode their luck.

"Watkins made some really awesome saves. I know he's been struggling for form and I think he's been under a bit of pressure so fair play to him because he kept them in it. I think they owe him a few beers for that one."

Bicknell, who has been sidelined by a pinched cartilage in his knee, revealed that he is considering icing next weekend to help Bees through their current manpower shortage.

Defencemen Sam Oakford (groin strain) and Scott Moody (knee) are still a couple of weeks away from returning, while new Czech import Roman Kelner is the latest player to join the injury list.

Bicknell said: "Roman had a hernia operation on Friday.

"He's going to be able to ride a bike this week, but it'll be a couple of weeks before he can do any contact stuff."

It is a big blow for 6ft 3in Kelner, who has just joined Bees as a replacement for axed forward Jeff Hutchins, and was only two games into his comeback from a two-year break from competitive hockey.

Bicknell added: "The team had started to settle down a bit and Roman was giving us another nice dimension, but we've been chopping and changing all year."

Bees' boss admitted he was disappointed by his side's failure to score from seven powerplays against Guildford, but added: "Of the players that we had out, Roman and Matt Foord are both important parts of our powerplay.

"Maybe we should have converted those powerplays, but Guildford have the best goalkeeper in the league."

Bees should at least have Foord back for next Saturday's game at Telford Tigers and Sunday's home clash with Swindon Wildcats.

He was suspended for the Guildford game after receiving a match penalty in the previous night's 6-2 win at Isle of Wight Raiders.

The British forward scored twice before being thrown out in the penultimate minute of Saturday's match for a roughing offence after a bust-up with Isle of Wight's Alan Green.

Bicknell, though, believes Foord's automatic one-match ban is unlikely to be increased by the league's disciplinary officials this week.

Bees' boss said: "He should get just the one game hopefully. That's what the ref said.

"I thought it was a bit of a harsh call."
Developing Secure Applications in a Multicloud Environment: Python Code Sample Appendix
Developing Secure Applications on the SAP Cloud Platform
In this blog series, we will explore developing secure applications in a multi-cloud Cloud Foundry environment.
Think Global, Act Local
Sample Code
We start our project with the most basic Python web application.
Create a directory. for example, “sample” and save this code snippet as hello.py.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello World"
We will use Flask to create us a web application. Flask is a micro web framework written in Python. For more information and a quick start, see flask.palletsprojects.com/en/1.1.x/quickstart/.
When we access the URL root ('/'), the web server will return a string of text: feel free to adjust.
Create Virtual Env
Before we deploy our application to the cloud, it can be helpful to first test it locally but this does mean we need to install some software. Feel free to skip this step and go to the next section.
For your OS, install Python, pip, and Flask. Flask comes with its own dependencies, like Jinja and Werkzeug. See installation for Flask platform instructions for Linux/macOS and Windows.
# Linux/macOS python3 -m venv sample source sample/bin/activate
# Windows py -m venv sample .\sample\Scripts\activate
The venv command creates a virtual environment (here named sample but you can use any name your want).
Pip Install
Next we run the Python package manager pip command to upgrade to the latest version as a best practice and then install Flask. For the ins and outs about pip, see the pip documentation)
pip install --upgrade pip pip install Flask
Flask comes with its own dependencies which are automatically installed: courtesy of pip.
Command venv has created the sample directory with bin, include, and lib. Command pip has installed the packages under lib.
Once we run the program, the executable bytecode will be created in __pycache__.
Flask Run
There are different ways how we can run our program. Define the FLASK_APP environment variable (use SET on Windows) and execute flask run.
export FLASK_APP=hello flask run
Note that without further instructions Flask runs on the loopback adapter (localhost at 127.0.0.1) and hence only accessible from your computer (might be a good idea at this stage).
We can pass the host parameter to specify a specific network interface (or all NICs with 0.0.0.0). We can pass command and variable as a single statement as well.
FLASK_APP=hello flask run --host=0.0.0.0 -p 1234
We can pass multiple environment variables with the command or, alternatively, store them in a .flaskenv file (although this requires python-dotenv to be installed).
Alternatively, we can also add host and port to hello.py and run the command using the python command.
app.run(host='0.0.0.0', port=5000)
python hello.py
When you are done with local development and testing, exit the environment with command:
deactivate
SAP Cloud Foundry
Sample Code
For this tutorial we will be using the Cloud Foundry command-line interface, cf CLI, which you can install from GitHub: github.com/cloudfoundry/cli/blob/master/README.md.
We also assume that you have a trial or enterprise account on the SAP Cloud Platform. If not, see SAP Cloud Platform Developer Onboarding | Hands-on Video Tutorials for how to get started.
We continue with the server.py sample code from SAP Cloud Platform documentation:
The code is very similar to hello.py above. The main difference is that a port variable is defined using os.environ.get and for this we need to add an import statement. This would typically be used in a try block: see if it is free and if so, assign it. The if __name__ == line is a bit of boilerplate to indicate where our program starts but like the port assignment not really necessary for our sample app.
import os
from flask import Flask

app = Flask(__name__)
port = int(os.environ.get('PORT', 3000))

@app.route('/')
def hello():
    return "Hello World"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=port)
Push an App
To run the app on the SAP Cloud Platform, connect to your Cloud Foundry (trial) subaccount, org, and space.
cf api cf login | cf l cf target | cf t
To deploy the application to Cloud Foundry, we use the cf push command.
When we run this command as-is, without argument, the execution will fail with an incorrect usage message: we need to provide the app name or use a manifest.yml file.
The command syntax is returned. For the Cloud Foundry documentation, see the Cloud Foundry CLI Reference Guide. See also Pushing an App.
When we run the command with only a name a staging error is returned:
No start command specified by buildpack or via Procfile.
App will not start unless a command is provided at runtime
According to the Cloud Foundry documentation, section Python Buildpack, the Python buildpack is used if a requirements.txt or setup.py file is detected in the root directory of the project.
Without these files, CF had no idea which runtime environment to provide.
Changing the file name server.py to setup.py solves the buildpack issue; however, the script cannot be executed because the Flask module is not found.
Application Dependencies
Application dependencies for a Python program are defined in the file requirements.txt. Unless a specific version is required, just the name suffices. This also applies to the dependencies of the dependency, Flask in our case. We could have specified all dependencies but could also leave this up to pip, the package manager.
# Flask==1.0.x
Flask
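If reproducible deployments matter, each dependency can be pinned instead — the version numbers below are only illustrative, not prescribed by the text:

```
# requirements.txt -- fully pinned variant (illustrative versions)
Flask==1.1.2
Jinja2==2.11.2
Werkzeug==1.0.1
```

Pinning trades automatic updates for repeatable staging: the buildpack installs exactly the same packages on every push.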
When we run cf push myapp again we can see that the Flask (and its dependencies) are installed.
Start Command
cf push myapp -c "python -m server"
The app is created with name = myapp, path = current directory; route = appname + cfapps + API domain.
Staging and Running Apps
The file is uploaded and the staging prepared by downloading the buildpacks.
First a container is created by a cell. The Python buildpack is downloaded and Python and pip are installed. Pip installs Flask and dependencies. The result is uploaded as a droplet and the container stopped and destroyed.
The pesky warnings are unfortunate. These are caused by the buildpack. To get rid of them we need to create our own buildpack, a topic we return to below.
The illustration below shows the cf push process. A Diego cell is a virtual machine. One is used to stage the app and create the droplet using a container (first created, then destroyed). Another to run the app. For the details, see How applications are staged.
Side Note: Cloud Foundry was originally developed in Ruby. The component responsible for running the droplets was the Droplet Execution Agent (DEA). When Cloud Foundry was re-architected and rewritten in Go, DEA became Diego.
The same information is provided by the SAP Cloud Platform Cockpit. When we click the application route we are directed to our app.
The route needs to be unique in our domain and in shared environments the myapp route may already have been taken. We fix this below.
Note that our simple Hello World app allocates a fair amount of resources: 1024 MB of memory and disk quota, while only a fraction (15.9 MB of memory) is actually needed.
Attributes and Manifests
Attributes
We can control memory and disk allocation with the attributes -m and -k.
Random route generates a unique name for our app.
cf delete myapp
cf push myapp -m 128M -k 256M --random-route -c "python server.py"
To update the app, we can simply run the cf push command again.
Should you want to clean up first, use the cf delete command.
The random route generated this time is myapp-execellent-warthog-by but it will be different when we delete and push.
Note that when you delete before push you will eventually run out of your random routes quota. To reset the counter, use command
cf delete-orphaned-routes
Manifest
As the cf CLI already indicated, instead of passing attributes on the command line we can also use a manifest.yml file.
When we enter our attributes in a manifest.yml file we need to do it likes this and exactly like this (although we can move the optional attributes around). For the full specs, see the App Manifest Attribute Reference.
---
applications:
- name: myapp
  buildpacks:
  - python_buildpack
  path: .
  memory: 128M
  disk_quota: 256M
  random-route: true
  command: python server.py
Should the route be fixed, you can use the host attribute instead of random-route. This will need to be unique for the domain, as mentioned.
Path points to either the directory where the code is located or to a ZIP or JAR file with the code. As we have already seen, without this parameter the current directory is assumed; the . (dot) that is. In the manifest file above the path attribute is included but superfluous as it contains the default value.
To try out the zip format, compress server.py and requirements.txt into an archive and provide this as path. For larger projects, this shortens the upload time.
The CLI looks for a manifest.yml in the current directory. When stored elsewhere, use the attribute -f with the path to where the manifest is.
Buildpacks
Courtesy of SAP
In the manifest above we explicitly set the buildpack we want to use. Cloud Foundry uses a buildpack to create a droplet (a tarball or zipped archive stored in a blobstore), which is later used to run the app.
The cf buildpacks command lists all available buildpacks provided by SAP.
Latest and Greatest (?)
However, you can download a more recent version from Cloud Foundry, if needed, or build your own. For the details, see Python Buildpack and Buildpacks.
---
applications:
- name: myapp
  buildpacks:
  -
  path: .
  memory: 128M
  disk_quota: 256MB
  random-route: true
  command: python server.py
A buildpack might contain several runtime versions. For example, buildpack 1.7.21 contains ten Python binaries from 3.5.9 (lowest) to 3.8.5 (highest). To specify the exact version to run, add a runtime.txt file to your project.
python-3.8.x
For Python buildpack release information, see github.com/cloudfoundry/python-buildpack/releases.
Passing Parameters
Procfile
When we run our app locally, besides python server.py we showed that we could also start our app using an environment variable export FLASK_APP=hello.py with command python -m flask run.
Port numbers are configuration and configuration should not be in code. Let’s simplify our web app by removing the port.
Save as web.py.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World! (again)'
We can now remove the command from the manifest and add it to a file named Procfile (no extension).
web: FLASK_APP=web.py python3 -m flask run --host=0.0.0.0 --port=$PORT
When a Procfile is detected, the command attribute is ignored.
For more information, see Production Server Configuration (Cloud Foundry) and The Procfile (Heroku).
---
applications:
- name: myapp
  buildpacks:
  - python_buildpack
  path: .
  memory: 128M
  disk_quota: 256MB
  random-route: true
User Provided Variables
Alternatively, we could have used the env attribute in the manifest for FLASK_APP instead of Procfile.
---
applications:
- name: myapp
  ...
  command: flask run
  env:
    FLASK_APP: web.py
cf push -f manifest.yml -i 3
Docker (Bonus Track)
Docker and Diego
As we have seen, Cloud Foundry runs (Garden) containers inside virtual machines (cells). If you wish, you can also run Docker containers in a Diego cell. This is not relevant for our scenario and would require additional configuration to work with service instances (SAP HANA Cloud and XSUAA), but it serves to clarify the inner workings of Diego: just another VM + container architecture.
Ian Walker (I.Walker@compserv.gla.ac.uk)
Wed, 11 Feb 1998 12:02:33 +0000 (GMT)
Andrew,
On Tue, 10 Feb 1998, Andrew Scherpbier wrote:
> Ah! I see the problem, I think. You need to move the extern "C" ...
> line out of any method.
> The best thing is to move it to the top of the file, right after the
> includes.
Thanks for the suggestion. Although there are quite a few warning messages
still present, I now have a compiled and (after a few tests, it seems)
working version of htdig. As well as this change to ./htlib/Connection.cc
I did have to leave other changes I had made in the files
./Makefile.config
./htfuzzy/Exact.cc
./htfuzzy/Substring.cc
./htdig/Document.cc
I have kept these listed at the end of this message in case they are of
interest.
Thanks again,
Ian Walker
Cc: Andy Atkinson
--
Dr Ian W Walker            DEC Systems Coordinator
UNIX Systems Support
-------------------------------------
Computing Service          E-mail : i.walker@compserv.gla.ac.uk
Glasgow University         Phone  : 041-330-4892
Glasgow G12 8QQ            Fax    : 041-330-4808
-------------------------------------------------------------------------------
Makefile.config in ./ :
----------------------
# diff Makefile.config.orig Makefile.config
28c28
< LIBS= -lcommon -lht -lgdbm -lrx
---
> LIBS= -lcommon -lht -lgdbm -lrx -lnsl -lsocket
Exact.cc in ./htfuzzy:
---------------------
# diff Exact.cc.orig Exact.cc
50a51
> return 0;
Substring.cc in ./htfuzzy:
-------------------------
# diff Substring.cc.orig Substring.cc
81a82
> return 0;
Document.cc in ./htdig:
----------------------
# diff Document.cc.orig Document.cc
254c254
< #ifdef _AIX
---
> // #ifdef _AIX
256,258c256,258
< #else
< sa.sa_handler = (SIGNAL_HANDLER) timeout;
< #endif
---
> // #else
> // sa.sa_handler = (SIGNAL_HANDLER) timeout;
> // #endif
377c377
< #ifdef _AIX
---
> // #ifdef _AIX
379,381c379,381
< #else
< sa.sa_handler = (SIGNAL_HANDLER) timeout;
< #endif
---
> // #else
> // sa.sa_handler = (SIGNAL_HANDLER) timeout;
> // #endif
This archive was generated by hypermail 2.0b3 on Sat Jan 02 1999 - 16:25:41 PST | http://www.htdig.org/mail/1998/02/0024.html | CC-MAIN-2016-30 | refinedweb | 299 | 69.99 |
C# Corner
Learn how to utilize the Roslyn Scripting API to host a C# scripting engine in your applications.
Getting Started
The first step is to procure and install the Roslyn CTP bits from bit.ly/rThx5k. Once Roslyn is set up, open up Visual Studio and you should see the Roslyn templates available for both C# and VB.NET, as shown in Figure 1.
Figure 1: Roslyn Visual Studio Templates
Next create a new Console Application, open up Program.cs and add the following using statements.
using Roslyn.Scripting;
using Roslyn.Scripting.CSharp;
You're now set up to run any of the code examples in the remainder of the article.
Executing a C# Expression
One of the most basic scripting needs is the ability to execute a given expression. Accomplishing this task with Roslyn is very similar to using the Dynamic Language Runtime (DLR). First create a ScriptEngine instance for your target language, in this case C#:
ScriptEngine scriptEngine = new ScriptEngine();
Then use the Execute method to run the given expression:
scriptEngine.Execute("1+1");
The expression above would return an object with a value of 2.
If you know the return type, you can also use the generic Execute<T> method. For example, to get an integer value from a numeric expression, run the following:
int result = scriptEngine.Execute<int>("20+22");
Executing a Code Block and Classes
The Execute method can also be used to execute code blocks, including control statements and even an entire class. To execute a foreach script block to increment each item in a collection, for example, you would call:
scriptEngine.Execute("foreach(int item in numbers) { item += 1; };");
To execute an entire class, you can simply pass it to the Execute method. For example, to execute a custom class named DemoClass that contains one method named Test you would run:
scriptEngine.Execute("public class DemoClass {" +
  "  public string Test() { return \"Test Method\"; }" +
  "}");
Maintaining an Execution Context
The Execute method contains an overloaded signature for passing a Session object. The Session object is used for maintaining an execution context. To create a context, use the Session.Create() method:
Session mySession = Session.Create();
scriptEngine.Execute("int number = 40;", mySession);
scriptEngine.Execute("number += 2;", mySession);
Executing a File
The engine can also execute code from a file, whether a regular C# source file or a C# script (.csx) file:

scriptEngine.ExecuteFile("DemoClass.cs");
scriptEngine.ExecuteFile("DemoScript.csx");
Creating a Read Evaluate Print Loop (REPL)
This is just the tip of the iceberg; with a little work, you can create a REPL quite easily. For the sake of clarity and clean code, I'll encapsulate the Scripting API access into a new class named ScriptHost. The ScriptHost class exposes the Execute, Execute<T>, and ExecuteFile methods of the Roslyn C# ScriptEngine.
In addition, I used an overloaded ScriptEngine constructor to pass in the assemblies and namespaces that will be available to the consumer of the ScriptHost. The ExecuteFile method is not used in the demo application, but could come in quite handy for loading utility classes for use by the scripting engine. See Listing 1 for the full contents of the ScriptHost class.
In the console application, I make use of the ScriptHost class to execute the user's given C# expressions in a loop. If the user enters the word "quit", the program silently exits. I opted to catch any Exceptions that might occur and display them to the user. See Listing 2 for the full REPL application code, and Figure 2 for the resulting output.
Looking Forward
The Roslyn project is looking very promising, and already has a lot of powerful APIs available for developers to wield. Today I've shown only the Scripting API, which can be used to host C# or VB.NET in a .NET application. Stay tuned for the next installment, covering how to use the Compiler APIs to analyze C# syntax and semantics and compile code.
Opened 3 years ago
Last modified 3 years ago
If you want to do anything with settings, the settings module is (unnecessarily) hard to work with.
Configuration shouldn't be a module, but instead an object in a module. And configuration setup shouldn't happen implicitly on import, but instead when specifically invoked (passing in the module name for the settings). Then the standard server setup can invoke this settings setup using the normal environmental variable. Or other people can do other things.
Also, there should be a way to swap in new settings dynamically, using threadlocal storage. But at least if the object is there you can monkeypatch something that does swapping. Hacking in swappable modules is harder.
AFAICT, everyone does "from django.conf import settings", so potentially settings can be an object in the django.conf module, without much impact to existing code. Or a new module with an object can be created, and django.conf.settings is a shell fake module around that.
I definitely +1 this. It won't just make using Paste easier (as I suppose that's why Ian is looking at the code), but will make other things possible (I especially like the idea of configuration switching). And it might even - with a bit of monkey patching - make it possible to run multiple Django apps within one server context, which could be really helpful if you want to use Django apps within a larger WSGI setup.
Good call. Let's make this happen.
I'm not sure whether it'd be possible to write it in a backwards-compatible way, because some code does this:
from django.conf.settings import SOME_SETTING
That said, we shouldn't let backwards-compatibility stand in the way, as this would be an important improvement.
54 occurences of that in 46 files. Sounds like something that could be cleaned up in a few hours in magic-removal. After that cleanup it shouldn't be too problematic to switch over to a settings object instead of a settings module. Adrian: feel free to assign the ticket to me (or other tickets, to keep you free for dev stuff) if you think I should take up on it.
Hugo -- it's all yours. :) BTW, you have commit access on magic-removal, so have at it!
Before you do any code, though, we should spend a tiny bit more time on design. What were you thinking the interface would be?
The first thing I would do would only be going through all the code to change any "from django.conf.settings import SETTING" into "from django.conf import settings" and then qualifying all those settings in the file itself with "settings." - that's not much of an interface, just stupid editing ;-)
The second step would be the settings object, of course - and I agree that we should throw some thinking into that. My preference would be to have a threadlocal that's automatically populated by loading the config. That would allow to do SIGHUP stuff by just running the settings-loading-function in every thread or process of a server (for example it can be done in the FLUP framework by signalling all children when the master receives a SIGHUP). That way you could have nice "life transitions" of settings, because you don't need to kill the server. Or do it like 'apachectl graceful' - stopping threads/processes when they are done handling the current request and restarting them fresh (so they reload the config). The exact way to do the reload is up to the server, but the interface would have to provide the needed functionality of config reloading.
One way I did something like this in another project (TooFpy?): the config still was in a standard module file. The difference was just the settings loader, which pulled the globals out of that settings module into the (in that project global) configuration object. That way the settings themselves won't change in their syntax, it's just the internal semantics that would switch to a settings object. This would especially keep the nice "from othersettings import *" mechanism intact.
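The loader approach described here can be sketched in a few lines of Python. This is illustrative only: it is not Django's actual implementation, and the class and method names are made up.

```python
import importlib

class Settings:
    """Pull the UPPER_CASE globals out of a settings module into a
    plain object (sketch of the idea discussed in this ticket)."""
    def __init__(self, settings_module):
        mod = importlib.import_module(settings_module)
        for name in dir(mod):
            if name.isupper():  # settings are upper-case by convention
                setattr(self, name, getattr(mod, name))

    def reload(self, settings_module):
        # Re-execute the module so edits on disk are picked up,
        # then re-populate this object in place.
        importlib.reload(importlib.import_module(settings_module))
        self.__init__(settings_module)
```

With this shape, "from django.conf import settings" could keep working by instantiating a single Settings object in django/conf, while servers that want live reloads call reload() per thread or process.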
One thing we would need to address with that mechanism is module caching, of course - so either we would need to use compile/exec instead of importing, or remove modules from sys.modules after doing the import, or using reload on modules to make sure we get the most current version. This does get complicated with projects where settings are spread over multiple modules, though - we need to know what modules to reload (or what modules to remove, of course).
Removing after import is much simpler, as we just could keep old sys.modules values and just remove those that are new. The whole config loading would have to be secured by a lock, though, to make sure that no two threads are in the same code segment.
Sounds good. Go for it!
Please remember to follow the new "Committing code" guidelines -- keep commits as granular as possible, etc. I just added that bit of docs/policy the other day.
Another general issue for the swappability of configuration is places where the configuration is used on module import. I believe that django.core.db in particular loads up DATABASE_ENGINE and sets things based on that.
Anyway, as far as actually swapping settings, there's some code in djangopaste.wsgi that does this, if you want to look at it (it actually effectively swaps the module itself). But that doesn't really help when it comes to configuration that is used at import time -- to handle that each place where that happens would have to have a threadlocal value put in, and swapped at runtime.
A first step is done now by rewriting all settings accessing parts to allways going through "settings.". Next step would be to change settings from a module to an object. The last step would be to collect places where settings are captured at import time.
(In [2031]) Refs #1212 - moved settings from a dedicated module into a dedicated global instance
I close this for now, if somebody stumbles over settings that are captured on load time and not reloaded later on, please open up new tickets for the specific places where this happens.
Snap to Your Tiles
And the user grabs this list with their finger and pans some distance to the right. There’s a chance the list will end up landing in a position like this…
Notice that the tiles at left are cutoff. The list has panned some arbitrary distance and stopped where fate stopped it. I know it’s all scientific, but it’s fun to say that fate landed it here.
But what if you don’t want to put your app in fate’s fickle hands and would rather stop every time at a tile’s edge. That flick from my last example, then, should find you here…
…with the edge of tiles 9, 10, and 11 neatly lined up on your left margin.
Is that possible? Of course it is.
Is it easy? Yep. That too.
Once again, the custom CSS properties in Windows 8 come to the rescue. I’m going to talk about a couple of properties in the -ms-scroll* area. If you want a good list of the available properties, just type -ms-scroll in a CSS sheet and let IntelliSense be your guide.
We would implement this using snap points. Snap points are an IE concept. I don’t know if they’ve been suggested to the W3C for consideration in the CSS standard (I couldn’t find anything that indicated they have), but they should be because they’re super helpful.
If you have a container whose content exceeds the boundaries of the container, then scrolling is necessary to view all content, right? And when a user flicks with his finger, the contents scroll within the container and upon letting up his finger, the user watches his content scroll for a bit longer with some apparent inertia, right? Well, a snap point is a location in that content where it makes sense for that content to stop scrolling. You can define snap points in one of two ways: mandatory or proximity.
Defining a container to use mandatory snap points means that it will always stop at the nearest snap point. It will never stop somewhere in between. Defining it to use proximity snap points, however, means that if it ends up close enough to a snap point then it will find its way there, but if it’s not close enough then it will be fine with coming to rest between points.
Here’s the CSS you should add to achieve the above…
.snappoints #list .win-viewport { -ms-scroll-snap-x: mandatory snapInterval(0px,200px); }
Let me break that down for you.
.snappoints is the name of my page, which in Windows 8 navigation apps automatically gets a class with your page’s name. So .snappoints essentially namespaces this CSS to this page.
#list is the ListView control on my HTML. I manually gave it the ID of list. BTW, I recently discovered that if you know you’re only going to have a single list on your page, it might be easier to forgo the naming of it and instead just refer to it with [data-win-control=WinJS.UI.ListView]. Nice, eh?
.win-viewport is the viewport of my ListView. If you work with the ListView much, and haven’t seen it already, you should definitely check out Styling the ListView and its items from the Dev Center. In that article, it breaks down the components of the ListView so you can have a shot at knowing how to style it. Here’s how it visually defines the win-viewport…
The first part of the property (mandatory) indicates that we are using mandatory snap points, so as I said before, we are assured of coming to rest on a snap point and never in between.
The second part of the property (snapInterval(0px,200px);) indicates that I want to start the content at the very beginning (0px) and I want a snap point every 200px. I have to know that my tiles are 200px wide to make this work. CSS is not actually recognizing a tile’s edge, just points every 200px.
I was a little bummed that I couldn’t find a way to indicate manually (with CSS properties on HTML elements I guess) where I want snap points to be and then just have the container recognize them, but this way works pretty well too.
That’s it. Happy snapping!
And it hit me again: JavaScripts loose typing that isn’t very loose when you think it is.
I wrote some functions, for example
Number.prototype.f = function(){
    var k = this;
    ...
    if(k.isInt()) ...
};
The function
isInt is the polyfill for ECMAScript 6
Number.isInteger as a Number prototype. Inside this prototype is a check for
NaN and finiteness. It’s a very simple thing:
Number.prototype.isOk = function(){
    return ( !isNaN(this) && Number.isFinite(this) ) ? true : false;
};
Number.isFinite is a built-in in Firefox now, and it is also the culprit here: it returns false even if it gets a good, finite number. There’s also a polyfill over at developer.mozilla.org, so with a little name changing, I was able to find the problems.
The polyfill:
// Number.isFinite polyfill
//
if (typeof Number.isFinite2 !== 'function') {
    Number.isFinite2 = function isFinite2(value) {
        // 1. If Type(number) is not Number, return false.
        if (typeof value !== 'number' ){
            return false;
        }
        // 2. If number is NaN, +∞, or −∞, return false.
        if ( value !== value || value === Infinity || value === -Infinity) {
            return false;
        }
        // 3. Otherwise, return true.
        return true;
    };
}
The summary to the function Number.isFinite at MDN states
In comparison to the global isFinite function, this method doesn’t forcibly convert the parameter to a number. This means only values of the type number, that are also finite, return true.
That’s what I wanted, no isFinite("123") returning true, good!

At least that’s what I thought.
You might have guessed it already: the “Number is an Object and not a Literal” train hit me again 😉
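The gotcha is easy to reproduce on its own: a boxed Number answers "object" to typeof, and in non-strict code, this inside a Number.prototype method is exactly such a boxed object — which is what trips up the polyfill's typeof guard.

```javascript
// A primitive number and a boxed Number object answer typeof
// differently; a `typeof value !== 'number'` guard rejects the
// boxed form even though it wraps a perfectly good number.
const primitive = 42;
const boxed = new Number(42);
console.log(typeof primitive);        // "number"
console.log(typeof boxed);            // "object"
console.log(boxed instanceof Number); // true
```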
But that’s why I wrote xtypeof. Here in its basic version:
function xtypeof(obj){
    var tmp = typeof obj;
    if( tmp !== "object"){
        return tmp;
    }
    else{
        var toString = Object.prototype.toString;
        tmp = toString.call(obj);
        if(tmp !== "[object Object]"){
            return tmp.slice(8, -1).toLowerCase();
        }
        return "object";
    }
}
So changing to that and all is well and peaceful again?
Nope, of course not. I found out, after some fiddling, that the clever way to test for NaN, value !== value, isn’t so clever at all. After exchanging it with the global isNaN() it works as expected. Expected by me, that is 😉
Number.isFinite2 = function(value) {
    // 1. If Type(number) is not Number, return false.
    if (xtypeof(value) !== 'number' ){
        return false;
    }
    // 2. If number is NaN, +∞, or −∞, return false.
    if ( isNaN(value) || value === Infinity || value === -Infinity) {
        return false;
    }
    // 3. Otherwise, return true.
    return true;
};
There is also a problem with the global isNaN() (not really a problem, I think) but that gets caught by the xtypeof check.

A standard-conforming check for an IEEE-754 NaN would be the following:
// Shamelessly stolen from the SunPro code
/*
 * ====================================================
 * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
 *
 * Developed at SunPro, a Sun Microsystems, Inc. business.
 * Permission to use, copy, modify, and distribute this
 * software is freely granted, provided that this notice
 * is preserved.
 * ====================================================
 */
function isnan(x){
    var hx, lx;
    var double_int = new DataView(new ArrayBuffer(8));
    // needs nevertheless a check if it is really a number
    double_int.setFloat64(0, x);
    hx = double_int.getInt32(0);
    lx = double_int.getInt32(4);
    hx &= 0x7fffffff;
    hx |= (lx|(-lx))>>>31;
    hx = 0x7ff00000 - hx;
    return (hx>>>31)|0;
}

Number.prototype.foo = function(){
    var a = this;
    return isnan(a);
};
You don’t need a check if you use it this way, because a Number is always a number 😉

Test:
(Number.NaN).foo()          /* 1 */
(123).foo()                 /* 0 */
("123").foo()               /* throws exception: "123".foo is not a function */
Math.sqrt(-1).foo()         /* 1 */ /* IEEE 754 7.2g */
(0/0).foo()                 /* 1 */ /* IEEE 754 7.2e */
(Infinity/Infinity).foo()   /* 1 */ /* IEEE 754 7.2e */
(1/0).foo()                 /* 0 */ /* IEEE 754 7.3 */
Yes, the last one is correct, too. Division by zero returns Infinity with the sign set as if it were an ordinary, finite rational (e.g. -1/0 yields -Infinity), as ruled by IEEE-754, and ECMAScript says it is in concordance with the standard.
If you watch the blogosphere you will see a lot of rants about ECMAScript’s isNaN, but the single problem is the automatic conversion, such that isNaN("123") returns false. That is a known problem of the principal design of JavaScript from the early days on, and it is hard to get rid of now, but with the to-come Number.isNaN you’ll get at least a work-around.
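The coercion difference behind those rants can be seen side by side (this assumes an engine that already ships the ES6 Number.isNaN):

```javascript
// The global isNaN coerces its argument first; Number.isNaN does not.
console.log(isNaN("foo"));        // true  ("foo" coerces to NaN)
console.log(isNaN("123"));        // false ("123" coerces to 123)
console.log(Number.isNaN("foo")); // false (a string is not a number)
console.log(Number.isNaN(NaN));   // true
```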
App Events
App Events help you understand the makeup of people who engage with your app and measure and reach specific sets of your users with Facebook mobile app ads. This is done by logging events from your app via the Facebook SDK for Swift. The event can be one of 14 predefined events such as added to cart in a commerce app or achieved level in a game, or any custom events you define.
Prerequisites
Before including the code to measure events, you'll need to register your app with Facebook and install the Facebook SDK. See our getting started guide to learn more.
Implementation
Facebook's SDK provides helper methods for app activation and the built-in types of events.
Logging app activations
Logging app activations as an app event enables most other functionality and should be the first thing that you add to your app. The SDK provides a helper method to log app activation. By logging an activation event, you can observe how frequently users activate your app, how much time they spend using it, and view other demographic information through Facebook Analytics.
Insert this code into your app delegate's applicationDidBecomeActive method:

import FacebookCore

func applicationDidBecomeActive(application: UIApplication) {
  // Call the 'activate' method to log an app event for use
  // in analytics and advertising reporting.
  AppEventsLogger.activate(application)
  // ...
}
The AppEventsLogger.activate method is the preferred way to log app activations even though there's an event that you can send manually via the SDK. The helper method performs a few other tasks that are necessary for proper accounting for Mobile App Install Ads.
Other event types
Much like logging app activations, the Facebook SDK provides helper methods to log other common events with a single line of code.

For example, a purchase of USD $4.32 can be logged with:
AppEventsLogger.log(.Purchased(amount: 4.32, currency: "USD"))
For purchases, the currency specification is expected to be an ISO 4217 currency code. This is used to determine a uniform value for use in ads optimization.
Automatic In-App Purchase Logging
Whenever possible, we suggest that you log purchases manually. However, Facebook has made it possible to automatically log app events related to in-app purchases. To enable automatic purchase logging, enable the Automatic In-App Purchase switch in the iOS settings section of the app's dashboard and call the FBSDKAppEvents.activateApp method during app activation. Automatic logging requires the Facebook SDK for iOS version 3.22 or higher.
You can learn more about this feature in the App Events FAQ.
The maximum number of different event names is 1000. No new event types will be logged once this cap is hit, and if you exceed this limit you may see a 100 Invalid parameter error when logging. However, it is possible to deactivate obsolete events. Read more about event limits in the App Events FAQ.
Logging Events Manually
For all other types of events, use the log method with one of the event types defined in the AppEvent struct:
AppEventsLogger.log(
  .AddedToCart(
    contentType: "product",
    contentId: "HDFU-8452",
    currency: "USD"))
For any of the built-in app event types, you can also specify a set of additional parameters and a valueToSum property, which is an arbitrary number that can represent any value, like a price or a quantity. When reported, all of the valueToSum properties will be summed together in Facebook Analytics. For example, if 10 people each purchased one item that cost $10 and it was passed in valueToSum, then the Analytics report would show a value of $100.
Custom App Events
To log a custom event, just pass the name of the event as a string along with any additional parameters:
AppEventsLogger.log("YouWereEatenByAGrue")
API Reference
More details about the AppEventsLogger class can be found in the reference documentation.
Verifying Event Logging in Facebook Analytics
If you are an admin or developer of an app, please see our documentation on debugging events in Facebook Analytics to learn how to verify that events are being logged.
Enabling Debug Logs
Enable debug logs to verify client-side app event usage using the following code. The debug logs contain detailed requests and responses formatted in JSON.
SDKSettings.enableLoggingBehavior(.AppEvents)
Debug logging is intended for testing only and should not be enabled in production apps.
Facebook Analytics
Enabling app events means that you can automatically start using Facebook Analytics. This analytics channel provides demographic info about the people using your app, offers tools for better understanding the flows people follow in your app, and lets you compare cohorts of different kinds of people performing the same actions.
See Facebook Analytics for more information.
Facebook Ads
Mobile App Ad Performance
You can use app events to better understand the engagement and return on investment (ROI) coming from your mobile ads on Facebook. If you've set up app events, they will be reported as different types of actions in Ads Manager so you can see which ads and campaigns resulted in what actions within your app. For example, you may see that a particular ad resulted in 10 total actions. Those actions could be 8 checkouts and 2 registrations, which will be specified in Ads Manager.
Mobile App Ad Targeting
Measuring app events also opens up an easier way to reach your existing app users for mobile app ads. After integrating app events, you will be able to reach your existing users, specific subsets of users based on what actions they have taken within your app, or even percentages of your most active users. For instance you can run ads to bring recent purchasers back into your app in order to complete more purchases or to your top 25% of purchasers. You can choose an audience to reach based on the app events you are measuring in your app.
To learn more about our mobile app ads, go to our tutorial on mobile app ads for installs or mobile app ads for engagement.
Ads Attribution
Once you've implemented app events into your iOS app, the app events you've added will automatically be tracked when you run Facebook mobile app ads for installs or mobile app ads for engagement and conversion.
If using Power Editor to set up your ad, please ensure you use the default tracking settings. If you manually configure tracking settings where tracking_spec is present, use the following code, replacing {app_id} with your app's Facebook app ID.
{"action.type" : ["app_custom_event"], "application" : [{app_id}]}
What's reported
If you have set up app events, they will be reported as different types of actions in Ads Manager. For example, you may see that a particular ad resulted in 10 total actions. Those actions could be 8 checkouts and 2 registrations, which will be specified in Ads Manager. You can learn more about measuring performance of mobile app ads in Ads Manager here and Post Install Reports in Power Editor.
Data Control
The Facebook SDK offers a tool to give your users control over how app events data is used by the Facebook ads system. Our Platform Policy requires that you provide users with an option to opt out of sharing this info with Facebook. We recommend that you use our SDK tools to offer the opt-out option. Facebook also respects device-level controls where available, including the Advertising Identifier control on iOS 6 and beyond.
limitedEventAndDataUsage behavior
If the user has set the limitedEventAndDataUsage flag to true, your app will continue to send this data to Facebook, but Facebook will not use the data to serve targeted ads. Facebook may continue to use the information for other purposes, including frequency capping, conversion events, estimating the number of unique users, security and fraud detection, and debugging.
SDKSettings.limitedEventAndDataUsage = true
The limitedEventAndDataUsage setting will persist across app sessions. To use this property, you can match your opt-out user interface to the value of the limitedEventAndDataUsage flag.
Data Sharing
App event data you share with Facebook will not be shared with advertisers or other third parties unless we have your permission or are required to do so by law. | https://developers.facebook.com/docs/swift/appevents | CC-MAIN-2017-51 | refinedweb | 1,351 | 51.28 |
The Retlang wiki is a bit short on the sort of messy examples that I find useful when learning a product, so I thought I’d write one of my own. The following is a 200-line web spider. I’ll go through it and explain how it works and why you’d build it like this. I recently used techniques similar to this to get a FIX processor to run 30 times faster. Seriously. Retlang’s that good.
Five minute introduction to Retlang
Here’s how Retlang works:
- A Context is a Thread/Queue pair. That is to say, a thread with an associated queue. (In practice, we actually use PoolQueues in the code, but the semantics are the same.)
- Messages are sent one-way to Contexts across Channels.
- Contexts subscribe to Channels by specifying a function to be called when the message comes off the queue.
- Messages are processed in the exact order in which they were transmitted.
- Typically, all of a given context’s messages are handled by a single object. This is usually termed the service.
Now, the important thing with Retlang is that it is designed to prevent you from having to put lock statements everywhere. This results in a couple of restrictions:
- You shouldn’t use shared state.
- Messages must be either immutable or serializable. (Immutable is faster.)
You can actually violate the restrictions if you know what you’re doing. The problem is, once you violate the restrictions, you need to start worrying about thread safety again. You’ll also need to worry about maintainability. Although Retlang doesn’t prevent you from using other techniques and threading models, you lose a lot of the readability when you do so.
There is a third restriction: You shouldn’t wait for another Context to finish doing something. In fact, you can do this, but you should always try to avoid it, since you can quite quickly kill your performance by doing so.
NB: Actually, threads and contexts are slightly different, but if you want to understand the differences, you’re better off reading Mike’s Blog. I’ve just called it a thread for simplicity here.
Shared State
The program works as follows:
- The Spider class reads a page and works out what URLs are in the page.
- The SpiderTracker class keeps track of what pages have been found.
In the code, there are five spiders. However, there can only be one spider tracker, which co-ordinates all of the spiders. Since I've already told you that you can't have shared state, you might be wondering how this is handled. The answer is that you associate the SpiderTracker itself with a context. All modifications to and results from the Tracker come through the same Retlang architecture. The Spiders each run on their own Context.
We only ever need to transmit strings, which are immutable. Channels are one-way by design, so we need to pass the following messages:
- Please scan this URL (SpiderTracker to Spider)
- I’ve found this URL (Spider to SpiderTracker)
- I’ve finished scanning this URL (Spider to SpiderTracker)
Distributing the work load is handled by a QueueChannel, which automatically sends messages to the next Spider waiting for a message. An alternative implementation would be to create separate channels for each Spider.
Halting
The last message is, in some senses, not necessary. Without it, every page would get scanned. However, the program would never finish. One of the trickiest problems with asynchronous communication and processing is actually figuring out what is going on and when you’re finished. With synchronous systems, you can usually determine both just from the call stack; it takes a bit more effort to display that information to the screen, but not a lot.
Therefore, having set up the Retlang contexts, the main thread then needs to wait for the tracker to indicate that it is finished. The tracker, in turn, counts how many pages are currently being scanned. When that hits zero, we're finished. Retlang doesn't provide its own facility for doing this, reasoning that using .Net's WaitHandles is good enough.
The Code
Okay, you’ve waited long enough:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
using System.Net;
using System.IO;
using System.Threading;
using Retlang;

class Program {
    static void Main(string[] args) {
        string baseUrl = "";
        int spiderThreadsCount = 5;
        foreach (string url in Search(baseUrl, spiderThreadsCount)) {
            Console.WriteLine(url);
        }
        Console.ReadLine();
    }

    private static IEnumerable<string> Search(string baseUrl, int spiderThreadsCount) {
        // NB Make sure folders end in a slash: the code fails otherwise since it can't distinguish between
        // a folder and a file
        var queues = new List<IProcessQueue>();
        var spiderChannel = new QueueChannel<string>();
        var spiderTrackerChannel = new Channel<string>();
        var finishedTrackerChannel = new Channel<string>();
        var waitHandle = new AutoResetEvent(false);
        var spiderTracker = new SpiderTracker(spiderChannel, waitHandle);
        var spiderTrackerQueue = new PoolQueue();
        spiderTrackerQueue.Start();
        spiderTrackerChannel.Subscribe(spiderTrackerQueue, spiderTracker.FoundUrl);
        finishedTrackerChannel.Subscribe(spiderTrackerQueue, spiderTracker.FinishedWithUrl);
        for (int index = 0; index < spiderThreadsCount; index++) {
            var queue = new PoolQueue();
            queues.Add(queue);
            queue.Start();
            var spider = new Spider(spiderTrackerChannel, finishedTrackerChannel, baseUrl);
            // Strictly speaking, we only need one Spider that listens to multiple threads
            // since it has no internal state.
            // However, since this is an example, we'll avoid playing with fire and do
            // it the sensible way.
            spiderChannel.Subscribe(queue, spider.FindReferencedUrls);
        }
        spiderTrackerChannel.Publish(baseUrl);
        waitHandle.WaitOne();
        return spiderTracker.FoundUrls;
    }

    class Spider {
        IChannelPublisher<string> _spiderTracker;
        IChannelPublisher<string> _finishedTracker;
        string _baseUrl;

        public Spider(IChannelPublisher<string> spiderTracker,
                IChannelPublisher<string> finishedTracker, string baseUrl) {
            _spiderTracker = spiderTracker;
            _finishedTracker = finishedTracker;
            _baseUrl = baseUrl.ToLowerInvariant();
        }

        public void FindReferencedUrls(string pageUrl) {
            string content = GetContent(pageUrl);
            var urls =);
            foreach (var newUrl in urls) {
                _spiderTracker.Publish(newUrl);
            }
            _finishedTracker.Publish(pageUrl);
        }

        static int BaseUrlIndex(string url) {
            // This finds the first / after //
            return url.IndexOf('/', url.IndexOf("//") + 2);
        }

        string ToAbsoluteUrl(string url, string relativeUrl) {
            if (relativeUrl.Contains("//")) {
                return relativeUrl;
            }
            int hashIndex = relativeUrl.IndexOf('#');
            if (hashIndex >= 0) {
                relativeUrl = relativeUrl.Substring(0, hashIndex);
            }
            if (relativeUrl.Length > 0) {
                bool isRoot = relativeUrl.StartsWith("/");
                int index = isRoot ? BaseUrlIndex(url) : url.LastIndexOf('/') + 1;
                if (index < 0) {
                    throw new ArgumentException(string.Format("The url {0} is not correctly formatted.", url));
                }
                return url.Substring(0, index) + relativeUrl;
            }
            return "";
        }
    }

    class SpiderTracker {
        // NB We care about case.
        HashSet<string> _knownUrls = new HashSet<string>(StringComparer.InvariantCulture);
        IQueueChannel<string> _spider;
        int _urlsInProcess = 0;
        AutoResetEvent _waitHandle;

        public SpiderTracker(IQueueChannel<string> spider, AutoResetEvent waitHandle) {
            _spider = spider;
            _waitHandle = waitHandle;
        }

        public IEnumerable<string> FoundUrls {
            get { return from url in _knownUrls orderby url select url; }
        }

        public void FoundUrl(string url) {
            if (!_knownUrls.Contains(url)) {
                _knownUrls.Add(url);
                if (Path.GetExtension(url) != "css") {
                    _urlsInProcess++;
                    _spider.Publish(url);
                }
            }
        }

        public void FinishedWithUrl(string url) {
            _urlsInProcess--;
            Console.WriteLine(_urlsInProcess);
            if (_urlsInProcess == 0) {
                _waitHandle.Set();
            }
        }
    }
}
Caveats
Well, it’s only 200 lines, so it’s hardly going to be feature complete. Here’s some restrictions:
- You can’t really run 5 WebRequests simultaneously, so the 5 queues are actually kind of pointless. They do handle 2 threads quite well, though. The code does nothing to fix this. If someone can point me in the right direction, I’ll release an updated version.
- There are undoubtedly links that should be ignored that aren’t. Subtext’s EditUris are an example. In general, the HTML parsing is extremely simplistic, but it’s not the point of the exercise.
- It doesn’t read robots.txt. Please don’t run this against sites you don’t have permission to spider.
- It doesn’t respect nofollow.
- It doesn’t clean up its threads after completion.
UPDATE: I’ve tidied up the code slightly. It’s now got a couple more heuristics about dud urls (it turns out that running the code against a blog full of url scanning code is an eye-opener… 😉 ). I’ve also tidied up the proxy handling. The IronPython version is here.
One thought on “Using Retlang to implement a simple web spider”
Okay, in answer to a): There are three steps: find a url, search the url and mark it as finished. Find and search are different operations because when we're spidering we quite often hit the same URL multiple times. Equally, of these operations, the only one that can be done in parallel is the actual search.

So, the QueueChannel has "search this url" messages, the tracker channel has "found this url" messages and the finished channel has "I'm done with this url" messages.

As for b), it's important to understand that the only purpose of the "finished" messages is to work out when to terminate the process. In fact, part of the idea behind the design of the code was that services shouldn't have to know about their interactions: this simplifies testing. So, no, the tracker doesn't need to do anything with its worker processes: the search function orchestrates everything.

In larger code, I follow the same pattern: setup, low knowledge interactions, tear down at the end.

Incidentally, it's worth pointing out that Retlang is pure .NET and doesn't follow the Erlang design slavishly. There's no "receive" concept in Retlang: it just calls functions.

Anyway, I hope this helps explain what's going on.
On Fri, 7 Jul 2006, Philip M. Gollucci wrote:
> Hi, I'm trying to determine the units for one of the fields returned by getrusage(2)
>
> man page on 6.0-RELEASE-p5 says this:
> 2 maxrss maximum shared memory or current resident set
> 3 ixrss integral shared memory
>
>)
>
> which to me implies thats in kilobytes, but to the contrary, we have the following
Doesn't that imply that it's in kb per stat clock ticks?
>> ApacheSizeLimit on bsd systems uses BSD::Resource to get the memory and
>> shared-pages size.
>> sub bsd_size_check {
>> return (&BSD::Resource::getrusage())[2,3];
>> }
>
> I also have a local test based on the recent Apache::SizeLimit work from Dave Rolsky
> where
>
> maxrss > ixrss
> (Apache-Test output snipped)
> # '14124' maxrss
> # >
> # '52080' ixrss
>
> I tried looking in src/sys/kern/kern_resource.c but I didn't find anything that told
> me the units.
>
> My inkling is the documentation is WRONG.
If it's reporting shared memory as greater than total memory, then I think
the docs for BSD::Resource are correct. We need to divide that second
number (ixrss) by the value of the stat clock tick. Any idea how that can
be determined?
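A quick way to eyeball both fields is Python's resource module, a thin wrapper over getrusage(2). This is illustrative only, and the units remain platform-dependent, which is the whole point of the thread: Linux reports ru_maxrss in kilobytes and typically leaves ru_ixrss at 0, while the BSDs scale the integral fields by the stat clock.

```python
import resource  # POSIX-only stdlib wrapper over getrusage(2)

usage = resource.getrusage(resource.RUSAGE_SELF)
print("ru_maxrss:", usage.ru_maxrss)  # kilobytes on Linux, bytes on macOS
print("ru_ixrss: ", usage.ru_ixrss)   # 0 on Linux; kb * ticks on the BSDs
```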
-dave
/*===================================================
VegGuide.Org
Your guide to all that's veg. My book blog
===================================================*/
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@perl.apache.org
For additional commands, e-mail: dev-help@perl.apache.org
The source of the problem is actually the way some (major) applications store their date/time data types. Programmes using the POSIX time representation will be affected by this problem. The type time_t is a value type which stores time in a 32-bit signed integer, as the number of seconds elapsed since January 1, 1970. So it is capable of representing times within a total of 2^31 seconds. Accordingly, the latest time that it can store is 03:14:07 UTC, Tuesday, January 19, 2038. After this time, the sign bit of the 32-bit signed integer will be set and it will represent a negative number. Since the time is stored as the number of seconds elapsed since 1st January 1970, this negative number will be used to compute the time as per the POSIX standards. But, being a negative number, the time will be calculated by subtracting that many seconds from 1st January 1970, which will produce a historical date/time and cause the applications to fail. This time will be Friday, 13 December 1901, and is called the wrap-around date. Applications written in C on many operating systems will also be affected, as the POSIX representation of time is widely used there. The animation below visualizes the actual scenario in an easier manner. This bug is often denoted as "Y2038", "Y2K38", or "Y2.038K" bug.
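Both dates are easy to verify with ordinary calendar arithmetic, independently of any C runtime; for example, in a few lines of Python used purely as a calculator:

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

last = epoch + timedelta(seconds=2**31 - 1)    # largest signed 32-bit value
wrapped = epoch + timedelta(seconds=-2**31)    # value after the sign bit flips

print(last)     # 2038-01-19 03:14:07+00:00
print(wrapped)  # 1901-12-13 20:45:52+00:00
```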
The following ANSI C programme, when compiled, simulates the bug. The output produced by the programme is also attached below the code. This code has been referred from here.
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <time.h>
int main (int argc, char **argv)
{
time_t t;
t = (time_t) 1000000000;
printf ("%d, %s", (int) t, asctime (gmtime (&t)));
t = (time_t) (0x7FFFFFFF);
printf ("%d, %s", (int) t, asctime (gmtime (&t)));
t++;
printf ("%d, %s", (int) t, asctime (gmtime (&t)));
return 0;
}
1000000000, Sun Sep  9 01:46:40 2001
2147483647, Tue Jan 19 03:14:07 2038
-2147483648, Fri Dec 13 20:45:52 1901
The above programme, being strict ANSI C, should compile using any C compiler on any platform. Now let's take a look at a Perl script on both UNIX and Windows 2000. This script has been referred from here.
#!/usr/bin/perl
#
# I've seen a few versions of this algorithm
# online, I don't know who to credit. I assume
# this code to be GPL unless proven otherwise.
#
# (The loop body below is a reconstruction: it walks across the
# 2^31 - 1 rollover second, which is what produces the output
# shown next.)
#
use POSIX;
$ENV{'TZ'} = "GMT";
for ($clock = 2147483641; $clock < 2147483651; $clock++)
{
    print ctime($clock);
}
A mere handful of operating systems appear to be unaffected by the year 2038 bug so far. Here, for example, is the output of this script on Debian GNU/Linux (kernel 2.4.22):
# ./2038.pl
Tue Jan 19 03:14:01 2038
Tue Jan 19 03:14:02 2038
Tue Jan 19 03:14:03 2038
Tue Jan 19 03:14:04 2038
Tue Jan 19 03:14:05 2038
Tue Jan 19 03:14:06 2038
Tue Jan 19 03:14:07 2038
Fri Dec 13 20:45:52 1901
Fri Dec 13 20:45:52 1901
Fri Dec 13 20:45:52 1901
Windows 2000 Professional with ActivePerl 5.8.3.809 fails in such a manner that it stops displaying the date after the critical second:
C:\>perl 2038.pl
Mon Jan 18 22:14:01 2038
Mon Jan 18 22:14:02 2038
Mon Jan 18 22:14:03 2038
Mon Jan 18 22:14:04 2038
Mon Jan 18 22:14:05 2038
Mon Jan 18 22:14:06 2038
Mon Jan 18 22:14:07 2038
Yes, of course, there have been many solutions proposed worldwide for this problem. A few of them are listed here.
This is not a solution as the binary compatibility of the software would break here. Programmes depending on the binary representations of time would be in trouble. So we can not even think of this one.
This seems to be good at first look, but this would just delay(post-pone) the judgement day to the year 2106 as it will give some more scope by adding another usable bit. You will be in the same trouble by then. So this is a feasible solution but not a practical one.
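The 2106 figure follows from the same epoch arithmetic; a one-line check in Python:

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
u32_last = epoch + timedelta(seconds=2**32 - 1)  # largest unsigned 32-bit value
print(u32_last)  # 2106-02-07 06:28:15+00:00
```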
Most 64-bit architectures use 64-bit storage to represent time_t. The new wrap-around date with this (signed) 64-bit representation will not arrive for roughly 290 billion years. It is optimistically predicted that by the year 2038 all 32-bit systems will have been phased out and all systems will be 64-bit.
Thanks.
Ruchit.
Git And Hadoop
A lot of people use Git with Hadoop because they have their own patches to make to Hadoop, and Git helps them manage it.
GitHub provide some good lessons on git at
Apache serves up read-only Git versions of their source at. Committers can commit changes to a writable Git repository. See HowToCommit
This page tells you how to work with Git. See HowToContribute for instructions on building and testing Hadoop.
Contents
- Git And Hadoop
- Key Git Concepts
- Checking out the source
- Grafts for complete project history
- Migrating private branches to the new git commit history
- Forking onto GitHub
- Branching
- Creating Patches for attachment to JIRA issues
Key Git Concepts
The key concepts of Git.
- Git doesn't store changes, it snapshots the entire source tree. Good for fast switch and rollback, bad for binaries. (as an enhancement, if a file hasn't changed, it doesn't re-replicate it).
- Git stores all "events" as SHA1 checksummed objects; you have deltas, tags and commits, where a commit describes the status of items in the tree.
- Git is very branch centric; you work in your own branch off local or central repositories
- You had better enjoy merging.
Checking out the source
You need a copy of git on your system. Some IDEs ship with Git support; this page assumes you are using the command line.
Clone a local Git repository from the Apache repository. The Hadoop subprojects (common, HDFS, and MapReduce) live inside a combined repository called hadoop.git.
git clone git://git.apache.org/hadoop.git
Committers: for read/write access use
The total download is a few hundred MB, so the initial checkout process works best when the network is fast. Once downloaded, Git works offline -though you will need to perform your initial builds online so that the build tools can download dependencies.
Grafts for complete project history
The Hadoop project has undergone some movement in where its component parts have been versioned. Because of that, commands like git log --follow need a little help. To graft the history back together into a coherent whole, insert the following contents into hadoop/.git/info/grafts:
# Project split
# Project un-split in new writable git repo
a196766ea07775f18ded69bd9e8d239f8cfd3ccc 928d485e2743115fe37f9d123ce9a635c5afb91a
cd66945f62635f589ff93468e94c0039684a8b6d 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
You can then use commands like git blame --follow with success.
Migrating private branches to the new git commit history
The migration from svn to git changed the commit ids for anyone tracking the history of the project via the svn to git bridge. This means that private forks/branches will not rebase to the new versions. Follow the MigratingPrivateGitBranches instructions.
Forking onto GitHub
You can create your own fork of the ASF project, put in branches and stuff as you desire. GitHub prefer you to explicitly fork their copies of Hadoop.
Create a GitHub login at ; Add your public SSH keys
Go to and search for the Hadoop and other Apache projects you want (avro is handy alongside the others)
For each project, fork in the github UI. This gives you your own repository URL which you can then clone locally with git clone
- For each patch, branch.
At the time of writing (December 2009), GitHub was updating its copy of the Apache repositories every hour. As the Apache repositories were updating every 15 minutes, provided these frequencies are retained, a GitHub-fork derived version will be at worst 1 hour and 15 minutes behind the ASF's Git repository. If you are actively developing on Hadoop, especially committing code into the Git repository, that is too long -work off the Apache repositories instead.
- Clone the read-only repository from Github (their recommendation) or from Apache (the ASF's recommendation)
in that clone, rename that repository "apache": git remote rename origin apache
Log in to []
- Create a new repository (e.g hadoop-fork)
- In the existing clone, add the new repository :
git remote add -f github git@github.com:MYUSERNAMEHERE/hadoop.git
This gives you a local repository with two remote repositories: "apache" and "github". Apache has the trunk branch, which you can update whenever you want to get the latest ASF version:
git checkout trunk
git pull apache
Your own branches can be merged with trunk, and pushed out to git hub. To generate patches for submitting as JIRA patches, check everything in to your specific branch, merge that with (a recently pulled) trunk, then diff the two: git diff --no-prefix trunk > ../hadoop-patches/HADOOP-XYX.patch
If you are working deep in the code it's not only convenient to have a directory full of patches to the JIRA issues, it's convenient to have that directory a git repository that is pushed to a remote server, such as this example. Why? It helps you move patches from machine to machine without having to do all the updating and merging. From a pure-git perspective this is wrong: it loses history, but for a mixed workflow it doesn't matter so much.
Branching
Git makes it easy to branch. The recommended process for working with Apache projects is: one branch per JIRA issue. That makes it easy to isolate development and track the development of each change. It does mean if you have your own branch that you release, one that merges in more than one issue, you have to invest some effort in merging everything in. Try not to make changes in different branches that are hard to merge, and learn your way round the git rebase command to handle changes across branches. Better yet: do not use rebase once you have created a chain of branches that each depend on each other
Creating the branch
Creating a branch is quick and easy
# start off in the apache trunk
git checkout trunk
# create a new branch from trunk
git branch HDFS-775
# switch to it
git checkout HDFS-775
# show what branch you are in
git branch
Remember, this branch is local to your machine. Nobody else can see it until you push up your changes or generate a patch, or you make your machine visible over the network to interested parties.
Creating Patches for attachment to JIRA issues
Assuming your trunk repository is in sync with the Apache projects, you can use git diff to create a patch file. First, have a directory for your patches:
mkdir ../hadoop-patches
Then generate a patch file listing the differences between your trunk and your branch
git diff --no-prefix trunk > ../hadoop-patches/HDFS-775-1.patch
The patch file is an extended version of the unified patch format used by other tools; type git help diff to get more details on it. Here is what the patch file in this example looks like
cat ../outgoing/HDFS-775-1.patch
diff --git src/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java src/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
index 42ba15e..6383239 100644
--- src/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
+++ src/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
@@ -355,12 +355,14 @@ public class FSDataset implements FSConstants, FSDatasetInterface {
     return dfsUsage.getUsed();
   }
 
+  /**
+   * Calculate the capacity of the filesystem, after removing any
+   * reserved capacity.
+   * @return the unreserved number of bytes left in this filesystem. May be zero.
+   */
   long getCapacity() throws IOException {
-    if (reserved > usage.getCapacity()) {
-      return 0;
-    }
-
-    return usage.getCapacity()-reserved;
+    long remaining = usage.getCapacity() - reserved;
+    return remaining > 0 ? remaining : 0;
   }
 
   long getAvailable() throws IOException {
It is essential that patches for JIRA issues are generated with the --no-prefix option. Without that, an extra directory path is listed, and the patches can only be applied with a patch -p1 call, which Hudson does not know to do. If you want your patches to take, this is what you have to do. You can of course test this yourself by using a command like patch -p0 < ../outgoing/HDFS-775.1 in a copy of the Git source tree to test that your patch takes.
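If you want to see the difference between the two diff formats for yourself, a throwaway experiment along these lines works. All file names here are invented for the demo, and everything happens in a scratch directory, so nothing touches your real checkout:

```shell
# Scratch-repo demo of why --no-prefix patches apply at -p0
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name "You"
echo "v1" > file.txt
git add file.txt
git commit -qm "initial"
echo "v2" > file.txt

git diff --no-prefix > ../plain.diff      # header paths: file.txt
git diff             > ../prefixed.diff   # header paths: a/file.txt, b/file.txt

git checkout -- file.txt                  # revert, so both patches apply cleanly
patch -p0 --dry-run < ../plain.diff       # --no-prefix diffs apply at -p0
patch -p1 --dry-run < ../prefixed.diff    # default-format diffs need -p1
```

With set -e in place, the script only reaches the end if both dry-runs succeeded.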
Updating your patch
If your patch is not immediately accepted, do not be offended: it happens to us all. It introduces a problem: your branches become out of date. You need to check out the latest apache version, merge your branches with it, and then push the changes back to github
git checkout trunk
git pull apache
git checkout mybranch
git merge trunk
git push github mybranch
Your branch is up to date, and new diffs can be created and attached to patches.
Deriving Branches from Branches
If you have one patch that depends upon another, you should have a separate branch for each one. Simply merge the changes from the first branch into the second, so that it is always kept up to date with the first changes. To create a patch file for submission as a JIRA patch, do a diff between the two branches, not against trunk.
do not play with rebasing once you start doing this as you will make merging a nightmare
What to do when your patch is committed
Once your patch is committed into Git, you do not need the branch any more. You can delete it straight away, but it is safer to verify the patch is completely merged in
Pull down the latest release and verify that the patch branch is synchronized
git checkout trunk
git pull apache
git checkout mybranch
git merge trunk
git diff trunk
the output of the last command should be nothing: the two branches should be identical. You can then prove to git that this is true by switching back to the trunk branch and merging in the branch, an operation which will not change the source tree, but update Git's branch graph.
git checkout trunk
git merge mybranch
Now you can delete the branch without being warned by git
git branch -d mybranch
Finally, propagate that deletion to your private github repository
git push github :mybranch
This odd syntax says "push nothing to github/mybranch".
William Lee Irwin III wrote:
> Minimalistic fix. Perhaps rough at the edges but I can clean the
> ugliness ppl care about when they complain. 2.5.30 successfully booted
> & ran userspace on a 16-way NUMA-Q with 16GB of RAM with this patch
> and CONFIG_HIGHPTE enabled.

Thanks, Bill. It doesn't seem any uglier than anything else highmem-related.

> ...
> +#define rmap_ptep_map(pte_paddr) \
> +({ \
> +    unsigned long pfn = (unsigned long)(pte_paddr >> PAGE_SHIFT); \
> +    unsigned long idx = __pte_offset(((unsigned long)pte_paddr)); \
> +    (pte_t *)kmap_atomic(pfn_to_page(pfn), KM_PTE2) + idx; \
> +})

Could be an inline?

> +static inline rmap_ptep_map(pte_addr_t pte_paddr)
> +{
> +    return (pte_t *)pte_paddr;
> +}

Better try compiling that ;)

> ...
> --- 1.66/include/linux/mm.h  Thu Aug 1 12:30:06 2002
> +++ edited/include/linux/mm.h  Fri Aug 2 22:24:40 2002
> @@ -161,7 +161,7 @@
>     union {
>         struct pte_chain * chain;  /* Reverse pte mapping pointer.
>                                     * protected by PG_chainlock */
> -       pte_t * direct;
> +       pte_addr_t direct;
>     } pte;

Four more bytes into struct page. I bet that hurt.

> ...
>     struct pte_chain {
>         struct pte_chain * next;
> -       pte_t * ptep;
> +       pte_addr_t ptep;
>     };

We'll get fifteen pte_addr_t's per pte_chain on a P4 with the
array-of-pteps-per-pte_chain patch.

And we'll need that, to reduce load on KM_PTECHAIN. Because there's no
point in pte_highmem without also having pte_chain_highmem, yes?

Which means either going back to a custom allocator or teaching slab
about highmem and kmap_atomic. (Probably a custom allocator; internal
fragmentation on 32/64/128 byte pte_chains won't be toooo bad,
presumably).

We're piling more and more crap in there to support these pte_chains.
How much is too much?

Is it likely that large pages and/or shared pagetables would allow us to
place pagetables and pte_chains in the direct-mapped region, avoid all
this?

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
read the FAQ at
Yes, video textures are only available for AIR projects at the moment. Adobe hasn't added it to the Flash Player (yet).
The only fallback I know of is unfortunately to use classic "StageVideo".
This is working great! I am running AIR17 on a desktop PC. I am having an issue with a clean video loop.
Has anyone found a nice way to loop video?
private function Status_Handler(stats:NetStatusEvent):void
{
    switch(stats.info.code)
    {
        case "NetStream.Buffer.Flush":
            if(Loop)
            {
                Stream.seek(0);
            }
            break;
        default:
            break;
    }
}
This seems to be the cleanest, but it is still choppy, and it stops the video just over 2 seconds early. When I say choppy, I mean I see a green screen for a moment, maybe around 0.044 seconds. I think it has to do with filling the buffer, but I am not sure.
@drycola, I'm using 2 NetStreams, one playing and another paused at 0. Switching them by time (smoother than flush and stop events) inside a state event:
(backgroundTexture.base as VideoTexture).addEventListener(flash.events.Event.TEXTURE_READY, function(e:Object)
{
    if (backgroundNS.time > almostEndTime)
    {
        backgroundNS.seek(0);
        backgroundNS.pause();
        backgroundImage.texture = backgroundTexture2;
        backgroundNS2.resume();
    }
});
If you need a good loops, vote for this feature:
Yes, now I have no error. It works great on the desktop, but all is not good on mobile devices.
You need to update AIR SDK.
Hmmm. I already have the latest version of the AIR SDK installed (18.0.0.122). I have managed to stop the error by adding the image to the stage once the NetStream buffer is full. i.e.
ns = new NetStream( netConnection );
ns.addEventListener( NetStatusEvent.NET_STATUS, onNetStatus );

private function onNetStatus(e:NetStatusEvent):void
{
    switch(e.info.code)
    {
        case "NetStream.Buffer.Full":
            addChild( image );
            break;
    }
}
This does work on the desktop but like you, I cannot get anything to appear on a mobile device.
I get:
NetStream.Play.Start
NetStream.Play.Failed
NetStream.Play.Stop
I was under the impression that VideoTexture was available on Win/OSX/iOS and Android now.
Try this example of the Video player
Thanks for the idea but that's not it. I am currently using .H264 and AAC audio in my .mp4 file.
@Astraport, as I think you have just encountered, the Feathers VideoPlayer isn't altogether ready either at the moment.
Has anyone come across a working example of VideoTexture on iOS/Android?
I managed to display a video on iOS, yes! I haven't tried Android yet, though.
In any case: anybody who runs into issues with the new video textures, please be sure to post your bug reports on the Adobe bugbase! It's critical that the AIR team finds out about any problems we have, so that they can fix them.
Also post the links to those bugbase entries here, so that forum users can vote for them. Thanks in advance!
Thanks Daniel.
Although with this case I wouldn't class this as a bug at the moment.
The situation seems to be that some people can get it to work on iOS whilst others cannot, which would suggest a misunderstanding on how things should be set up.
I haven't come across any working examples, even from Adobe, of VideoTexture working on mobile with an .mp4 file.
@Crooksy - we got it working on iOS using the instructions / example code in the Air release notes. At the time there were some typos, but we were able to read between the lines.
Also note, we are streaming our mp4s, not playing them from a local source, but that shouldn't matter.
One thing to try: be sure your mp4 is encoded at the baseline level: 3.0 or 3.1. And/or use a confirmed-working mp4 when implementing.
Hope that helps a little.
@crooksy, do you use a pure actionscript3 compiler? or flex-merged compiler? if pure as3 then try using a pure one and that maybe will solve your problem read
Thanks for your suggestions. Still no joy though.
Tried streaming the .mp4, ensuring it was using baseline 3.0.
Tried using a .3gp file
(but these do work on the desktop)
Tried to follow the example in the release notes (which seems to be half complete).
I'm using Flash Pro CC 2014 to compile the .ipa file, AIR 18.0.0.130
@montego, would you happen to have a small .mp4 file you could share (or know of one) that has been confirmed to be working with VideoTexture/iOS?
Hi all,
We are trying to play a sequence of several clips with VideoTextures, but the code is
crashing unpredictably on iOS devices from time to time.
I have created a sample codebase for it, so one can try it. Sources are here. Here is the
full logic of what is happening:
package videotexturetest.views.application {

    import feathers.controls.ImageLoader;

    import flash.events.NetStatusEvent;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    import starling.display.Sprite;
    import starling.events.Event;
    import starling.textures.Texture;

    public class ApplicationView extends Sprite {

        private var _currentIndex = 1;
        private var _numVideos = 8;

        private var _image:ImageLoader;
        private var _connection:NetConnection;

        private var _currentStream:NetStream;
        private var _currentTexture:Texture;

        private var _nextStream:NetStream;
        private var _nextTexture:Texture;

        public function ApplicationView() {
            addEventListener(Event.ADDED_TO_STAGE, function(event:Event):void {
                initialize();
            });
        }

        private function prepareNextStream():void {
            if (_currentIndex == _numVideos) {
                _nextStream = null;
                _nextTexture = null;
                return;
            }

            trace("Preparing next texture:", "videos/clip_" + (_currentIndex + 1) + ".mp4");

            // Second NetStream connection
            _nextStream = new NetStream(_connection);
            _nextStream.client = { onMetaData : function(infoObject:Object) {} };

            Texture.fromNetStream(_nextStream, 1, function(texture:Texture):void {
                trace("Video texture is ready:", "videos/clip_" + (_currentIndex + 1) + ".mp4");
                _nextTexture = texture;
                _nextStream.togglePause();
            });

            _nextStream.play("videos/clip_" + (_currentIndex + 1) + ".mp4");
        }

        private function initialize():void {
            _image = new ImageLoader();
            _image.setSize(stage.stageWidth, stage.stageHeight);
            addChild(_image);

            _connection = new NetConnection();
            _connection.connect(null);

            // First NetStream connection
            _currentStream = new NetStream(_connection);
            _currentStream.client = { onMetaData : function(infoObject:Object) {} };

            _currentStream.addEventListener(NetStatusEvent.NET_STATUS, function(event:NetStatusEvent):void {
                if (event.info.code == 'NetStream.Play.Stop' && _nextStream) {
                    var stream:NetStream = event.target as NetStream;
                    stream.removeEventListener(NetStatusEvent.NET_STATUS, arguments.callee);

                    _currentIndex++;

                    _nextStream.addEventListener(NetStatusEvent.NET_STATUS, arguments.callee);

                    _image.source = _nextTexture;
                    _nextStream.togglePause();

                    _currentTexture.dispose();
                    _currentStream.close();

                    _currentTexture = _nextTexture;
                    _currentStream = _nextStream;

                    prepareNextStream();
                }
            });

            Texture.fromNetStream(_currentStream, 1, function(texture:Texture):void {
                trace("Video texture is ready:", "videos/clip_" + _currentIndex + ".mp4");
                _currentTexture = texture;
                _image.source = _currentTexture;
            });

            prepareNextStream();

            _currentStream.play("videos/clip_" + _currentIndex + ".mp4");
        }
    }
}
Two streams exist at the same time: one for the current video and a second for the next one.
It allows us to switch videos without flickering. The code works fine on the desktop
(AIR simulator), but when I try to launch it on a real iOS device the app crashes
and I get the following error in the device logs:
May 29 07:41:57 iPhone-Alexey ReportCrash[3541] <Notice>: ReportCrash acting against PID 3540
May 29 07:41:57 iPhone-Alexey ReportCrash[3541] <Notice>: Formulating crash report for process VideoTextureTest[3540]
May 29 07:41:57 iPhone-Alexey com.apple.launchd[1] (UIKitApplication:VideoTextureTest[0x917b][3540]) <Warning>: (UIKitApplication:VideoTextureTest[0x917b]) Job appears to have crashed: Segmentation fault: 11
May 29 07:41:57 iPhone-Alexey mediaserverd[3071] <Warning>: 07:41:57.609 [0x3f31000] CMSession retain count > 1!
May 29 07:41:57 iPhone-Alexey backboardd[28] <Warning>: Application 'UIKitApplication:VideoTextureTest[0x917b]' exited abnormally with signal 11: Segmentation fault: 11
May 29 07:41:57 iPhone-Alexey mediaserverd[3071] <Warning>: Encountered an XPC error while communicating with backboardd: <error: 0x3c8d7744> { count = 1, contents = "XPCErrorDescription" => <string: 0x3c8d79dc> { length = 22, contents = "Connection interrupted" } }
May 29 07:41:57 iPhone-Alexey mediaserverd[3071] <Error>: 07:41:57.670 [0x4035000] sessionID = 0xbff6e4: cannot get ClientInfo
May 29 07:41:57 iPhone-Alexey mediaserverd[3071] <Error>: 07:41:57.673 ERROR: [0x4035000] 150: AudioQueue: Error 'ini?' from AudioSessionSetClientPlayState(0xbff6e4)
Nothing special appears in Scout.
Could anyone suggest something about this issue? Is it a problem in NetStream, or
is it somehow connected to VideoTexture?
I see you have a _numVideos = 8 in your code, and one of the recent Adobe posts on VideoTexture did have this current limitation mentioned:
A maximum of 4 VideoTexture objects are available per Context3D instance.
I don't know if this is still a limit or on what platforms.
Could you be hitting a limit on total active VideoTexture objects? Does your code not crash if you limit _numVideos to 1 or 2?
Just a thought.
I had 4 videos before and it was just the same. Also, in the code I dispose of each texture; to check that this works I added 8 clips (expecting an error if I had something wrong).
The crash can happen on the 4th video, on the 6th, or even on the 2nd (when only 2 video textures are in memory). So I guess it's not the number of textures that causes the crash.
Also, everything works from time to time (with any number of videos). The crash seems unpredictable to me. I was thinking it might be a GC problem.
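One thing worth ruling out (an assumption on my part, not a confirmed fix): the AS3 garbage collector can collect a NetStream, or the anonymous closures attached to it, if nothing holds a strong reference, and that can surface as a seemingly random native crash. Keeping streams in member collections and using named handler methods instead of `arguments.callee` makes the object graph explicit:

```actionscript
// Hypothetical sketch: hold strong references and use named handlers
// so neither the stream nor its listener can be collected mid-playback.
private var _streams:Vector.<NetStream> = new Vector.<NetStream>();

private function createStream():NetStream {
    var stream:NetStream = new NetStream(_connection);
    stream.client = { onMetaData: function(info:Object):void {} };
    stream.addEventListener(NetStatusEvent.NET_STATUS, onNetStatus);
    _streams.push(stream); // strong reference until we explicitly close it
    return stream;
}

private function onNetStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetStream.Play.Stop") {
        swapToNextStream(); // hypothetical: the swap logic from the code above
    }
}
```

If the crash disappears with this structure, the original problem was likely premature collection rather than the number of textures.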
Thanks Daniel/Jeff.
For anyone else facing similar issue to that posted by Alexey;
Adobe have been able to replicate
Jitender thakur Jun 1, 2015 11:30 PM
Thanks for reporting the issue.
We are able to reproduce the issue and logged an internal bug 3998622 for the same. We will investigate it and update you soon.
Regards,
Adobe AIR team
Alexey/Nuwan should be credited as the ones doing the 'actual' work --
I was simply trying to ensure Adobe were aware of the issue!
*Accurate frame seeking (bugbase VOTE!)*
You may wish to vote on this feature (Chris Campbell confirmed it's on their consideration/backlog)
'Possibly' akin to what iOS/AV Foundation offers (toleranceBefore/toleranceAfter):
@Dendroid Do you have framerate loss when you switch from one stream to the next?
I have this working.
I trigger it like this:
//Video Class
private function On_Meta_Data(metadata:Object):void {
    _Duration = metadata.duration;
    dispatchEventWith(VIDEO_EVENT_START, false, [_ID, (_Duration * 1000)]);
}

//Handler Class
private function Movie_Timer_Handler():void {
    switch (Transition_Type) {
        case "TWO":
            Video_2.Pause();
            Video_2.Show();
            Video_1.Play();
            Video_1.Pause();
            Video_1.Hide();
            break;
        case "ONE":
            if (Video_1.Paused) {
                Video_1.Pause();
                Video_1.visible = true;
                Video_2.Play();
                Video_2.Pause();
                Video_2.visible = false;
            } else {
                Video_2.Pause();
                Video_2.visible = true;
                Video_1.Play();
                Video_1.Pause();
                Video_1.visible = false;
            }
            break;
        default:
            break;
    }
}
I have 2 videos running and make one visible when the other isn't. | https://forum.starling-framework.org/topic/videotexture/page/2 | CC-MAIN-2017-22 | refinedweb | 1,697 | 60.31 |
It is sometimes said that arrays in C are basically pointers. This is not true. What is true is that a value of an array type can decay to a value of a pointer type.
Take this:
#include <stdio.h>

void takes_arr_pointer_1(int* arr) {
  printf("in takes_arr_pointer_1, sizeof(arr) = %zu\n", sizeof(arr));
}

void takes_arr_pointer_2(int arr[]) {
  printf("in takes_arr_pointer_2, sizeof(arr) = %zu\n", sizeof(arr));
}

int main() {
  int arr[100];
  printf("in main, sizeof(arr) = %zu\n", sizeof(arr));
  takes_arr_pointer_1(arr);
  takes_arr_pointer_2(arr);
  return 0;
}
This prints:
% ./a.out
in main, sizeof(arr) = 400
in takes_arr_pointer_1, sizeof(arr) = 8
in takes_arr_pointer_2, sizeof(arr) = 8
In the context of main, the arr variable is an array of 100 ints. On my machine, sizeof(int) = 4, so sizeof(arr) = 400.
In the context of both other functions, the arr variable is a pointer to an int. Thus, sizeof(arr) is the size of a pointer, which on my machine is 8 bytes.
We were able to pass the int arr[100] to both functions, even though they accept pointers. The conversion of arr from an array type to a pointer type is known as array decaying, i.e. the array has “decayed” to a pointer.
The two parameter definitions int* arr and int arr[] are actually the same. Perhaps this is where the confusion comes from.
I wrote this because I felt like it. This post is my own, and not associated with my employer.
This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: Django 1.0
User authentication in Django¶
Django comes with a user authentication system. It handles user accounts, groups, permissions and cookie-based user sessions. This document explains how things work.
Overview¶
The auth system consists of:
- Users
- Permissions: Binary (yes/no) flags designating whether a user may perform a certain task.
- Groups: A generic way of applying labels and permissions to more than one user.
- Messages: A simple way to queue messages for given users.
Installation¶
API reference¶
Fields¶
- class models.User
User objects have the following fields:
- username¶
- Required. 30 characters or fewer. Alphanumeric characters only (letters, digits and underscores).
- password¶
- Required. A hash of, and metadata about, the password. (Django doesn’t store the raw password.) Raw passwords can be arbitrarily long and can contain any character. See the “Passwords” section below.
- is_active¶
Boolean. Designates whether this user account should be considered active. Set this flag to False instead of deleting accounts.
This doesn’t control whether or not the user can log in. Nothing in the authentication path checks the is_active flag, so if you want to reject a login based on is_active being False, it is up to you to check that in your own login view. However, permission checking using the methods like has_perm() does check this flag and will always return False for inactive users.
- is_superuser¶
- Boolean. Designates that this user has all permissions without explicitly assigning them.
Methods¶
- class models.User
User objects have two many-to-many fields: groups and user_permissions. In addition to those fields, User objects have the following methods:
- is_anonymous()¶
- Always returns False. This is a way of differentiating User and AnonymousUser objects.
- get_full_name()¶
- Returns the first_name plus the last_name, with a space in between.
- set_password(raw_password)¶
- Sets the user's password to the given raw string, taking care of the password hashing. Doesn't save the User object.
- check_password(raw_password)¶
- Returns True if the given raw string is the correct password for the user. (This takes care of the password hashing in making the comparison.)
- set_unusable_password()¶
- New in Django 1.0: Please, see the release notes
- has_usable_password()¶
- New in Django 1.0: Please, see the release notes
Returns False if set_unusable_password() has been called for this user.
- get_group_permissions()¶
- Returns a list of permission strings that the user has, through his/her groups.
- get_all_permissions()¶
- Returns a list of permission strings that the user has, both through group and user permissions.
- has_perm(perm)¶
- Returns True if the user has the specified permission, where perm is in the format "<application name>.<lowercased model name>". If the user is inactive, this method will always return False.
- has_perms(perm_list)¶
- Returns True if the user has each of the specified permissions, where each perm is in the format "package.codename". If the user is inactive, this method will always return False.
- has_module_perms(package_name)¶
- Returns True if the user has any permissions in the given package (the Django app label). If the user is inactive, this method will always return False.
- get_and_delete_messages()¶
- Returns a list of Message objects in the user's queue and deletes the messages from the queue.
- email_user(subject, message, from_email=None)¶
- Sends an e-mail to the user. If from_email is None, Django uses the DEFAULT_FROM_EMAIL.
- get_profile()¶
Manager functions¶
- class models.UserManager¶
The User model has a custom manager that has the following helper functions:
- create_user(username, email, password=None)¶
Creates, saves and returns a User. The username, email and password are set as given, and the User gets is_active=True.
If no password is provided, set_unusable_password() will be called.
See Creating users for example usage.
- make_random_password(length=10, allowed_chars=...)¶
- Returns a random password with the given length and given string of allowed characters. Note that the default value of allowed_chars doesn't contain letters that can cause user confusion, such as O, o and 0 (uppercase letter o, lowercase letter o, and zero).
Basic usage¶
Creating users¶
The most basic way to create users is to use the create_user() helper function that comes with Django:
>>> from django.contrib.auth.models import User
>>> user = User.objects.create_user('john', 'lennon@thebeatles.com', 'johnpassword')
# At this point, user is a User object that has already been saved
# to the database. You can continue to change its attributes
# if you want to change other fields.
>>> user.is_staff = True
>>> user.save()
You can also create users using the Django admin site. Assuming you've enabled the admin site and hooked it to the URL /admin/, the "Add user" page is at /admin/auth/user/add/. If you want your own user account to be able to create users using the Django admin site, you'll need to give yourself permission to add users and change users (i.e., the "Add user" and "Change user" permissions). If your account has permission to add users but not to change them, you won't be able to add users. Why? Because if you have permission to add users, you have the power to create superusers, which can then, in turn, change other users. So Django requires add and change permissions as a slight security measure.
Changing passwords¶
Change a password with set_password():
>>> u = User.objects.get(username__exact='john')
>>> u.set_password('new password')
>>> u.save()
Don't set the password attribute directly unless you know what you're doing. This is explained in the next section.
Passwords¶
The password attribute of a User object is a string in this format:
hashtype$salt$hash
That's hashtype, salt and hash, separated by the dollar-sign character. Hashtype is the algorithm used to perform a one-way hash of the password, and salt is a random string used to salt the raw password when creating the hash.
For example:
sha1$a1976$a36cc8cbf81742a8fb52e221aaeab48ed7f58ab4
The set_password() and check_password() functions handle the setting and checking of these values behind the scenes.
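As an illustration of the hashtype$salt$hash format, here is a minimal standalone sketch of a sha1 scheme. This is an assumption based on the format described above (hash of salt concatenated with the raw password), not Django's actual implementation:

```python
import hashlib

def make_password(raw_password, salt, hashtype="sha1"):
    # "hashtype$salt$hash": the hash is a one-way digest of the salt
    # concatenated with the raw password.
    digest = hashlib.sha1((salt + raw_password).encode("utf-8")).hexdigest()
    return "%s$%s$%s" % (hashtype, salt, digest)

def check_password(raw_password, encoded):
    # Re-derive the hash with the stored salt and compare.
    hashtype, salt, _ = encoded.split("$")
    return make_password(raw_password, salt, hashtype) == encoded

stored = make_password("johnpassword", "a1976")
print(stored.split("$")[0])                     # sha1
print(check_password("johnpassword", stored))   # True
print(check_password("wrong", stored))          # False
```

The point of the salt is that two users with the same raw password still get different stored hashes.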
Previous Django versions, such as 0.90, used simple MD5 hashes without password salts. For backwards compatibility, those are still supported; they'll be converted automatically to the new style the first time check_password() works correctly for a given user.
Creating superusers¶
manage.py syncdb prompts you to create a superuser the first time you run it after adding 'django.contrib.auth' to your INSTALLED_APPS. If you need to create a superuser at a later date, you can use a command line utility:
manage.py createsuperuser --username=joe --email=joe@example.com
You will be prompted for a password. After you enter one, the user will be created immediately. If you leave off the --username or the --email options, it will prompt you for those values.
If you're using an older release of Django, the old way of creating a superuser on the command line still works:
python /path/to/django/contrib/auth/create_superuser.py
...where /path/to is the path to the Django codebase on your filesystem. The manage.py command is preferred because it figures out the correct path and environment for you.
Storing additional information about users¶
If you'd like to store additional information related to your users, Django provides a method to specify a site-specific related model -- termed a "user profile" -- for this purpose. Define a model holding the extra fields with a ForeignKey to User, then set the AUTH_PROFILE_MODULE setting to a value consisting of the following items, separated by a dot:
- The name of the application (case sensitive) in which the user profile model is defined (in other words, the name which was passed to manage.py startapp to create the application).
- The name of the model (not case sensitive).
The method get_profile() does not create the profile, if it does not exist. You need to register a handler for the signal django.db.models.signals.post_save on the User model, and, in the handler, if created=True, create the associated user profile.
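A minimal sketch of such a handler (the UserProfile model and its fields here are hypothetical, shown only to illustrate the pattern described above):

```python
from django.db import models
from django.db.models.signals import post_save
from django.contrib.auth.models import User

class UserProfile(models.Model):
    # unique=True makes this a one-profile-per-user relation.
    user = models.ForeignKey(User, unique=True)
    website = models.URLField(blank=True)

def create_profile(sender, instance, created, **kwargs):
    # Only create the profile the first time the User row is saved.
    if created:
        UserProfile.objects.create(user=instance)

post_save.connect(create_profile, sender=User)
```

With this in place, user.get_profile() will find an existing profile row for every user created after the handler was registered.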
For more information, see Chapter 12 of the Django book.
Authentication in Web requests¶
How to log a user in¶
Django provides two functions in django.contrib.auth: authenticate() and login().
- authenticate()¶
To authenticate a given username and password, use authenticate(). It takes two keyword arguments, username and password, and it returns a User object if the password is valid for the given username. If the password is invalid, authenticate() returns None.
- login()¶
To log a user in, in a view, use login(). It takes an HttpRequest object and a User object, and saves the user's ID in the session, using Django's session framework. When you're manually logging a user in, you must call authenticate() before you call login(). authenticate() sets an attribute on the User noting which authentication backend successfully authenticated that user (see the backends documentation for details), and this information is needed later during the login process.
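Putting the two functions together in a view might look like this (a sketch; redirects and error responses are elided):

```python
from django.contrib.auth import authenticate, login

def my_login_view(request):
    username = request.POST['username']
    password = request.POST['password']
    user = authenticate(username=username, password=password)
    if user is not None and user.is_active:
        # Attaches the authenticated user to the current session.
        login(request, user)
        # ... redirect to a success page ...
    else:
        # ... return an 'invalid login' or 'disabled account' error ...
        pass
```

Note the order: authenticate() first, then login(), for the reason described above.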
Manually checking a user's password¶
- check_password()¶
- If you'd like to manually authenticate a user by comparing a plain-text password to the hashed password in the database, use the convenience function check_password(). It takes two arguments: the plain-text password to check and the full value of a user's password field to check against, and it returns True if they match, False otherwise.
How to log a user out¶
To log out a user who has been logged in via login(), use logout() within your view. It takes an HttpRequest object and has no return value. Note that logout() doesn't throw any errors if the user wasn't logged in.
Changed in Django 1.0: Calling logout() now cleans session data: all session data for the current request is cleaned out when the user logs out.
Limiting access to logged-in users¶
- decorators.login_required()¶
As a shortcut, you can use the convenient login_required() decorator. If the user isn't logged in, it redirects to settings.LOGIN_URL, passing the current absolute path in the query string (for example, /accounts/login/?next=/polls/3/); if the user is logged in, it executes the view normally.
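Typical usage of the decorator looks like this (a sketch; the view body is elided):

```python
from django.contrib.auth.decorators import login_required

@login_required
def my_view(request):
    # Only reached when request.user is authenticated; anonymous
    # visitors are redirected to settings.LOGIN_URL first.
    ...
```

The view logic itself can then assume request.user is a logged-in User.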
- views.login(request[, template_name, redirect_field_name])¶
It's your responsibility to provide the login form in a template, called registration/login.html by default. This template gets passed four template context variables:
- form: A Form object representing the login form. See the forms documentation for more on Form objects.
- next: The URL to redirect to after successful login. This may contain a query string, too.
- site: The current Site, according to the SITE_ID setting.
- site_name: An alias for site.name.
If you'd prefer not to call the template registration/login.html, you can pass the template_name parameter via the extra arguments to the view in your URLconf. For example, this URLconf line would use myapp/login.html instead:
(r'^accounts/login/$', 'django.contrib.auth.views.login', {'template_name': 'myapp/login.html'}),
You can also specify the name of the GET field which contains the URL to redirect to after login by passing redirect_field_name to the view. By default, the field is called next.
Here's a sample registration/login.html template you can use as a starting point. It assumes you have a base.html template that defines a content block:
{% extends "base.html" %}

{% block content %}

{% if form.errors %}
<p>Your username and password didn't match. Please try again.</p>
{% endif %}

<form method="post" action="{% url django.contrib.auth.views.login %}">
<table>
<tr>
    <td>{{ form.username.label_tag }}</td>
    <td>{{ form.username }}</td>
</tr>
<tr>
    <td>{{ form.password.label_tag }}</td>
    <td>{{ form.password }}</td>
</tr>
</table>

<input type="submit" value="login" />
<input type="hidden" name="next" value="{{ next }}" />
</form>

{% endblock %}
Other built-in views¶
In addition to the login() view, the authentication system includes a few other useful built-in views located in django.contrib.auth.views:
- views.logout(request[, next_page, template_name, redirect_field_name])¶
Logs a user out.
Optional arguments:
- next_page: The URL to redirect to after logout.
- template_name: The full name of a template to display after logging the user out. This will default to registration/logged_out.html if no argument is supplied.
- redirect_field_name: The name of a GET field containing the URL to redirect to after log out. Overrides next_page if the given GET parameter is passed.
Template context:
- title: The string "Logged out", localized.
- views.logout_then_login(request[, login_url])¶
Logs a user out, then redirects to the login page.
Optional arguments:
- login_url: The URL of the login page to redirect to. This will default to settings.LOGIN_URL if not supplied.
- views.password_change(request[, template_name, post_change_redirect])¶
Allows a user to change their password.
Optional arguments:
- template_name: The full name of a template to use for displaying the password change form. This will default to registration/password_change_form.html if not supplied.
- post_change_redirect: The URL to redirect to after a successful password change.
Template context:
- form: The password change form.
- views.password_change_done(request[, template_name])¶
The page shown after a user has changed their password.
Optional arguments:
- template_name: The full name of a template to use. This will default to registration/password_change_done.html if not supplied.
- views.password_reset(request[, is_admin_site, template_name, email_template_name, password_reset_form, token_generator, post_reset_redirect])¶
Allows a user to reset their password, and sends them the new password in an e-mail.
Optional arguments:
- template_name: The full name of a template to use for displaying the password reset form. This will default to registration/password_reset_form.html if not supplied.
- email_template_name: The full name of a template to use for generating the e-mail with the new password. This will default to registration/password_reset_email.html if not supplied.
- password_reset_form: Form that will be used to set the password. Defaults to SetPasswordForm.
- token_generator: Instance of the class to check the password. This will default to default_token_generator, it's an instance of django.contrib.auth.tokens.PasswordResetTokenGenerator.
- post_reset_redirect: The URL to redirect to after a successful password change.
Template context:
- form: The form for resetting the user's password.
- views.password_reset_done(request[, template_name])¶
The page shown after a user has reset their password.
Optional arguments:
- template_name: The full name of a template to use. This will default to registration/password_reset_done.html if not supplied.
- views.redirect_to_login(next[, login_url, redirect_field_name])¶
Redirects to the login page, and then back to another URL after a successful login.
Required arguments:
- next: The URL to redirect to after a successful login.
Optional arguments:
- login_url: The URL of the login page to redirect to. This will default to settings.LOGIN_URL if not supplied.
- redirect_field_name: The name of a GET field containing the URL to redirect to after log out. Overrides next if the given GET parameter is passed.
- password_reset_confirm(request[, uidb36, token, template_name, token_generator, set_password_form, post_reset_redirect])¶
Presents a form for entering a new password.
Optional arguments:
- uidb36: The user's id encoded in base 36. This will default to None.
- token: Token to check that the password is valid. This will default to None.
- set_password_form: Form that will be used to set the password. This will default to SetPasswordForm.
- post_reset_redirect: URL to redirect after the password reset done. This will default to None.
Built-in forms¶
If you don't want to use the built-in views, but want the convenience of not having to write forms for this functionality, the authentication system provides several built-in forms located in django.contrib.auth.forms:
- class PasswordResetForm¶
- A form for resetting a user's password and e-mailing the new password to them.
- class SetPasswordForm¶
- A form that lets a user change his/her password without entering the old password.
- class UserChangeForm¶
- A form used in the admin interface to change a user's information and permissions. is logged in and has the permission polls.can_vote:
def my_view(request): if not (request.user.is_authenticated() and request.user.has_perm('polls.can_vote')): return HttpResponse("You can't vote in this poll.") # ...
- decorators.user_passes_test()¶
As a shortcut, you can use the convenient user_passes_test() decorator. It takes a required argument: a callable that takes a User object and returns True if the user is allowed to view the page.
- decorators.permission_required()¶
It's a relatively common task to check whether a user has a particular permission, so Django provides the permission_required() decorator as a shortcut for that case.
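For example (a sketch using decorator syntax; the view body is elided):

```python
from django.contrib.auth.decorators import permission_required

@permission_required('polls.can_vote')
def vote(request):
    # Only reached when the user has the polls.can_vote permission;
    # other users are redirected to the login page.
    ...
```

This replaces the manual has_perm() check shown earlier with a single line.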
As for the User.has_perm() method, permission names take the form "<application name>.<lowercased model name>" (i.e. polls.choice for a Choice model in the polls application).
Permissions¶
The Django admin site uses permissions as follows:
- Access to view the "add" form and add an object is limited to users with the "add" permission for that type of object.
- Access to view the change list, view the "change" form and change an object is limited to users with the "change" permission for that type of object.
- Access to delete an object is limited to users with the "delete" permission for that type of object.." The latter functionality is something Django developers are currently discussing.
Default permissions¶
When django.contrib.auth is listed in your INSTALLED_APPS setting, it ensures that three default permissions -- add, change and delete -- are created for each Django model defined in one of your installed applications. These permissions are created when you run manage.py syncdb.
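For example, the three permission codenames created for a hypothetical Choice model would follow this pattern (a standalone sketch of the naming convention, not Django's internal code):

```python
def default_permission_codenames(model_name):
    # syncdb creates add_<model>, change_<model> and delete_<model>
    # for each installed model, with the model name lowercased.
    m = model_name.lower()
    return ["%s_%s" % (action, m) for action in ("add", "change", "delete")]

print(default_permission_codenames("Choice"))
# ['add_choice', 'change_choice', 'delete_choice']
```

Combined with the application label, these become the strings passed to has_perm(), e.g. polls.add_choice.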
API reference¶
- class models.Permission¶
- Just like users, permissions are implemented in a Django model that lives in django/contrib/auth/models.py.
Fields¶
Permission objects have the following fields:
- models.Permission.content_type¶
- Required. A reference to the django_content_type database table, which contains a record for each installed Django model.
Methods¶
Permission objects have the standard data-access methods like any other Django model.
Authentication data in templates¶
The currently logged-in user and his/her permissions are made available in the template context when you use RequestContext.
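For example, a template rendered with RequestContext can check permissions through the {{ perms }} template variable (an assumption based on Django's standard auth context processor; the polls app and can_vote permission are illustrative):

```html
{% if perms.polls %}
    <p>You have some permission in the polls app.</p>
    {% if perms.polls.can_vote %}
        <p>You can vote!</p>
    {% endif %}
{% endif %}
```

The first test checks for any permission in the application; the second checks one specific permission.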
Groups¶
Groups are a generic way of categorizing users so you can apply permissions, or some other label, to those users. A user can belong to any number of groups, and a user in a group automatically has the permissions granted to that group.
Beyond permissions, groups are a convenient way to categorize users to give them some label, or extended functionality. For example, you could create a group 'Special users', and you could write code that could, say, give them access to a members-only portion of your site, or send them members-only e-mail messages.
Messages¶
The message system is a lightweight way to queue messages for given users. A message is associated with a User; there's no concept of expiration or timestamps. Messages are used by the Django admin after successful actions (for example, "The poll Foo was created successfully." is a message). The API is simple:
- models.User.message_set.create(message)¶
To create a new message, use user_obj.message_set.create(message='message_text').
To retrieve/delete messages, use user_obj.get_and_delete_messages(), which returns a list of Message objects in the user's queue (if any) and deletes the messages from the queue. When you use RequestContext, the currently logged-in user and his/her messages are made available in the template context as the template variable {{ messages }}, and any retrieved messages are deleted even if you don't display them.
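For example, a view might queue a message like this (a sketch; the view and message text are hypothetical):

```python
from django.http import HttpResponseRedirect

def vote(request, poll_id):
    # ... record the vote ...
    request.user.message_set.create(message="Your vote was recorded.")
    # The next page rendered with RequestContext can display the
    # queued text via the {{ messages }} template variable.
    return HttpResponseRedirect('/polls/')
```

Because retrieval deletes the message, each queued message is shown at most once.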
Other authentication sources¶
The authentication that comes with Django is good enough for most common cases, but you may have the need to hook into another authentication source -- that is, another source of usernames and passwords or authentication methods. Behind the scenes, Django maintains a list of "authentication backends" that it checks when authenticating; the default is a single backend implementing the standard scheme that checks the Django users database. Once a user has authenticated, Django re-uses the same backend for subsequent authentication attempts for that user. This effectively means that authentication sources are cached, so if you change AUTHENTICATION_BACKENDS, you'll need to clear out session data if you need to force users to re-authenticate using different methods. A simple way to do that is simply to execute Session.objects.all().delete().
Writing an authentication backend¶
An authentication backend is a class that implements two methods: get_user(user_id) and authenticate(**credentials).
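A minimal sketch of a backend that checks credentials against a pair of settings (the ADMIN_LOGIN/ADMIN_PASSWORD setting names and the fallback user creation are illustrative assumptions):

```python
from django.conf import settings
from django.contrib.auth.models import User, check_password

class SettingsBackend:
    """Authenticate against settings.ADMIN_LOGIN and settings.ADMIN_PASSWORD."""

    def authenticate(self, username=None, password=None):
        login_valid = (settings.ADMIN_LOGIN == username)
        pwd_valid = check_password(password, settings.ADMIN_PASSWORD)
        if login_valid and pwd_valid:
            try:
                user = User.objects.get(username=username)
            except User.DoesNotExist:
                # First login: create a local User row to attach
                # sessions and permissions to.
                user = User(username=username)
                user.is_staff = True
                user.save()
            return user
        return None

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None
```

authenticate() returns a User on success and None on failure; get_user() lets the session machinery re-load the user by primary key on later requests.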